5 Discussion

Digital technologies are progressively shaping the lives we lead, and their power to do so will only increase in the foreseeable future. High-quality scientific evidence is necessary to understand how these changes will affect us and our society. It would also ensure that critical stakeholders such as governments, regulators, designers, programmers, parents and digital technology users are provided with the information necessary to make informed decisions. Yet psychological research into how these new technologies influence us has a uniquely short-term and problem-based focus, and it therefore lacks a methodological framework and proper quality control mechanisms. It is apparent from studying past technology panics that research in the area routinely fails to deliver timely answers to important and divisive research questions (Grimes, Anderson, and Bergen 2008).

These issues impact the current panic about digital technologies and adolescent well-being (Pew Research Centre 2018). Research in the area is clearly stagnant: something illustrated by the more than 80 competing meta-analyses and systematic reviews about digital technology use’s effect on psychological outcomes (Dickson et al. 2018). Yet the field remains popular. Large amounts of money and time have been invested to support technology research (House of Commons Science and Technology Select Committee 2019; Davies et al. 2019), and there is promise of more funding to come. To date, these investments have not engendered much research progress. After a decade of investigation, academics are still debating the answers to strikingly simple and similar research questions, providing space for non-scientific commentators to dominate a public conversation that is searching for answers (Bell, Bishop, and Przybylski 2015). Throughout this thesis I have examined why this is the case, and what can be done to improve our approach to emergent technology research.

In this thesis’ introduction I addressed the political, societal and academic factors that allow technology panics to repeatedly reincarnate, and examined the reasons behind the lacklustre research progress in the field. In a ‘Sisyphean cycle of technology panics’, the lack of clear methodological and theoretical frameworks forces new groups of scientists to build their research area from scratch for each novel technology that gains popularity and needs to be considered. With the prospect of technological innovation accelerating in the future (Valkenburg and Piotrowski 2017), psychology must provide better research at faster rates in order to play a constructive role in addressing, questioning and probing our inherent concerns about new technologies. If it fails to innovate, psychology might be left on the side-lines, becoming a mere accomplice to current and future technology panics.

This thesis therefore set out to provide a gold-standard methodological framework to guide research efforts in the area. It introduced novel approaches of interest to both the specific research field (Scharkow 2019) and academia more broadly (Nature 2019; Rai 2019). In writing this thesis, I aimed to make research on the psychological effects of digital technologies fit for purpose. The resulting work has already contributed substantially to the current understanding of digital technology and social media effects.

In this discussion, I will summarise these methodological improvements and highlight what my research allows us to conclude about the influence of digital technology and social media use on adolescent well-being. I will then consider the lack of a theoretical framework in the study of technology panics, and how this could be addressed to truly break the Sisyphean cycle. By proposing improved approaches, this thesis rewrites the broken rule-book currently guiding research on emergent technologies. This kind of synthesis, which brings together improved methods and societally meaningful research questions, presents a way forward for psychological study in the area. It will be of importance not only to academics, but also to funders, government officials and media organisations: both now and in the future.

5.1 Overview of Results

This thesis introduced more transparent analytical methods, more diverse measurement and better data to the important and contentious debate about adolescents and their relationship with digital technology and social media. It thereby set a new gold standard for scientific evidence in the field and contributed to our current understanding of technology effects. In all, I analysed six high-quality datasets from the United Kingdom, Ireland and the United States to determine whether previous reviews’ estimates of the association between digital technology and social media use and well-being, of around r = -0.15 to r = -0.10, are accurate.

Chapter 2 introduced Specification Curve Analysis to improve the transparency of data analysis in the area. Instead of reporting a single analysis pathway, it maps the results of all theoretically defensible analysis pathways made available by the garden of forking paths (Gelman and Loken 2014). While this presents no panacea for researcher degrees of freedom, it is a substantial improvement. Applied across three datasets, this approach revealed a wide range of possible associations between digital technology use and well-being. Researchers could have written thousands of different scientific articles about digital technology use and well-being using the same datasets, showing anything from overarchingly negative to substantially positive relations. In the three datasets examined, the median correlations linking well-being and digital technology use ranged from r = -0.04 to r = -0.01, and the relations became less negative when only specifications with control variables were included. To put these effect sizes into perspective, I created and applied a new method called comparison specifications, which contrasts the association between digital technology use and well-being with the associations of other activities with well-being. I found that wearing glasses was 1.45 times more negatively associated with adolescent well-being than digital technology use, demonstrating that the associations reported were exceedingly small. This conclusion was supported by comparisons with other activities, such as eating potatoes, and attributes, such as height.
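To make the logic of Specification Curve Analysis concrete, the sketch below shows a minimal illustration in Python rather than the actual analysis code used in Chapter 2; the data frame and variable names (e.g. tech_use_retro, wellbeing_sdq) are hypothetical and stand in for whatever measures a given dataset provides.

```python
import itertools
import pandas as pd
import statsmodels.formula.api as smf

def specification_curve(df, predictors, outcomes, control_sets):
    """Fit every combination of predictor, outcome and control set,
    returning the focal coefficient and p-value of each specification."""
    rows = []
    for x, y, controls in itertools.product(predictors, outcomes, control_sets):
        rhs = " + ".join([x, *controls])
        fit = smf.ols(f"{y} ~ {rhs}", data=df).fit()
        rows.append({
            "predictor": x,
            "outcome": y,
            "controls": ", ".join(controls) or "none",
            "coef": fit.params[x],
            "p": fit.pvalues[x],
        })
    return pd.DataFrame(rows)

# Hypothetical usage on a data frame of z-scored variables, so that the
# coefficients are comparable to correlations:
# specs = specification_curve(
#     df,
#     predictors=["tech_use_retro", "tech_use_weekday", "tech_use_weekend"],
#     outcomes=["wellbeing_sdq", "life_satisfaction"],
#     control_sets=[(), ("sex", "maternal_education")],
# )
# print(specs["coef"].median())  # the headline quantity of the curve
#
# A 'comparison specification' re-runs the same curve with a benchmark
# predictor (e.g. wearing glasses) and contrasts the median coefficients.
```

The point of the design is that the full distribution of coefficients, rather than any single hand-picked specification, becomes the unit of inference.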

To provide a more transparent overview of effects I also focused on measurement. Chapter 3 diversified the measurement of digital technology use currently found in large-scale scientific work by including both time-use diary and retrospective measures of the activity. These measures did not correlate to a great degree, giving further weight to concerns that the quality of measurement in the area is low (Scharkow 2016). To examine the relation between digital technology use and well-being, Chapter 3 applied the same Specification Curve Analysis framework as Chapter 2, but also explicitly differentiated between exploratory and confirmatory tests. The study was one of the first to examine digital technology use before bedtime and its association with well-being. The inconclusive evidence found for such a link highlighted how removed some of the public debate is from the current state of scientific evidence.

Adolescent well-being was most negatively correlated with the retrospective digital technology use measures routinely used throughout the research area (r = -0.15 to r = 0.01). As in Chapter 2, the range of correlations neared zero (and even included positive relations in some of the datasets examined), while the most negative median correlation was close to the correlations reported in previous systematic reviews. Yet examining other measures of digital technology use provided different results. Well-being was less strongly correlated with total time spent engaging with digital technologies, a measure calculated by summing the time on digital technologies reported in the time-use diaries, both on weekdays (r = -0.06 to r = -0.01) and weekends (r = 0.00 to r = 0.06).

Chapter 4 conceptually replicated and extended the inquiry pursued in previous chapters by differentiating within- and between-person relations through the analysis of a multi-wave longitudinal dataset. This allowed me to focus on the potential bidirectionality of social media effects. The cross-sectional between-person association between social media use and life satisfaction was r = -0.13, in line with the results of previous reviews, which predominantly synthesised cross-sectional evidence. Such evidence, however, cannot indicate whether a person who increases or decreases their social media use will experience changes in well-being in the long term. Chapter 4 provided evidence that such within-person longitudinal effects are much smaller than their between-person cross-sectional counterparts. An increase in a participant’s social media use in one year, compared to their own average, predicted a small decrease in their life satisfaction one year later (\(\beta\) = -0.05). A decrease in a person’s life satisfaction in one year, compared to their own average, predicted an increase in social media use one year later (\(\beta\) = -0.02). Yet these effects are small, and the inclusion of control variables decreased the effect sizes found. The study also highlighted a possible gender difference, with a larger share of statistically significant specifications for girls than for boys.
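To make the within-/between-person distinction concrete, the equations below sketch a generic random-intercept cross-lagged panel formulation; the notation is illustrative and may not match the exact parameterisation reported in Chapter 4. Here SMU denotes social media use and LS life satisfaction for person i at wave t.

```latex
% Each observed score is decomposed into a wave mean, a stable between-person
% random intercept, and a time-varying within-person deviation; the lagged
% paths are then estimated on the within-person deviations only.
\begin{align*}
\mathrm{SMU}_{it} &= \mu^{\mathrm{SMU}}_{t} + \mathrm{RI}^{\mathrm{SMU}}_{i} + w^{\mathrm{SMU}}_{it}\\
\mathrm{LS}_{it}  &= \mu^{\mathrm{LS}}_{t} + \mathrm{RI}^{\mathrm{LS}}_{i} + w^{\mathrm{LS}}_{it}\\
w^{\mathrm{LS}}_{i,t+1}  &= \beta_{1}\, w^{\mathrm{SMU}}_{it} + \beta_{2}\, w^{\mathrm{LS}}_{it} + \varepsilon_{i,t+1}\\
w^{\mathrm{SMU}}_{i,t+1} &= \beta_{3}\, w^{\mathrm{LS}}_{it} + \beta_{4}\, w^{\mathrm{SMU}}_{it} + \delta_{i,t+1}
\end{align*}
```

In this notation, \(\beta_{1}\) corresponds conceptually to the path from within-person social media use to later life satisfaction (around -0.05 in Chapter 4) and \(\beta_{3}\) to the reverse path (around -0.02), while the between-person association of r = -0.13 lives entirely in the correlation between the two random intercepts.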

5.2 Overestimation

This thesis’ research therefore provided evidence that previous studies possibly overestimated the association between digital technology use and well-being. Firstly, the cross-sectional relations found when averaging the results of Specification Curve Analyses were smaller than those summarised by previous review papers. While review-paper estimates of the association were routinely in the range of r = -0.15 to r = -0.10, my studies found effect sizes of around r = -0.10 to r = -0.05. In other words, the strength of the association I found linking technology use to well-being outcomes using rigorous methodological approaches was roughly one third smaller than the size suggested by the literature. These differences could be due to the more transparent analytical practices used, especially as previous papers have been shown to implement certain analytical decisions that result in a larger-than-average effect of interest (Twenge et al. 2017). Such examples of biased practices also undermine the argument that the effect sizes found using Specification Curve Analysis were only less negative because ‘worse’ analytical practices were used. The results of my thesis therefore question whether systematic reviews and meta-analyses really provide the best-quality evidence in the area: they are only as good as the sum of their parts.

Secondly, Chapter 3 showed that retrospective measures of digital technology use are more negatively correlated with well-being than other measures derived from time-use diaries. The vast majority of current research relies solely on such retrospective self-report measures, and if they do overestimate associations, this would create a systematic bias in the literature. It is necessary to note, however, that time-use diaries have not themselves been validated as a measure of digital technology use. Yet this should not quell concerns about the field’s current over-reliance on a single type of digital technology measure that has routinely been shown to be biased (Wilcockson, Ellis, and Shaw 2018). As retrospective measures might well be overestimating the negative associations between digital technology use and well-being, my thesis highlighted the immediate need for the research field to improve its measurement approaches.

Finally, in Chapter 4, I found that the between-person cross-sectional relations between social media use and life satisfaction were more negative than the within-person longitudinal effects of social media use on life satisfaction and vice versa. Again, as most evidence in the area is based on cross-sectional data, this indicates that past work could have overestimated the link between social media use and well-being outcomes. Furthermore, taking into account the bidirectionality of effects in psychological research is crucial (Kievit et al. 2013), especially as the use of observational data does not allow for a clear differentiation of cause and effect (Rosenbaum 2017). Current cross-sectional results, which can only tell us about between-person relations, are often incorrectly interpreted in a longitudinal framework, misleading much of the literature and the public. With psychology increasingly moving towards random-intercept models that can differentiate within- and between-person effects, research on technologies should adopt such methodological innovations rapidly to improve its inferences.

5.3 Interpretation

Taken as a whole, the results derived from the investigations detailed in this thesis suggest that previous systematic reviews overestimated the negative association between digital technology use and adolescent well-being. The associations found in this thesis point to a smaller negative correlation, which nears zero once high-quality control variables are included or within-person effects are examined. This raises two questions: 1) is this small association obtaining an outsized share of public attention? and 2) is the size of the association proof that we do not have to worry about new technologies like social media affecting teenagers?

The first question is the more difficult of the two to answer. While I used various methods to examine whether a statistically significant effect was practically significant, judging which effect sizes are important is still fraught with difficulty. Comparison specifications (Chapter 2), Smallest Effect Sizes of Interest (Lakens, Scheel, and Isager 2018, Chapter 3) and traditionally used cut-offs (Cohen 1992, Chapter 4) supplied three windows into interpreting the importance of various effects, but they could not provide the whole picture, and the picture itself was not overarchingly clear (Funder and Ozer 2019). While wearing glasses was more negatively associated with well-being than digital technology use in Chapter 2, the same data showed that smoking marijuana was only 1.14 times more negatively associated with well-being than digital technology use. It is inherently difficult to specify which effect sizes are important, and there is still no universally agreed method to approach this issue. If a whole population is affected, should we care even about a tiny effect (Rose 2008)? Policy and behavioural change are costly and time-consuming, so should we not focus on those effects that are most substantial (Ferguson 2009)?

As little work has been done in the area, my approaches are open to criticism and conflicting interpretations, and further research could help provide the necessary guidance to judge the effect sizes found. One method to determine a smallest effect size of interest, for example, might be to examine the smallest difference in the outcome that participants would notice (Bauer-Staeb et al. 2019). Implementing such a method for the median effects found when analysing the MCS dataset in Chapter 3, and assuming that well-being needs to decrease by 0.50 standard deviations for participants to take note, adolescents who reported using technology would need to report 63 hours and 31 minutes more technology use a day in their time-use diaries to perceive such a decrease. This calculation is based on the median of the calculated effect sizes; if we instead consider the specification with the maximum effect size, the time an adolescent would need to spend on technology to experience the relevant decline in well-being decreases to 11 hours and 14 minutes. A change of 0.50 standard deviations is often referenced as the cut-off at which participants become subjectively aware of an effect (Miller 1956; Norman, Sloan, and Wyrwich 2003), yet it has not been directly tested on the questionnaires and populations used in this study. Furthermore, it is still up for debate whether smaller effects, even when not noticeable, are nonetheless important because digital technology is used by a large majority of the population (Rose 2008). Such problems are evident throughout psychology, and multiple efforts are currently being made to help researchers choose smallest effect sizes of interest (Anvari and Lakens 2019), interpret effect sizes (Funder and Ozer 2019) and visualize their effects (Ho et al. 2019).
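The logic of translating a smallest effect size of interest back into hours of technology use can be made explicit. The sketch below is a general derivation rather than a reproduction of the exact MCS figures: r stands for the (median or maximum) correlation between diary-reported hours of use and well-being, b for the corresponding unstandardised slope, and SD for a sample standard deviation.

```latex
\begin{align*}
\Delta \mathrm{WB}_{\min} &= 0.50 \times \mathrm{SD}(\mathrm{WB})
  && \text{smallest subjectively noticeable change}\\
\Delta \mathrm{hours} &= \frac{\Delta \mathrm{WB}_{\min}}{|b|}
  = \frac{0.50 \times \mathrm{SD}(\mathrm{WB})}{|r| \times \mathrm{SD}(\mathrm{WB})/\mathrm{SD}(\mathrm{hours})}
  = \frac{0.50 \times \mathrm{SD}(\mathrm{hours})}{|r|}
  && \text{implied additional daily use}
\end{align*}
```

Because \(\Delta\mathrm{hours}\) scales with \(1/|r|\), the implied figure shrinks from 63 hours and 31 minutes under the median specification to 11 hours and 14 minutes under the most negative one.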

Whether or not the effect sizes found are practically significant, this thesis does not provide evidence that adolescents are unaffected by digital technology or social media use. Instead, it highlights that when all uses of digital technology and all adolescent users are averaged across a large population, there is little to no relation between digital technology use, social media use and long-term well-being outcomes. Yet, as seen in Chapter 4, that does not preclude the possibility that a certain subset of adolescents is affected – either positively or negatively – by digital technology and social media use. The research area therefore needs to invest more time in researching individual differences, highlighting those who are most affected by digital technologies, instead of trying to average across a very diverse pool of adolescents. Preliminary research has highlighted, for example, that the negative effects of digital technology use on psychological well-being might be greatest for those adolescents least privileged in society (Odgers 2018) or for adolescent girls (Twenge et al. 2017). Furthermore, certain uses of technology might be more harmful than others (Burke, Marlow, and Lento 2010), so averaging across all types of digital technology or social media use might hide such effects. This can only be addressed once detailed trace data stored by technology companies become accessible to academics, something discussed further in the next section. This thesis therefore provides the transparent and robust evidence needed to show that the relation between time spent using digital technologies and adolescent well-being is smaller than previously expected, and that the research field needs to redirect its research questions to find more coherent results. In particular, research should start taking into account the complex interplay of risk factors, technology uses and individual differences that makes each adolescent unique.

5.4 Other Limitations

It is important to acknowledge some further limitations of the work presented in this thesis before examining potential next steps. While the most substantial limitations were noted throughout the preceding chapters, I would like to refocus attention on measurement.

How to quantify concepts is an enduring problem for the psychological discipline. This thesis utilized high-quality secondary datasets that enabled me to make more robust inferences without having to find the millions of pounds required to collect the necessary data (Understanding Society 2018). Yet the nature of secondary data meant that some measures were not harmonized across datasets, even though impressive amounts of harmonization were achieved in cases like the time-use diaries analysed in Chapter 3. The digital technology use measures were the most diverse: they queried different technology uses, over different time frames, and in different years, during which technology itself could have developed or changed. Previous reviews have, however, used similar data to make overarching observations about emergent technologies (Dickson et al. 2018), and the aim of this thesis was to do the same. I therefore decided to strike a balance by using the best-quality secondary data available while acknowledging the need for care when generalizing across the variety of technology measures present.

The nature of the measurements available was also a limiting factor in my research. While Chapter 3 introduced more diverse procedures to address the low quality of technology use measures available in the area, this thesis still predominantly relied on self-report questionnaires to measure digital technology use. Such measures are known to have inherent limitations and biases (Andrews et al. 2015; Wilcockson, Ellis, and Shaw 2018). Furthermore, they constrained the research questions that I was able to ask, limiting my investigation to time spent using digital technologies. This puts questions about specific uses of digital technologies beyond the scope of this thesis, even though they represent promising avenues for research. Throughout my doctorate, my colleagues and I have therefore called for more collaboration between researchers, policy makers and technology companies. Such collaborations would need to be preceded by an in-depth discussion with diverse stakeholders about the ethics and transparency of digital technology trace data sharing. Once agreed and accessed, however, such resources would allow academics to answer crucial research questions still untouched by the scientific field.

Lastly, I note throughout this thesis that high-quality control variables are important and lessen the negative associations found between digital technology use and well-being. The inclusion of such control variables allowed me to account for an adolescent’s environment and thereby estimate a more direct relationship between digital technology use and well-being. This matters because environmental factors can affect both variables of interest: research has shown that disadvantaged teenagers use more digital technology (Pew Research Centre 2018) and also score worse on well-being questionnaires (The Children’s Society 2018). We would therefore expect a negative correlation between digital technology use and well-being even without any underlying causal structure linking the two (Odgers 2018). Moving forward, improved experimental and causal modelling approaches will be necessary to further understand this relationship (Pearl and Mackenzie 2018). While supported by evidence, some of the control variables I used can be disputed, for example adolescents’ closeness to the primary caretaker or the time the primary caretaker spends with the adolescent. With little theory to inform control variable inclusion, such variables can be seen either as important third factors that must be controlled for or as outcomes that, if controlled for, will distort results. My current solution is transparency: I report the results both with and without controls and provide the code and materials necessary for researchers to repeat and alter my analyses. My methods and approaches therefore try to address the limitations I have detailed throughout this thesis, but many of those limitations represent problems that the psychological discipline still needs to address more broadly.
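To illustrate why such environmental third factors matter, the short simulation below is a hypothetical toy example, not an analysis of any dataset used in this thesis. It builds a world in which family disadvantage raises technology use and lowers well-being while technology use has no causal effect on well-being at all; the naive correlation is nonetheless clearly negative, and it disappears once the confounder is controlled for.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical confounder: family disadvantage (standardised).
disadvantage = rng.normal(size=n)

# Technology use rises with disadvantage; well-being falls with it.
# Crucially, well-being does NOT depend on technology use here.
tech_use = 0.4 * disadvantage + rng.normal(size=n)
wellbeing = -0.4 * disadvantage + rng.normal(size=n)

# The naive correlation is negative despite the absence of any causal link
# (about -0.14 in expectation).
print(np.corrcoef(tech_use, wellbeing)[0, 1])

# Controlling for the confounder removes the spurious association.
X = sm.add_constant(np.column_stack([tech_use, disadvantage]))
print(sm.OLS(wellbeing, X).fit().params[1])  # coefficient on tech_use, near 0
```

The purely illustrative point is that a correlation of the size reported in previous reviews can arise from confounding alone, which is why the choice and quality of control variables deserves so much attention.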

5.5 Next Steps

While this thesis does much to increase the efficiency and transparency of research into emergent technologies, it does not, in and of itself, provide a solution that fully disrupts the Sisyphean cycle of technology panics. The cycle is fuelled by more than researcher degrees of freedom and publication biases; for this reason, the solution requires more than improved methodological frameworks. Academics have little power to change the governmental or scientific system, so ideas about reforming political and academic incentives are beyond the scope of this thesis. The development of better theories and theoretical approaches could also lessen the impact of the Sisyphean cycle of technology panics. A strong theoretical base would make it possible to integrate research on older technologies into current research on recent technological developments. New technologies would challenge previous theories, requiring them to be revised but not replaced with a whole new theoretical framework. It would be a momentous task, however, to reverse-engineer a complete theoretical framework for an established research area. Furthermore, as research into technological innovations is mainly funded to solve concrete practical problems, it will be difficult to find the necessary political and academic backing for such a long-term theoretical task (Grimes, Anderson, and Bergen 2008).

Another, more viable approach is to streamline and focus current psychological research from the outset, so that work investigating emergent technologies has enough nuance to provide cohesive and replicable outcomes. Such a framework for technology research would ensure that work is produced at the highest possible standard and that the research area learns from the issues that have plagued it in the past. It would allow research to provide the robust raw empirical insights needed to develop meaningful theory and build a more cohesive field, focusing on the depth rather than the breadth of claims (Kaelin Jr 2017). With a more detailed focus, it would also open up opportunities to develop new methodological approaches that promote triangulation, a process that could substantially improve inferences in the area (Munafò and Davey Smith 2018). The creation of such a framework would be an initiative worthy of large-scale collaboration and could in time attract the attention of funders, media organisations and governments.

I have spent the last year of my DPhil reflecting on what form such a framework could take, and I detail a preliminary version below. My UnITED framework for technology research sets out aspects that should be considered in all future research and funding applications investigating emergent technologies. It therefore also indicates what research should be conducted next to address the concerns about digital technologies and social media.

5.5.1 The UnITED Framework of Technology Research

Research on emergent technologies should consider the following factors:

  1. Unique use: Research should not investigate the effects of a technology as a whole (e.g. digital technology use or social media use) but should focus instead on a unique use that sets the specific technology of interest apart. It should also consider the mechanisms necessary for this unique use to affect the outcome of interest. For example, if interested in social media use’s effect on well-being, research should focus on an aspect or feature of social media use that makes it unique (for example, the non-direct nature of communication, Altman and Taylor 1973). Furthermore, if using a theory like the displacement hypothesis to explain a possible mechanism linking social media use and decreased well-being, research needs to consider what activity social media use can be displacing.

  2. Individual: Research should specify the population about which it makes claims when examining emergent technology effects. Furthermore, the work should clearly demarcate whether it is examining within-person or between-person effects. In the process, researchers should highlight how they attempted to control for other individual factors that could affect both the technology use measure and the outcome of interest.

  3. Time frame: In a longitudinal study, the time frame of the effects measured should be specified and its theoretical implications discussed. A study should clearly communicate whether, for example, it examined the changes predicted by social media use one minute after use rather than one year after use, and why such a time scale is theoretically plausible.

  4. Effect size: The study should report the size of the effect or association that it investigated and provide an interpretation of what the size means for stakeholders. The key question is: statistical significance aside, why should I believe this effect is important? While this thesis used frameworks like comparison specifications or smallest effect sizes of interest, there are multiple other approaches to providing such information.

  5. Direction: Cross-sectional studies of emergent technologies should acknowledge that media effects are inherently bidirectional, and that correlational work cannot determine the direction of the effects of interest. Longitudinal work should interpret and highlight both directions of the effects it examines.

If implemented, this framework could improve the quality of research and deepen the understanding of social media and digital technology effects. The UnITED framework sets out areas that need to be addressed to promote the production of robust evidence capable of influencing and informing academia, the public and policy. The framework is not the only component that needs to change to stop the Sisyphean cycle of technology panics, but its implementation would add another dimension to research done in the area. Paired with widespread adoption of the methodological framework presented in this thesis, it has the potential to support a more critical and scientific probing of technology effects in the psychological discipline. This would move psychology away from being an accomplice of technology panics and allow it to become an integral scientific actor that provides debates about emerging technologies with robust and improved scientific evidence.

5.6 Conclusion

Science’s aim is ultimately to find truth. Researchers are expected to do their best to understand complexity and to sort the meaningful signals of nature from the background of statistical noise. All too often, the psychological study of emergent technologies loses sight of this ultimate goal. There are biases and inclinations that all humans share, which make us cautious about new technologies and their potential to change society. There are opportunities to sell books, give talks, and be seen and heard, not just by academia but by society as a whole. There is a need for researchers to sustain themselves in a time of scientific industrialisation and increasing precariousness. The lack of a methodological framework in this research area – created on the basis of a societal problem, not a scientific theory – means there is little support and structure in place. Scientific progress is slow, and the research output produced is routinely conflicting, which makes providing clear advice to policy makers and the public difficult. There is, however, little impetus for the field to reflect on its own methodology and its apparent problems. Yet “the first principle [of science] is that you must not fool yourself – and you are the easiest person to fool” (Feynman 1974).

To not fool ourselves, and to ensure that psychology does not become an accomplice to a never-ending Sisyphean cycle of technology panics, the research area has to recognise the need for change. Psychology’s credibility and reproducibility revolution has established multiple methodological improvements that are applied and developed in this thesis to make research more efficient and transparent. This improved methodological framework can advance the way we investigate emergent technologies. Through their transparency and openness, the studies presented in this thesis have progressed our understanding of adolescents and digital technologies, having already informed and shaped policy proposals (House of Commons Science and Technology Select Committee 2019; Davies et al. 2019). Yet their focus on presenting a new gold-standard approach to the research area could prove even more impactful. If adopted and adapted further, a revised methodological framework for the discipline could empower psychology to steer predictable public concerns about emergent technologies into a more productive and evidence-based future.