Wi4impact #4 – Approaches and challenges in measuring knowledge transfer by linking data

Featured image: colorful speech bubbles with icons representing knowledge transfer, research, blogs, and science communication.

The final Wi4impact blog post reflects on approaches and challenges in measuring research-based knowledge transfer by linking web and survey data. Building on an evaluation framework for science communication, we propose an approach for assessing the effectiveness of digital knowledge transfer, using German academic science blogs and podcasts as a case. We also highlight key challenges that must be considered when analyzing these forms of knowledge transfer, including limited data access, issues of comparability and data quality, and the rapidly changing digital data environment.


DOI: 10.34879/gesisblog.2025.102


Political and institutional demands for knowledge transfer to enhance the societal impact of research increasingly focus on actively engaging public audiences and other stakeholders.1 Digital media play a key role in this development, as they enable dynamic and participatory modes of science communication. In the research project “Wi4impact”, we analyzed science blogs and podcasts produced by German academic actors, as these have become established as digital knowledge transfer media over the last 15–20 years (see also blog post #2 of this series). We aimed to assess how effectively these digital formats contribute to participatory communication. This blog post presents our approach to linking data for measuring academic knowledge transfer and discusses the challenges that arose along the way, aiming to encourage and assist researchers interested in pursuing similar approaches.

Measuring knowledge transfer

To address the demand for effective knowledge transfer and to invest in evidence-based activities, it is essential to measure the effectiveness and impact of such activities. While traditional approaches rely on clearly measurable indicators, the current paradigm shift in knowledge transfer complicates assessing the effects of research on society and stakeholders (see also blog post #1 of this series). In the context of science communication, this becomes even more complex, as the changes resulting from such activities often manifest at a temporal and logistical distance from the activities themselves.2 While most evaluation systems leave communicative transfer activities largely unconsidered, scholars in communication science have begun to address this gap.

Building on an evaluation model for science communication3 4, we systematically assessed the effectiveness of German science blogs and podcasts and identified indicators that influence this effectiveness. The model comprises five stages that build linearly on one another. The first stage, input, refers to the resources invested in developing and producing the formats. The second stage, internal output, refers to the activities that are published. The remaining three stages capture different forms of effectiveness: external output refers to the target groups that are reached; outcome refers to short-term effects, in our case engagement on social media; and impact refers to long-term effects, which we understand as sustained forms of social media engagement. Data for these analyses were derived from different stages of the research project, including a manual web search, two online surveys, and social media data collection. Linking these different data sources promises a comprehensive evaluation and a nuanced measurement of the effectiveness of science blogs and podcasts as a means of research-based knowledge transfer.5
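
To make the mapping between evaluation stages, data sources, and indicators concrete, the following Python sketch shows one way to organize it. The indicator labels and source names are illustrative assumptions, not the project's actual codebook.

```python
# Illustrative only: the five evaluation stages, the data source each draws on,
# and example indicators. Indicator names are assumptions, not the study's codebook.
EVALUATION_STAGES = {
    "input":           {"source": "producer survey", "indicators": ["staff", "funding", "time budget"]},
    "internal_output": {"source": "web search",      "indicators": ["posts/episodes published"]},
    "external_output": {"source": "reach survey",    "indicators": ["visits (blogs)", "plays (podcasts)"]},
    "outcome":         {"source": "platform data",   "indicators": ["likes", "shares", "comments"]},
    "impact":          {"source": "platform data",   "indicators": ["sustained engagement (e.g., link shares over time)"]},
}

# Dicts preserve insertion order in Python 3.7+, so the stages print in model order.
for stage, spec in EVALUATION_STAGES.items():
    print(f"{stage:16s} <- {spec['source']:16s} {', '.join(spec['indicators'])}")
```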

Data collection and linking approaches

Applying a multimethod approach, we collected and linked several datasets. First, we identified relevant science communication activities through a manual web-based search, which resulted in an initial sample of 669 active blogs/podcasts affiliated with public higher education and non-university research institutions in Germany. We also manually collected publicly available information on start dates, disciplinary focus, and contact details. Each blog/podcast was assigned a unique ID to enable data linking. Subsequently, producers of 268 blogs/podcasts (response rate: 39.9%) participated in an online survey on production conditions, resources, and objectives (for detailed results, see blog post #3). Individual survey links ensured that responses could be matched to the corresponding blog/podcast. To assess reach metrics, we conducted a second survey, completed by 60 producers (response rate: 35.9%). They reported the number of visits for blogs and the number of plays for podcasts, based on the analytics tool of their choice, such as Jetpack, Matomo, or Google Analytics. Producers could provide these data either by sharing screenshots or export files (data donation) or by manually entering the reach metrics.
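
As a minimal sketch of the linking step, the following pandas code assumes hypothetical CSV exports of the three datasets, each keyed by the unique blog/podcast ID; file and column names are placeholders, not our actual data files.

```python
import pandas as pd

# Hypothetical exports; file and column names are assumptions for illustration.
formats = pd.read_csv("web_search_sample.csv")   # format_id, start_date, discipline, ...
survey1 = pd.read_csv("survey_production.csv")   # format_id, resources, objectives, ...
survey2 = pd.read_csv("survey_reach.csv")        # format_id, reach_metric, analytics_tool, ...

# Left joins keep all 669 identified formats; survey variables remain missing (NaN)
# for non-respondents, preserving the uneven coverage for later analysis.
linked = (
    formats
    .merge(survey1, on="format_id", how="left", validate="one_to_one")
    .merge(survey2, on="format_id", how="left", validate="one_to_one")
)
print(linked.shape)
```

Left joins (rather than inner joins) keep non-respondents in the dataset, which matters later when auditing how complete the linked data actually are.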

To complement the reach data, we explored social media engagement as another indicator of effectiveness. We manually collected follower numbers for dedicated blog/podcast accounts on Facebook, Instagram, YouTube, and LinkedIn. For engagement rates (likes, shares, comments), we automatically collected posts by blog/podcast accounts and by affiliated institutional accounts (e.g., universities) from Facebook and Instagram (using CrowdTangle), as well as X (using Zeeschuimer). We also used these platforms and tools to track social media accounts that shared links to the blogs/podcasts, indicating sustained engagement. In addition, we collected podcast-specific data (e.g., the number and length of episodes, ratings) using the Spotify API and manual searches.
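
For the podcast-specific metrics, a hedged sketch of what such a collection via the Spotify Web API can look like is shown below (client-credentials flow; the credentials and show ID are placeholders, and star ratings are, to our knowledge, not exposed by the public API, hence the complementary manual searches):

```python
import requests

CLIENT_ID = "YOUR_CLIENT_ID"          # placeholder: requires a registered Spotify app
CLIENT_SECRET = "YOUR_CLIENT_SECRET"  # placeholder
SHOW_ID = "SPOTIFY_SHOW_ID"           # placeholder: the podcast's Spotify show ID

# Client-credentials flow: exchange app credentials for a bearer token.
token = requests.post(
    "https://accounts.spotify.com/api/token",
    data={"grant_type": "client_credentials"},
    auth=(CLIENT_ID, CLIENT_SECRET),
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# The show object carries the total episode count; the episodes endpoint is
# paginated (here only the first page of up to 50 episodes is fetched).
show = requests.get(
    f"https://api.spotify.com/v1/shows/{SHOW_ID}",
    params={"market": "DE"}, headers=headers,
).json()
episodes = requests.get(
    f"https://api.spotify.com/v1/shows/{SHOW_ID}/episodes",
    params={"market": "DE", "limit": 50}, headers=headers,
).json()["items"]

durations_min = [e["duration_ms"] / 60000 for e in episodes]
print(show["name"], show["total_episodes"], round(sum(durations_min) / len(durations_min), 1))
```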

Challenges in collecting and linking data

Our approach revealed several challenges inherent to this kind of data collection and linking. Besides a time-consuming initial web search, which central overview pages or open repositories could have simplified, we encountered considerable difficulties regarding limited data access, comparability and data quality, and the rapidly changing digital data environment. To begin with, reach data were not always readily available, particularly for blogs, as many producers had not installed analytics tools. Moreover, these tools differ significantly in terminology and functionality, requiring clear definitions and instructions. As many participants did not provide all requested variables, we focused on the one variable that was shared most frequently. Most participants opted for manual entry of reach metrics rather than data donation, preventing independent verification and limiting data quality. Finally, the response rate was rather low, potentially due to the effort required.
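
One way to handle the diverging terminology of analytics tools is to map each tool's closest equivalent of the chosen variable onto a single harmonized reach measure. The sketch below uses assumed tool-to-metric mappings for illustration; the actual equivalences must be checked against each tool's documentation.

```python
import pandas as pd

# Assumed mapping from each analytics tool to its closest equivalent of "visits";
# these correspondences are illustrative, not verified definitions.
METRIC_MAP = {
    "Jetpack": "views",
    "Matomo": "visits",
    "Google Analytics": "sessions",
}

def common_reach(row: pd.Series):
    """Return the harmonized reach value for a row, or None if unavailable."""
    metric = METRIC_MAP.get(row["analytics_tool"])
    return row.get(metric) if metric else None

# Hypothetical survey rows: each producer reported in their own tool's terms.
df = pd.DataFrame([
    {"analytics_tool": "Matomo", "visits": 1200.0, "views": None, "sessions": None},
    {"analytics_tool": "Jetpack", "visits": None, "views": 830.0, "sessions": None},
])
df["reach"] = df.apply(common_reach, axis=1)
print(df[["analytics_tool", "reach"]])
```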

Collecting social media data posed additional difficulties. Our initial plan to analyze network building on Twitter had to be revised after the Twitter API was shut down in April 2023 and alternative attempts to access data were unsuccessful: archived datasets (TweetsKB, twarchiv by DNB) did not cover sufficient data for our case, and an application for research access to public X data was denied. We were able to collect alternative social media engagement data, but follower counts and interaction rates were available only for a few cases, as many formats lack dedicated social media accounts. Additional engagement rates from institutional accounts and data on accounts sharing links were also limited to subsamples. Moreover, many communicators have adapted their social media presence in response to the changing online landscape, often using multiple platforms, which makes it difficult to comprehensively access all relevant data. Similarly, podcast data were difficult to capture due to varying platform use: while many podcasts are available on Spotify or other streaming services, some are published exclusively on their producers' own websites. Consequently, the linked dataset contained uneven case numbers and only a limited number of formats with complete information.
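
A simple audit of the linked dataset makes this unevenness explicit. The sketch below builds on the `linked` DataFrame from the merging sketch above; the column names are placeholders for the variables from each data source.

```python
# Assumes the "linked" DataFrame from the earlier merging sketch;
# column names are hypothetical stand-ins for each data source's variables.
sources = ["resources", "reach_metric", "follower_count", "engagement_rate"]

coverage = linked[sources].notna().sum()        # non-missing cases per source
complete = linked[sources].notna().all(axis=1)  # formats with data from every source

print(coverage)
print(f"complete cases: {int(complete.sum())} of {len(linked)}")
```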

Our approach highlights the potential of linking web and survey data to capture digital knowledge transfer, while also revealing key challenges. To address these, we suggest the following considerations for researchers pursuing similar approaches:

  • Plan for high effort in identifying relevant transfer activities and associated communication channels (e.g., social media accounts).
  • Be aware of heterogeneous and evolving online platform use, which complicates comparability and requires flexible planning.
  • Consider data availability, both from content producers and platform providers, as access can be limited or change over time.
  • Remain flexible to changes in data access and actively monitor emerging solutions from the research community.
  • Ensure data quality and select robust analytical methods to handle incomplete or uneven datasets.
  • Engage with open science practices, for example, by using open-source tools for accessing digital data or sharing datasets on identified transfer activities, to facilitate research efforts and to improve transparency and reproducibility.

References

  1. Weingart, P., Joubert, M., & Connoway, K. (2021). Public engagement with science—Origins, motives and impact in academic literature and science policy. PLOS ONE, 16(7), e0254201. https://doi.org/10.1371/journal.pone.0254201
  2. Reale, E., Avramov, D., Canhial, K., Donovan, C., Flecha, R., Holm, P., Larkin, C., Lepori, B., Mosoni-Fried, J., Oliver, E., Primeri, E., Puigvert, L., Scharnhorst, A., Schubert, A., Soler, M., Soós, S., Sordé, T., Travis, C., & Van Horik, R. (2018). A review of literature on evaluating the scientific, social and political impact of social sciences and humanities research. Research Evaluation, 27(4), 298–308. https://doi.org/10.1093/reseval/rvx025
  3. Sörensen, I., Volk, S. C., Fürst, S., Vogler, D., & Schäfer, M. (2024). “It’s Not so Easy to Measure Impact”: A Qualitative Analysis of How Universities Measure and Evaluate Their Communication. International Journal of Strategic Communication, 18, 93–114. https://doi.org/10.1080/1553118X.2024.2317771
  4. Volk, S. C. (2024). Assessing the Outputs, Outcomes, and Impacts of Science Communication: A Quantitative Content Analysis of 128 Science Communication Projects. Science Communication. Advance online publication. https://doi.org/10.1177/10755470241253858
  5. Stier, S., Breuer, J., Siegers, P., & Thorson, K. (2020). Integrating Survey Data and Digital Trace Data: Key Issues in Developing an Emerging Field. Social Science Computer Review, 38(5), 503–516. https://doi.org/10.1177/0894439319843669
