Exploring the Challenges of Evaluating mHealth Engagement

The promise of digital health interventions is partly rooted in their flexibility and convenience. While traditional, face-to-face modalities (e.g., group sessions, individual counseling) often have a set, prescribed exposure (e.g., 12 weekly sessions), mHealth interventions (e.g., apps, websites, chatbots) are accessible 24/7 and are used within the context of participants’ daily lives.

User engagement with mHealth interventions is widely considered critical to their efficacy. However, the amount of engagement needed to improve health and behavioral outcomes is likely to vary based on the type of intervention and the individual user. For example, some users may require a single in-depth session with an app, while others may need ongoing prompts via notifications or SMS.

Understanding how user engagement relates to intervention outcomes is therefore essential to assessing and optimizing an intervention's effectiveness. Yet most mHealth studies do not address these questions. For example, a recent systematic review of digital interventions for hypertension reported that only four of twenty-one included studies clearly defined engagement, and only three directly evaluated the relationship between engagement and health outcomes.

So why is engagement evaluation neglected in these studies? In our experience, two primary factors drive this neglect: (1) a lack of shared understanding between technology developers and researchers about analytics requirements and (2) the relative complexity of processing engagement metrics data.

Regarding the first point, mHealth research platforms have unique analytics requirements that many developers are unaware of. This unfamiliarity often leads them to propose unsuitable analytics solutions. For example, a web developer will likely recommend Google Analytics (GA): the platform is free, easy to integrate, and ships with a host of dashboards and reporting capabilities. Unfortunately, using GA (or any other third-party analytics platform) is bound to produce missing data. The most severe consequences arise when the platform is used "out of the box" without advanced configuration, because usage data cannot then be linked to individual users, obscuring the effect engagement has had on an individual's behavioral outcomes. However, even if care is taken to connect users to their data, the prevalence of analytics-blocking software means that engagement data will be unavailable or incomplete for many users.
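One way around both problems is to log engagement events to your own backend, keyed to the study's participant IDs, rather than relying on a third-party script that analytics blockers can intercept. Below is a minimal sketch of such a first-party event endpoint; the Flask framework, the `/events` route, and the model and field names are our illustrative assumptions, not a prescribed design.

```python
# Minimal sketch of first-party engagement logging (illustrative only).
# Assumes Flask and Flask-SQLAlchemy; table and field names are hypothetical.
from datetime import datetime, timezone

from flask import Flask, request, jsonify
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///engagement.db"
db = SQLAlchemy(app)

class EngagementEvent(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    participant_id = db.Column(db.String(64), nullable=False, index=True)
    event_type = db.Column(db.String(64), nullable=False)  # e.g., "screen_view"
    event_detail = db.Column(db.String(255))               # e.g., the screen name
    timestamp = db.Column(db.DateTime, nullable=False)

@app.route("/events", methods=["POST"])
def log_event():
    payload = request.get_json()
    event = EngagementEvent(
        participant_id=payload["participant_id"],  # study ID, not an ad-tech cookie
        event_type=payload["event_type"],
        event_detail=payload.get("event_detail"),
        timestamp=datetime.now(timezone.utc),
    )
    db.session.add(event)
    db.session.commit()
    return jsonify({"status": "ok"}), 201

if __name__ == "__main__":
    with app.app_context():
        db.create_all()
    app.run()
```

Because every event row carries the participant ID from the start, linking engagement to individual outcomes later is a simple join rather than a forensic exercise.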

The second factor we see driving the lack of engagement evaluation is the complexity of processing usage data. Usage data for digital platforms are typically not stored in flat tables but in relational databases (e.g., PostgreSQL, MySQL) or document-oriented databases such as MongoDB. Depending on the data architecture strategy, app usage data may sit in highly nested "documents" and "collections." These data structures are often unfamiliar to health researchers and therefore require either enlisting assistance or developing new skills. For example, such data are not easily cleaned in traditional statistical analysis packages (e.g., SPSS). Instead, we highly recommend using a programming language like Python, which has excellent packages for interacting with backend databases and external APIs.
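To make that concrete, here is a brief sketch of pulling nested session documents out of MongoDB and flattening them into a per-participant engagement table. The database and collection names, connection string, and document shape (a `user_id` plus a list of timestamped `events`) are assumptions for illustration.

```python
# Sketch: flatten nested MongoDB usage documents into per-user engagement metrics.
# Names ("mhealth_study", "sessions", "user_id", "events") are hypothetical.
import pandas as pd
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
sessions = client["mhealth_study"]["sessions"]

# Assumed document shape:
# {"user_id": "P001", "started_at": ..., "events": [{"type": "screen_view", "ts": ...}, ...]}
docs = list(sessions.find({}, {"_id": 0}))

# json_normalize expands each nested event into its own row,
# carrying user_id and session start time along as metadata columns.
events = pd.json_normalize(
    docs,
    record_path="events",
    meta=["user_id", "started_at"],
)

# Simple engagement metrics: total events and distinct active days per participant.
events["ts"] = pd.to_datetime(events["ts"])
engagement = events.groupby("user_id").agg(
    n_events=("type", "size"),
    active_days=("ts", lambda s: s.dt.date.nunique()),
)
print(engagement.head())
```

Once the nested structure is flattened into a tidy table like this, the result can be exported and merged with outcome data in whatever statistical package the team prefers.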

In this blog, we've briefly laid out two major problems we have encountered while working with digital health engagement data. The most important safeguard against them is developing precise specifications for your digital platform, including a detailed analytics and data processing plan. Equally vital is establishing clear lines of communication with your developer and a shared understanding of the unique requirements of health research apps.
