Creating a mobile app can be an effective way for marketers and retailers to maximize their reach with the valuable mobile audience. But because mobile apps are such a new tactic, marketers are still trying to figure out how best to handle that advertising asset.
The challenges range from simply getting the app built and launched to gaining downloads through mobile app install campaigns. But audiences that do choose to engage with a brand through its mobile app can be considered core to the company, especially since they went to the trouble of downloading the app, and the data derived from this dedicated app audience can be invaluable.
However, Nancy Hua, CEO of mobile experience management firm Apptimize, previously pointed out to Marketing Dive that many marketers make the mistake of spending too much time trying to expand an app’s user base instead of working to improve the app. That improvement can come from user feedback, analysis of app user data and, perhaps most powerful of all, A/B testing.
Hua said one other valuable aspect of app testing that brands should keep in mind is in-app messaging.
“Messaging is a key part of the app experience and requires smart decisions re: timing and content,” she told Marketing Dive, a sister publication of Retail Dive. “The context in which you show the in-app message to the user can have a dramatic impact on the user experience.”
That context can include:
- Is it when they normally would be browsing?
- Or is it when they'd normally be ready to buy something?
- Are they on the go, or are they in bed?
- What were they doing the last time they saw this view?
"These questions can inform how you connect with your user and deliver relevant content and be the difference between your user hating or loving your app," Hua explained.
In-app messaging testing – a case study
Mobile dating app Paktor, which targets the Asian market, has carried out in-app messaging tests, managing to increase average revenue per user by 17%. Anmol Mohan, head of data analytics for Paktor, told Marketing Dive the company had wanted to test driving subscriptions by sending pop-up messages to users who viewed more than 10 prospective matches.
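In practice, a trigger like the one Mohan described can come down to a simple counter check. The sketch below is illustrative only, with hypothetical names and a once-per-user pop-up assumed; Paktor's actual implementation was not published:

```python
from dataclasses import dataclass

VIEW_THRESHOLD = 10  # pop-up fires after more than 10 match views

@dataclass
class AppUser:
    user_id: str
    matches_viewed: int = 0
    popup_shown: bool = False

def show_subscription_popup(user: AppUser) -> None:
    # Placeholder for the real in-app message; per the article, the content
    # would highlight membership benefits and the Paktor Guarantee Program.
    print(f"pop-up shown to {user.user_id}")

def on_match_viewed(user: AppUser) -> None:
    """Count match views and fire the pop-up once past the threshold."""
    user.matches_viewed += 1
    if user.matches_viewed > VIEW_THRESHOLD and not user.popup_shown:
        user.popup_shown = True  # show at most once per user
        show_subscription_popup(user)
```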
Mohan said there were two main reasons for the test.
The first was to educate users about the benefits of a membership, which can help their profiles stand out, and to use the opportunity to tell them about the recently launched Paktor Guarantee Program, which ensures that users buying three-month and six-month subscriptions get offline dates. The second was to increase subscription purchases by highlighting them for all users; previously, only 60-70% of users were visiting the subscription purchase page.
Mohan added that Paktor regularly uses A/B testing on its product – the development team rolls out a new feature each week on average – as well as on its marketing, with every campaign getting A/B tested.
“For email campaigns, we test imagery, creatives and even subject lines. Our advanced prediction algorithms identify opportunities of various offers that we have in our repository. We regularly update the offer pool and recommend them to suitable users with precision,” Mohan said.
For mobile app testing, he said Paktor’s process involves a series of steps, beginning with identifying an opportunity by looking at the data – in this case, data about various pop-up screens and messages. The UX team then develops a concept that can achieve defined targets based on that data. The development team builds out the creatives generated by the design team, and the client team rolls out the feature with the latest version of the app.
Paktor uses Apptimize to switch the feature on, and the tool handles dividing users into control and target groups. Once the test has gathered a statistically significant amount of data, the results are analyzed. With those results in hand, the product team debuts the refined feature based on the data team’s recommendations.
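Under the hood, splitting users into stable test groups typically comes down to deterministic hashing. The sketch below shows one common approach; it is an assumption about how such tools work in general, not Apptimize's actual implementation:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a test group.

    Hashing user_id together with the experiment name gives each user a
    stable, roughly uniform assignment, so they see the same variant in
    every session without any server-side state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Example: gate the subscription pop-up behind the test
if assign_variant("user-12345", "subscription_popup") == "treatment":
    print("show the pop-up variant")
```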
The value of collecting data in app testing
Part of any test is creating a hypothesis and test protocol.
On these two points, Mohan said the data team handles hypothesis and experiment setup by uncovering areas where the app could be improved. From there the product team describes the new feature to be tested, and the data team defines the procedure. This helps the development team understand how to code features and variations. The data team also provides guidelines and metrics for analyzing the test results.
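As a rough illustration of what such an experiment definition might capture, here is a minimal, hypothetical sketch; the field names and thresholds are assumptions, not Paktor's actual schema:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ExperimentSpec:
    name: str
    hypothesis: str                          # what the data team expects to see
    variants: Tuple[str, ...] = ("control", "treatment")
    primary_metric: str = "arpu"             # decides the feature's fate
    guardrail_metrics: Tuple[str, ...] = ()  # metrics that should not regress
    min_users_per_variant: int = 1000        # when to stop collecting data

popup_test = ExperimentSpec(
    name="subscription_popup",
    hypothesis="A pop-up after 10+ match views lifts subscription purchases",
    guardrail_metrics=("retention_7d", "sessions_per_user"),
)
```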
Once the test has begun, data is collected across the main metric areas – for the in-app messaging test, those were monetization, engagement and retention.
"Based on observations we try to develop and business logic behind the difference or indifference that we are observing. We then try to verify the logic through separate data points. If everything checks out, we decide the fate of that feature," Mohan explained.
Testing does, however, come with challenges. Some hurdles Mohan described include users not perceiving the tested feature in the intended way, and short-term differences uncovered by the test that don't always translate into long-term benefits. The first challenge can skew results, making it difficult to understand the underlying cause of the findings.
For the in-app messaging test Mohan discussed, users were presented with different messaging based on their indicated likes and dislikes. There were also a number of key performance indicators within each of the three main metric areas chosen for the test (a sketch of computing a few of these follows the list):
- Monetization – customer lifetime value, average revenue per user, average revenue per paying user, conversion rate
- Engagement – session duration, number of sessions, swipes, likes, dislikes, attractiveness levels, friendliness levels
- Retention – active users, dormant users, re-activated users, 1-day/3-day/7-day windowed retention, month-over-month (MoM) retention
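The monetization KPIs above can be computed from per-user revenue data. The following is a minimal sketch using common industry definitions, which are assumed here since Paktor's exact formulas were not published:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class UserStats:
    user_id: str
    revenue: float  # total revenue attributed to this user during the test

def monetization_kpis(users: List[UserStats]) -> Dict[str, float]:
    """Compute ARPU, ARPPU and conversion rate from per-user stats."""
    if not users:
        return {"arpu": 0.0, "arppu": 0.0, "conversion_rate": 0.0}
    payers = [u for u in users if u.revenue > 0]
    total = sum(u.revenue for u in users)
    return {
        "arpu": total / len(users),                       # avg revenue per user
        "arppu": total / len(payers) if payers else 0.0,  # avg revenue per paying user
        "conversion_rate": len(payers) / len(users),
    }
```

Comparing these numbers between the control and target groups is what surfaces effects like the revenue lift described below.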
For this test, Paktor experienced a 10.35% increase in subscriptions. And although the pop-up messages resulted in a decrease in a la carte purchases, they still led to a 17% increase in average revenue per user.
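That net gain is possible because the added subscription revenue outweighed the lost a la carte sales. The figures below are invented purely to illustrate the arithmetic; Paktor's actual revenue mix was not disclosed:

```python
# Hypothetical per-user revenue mix (subscription + a la carte dollars):
arpu_before = 1.20 + 0.50   # $1.70 per user before the pop-up test
arpu_after = 1.60 + 0.39    # subscriptions up, a la carte down: $1.99

lift = arpu_after / arpu_before - 1
print(f"ARPU change: {lift:+.1%}")  # +17.1% despite fewer a la carte sales
```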
Apptimize's Hua says tests like the one Paktor carried out with in-app messaging are invaluable given how personal and prominent mobile has become for audiences. "Mobile requires the most testing because mobile is the most complicated and critical experience for users," she said.
To get the most value, Hua recommends starting at the top of the funnel by testing onboarding. And the end goal?
"Hook the user from the very beginning when they're at their highest point of curiosity and excitement about your app while being skeptical about what the value is," she said.