Episode 66: Mastering Mobile Growth Marketing – Testing, Pilot Testing, and Cross-Industry Skills with Saikala Sultanova
Join us for another episode of the Mobile Games Playbook as we dive into the world of mobile growth marketing and user acquisition. Guiding us through these crucial aspects of mobile games development is industry expert Saikala Sultanova.
Saikala, accompanied by host Jon Jordan, will break down the importance of structured processes in UA, including effective methods for testing and pilot testing to drive success. We’ll also explore the transferable skills and experiences UA professionals gain when moving from the fast-paced gaming space into non-gaming environments.
Watch the video recording
Episode Transcript
Introduction
Jon Jordan: Hello and welcome to the Mobile Games Playbook! Thanks for tuning in for another episode. This is the podcast all about what makes a great mobile game, what is and isn’t working for mobile game designers and mobile game marketers, as we’ll be talking about today, and all the latest trends.
I’m your host Jon Jordan and joining me today to talk about mobile growth marketing and other things we have Saikala Sultanova. How’s it going?
Saikala Sultanova: Yeah, it’s going well. Thanks for having me!
Jon Jordan: We’ve talked a lot about UA, but it’s interesting in the discussion around growth marketing, which you think is a slightly different thing than UA. We’re going to get into that a little bit. We’ll also talk about the difference between game marketing and non-game marketing.
But to set the scene, I think it would be good to hear about your quite interesting career because it sets up what we’ll be talking about in this podcast. Would you mind spending some time just telling us what you’ve been up to in your career recently?
Saikala Sultanova: Yes, so what I’ve been up to over maybe 14 years or so is helping mobile games and apps grow at places like Huuuge Games, Product Madness, Ace Games, and some early-stage startups as well. I’ve learned how to bring teams together and solve real-world growth challenges.
With every product that goes live, so much preparation needs to be done, whether it’s launching a major IP, building data infrastructure with data teams, or figuring out fresh ways to tackle new markets. This includes new problems that might come up with measurement, creative production, or anything like that.
Saikala’s career journey in games and apps
Jon Jordan: You also have a video series which is related to that. Tell us a little bit about that.
Saikala Sultanova: I watch so many videos and podcasts out there, and I was curious how people actually do it. So I wanted to experiment myself: interviewing interesting people that I meet. Sometimes they’re not very easy to find time with, but they have interesting stories, inspiring stories; they’ve had challenges and failures. They have a journey in their life, both personal and business.
The idea was to connect with those people, give them a platform where they can share their story, and at the end of each episode, I ask them to give their two cents of advice to whoever’s listening. You hear their stories, wins, failures, and lessons learned, and what they could do better next time. Anyway, that’s me pretty much experimenting. I don’t know how to do podcasts.
Jon Jordan: Join the club! Exactly. We just have a good conversation. So, thanks for that introduction. We’ll pull out some of your expertise hopefully over the next 30 minutes or so.
Pilot testing in UA: Where to begin
Jon Jordan: We’re going to start off with something quite specific. We’ve spoken in the podcast about UA a lot in various contexts, but I don’t think we’ve ever spoken about how you do pilot testing and testing in user acquisition.
This is the first time you’re introducing a product and trying to find an audience, and that really sets the scene. When I think about user acquisition, I think about these big campaigns and the more established stuff, but setting the parameters for how you’re going to find an audience for your product is actually one of the important things. If you head up in the wrong direction, you’re going to find it difficult to fulfil your goal.
So when you’re working with teams, when you’re thinking about starting UA campaigns, where do you start? What have you found works best?
Saikala Sultanova: Everyone starts testing, and everyone goes with data first because in performance marketing, which is my strongest background, we believe that data tells you the truth. That’s true, but only partially. Because lots of gaming and app companies launch a product and ask, “Who is going to like my product?”
Some companies start with a hypothesis and do qualitative testing first, maybe interviews or surveys, trying to understand where the product-market fit is, or if their product hypothesis is a good fit for the market and can become a business.
Once that’s done, I always like to start with a clear hypothesis: What am I actually trying to prove or learn? In pilot testing, which is a bit earlier than the rollout testing, we work closely with teams to set up initial campaigns with controlled budgets, focusing on well-defined segments, whether it’s creative variations or different art concepts.
To me, the key milestones are, number one, getting clean data early so that you can see if it’s showing you some indication. Number two is hitting statistical significance, because you might see great results, but if it’s just five conversions, that doesn’t make sense. You need something that’s going to scale as a business.
And then finally, number three is understanding user behaviour, not just from the surface-level metrics, but how it goes a little bit further and how it’s going to help you make decisions.
You’re going to have multiple follow-up questions with the wider team, and it’s important to make sure there’s enough data to ensure you have only one hypothesis or you are answering one question. Once you have the results, share them with the wider team in the company; marketing, product, it doesn’t matter. Everyone should understand and learn, because that way you get everyone on board and can make a joint decision on the next step.
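The statistical-significance milestone Saikala mentions can be made concrete. As a minimal sketch (Python, with illustrative numbers that are not figures from the episode), here is a standard two-proportion z-test comparing the conversion rates of two ad variants, showing why "just five conversions" is not enough to conclude anything:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-sided p-value for the difference
    in conversion rates between two ad variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Five conversions per variant on small audiences (hypothetical numbers):
z, p = two_proportion_z_test(5, 100, 5, 120)
print(f"z = {z:.2f}, p = {p:.3f}")  # tiny samples rarely reach p < 0.05
```

With samples this small, the p-value stays far above any usual significance threshold, which is exactly the "it's just five conversions, that doesn't make sense" problem: the test gives no conclusive answer either way.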
The role of qualitative insight in a data-driven world
Jon Jordan: It’s interesting you mentioned things like user interviews. Obviously, that doesn’t really scale, so do you still think that’s something people should be doing for every product, just to get some anecdotal ideas?
You say the big data is where a lot of the focus is, so maybe people think these days that sort of qualitative talking to potential users doesn’t really stack up in the same way; maybe it’s not so actionable. But do you think it still has a value in getting an idea of how you’re going to be doing marketing?
Saikala Sultanova: The application is twofold. One, in the earlier stages, it’s about product-market fit, and it’s predominantly on the product side to figure out whether their idea could have legs to walk on and become a business.
That’s one. But when it comes to marketing, people often jump straight into getting enough data and the statistical significance that I mentioned earlier. That is the correct way of doing it because it’s something measurable, something you can rely on, and something that you can make conclusions from.
However, one of the things that I’ve learned, and I’ve always been a quant person, is that once you run all the quantitative tests, it gives you an answer: this works, this doesn’t work, yes, no. It never tells you why.
So the qualitative part gives you the answer or can give you an indication of why.
I’ll give you one example. When I was at a previous company, we were preparing to run a TV campaign and to go big on a big screen in the US, for example, you need to do lots of preparation and testing, and it’s very expensive. So, the stakes and expectations from a business perspective are high.
How can you make sure that everything is as optimal as it can be, not just with the channel mix, placements, day parting, or the length of the creative, but how can you make sure that actually people in the US, your target audience, are going to engage with it?
Are they going to hear the voiceover, enjoy your creative, and actually follow up with their response or search by themselves to go in and download your product and play the game?
We did all the testing, we prepared everything. Everything was done. And then there was this gut feeling. Something in our creative just needed another reassurance. We’d done the quant testing like three times, and everything checked out apart from that gut feeling.
The advice was to go and ask people what they really think. And it was eye-opening. Literally, we were asking people who play competitors’ games, our games, and similar things, to watch the video trailer and tell us how they feel, what they think, and what they understand. Is it clear?
And the comments we got were:
“That voiceover was annoying.”
“Yeah, that’s great.”
“Here could be better, here could be better.”
But the voiceover was annoying. And this is something I couldn’t understand straight away because I’m not American, I don’t live there. So, for me, it just sounded good.
Then we realized a couple of other things after listening to people’s qualitative feedback. The sample size was fewer than 10 people, and it was like, “Okay, bingo, now it makes sense.”
So, we went back to the drawing board, adjusted the creative, made it better, tested again, and only then went live.
Crafting effective test plans and avoiding common pitfalls
Jon Jordan: It’s amazing that you don’t have to talk to 200 people. That would be quite a big effort. If you’re saying just start with a few people, maybe not for every product, you might get that feedback quite quickly that everyone’s saying the same thing.
Saikala Sultanova: Yeah, but you see it’s a different application. For a product, you need a bit earlier interviewing, a smaller sample size, and getting that quality information.
But for marketing, it’s about the application scalability. When it’s something like a big TV, or big screen, the stakes are high, and you have to do every kind of testing to make sure that the money you spend is going to be worth it.
Jon Jordan: There is still a place for those sorts of issues, but obviously the main thing is getting the data, getting the quantitative stuff. So when you’re looking at a test plan, what’s a good test plan? What sort of things are you looking for?
Obviously, there are tons of different KPIs that you could be using. What’s the sort of stuff that you think is best practice?
Saikala Sultanova: Well, a typical solid test plan is mostly about clarity. You want to have your test objectives, audience segmentation, creative variants, channel mix, timing, you name it. It’s all the same thing. It’s not rocket science, and we’ve known about it for more than a decade.
Defining metrics is pretty much similar across the board in gaming: CPI, retention (Day 1, 3, 7), ROAS, LTV, and pLTV (predicted LTV) as well.
From my recent experience working with non-gaming companies, they don’t call it CPI; for them, it’s CAC (Customer Acquisition Cost) because it’s not just an app install. It’s a little bit more: registration, and maybe someone onboarding with payments and things like that.
The most important thing is defining the goal that you’re aiming to work towards, and then applying all of that.
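For readers newer to these acronyms, the metrics just listed are simple ratios. A small sketch with made-up numbers (none of these figures come from the episode):

```python
def cpi(spend, installs):
    """Cost Per Install -- gaming's standard acquisition metric."""
    return spend / installs

def cac(spend, customers):
    """Customer Acquisition Cost -- the non-gaming equivalent, counting
    users who complete the deeper funnel (registration, payment setup)."""
    return spend / customers

def roas(revenue, spend):
    """Return On Ad Spend, usually reported per cohort window (D7, D30)."""
    return revenue / spend

# Illustrative campaign: $5,000 spend, 2,000 installs,
# 400 registered customers, $1,500 of D7 revenue.
print(cpi(5000, 2000))   # 2.5  -> $2.50 per install
print(cac(5000, 400))    # 12.5 -> $12.50 per acquired customer
print(roas(1500, 5000))  # 0.3  -> 30% of spend recouped by day 7
```

Note how the same spend yields a much higher CAC than CPI once the goal moves from installs to fully onboarded customers, which is the difference Saikala describes between gaming and non-gaming reporting.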
Also, make it a bit of a habit to go through due diligence. If there are data gaps, attribution logic issues, or anything off in the goals or projections you received from the product team (based on whatever information they found online, which is sometimes outdated), it’s important to run a sanity check and flag these things.
Everyone is an expert in their field, and growth marketing’s responsibility is to make sure that the latest and most accurate information available is communicated to the whole wider team.
Jon Jordan: It’s interesting that a couple of times already you’ve sort of come back to there’s what the growth marketing team are doing or the UA team—they’re getting all this data—but in and of itself, that’s not useful unless it’s feeding back into the wider organisation.
Things have to be joined up. You can’t just have a great UA team who’s doing all the work and coming up with these numbers if they’re not getting support or if they can’t talk to the rest of the product team. Then I guess it’s a bit useless.
Saikala Sultanova: It is useless, but it also makes it much harder for both teams. Hypothetically, the product team thinks the growth team does black magic or whatever with algorithms and bidding, which isn’t true, because there is no black magic.
And then the marketing team doesn’t speak enough to product and engage on expectations, or give enough information on what reality is. This is when misalignment happens.
What I’ve learned from past experience, good or bad, is that it’s really important just to have another conversation and align on expectations, and clearly, explicitly explain what expectations from both sides are, and then align.
Because most of the time they’re not aligned; both teams think different things, and then the data team thinks something else. It’s really important to come to a middle ground because nobody can have their cake and eat it.
It’s going to be a compromise on every single part to come to a joint agreement so that you can shake hands and go live.
Jon Jordan: So clearly another part of doing this is A/B testing, or the more complicated multivariate testing as well. What are your views on that? I guess A/B testing is something everyone has a good idea of.
Is that a bit outdated now, or are these new, more complex ways of testing taking over?
Saikala Sultanova: In growth marketing on advertising, UA, whatever you call it, I’m a fan of simple A/B testing. It gives an answer quick enough, and it’s a yes or no, basically.
It’s also a bit more cost-effective, and then you can move on, because time is the most important currency you have. Especially if you are an early-stage startup, you have your run rate, and time is the most expensive thing you have.
Of course, multivariable testing is possible, and it’s more complex and sophisticated and looks cool, whatever. But you only do that when you have a lot more money to spend on advertising or marketing testing in general, because multivariable testing is more complex, and it just takes a lot more money.
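The point that multivariate testing "just takes a lot more money" follows directly from sample-size math: every extra variant needs its own statistically significant sample. A hedged sketch using the standard two-proportion power-analysis formula, with illustrative conversion rates not taken from the episode:

```python
from math import ceil
from statistics import NormalDist

def users_per_variant(p1, p2, alpha=0.05, power=0.8):
    """Approximate users needed per variant to detect a lift from
    conversion rate p1 to p2 at the given significance and power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a 2% -> 2.5% install-rate lift:
n = users_per_variant(0.02, 0.025)
print(n)  # roughly 13,800 users per variant
# A simple A/B needs 2 * n users; a 3x3 multivariate grid needs 9 * n.
```

The cost scales linearly with the number of variants, so a yes/no A/B test buys the same statistical confidence for a fraction of the spend of a multivariate grid.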
Jon Jordan: In this early phase, having just one question goes back to keeping it simple.
It’s very easy to get overwhelmed with all this data that’s coming in, and a lot of it is that you can change one variable and then that moves the other variable the wrong way, and then you find yourself in a bit of a pickle.
Saikala Sultanova: You know what the classic thing is that happens all the time? I’ve seen that in gaming all the time, and I didn’t know that other industries do the same, and now I’ve learned that they do.
Greediness. “Oh, we’re going to run this test, and we’re going to bring these 20,000 users, and we are going to analyse, and with these 20,000 users, we are going to answer questions A, B, C.”
But one test isn’t going to give you the sample size for all of that. And for testing something like this, you probably need a different mix of network, sample size, and maybe a different country, platform, and all of these kinds of things.
And then people go, “But why can’t we just do that? Because other companies do it.”
It’s like, “Okay, the other company is not doing it the right way.”
Because if you want to have a clear, conclusive answer—does it work? does it not work? and maybe why sometimes if you do a bit more research—then you need to have just one thing.
And it’s really hard to stick to it, because then marketing and product teams will start multiplying questions and costs of finding answers to those.
It becomes a lot, but this is when everyone needs to prioritise which questions are the most important and the most impactful on your product launch or making it a business, because some questions are just not that useful.
Jon Jordan: Coming to the end of starting off this marketing pilot testing of UA, what are the key learnings you think people should be aware of, and the mistakes that you’re seeing the most that people naturally fall into if they’re not thinking ahead?
Saikala Sultanova: One of the most common mistakes is testing too many variables at once. As I just said, this muddles their results, and you may end up with no conclusive answer, then you’ve lost time and have to do it again.
Other mistakes are attribution gaps, skipping pre-launch tech checklists, and launching without QA-ing your advertising flow. Having the product ready and having your campaigns ready doesn’t really mean that one of the campaigns isn’t going to fail because something has happened with configs.
And it’s very, very common. Or sometimes people forget to put the payment method into their ad account, or whatever.
So, you need to make sure all these small checks are done and that you communicate results and caveats clearly to the wider team so that nobody in other teams thinks it’s black magic, because it’s not.
It’s just very tedious, repetitive work.
I mean, thanks to it being 2025, there’s a lot of machine learning, lots of automation, and I’m going to say the AI word as well, which is a bit helpful for some of the mundane work. But still, there are lots of things we have to do ourselves, and doing the last tiny, annoying, small, mundane checks is always helpful.
Growth marketing across gaming vs. non-gaming apps
Jon Jordan: That’s sort of the first bit of the podcast today, looking at that. And now we’re going to sort of move on.
Again, something you mentioned: you’ve spent a lot of time in games, but also advising non-gaming apps, and you’ve worked on non-game apps as well.
Obviously, we generally talk about games in the podcast, but it’s interesting to see how the other side lives and maybe learn a bit from that as well.
So, I guess the core skills that you’ve been working on in the last 14 years are broadly the same in non-gaming apps. I guess the apps themselves obviously have a slightly different use case depending on what the app is.
They can have very different applications, from banking to social apps to travel, whatever.
What can you tell us about that difference?
Saikala Sultanova: I thought there would be a lot of differences, and when I’ve worked with some of the projects, one of them was mental health, and another one was a FinTech here in the UK, I found out the foundation of operational growth marketing—data-driven decision-making, creative production, ad tech integration, reporting, and budget management—translates directly.
The marketing mix with ad networks is pretty much copy-paste.
There are some differences because non-gaming is a lot more advanced with CRM and web-to-app marketing, and web in general. They are a lot more advanced, but everything mobile is pretty much copy-paste.
And I don’t think there is a big difference, no matter the industry. Probably e-commerce would be a lot more different; I haven’t worked with e-commerce yet.
But yeah, gaming pushes you to test fast and learn quickly from results, and deal with campaigns with agility. The difference with those non-gaming apps is that it takes a little bit longer.
You need to look at the conversion ratio, conversion windows that are a little bit longer, whether it’s FinTech, lifestyle, or health. And this is something that is different.
Other than that, pretty much everything is recyclable.
Jon Jordan: And I guess to some degree, over the last 10 years, we’ve had this idea of gamification.
So even health, and particularly health, but even sort of FinTech apps, have been influenced by game-type features, which probably also plays a little bit into the marketing so that they’ve converged in some aspects.
Saikala Sultanova: In the last couple of years, it became almost like a trend with subscription apps to gamify the experience because that brings engagement.
If it is a subscription app, you pay your monthly subscription or whatever it may be.
The conversion to your next purchase isn’t as necessary for you because you don’t sell single packages all the time.
Therefore, one of the most important measurements for you would be engagement: How often is your app used by your customers? Is it helpful? Is it engaging? Do they feel valued as a customer?
Transferable skills and advice for marketers exploring new verticals
Jon Jordan: When you’ve worked on non-gaming apps compared to gaming apps, what have you thought are the opportunities that people with experience in gaming can bring to a new role if they’re trying to do this growth marketing on non-gaming apps?
What really shines?
Saikala Sultanova: You mean what really shines from non-gaming, or…?
Jon Jordan: Yeah. Well, if people have been working in gaming and are trying to say, “I’ve got these transferable skills that I’ve learned in gaming over the last few years, and I can help you with your non-gaming apps,” what things do you think they can really leverage?
Saikala Sultanova: Quite a lot, actually. I was pleasantly surprised when I tried for the first time.
It is quite a lot, especially on user acquisition. The go-to-market plan is very similar; I would say 70% is the same, and 30% is specific to non-gaming.
When it comes to creative production, it’s exactly the same—creative production, testing, and improving iterations and all of that. So it’s exactly the same.
Channels are exactly the same, maybe for non-gaming it’s a little bit wider than for gaming, which is great.
The only difference in creative production and testing is—well, testing is the same, actually—but in creative production for non-gaming, static creatives are important.
It could be as important as 40 to 50% of the total ad spend, and they convert.
That’s pretty good because the way the product is perceived is different, and the level of trust or level of information, level of accuracy, is a lot more important.
What else?
Ad tech, attribution, measurement, anything to do with creative labelling, AI tools—pretty much all of that is transferable 100%.
Non-gaming apps like to have fancy tools as well, so anything to improve their work life and make it more efficient, of course they will utilise that.
Reporting is exactly the same.
The only thing with non-gaming is that they go further down the funnel because CRM is part of the marketing team, and then there is additional product marketing, web-to-app search, and SEO is important.
So that becomes a much wider scope and, therefore, a much longer time and funnel where you need to report for a long period of time, and not just regular ROAS or whatever it may be.
It’s more than that.
But apart from that, I would say probably 70 to 75% of what we learn and do every day in gaming is pretty much transferable.
Jon Jordan: I guess for games, as you say, you have your D1, D7, maybe D30, but games are often disposable in a way that non-gaming apps are not.
They just don’t have that same relationship. A bank, hopefully, has a different, long-term relationship with its users than maybe a game, where people are just cycling through games all the time.
Saikala Sultanova: Right. It depends what kind of game. If it is hyper-casual, the one that has a return on investment target of 30 days, 60 maximum, and retention is terrible after that, then of course, those are throwaway games, and they are lots of fun.
But then there are some games that people play for years and years, and those are the games that may not have advertising in them.
If you’ve already spent a thousand bucks in that game, buying in-app purchase packages and progressing in your level, you want to stick to it.
You might be there for a year or two until something better comes along.
Jon Jordan: What sort of advice would you give to people who are marketing professionals, UA professionals, and have experience in the games industry?
How have you found it working with those two different things?
I mean, I guess everyone always thinks games are the really fun part of the industry, and obviously, to some degree, I agree with that being a gaming podcast.
But most of the world is not gaming, and I’m sure there are lots of fun companies doing non-gaming apps.
So what advice would you give people about that?
Saikala Sultanova: I’d say keep an open mind and lean into what makes each vertical unique, respecting the pace of learning.
Don’t just assume what worked for games will transfer as is. It wouldn’t be 100%, but lots of those skills are transferable, especially if you’re curious.
It’s important to find mentors in the industry on both sides, in gaming and non-gaming, and experiment a lot.
Make friends with your analytics and product teams, making sure that you’re really into something that matters.
Celebrating wins is important, even if it’s a small win. With every test, you learn something. It’s really important.
And don’t be afraid to challenge the playbook because playbooks get written all the time, and they also get outdated all the time.
Therefore, refreshing your knowledge is really, really important.
To your comment, actually, that gaming is more fun. Gaming is more fun! I can say that now.
There are fewer limitations, fewer boundaries, and you can be a bit edgy and do stuff according to brand guidelines, or if there are no brand guidelines, even better.
With FinTech, you can’t, because you’re a regulated business. With some of the health apps, you can’t either; it’s regulated because you need to be careful with any health advice the product communicates to the user.
So gaming is more fun.
You know what else is more fun right now and trending among subscription apps?
The AI apps, AI face apps, and lots of other things like even podcast-making apps.
Those, I think, are on the rise already, and I’ll be looking forward to learning from them because I think very quickly they’re going to be much better than gaming.
Jon Jordan: You got it. But it is like you say, no matter how much experience you have in this, there’s always new stuff coming up.
We always have to be learning.
Obviously, the more experience you have, the better foundation you have, and maybe you can understand what the important or new things are, but it’s never a case of just sitting back and going, “Well, I’ve done this for 10 years, so I’ll just keep doing the same old thing.”
Saikala Sultanova: It’s okay for some, and that’s fine too, because we need those kinds of people as well.
You can’t just have a hundred percent of your team or a hundred percent of your company always wanting to run wild.
You need to have a diversity of different thoughts and people and culture, and it’s important to have a bit of everyone.
Somebody needs to do the mundane thing every day, and you need those people; somebody’s comfortable with that, and somebody’s not.
I think the most important thing is to stay curious and keep learning.
Closing
Jon Jordan: Good advice. Well, thank you so much for your time. That’s really fascinating. We covered a lot of different stuff there. I think there were some good headline things: make sure you’re not just a team on your own; you have to talk to your entire team, not just the UA team; keep things simple, certainly in the early stages; and keep learning, keep curious.
So thank you so much for your time.
Saikala Sultanova: Thank you for having me.
Jon Jordan: No, it’s been a pleasure, and thanks to you for watching, listening, however you are consuming the podcast. Remember, every episode we are delving into what’s going on with mobile games, which is increasingly expansive in the topics that we cover, and having to be curious about it because it’s all changing all the time. So don’t miss out. Do subscribe, and we will see you next time. Bye-bye.


