The ability to reproduce success is hard to evaluate with common built-in market intelligence tools, because usually you just look at the revenue curve and don't see how old its revenue sources are. But is it possible to measure? And why does it even matter?
Psst — if you don't need all the hassle about methods and limitations, just use this table of contents and jump to the fun part with the results →
Theory
My basic premise is that reproducibility is crucial, and often it's more important than last year's revenue or profit. Imagine two different kinds of companies:
- X published a giga-hit 5 years ago and still earns XX millions/month with it, yet has had nothing remotely as successful since;
- Y released several grossing games in the past few years but earns half of what X does.
So in case X you have a company with much more money; in case Y, one with much less money that probably knows better how to make it. Whom would you choose as a potential investor, partner, or employee? My guess is Y.
But you may disagree :) Actually, there was a long thread on my Facebook (in 🇷🇺), where Denis Shevchenko (CEO of GreenPixel) and Nikita Guk (CEO of Hoopsly) fairly pointed out where my premise wouldn't hold.
Their main idea is that some teams have expertise in creating games and others in operating them: so it's not fair to say that company X from my example is less promising or knows less about making money; it might just be an "operating team" that focuses on maximizing one game's revenue. For such teams, it might be better not to invest in creating new products at all, and instead acquire other teams or products that are already growing and bring them live-ops expertise: new hits would then be larger than average, but also rarer (at least because M&A is often a longer process than in-house production).
And it's a fair argument, but:
- This argument works for self-publishing studios but does not apply to classic publishers, whose business model implies many partners and product diversification. So you can at least compare publishers fairly, and they make up most of my list :)
- The bigger the timeframe we consider, the less likely it is that company X will keep growing on the same single hit. Every product has a point after which its growth turns logarithmic, meaning each consecutive dollar invested returns less and less. For most mobile products, that point lies within the first years after scaling starts. Big hits might earn a lot long after that point has passed, and some even keep growing, but the company can't reinvest its profits with the same efficiency anymore and is forced to think about the next products to create or buy out. So we can assume that while 2 years without new hits doesn't mean much for a "one-hit studio", over a 10-year span it should be a signal that something is going wrong: why is there still nothing?
- And the last point: don't forget that any analysis considers "average" or "expected" scenarios, even though there may be plenty of exceptions and limitations. And on average:
  - by the base conditions, each of Y's (several-hit) products is relatively younger, so there is still a chance that one of them becomes as big as the one X possesses;
  - several successes suggest skill rather than a fluke; it's not the same with one big success, which might be one good decision (like choosing the right reference at the right time) without any reproducible skill behind it;
  - Y has a more diversified portfolio, so it depends less on a single product's performance, which could be ruined by any external market change: new competitors, service dependencies, regulations, etc.
Anyway, I decided to do a small piece of research on reproducibility and compare ~50 companies along this measure: all their hits' scores, release frequency and recency, dynamics and ranking changes over time, etc.
Methodology
My initial goal was just to compare several companies I'm interested in and to keep it as simple as possible, so don't expect a deep scientific approach. It is mostly third-grade arithmetic with some explanations, which I put under toggles to make this text a bit easier to read.
So, the basic formula that I use to score each game in the publisher's account is:
hit_score = iap_revenue / 10m + downloads / 35m

- iap_revenue, downloads = overall revenue and downloads earned by the game to date;
- 10m (millions of dollars), 35m (millions of installs) = target numbers (why those? explanations below).
Really simple, right? It's okay to doubt whether this shows anything even remotely close to reality — and, surprisingly, yes, it does.
(Also, it's not the full formula; see below.)
There is a well-known problem: no tool can evaluate your competitors' ad revenue much better than a random number generator. And I'm not even trying. My goal is much simpler: I'm evaluating just the 'scale' while ignoring the metrics it consists of, like full (ads + IAP) revenue or marketing costs.
We know that games with hundreds of millions of downloads and near-zero IAP revenue (a) make almost 100% of their revenue from ads and (b) should be successful; otherwise there is no way they would get hundreds of millions of downloads.
We also know that games with a larger share of ad revenue have, on average, more installs, because there is a larger audience (e.g. tier-2+ countries) in which they can still be profitable.
So my guess was: maybe if we simply weight iap_revenue and downloads (assuming the simplest linear dependence), the result would at least somehow correlate with the actual profit earned?
Yep, that's where 10m and 35m came from:
So, I gathered data on several games with their real gross profits (approximated to get up-to-date values) and calibrated my target values of revenue and downloads to get the highest correlation coefficient between my hit_score (measured on external data) and gross profit (known as an insider fact or calculated from one).
All in all, I compared known data for 11 games with different revenue structures (both IAP- and ads-based, though none was 100% hypercasual), with gross profits from $100k to ~$50m total.
What I've got:
| Gross profit correlation vs: | hit_score | IAP revenue | downloads |
| --- | --- | --- | --- |
| Full sample | 99.99% | 98.21% | 88.35% |
| Small sample | 94.32% | 79.04% | 50.14% |

Full sample: all 11 games. Small sample: I excluded the top and bottom 5th percentiles, leaving 9 games, so the remaining values deviated much less (which is why the correlation coefficient is smaller for each comparison).
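The calibration step described above can be sketched as a tiny grid search: pick the revenue and download targets that maximize the Pearson correlation between the resulting score and known gross profit. The sample numbers below are invented for illustration (the real 11-game data is private):

```python
# Sketch of the calibration: grid-search the two target values so that
#   score = iap_revenue / rev_target + downloads / dl_target
# correlates best with known gross profit.
import numpy as np

# Made-up calibration sample: (iap_revenue $, downloads, gross_profit $)
sample = np.array([
    (12e6, 30e6, 17e6),
    (0.5e6, 150e6, 35e6),
    (3e6, 8e6, 4e6),
    (40e6, 90e6, 60e6),
    (0.1e6, 5e6, 1e6),
])
iap, dls, profit = sample.T

best = (0.0, None, None)  # (correlation, rev_target, dl_target)
for rev_target in np.linspace(5e6, 20e6, 16):      # $5m..$20m, $1m steps
    for dl_target in np.linspace(20e6, 60e6, 41):  # 20m..60m installs
        score = iap / rev_target + dls / dl_target
        corr = np.corrcoef(score, profit)[0, 1]
        if corr > best[0]:
            best = (corr, rev_target, dl_target)

corr, rev_target, dl_target = best
print(f"best targets: rev={rev_target/1e6:.0f}m, dl={dl_target/1e6:.0f}m, r={corr:.4f}")
```

With real data, you'd also want to hold out a few games to check that the calibrated targets generalize instead of just fitting the sample.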
Anyway, my hit_score turns out to be a better predictor than revenue alone or downloads alone.
Finally, I ranked all games by known gross profit and compared that with the ranking by hit_score: there were two shifts of one position each. Not perfect, but good enough.
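That ranking check is easy to reproduce: given the two orderings, count how many titles moved and by how far. The game names and orders here are placeholders, not the actual sample:

```python
# Compare two rankings of the same games and report, per moved title,
# how many positions it shifted between the two orderings.
by_profit = ["A", "B", "C", "D", "E", "F"]
by_score  = ["A", "C", "B", "D", "E", "F"]  # B and C swapped

shifts = {
    game: abs(by_profit.index(game) - by_score.index(game))
    for game in by_profit
    if by_profit.index(game) != by_score.index(game)
}
print(shifts)  # → {'B': 1, 'C': 1}: two shifts, one position each
```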
Technically, the same weighting could be written as: hit_score = iap_revenue + downloads / 3.5. But then the results would have too many digits to read comfortably (and calibrating just the ratio would be less intuitive).
Honestly, I strongly feel that 1/3.5 somewhat overestimates installs, and that an interval from 1/5 to 1/10 would be fairer, but my numbers can't confirm this, and I can't rely on feelings alone. So, let's proceed:
In all of the above, we get the hit_score based on total current data and compare it with the same totals. But imagine a case where games X and Y have the same total revenue and downloads, but X was released 3 years ago and Y just 6 months ago. Which one is probably the bigger hit?

"Y" is not always the right answer: it might be that Y was a one-day hit that has already passed its peak, while X has had a hockey-stick graph over the last few months. But I have no option except to ignore those rare cases: it's too hard to gather and process the data to take them into account :)

So what I do is simply divide the score by age (years since release), which gives the full formula:

hit_score = (iap_revenue / 10m + downloads / 35m) / age
Examples:

| | GAME 1 | GAME 2 | GAME 3 | GAME 4 |
| --- | --- | --- | --- | --- |
| IAP revenue | $10m | $10m | $20m | $10m |
| Downloads | 35m | 35m | 35m | 17.5m |
| Age (years) | 2 | 4 | 2 | 2 |
| HIT_SCORE | 1.00 | 0.50 | 1.50 | 0.75 |
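The example rows above can be reproduced in a few lines; this is just the scoring described in the post ($10m revenue target, 35m download target, divided by age) wrapped in a function:

```python
# hit_score as described in the post: revenue against a $10m target plus
# downloads against a 35m target, discounted linearly by age in years.
def hit_score(iap_revenue, downloads, age_years):
    return (iap_revenue / 10e6 + downloads / 35e6) / age_years

print(hit_score(10e6, 35e6, 2))    # GAME 1 → 1.0
print(hit_score(10e6, 35e6, 4))    # GAME 2 → 0.5
print(hit_score(20e6, 35e6, 2))    # GAME 3 → 1.5
print(hit_score(10e6, 17.5e6, 2))  # GAME 4 → 0.75
```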
One last thing I'd like to mention: you may notice that age has a really huge impact on the result, and I considered lowering its weight so that adding +1 year to a 1-year-old game wouldn't cut its result in half. But I decided to keep it that way because:
- age really matters in terms of the time when the game is still profitable and might grow (see the example above);
- it also matters as an indicator of the ability to keep up with current market conditions, a kind of "decay of expertise": those who made a hit recently probably know what the market needs right now, while others run a higher risk of losing their sense of product-market fit the older their hit gets.
One thing left is to decide the hit_score threshold above which a game counts as a hit, so that everything else can be excluded from the dashboards and calculations. In my draft files, you can set any value you like. By default, I've chosen the lowest possible: 0.1, which admits examples like these as minimal hits:
| | ~Core | ~HC | ~Hybrid | Just new |
| --- | --- | --- | --- | --- |
| IAP revenue | $2m | $0m | $1m | $500k |
| Downloads | 100k | 7.0m | 3.5m | 1.6m |
| Age (years) | 2 | 2 | 2 | 1 |
| HIT_SCORE | 0.10 | 0.10 | 0.10 | 0.10 |
Why 0.1? Simply because, for me (and for my calibration sample), it marks games that delivered at least some gross profit: $100k+ lifetime.
Yes, it's a very low threshold, and those games can hardly claim the status of 'hit'. But the idea is that we use hit_score to compare the scale of success anyway, so the filter's only purpose is to exclude games that definitely didn't succeed, nothing more.
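Applying the threshold is then just a filter on the same score. A minimal sketch, with invented game names and figures:

```python
# Keep only games whose hit_score clears the 0.1 threshold.
def hit_score(iap_revenue, downloads, age_years):
    return (iap_revenue / 10e6 + downloads / 35e6) / age_years

# (name, iap_revenue $, downloads, age in years) — made-up examples
games = [
    ("core-ish", 2e6, 0.1e6, 2),   # IAP-driven, few installs
    ("hc-ish", 0.0, 7e6, 2),       # ads-driven, zero IAP
    ("flop", 0.05e6, 0.3e6, 3),    # succeeded at nothing
]

hits = [name for name, rev, dls, age in games
        if hit_score(rev, dls, age) >= 0.1]
print(hits)  # → ['core-ish', 'hc-ish']
```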
Disclaimers
I think we are all smart enough to understand the limitations and the "superficial" nature of this approach, but I want to highlight a few major points:
- Already mentioned premise limitations (see above)
- Limitations of the data I'm working with (see e.g. under the "Age and final formula" toggle): given more time and API access, it would be important to include not only total values but also their dynamics in the calculations. Also, I had relatively few data points to calibrate my "model" on (see "Calibrating and checking").
- An important fact about the comparison sample: I didn't aim to analyze the whole market or a specific segment; I just included companies I'm personally interested in, for subjective and biased reasons, and excluded others the same way. So it's not a "market top" or anything like that, just a comparison of some arbitrary companies, nothing more. It might not even be close to a representative sample; thus, any conclusions we come to might be incorrect. But you can build your own "market top" using my drafts (see below), with the same approach or modified for your own purposes :)
- And the last one: please don't take it personally. I can imagine someone seeing my data and thinking: "wow, why is my company ranked so low?! The author must be biased and made this to make us look bad!" Nope, I just didn't care. But if you still feel frustrated, just tell yourself (and your investors) that this analysis was made by a loser who recently failed his own business, so no one should care about it 🤗
Supersonic Studios LTD
Super Awesome Inc.
Green Panda Games
Lessmore UG
Digital Things
Eastside Games
Wazzapps global limited
AZUR GAMES
SayGames Ltd
Homa
CrazyLabs LTD
VOODOO
Scopely
Easybrain
Century Games Pte. Ltd., FunPlus International AG
Zynga
Dream Games, Ltd.
AppQuantum
Kolibri Games
MAD PIXEL GAMES LTD
Playkot LTD
Mamboo Games (Mamboo Games Co, Mamboo Games Entertainment)
Ararat Games
LilithGames
ELECTRONIC ARTS
Supercell
King
Niantic, Inc.
Long Tech Network Limited
Bandai Namco Entertainment Inc.
Big Fish Games, Inc
37GAMES
Plarium Global Ltd
Jam City, Inc.
Melsoft Games Ltd
Nexters (GDEV)
Game Veterans
Tilting Point
MOONEE PUBLISHING LTD
Rollic Games
Outfit7 Limited
Popcore Games
AMANOTES PTE LTD
Kwalee (Ltd)
1SOFT
Freeplay Inc
BoomBit Games
Guru Puzzle Game
Supercent
Habby
Playrix
Results
At last, we've come to the fun and visual part:
It's a list of 43 companies out of 53 analyzed, filtered to those with 3 or more hits over the past 6 years (hit_score ≥ 0.1; see "threshold" above). The list is ranked by the sum of hit_score over all hits released in the last 6 years.
Headers:
- Y6…Y1: years ago as 12-month periods, where Y1 = 0-12 months from now, Y2 = 13-24 months, etc. (there is a today() function, so it's always relative and recalculated);
- HIT PER YEAR: total hits released in each 12-month period;
- HIT SCORE: sum of score for all hits released within this period.
Example: 24 hits over the last 6 years (2+4+4+6+6+2), with 2 of those in the last 12 months. All hits combined give a hit score of 91 over 6 years (the largest within the sample, hence the 1st place), and 20.3 of those 91 points came from hits released in the last 12 months. Now it's easy, right?

So, what interesting things can we measure, and what insights can we get?
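The Y1…Y6 bucketing can be sketched as a small helper that assigns a release date to its 12-month period counted back from "today". This is a month-granularity approximation of the sheet's today()-relative logic, not the actual script, and the dates below are invented:

```python
# Assign a release date to one of the rolling 12-month periods Y1..Y6
# (Y1 = 0-12 months ago, Y2 = 13-24 months ago, ...), or None if the
# release is in the future or older than six years.
import math
from datetime import date

def period_bucket(release, today=None):
    today = today or date.today()  # always relative, like the sheet
    months = (today.year - release.year) * 12 + (today.month - release.month)
    if not 0 <= months <= 72:
        return None
    return f"Y{max(1, math.ceil(months / 12))}"

ref = date(2023, 6, 1)  # fixed "today" so the examples stay stable
print(period_bucket(date(2023, 1, 10), ref))  # → Y1 (5 months ago)
print(period_bucket(date(2021, 3, 1), ref))   # → Y3 (27 months ago)
print(period_bucket(date(2016, 1, 1), ref))   # → None (out of range)
```

Per-period totals (HITS PER YEAR, HIT SCORE) then fall out of grouping each company's hits by this bucket.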
Global Tendencies
First, we can see global tendencies and their impact: the market peaked 3-4 years ago and has dropped over the last two years.
| | Y6 | Y5 | Y4 | Y3 | Y2 | Y1 |
| --- | --- | --- | --- | --- | --- | --- |
| Total Score | 221 | 126 | 181 | 160 | 123 | 89 |
| Total Hits | 80 | 137 | 187 | 225 | 180 | 86 |
| Score per Comp. | 5.1 | 2.9 | 4.2 | 3.7 | 2.9 | 2.1 |
| Hits per Comp. | 1.9 | 3.2 | 4.3 | 5.2 | 4.2 | 2.0 |
| Score per Hit | 3.4 | 1.1 | 1.6 | 0.8 | 1.4 | 1.7 |
Still, this might partly be explained by the most recent hits not having had enough time to grow, even though my formula already strongly discounts age.
Also, keep in mind that almost 20% of these companies were founded within the last 4-5 years, so they had no chance to contribute to the period 5-6 years ago, and yet that period still somehow looks better than the last 12 and 24 months.
Another possible explanation is survivorship bias. The relatively new companies that made the list are those that caught my attention, which means they had at least something interesting in their portfolio; thus, it's reasonable to suspect that any similar calculation would tend to peak in the middle of the timeframe, regardless of the market state. But new companies carry relatively little weight in the total data, plus we see the same picture for the top 20 companies:
| | Y6 | Y5 | Y4 | Y3 | Y2 | Y1 |
| --- | --- | --- | --- | --- | --- | --- |
| Total Score | 216 | 115 | 148 | 134 | 110 | 74 |
| Total Hits | 64 | 109 | 142 | 180 | 136 | 63 |
| Score per Comp. | 9.8 | 5.2 | 6.7 | 6.1 | 5.0 | 3.4 |
| Hits per Comp. | 2.9 | 5.0 | 6.5 | 8.2 | 6.2 | 2.9 |
| Score per Hit | 5.4 | 1.5 | 2.0 | 1.0 | 2.5 | 2.3 |
Also, a funnel of companies per period:
| Companies with: | Y6 | Y5 | Y4 | Y3 | Y2 | Y1 |
| --- | --- | --- | --- | --- | --- | --- |
| Hits ≥ 1 | 25 | 26 | 33 | 37 | 33 | 25 |
| Hits ≥ 3 | 13 | 17 | 19 | 17 | 19 | 11 |
| Score ≥ 1 | 15 | 21 | 25 | 26 | 19 | 14 |
| Score ≥ 3 | 11 | 10 | 18 | 16 | 9 | 7 |
Direct Comparison
The next interesting thing is a comparison of direct competitors, like classic publishers:
A few things surprised me, despite the fact that I've worked in publishing since 2018, for example:
- the surprisingly low positions of old and well-known companies like Tilting Point and Green Panda;
- the fast growth of Supercent and Supersonic (maybe I should've named my publishing company super-something too?); Rollic and PlayDucky are also very impressive;
- the sad fact that almost a third of the sample released ZERO hits in the last 12 months, including giants like Voodoo;
- another (though not clear-cut) confirmation of the hypercasual decline: those who shifted to hybrid games earlier than others have clearly been doing better over the last years (e.g. SayGames, Homa, and partially Supercent, compared to Azur and VOODOO). Though the focus on classic HC somehow doesn't bother MOONEE or Supersonic (no idea about the former, but my guess about Supersonic is that it benefits from its close relations with ad networks).
Dynamics & Ranking
For a better understanding of the dynamics and latest changes, I've added sorting and filtering of companies based only on the last X years' performance, with different minimum-hits requirements:
For more informative results, I've made a ranking comparison: all companies are ranked by total score within the last 6 and the last 3 years ("New Era"), and then we see the changes in their ranks among competitors ("New Era Rank Δ"):
Actually, let's draw a line here: there are still plenty of more or less reliable conclusions we could reach with this data, but if you are really interested in your competitors, it will be more useful for me to share my tools instead of my opinions :)
DIY: gSheet Draft
To work with it, just open it and make yourself a copy (the Apps Script is attached). And yes, it works really slowly: too much data and too many calculations for gSheets :(
How to use it:
CompYoY
Main info by 12-month periods. The filter lets you choose a date range (LastYears), which affects the minimum-hits variable and is used for sorting companies.
Comparison
A tab that draws changes in ranking between two selected date ranges.
PubProfile
A tab with a detailed profile of the selected company: a list of hits ranked by release day, a graph of hit scores over time, and the distribution of hits over score intervals.
How to add more companies
1. Open the publisher's profile in AppMagic and download the .csv.
2. Return to the gSheet and add the downloaded .csv as a new sheet.
3. Go to the "List" tab and click the checkbox at A1 (the final state of the checkbox doesn't matter; you just need to click it after a new sheet/tab is added, to run the script that extracts sheet names).

If you don't have AppMagic: just add a new sheet, name it with the pattern "Publisher Page — <COMPANY NAME>'s applications", fill it with data in the same format as AppMagic's .csv, and proceed from step 3.

If you need to merge data from 2 or more publishers (e.g. when they belong to the same group): add the first company as described above, open its tab, and import the second .csv with the option "append to current sheet".

Wait and ensure that your company appears in the list in columns B:C. That's all! Your company (or companies) is now included in the analysis :)
If you've got to the bottom of this post: wow! I'm really impressed and glad to have you on my social networks! :) Please let me know about yourself in the comments (LinkedIn or FB), so I'll know that all this posting wasn't a wasted effort 💕