So we came up with a hybrid variation based on HC practices but without HC itself:
- ideas should be relatively simple (1-4 months from zero to a testable MVP) AND have the potential for high margins;
- scouting should change: from searching for games and evaluating "could it be a hit" to searching for studios and evaluating "could they do what we want". We also lowered our thresholds, trying to maximize the number of studios we worked with and expecting to filter out the incapable ones later.
- in addition, we enriched our value compared to typical HC publishers by providing documentation, balancing, level design, assets, art direction, and so on (though not always).
Our idea mix broke down roughly like this:
- idlers & tycoons (~55%, because we had the strongest expertise there, and market research still showed idlers leading in success-to-release ratio)
- simple casual: puzzles, m2, and the like (~25%; looking back, this effort was not only wasted but also misleading because of the quasi-success fallacy I'll get to later)
- casual-core, hybrid-casual, and others (the remaining ~20%; I still believe in those, but we failed there)
Then we reached out to studios, either directly offering codev or suggesting a test of their latest release with codev later (after the test failed). BTW, our offer wasn't a sweet one; it was more like:
Hey guys, it seems you don't have any successful games yet. Wanna make one? You'll get months of unpaid work under our supervision… but you know what? It might earn a lot, with better chances than all those hypercasual shitgames you're used to making.
Sure, that's a rough description, and of course we were selling more than just an idea; nonetheless, it got a great response! So many studios are tired of hypercasual and starving to create games with higher complexity, better quality, a closer target audience, and real potential to be operated for years.
That's how, within two years, we cooperated with almost 150 studios and had 40+ games in production simultaneously at the peak.
Sure, our business model is not unique; it's just adapted to our circumstances.
For example, Say Games focuses on similarly complex hybrids, which contributes a lot to both the probability and the scale of success; however, unlike us, they rely on a limited number of highly capable partners. Sure, that's much more effective than our dozens of random teams often lacking track records, but there is no way to get a "few highly capable" partners except by filtering through dozens or hundreds of "randoms".
Meanwhile, Azur is slowly shifting from HC codev to hybrids (too slowly, if you ask me), and they also suck almost every studio on the market into partnerships by offering a few months of paid exclusivity to anyone capable of signing the papers (and prolonging with those who deliver results). It's probably the best tactic for aggressive expansion, but unfortunately an expensive one.
Tilting Point's primary model is codev (sometimes more like outsourcing). Still, they target extremely high-cost, high-margin products, often backed by branded IPs, so it's almost a totally different universe.
Ducky provides a lot of value to increase individual studios' chances of success, sometimes practically making their games with others' hands, yet they're stuck in hypercasual for reasons I can't explain.
What We Did Right
So, let's get back to us. This model wasn't some genius move; it was strictly practical and almost obvious given the state of the market, our expertise, and our resources. And of course, it didn't exclude conservative publishing mechanisms like monitoring & testing everything noteworthy.
The decisions I'm proudest of are the tiny ones, mostly about operations.
We created a department of part-time, entry-level, intern-like specialists, initially designed to handle simple scripted tasks (studio scouting, database filling, and so on), but it later became a competitive training ground whose primary purpose was selecting and promoting the best of them: the smart, hard-working, and hungry.
It worked so well that, by the end, most of our producers, game designers, and bizdevs were in-house-cultivated young specialists, each originally hired as a part-timer who, within 2-3 months, became the best by KPIs in a group of 10-15 similar competitors.
As someone who has interviewed hundreds of candidates, worked with hundreds of specialists, and practiced management for more than 10 years, mark my words: our juniors performed better and were more useful than most seasoned professionals with 10 years of experience who have mastered interview-passing, chair-sitting, and report-compiling. And yeah, I get the irony :)
Still, it was a challenge to build an effective process. To optimize production, we grouped several studios around one basic idea with variations and launched the new projects with similar ideas in the same timeframe, so one producer was able to operate dozens of studios:
- it requires expertise in just 1-3 concepts, which is feasible even for an entry-level specialist;
- most of the projects are at similar production stages and require similar actions;
- it's easy for a producer to compare studios' results, and for company heads to compare producers'.
One of the best ideas I'm proud of was to launch the first MVPs in two versions:
- one as close to the primary reference as possible (as much a 'clone' as we could make it),
- another with one new feature: potentially high-impact and easy to implement.
That practice extracted a lot of valuable insight from failed launches. For any other pub/dev, a typical failed launch means just another failure with no actual knowledge gained, and only successes mean something: their launched games are merely tested, never split-tested, so the results are incomparable. For us, instead, a failure was often an insight.
For example, do you know Gold & Goblins? (If not, you're probably not from mobile gamedev.) One of our teams made a pair of frankly dirty MVPs: one copying the mechanics and levels of the original, the other with a slight variation of the core gameplay that delivers a new feeling from it. A split-test showed 25% better conversion from level 1 to level 4 for the test version (there wasn't enough content for D2, but the assumption was that the effect should keep growing roughly linearly).
Both versions failed, but the quality of that knowledge doesn't depend on the quality of the MVP, because the two builds shared the same issues and differed only in the one feature that delivered the 20% uplift. So all that was left to get a hit was to find a team able to build a good-enough clone and then implement that feature (we later found one, but they didn't like the feature, and we ran out of time before finding another).
And if such a test loses to the control, it's valuable again, because now we know for sure what we shouldn't do going forward.
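As a side note on reading such split results: below is a minimal sketch (my own illustration, not our actual tooling; the player counts and conversions are made up) of how one might check whether a test variant's level 1-to-4 conversion uplift over the control clone is statistically meaningful, using a standard two-proportion z-test.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of control (A) vs test (B) MVP variants.

    conv_a / conv_b: players who converted (e.g. reached level 4)
    n_a / n_b: players who entered the funnel (e.g. reached level 1)
    Returns (relative uplift of B over A, z-score, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided normal tail
    return (p_b - p_a) / p_a, z, p_value

# Hypothetical numbers: a 40% -> 50% conversion difference is the same 25% relative uplift
uplift, z, p = two_proportion_z_test(conv_a=400, n_a=1000, conv_b=500, n_b=1000)
print(f"uplift: {uplift:.0%}, z = {z:.2f}, p = {p:.4f}")
```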