Predicting Board Game Collections

Gyges’s Collection

Author

Phil Henrickson

Published

10/31/24

About

This report details the results of training and evaluating a classification model for predicting the games in a user’s boardgame collection.

Note

To view games predicted by the model, go to Section 5.

Collection

The data in this project comes from BoardGameGeek.com. The data used is at the game level, where an individual observation contains features about a game, such as its publisher, categories, and playing time, among many others.

I train a classification model at the user level to learn the relationship between game features and games that a user owns - what predicts a user’s collection?

username status games
Gyges ever_owned 1496
Gyges own 580
Gyges rated 1425

I evaluate the model’s performance on a training set of historical games via resampling, then validate it on a held-out set of newer releases. I then refit the model on the combined training and validation data and predict upcoming releases in order to find new games that the user is most likely to add to their collection. The year-based splits are summarized below, and a rough sketch of the split follows the table.

username years type own_no own_yes
Gyges -3500-2020 train 24097 421
Gyges 2021-2022 valid 9786 111
Gyges 2023-2028 test 9051 48
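A minimal sketch of how that year-based split could be constructed with dplyr, assuming a games table with a yearpublished column (the names mirror standard BGG fields rather than the exact objects in this report):

library(dplyr)

# assign each game to a train/valid/test window by publication year;
# the cutoffs mirror the table above
game_splits <- games |>
        mutate(type = case_when(
                yearpublished <= 2020 ~ "train",
                yearpublished %in% 2021:2022 ~ "valid",
                TRUE ~ "test"
        ))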

Types of Games

What types of games does the user own? The following plot displays the most frequent publishers, mechanics, designers, artists, and other features that appear in the user’s collection.

Show the code
collection |>
        filter(own == 1) |>
        collection_by_category(
                games = games_raw
        ) |>
plot_collection_by_category() +
        ylab("feature")

The following plot shows the years in which games in the user’s collection were published. This can usually indicate when someone first entered the hobby.

Games in Collection

What games does the user currently have in their collection? The following table can be used to examine games the user owns, along with some helpful information for selecting the right game for a game night!

Use the filters above the table to sort/filter based on information about the game, such as year published, recommended player counts, or playing time.

Show the code
collection |>
        filter(own == 1) |>
        prep_collection_datatable(
                games = games_raw
        ) |>
        filter(!is.na(image)) |>
        collection_datatable()

Modeling

I’ll now examine the predictive models trained on the user’s collection.

For an individual user, I train a predictive model on their collection in order to predict whether a user owns a game. The outcome, in this case, is binary: does the user have a game listed in their collection or not? This is the setting for training a classification model, where the model aims to learn the probability that a user will add a game to their collection based on its observable features.

How does a model learn what a user is likely to own? The training process is a matter of examining historical games and finding patterns that exist between game features (designers, mechanics, playing time, etc) and games in the user’s collection.
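As a rough illustration of how the outcome is constructed (the object and column names here are assumptions, not the exact ones used in this project), each game simply gets a yes/no flag for whether it appears in the user’s collection:

library(dplyr)

# flag each game with a binary outcome: is it in the user's collection?
training_data <- games |>
        mutate(own = factor(
                if_else(game_id %in% collection$game_id, "yes", "no"),
                levels = c("no", "yes")
        ))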

I make use of many potential features for games, the vast majority of which are dummies indicating the presence or absence of things such as a particular publisher, artist, or designer. The “standard” BGG features for every game contain information that is typically listed on the box: its playing time, player counts, and recommended minimum age.

Note

I train models to predict whether a user owns a game based only on information that could be observed about the game at its release: playing time, player count, mechanics, categories, genres, and selected designers, artists, and publishers. I do not make use of BGG community information, such as its average rating, weight, or number of user ratings. This is to ensure the model can predict newly released games without relying on information from the BGG community.
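As a hedged sketch of what that feature specification might look like with the recipes package (the column names, including the community fields dropped here, are assumptions rather than the exact ones used in this report):

library(recipes)

rec <- recipe(own ~ ., data = training_data) |>
        # drop BGG community fields so only release-time information remains
        step_rm(average, averageweight, usersrated) |>
        # dummy variables for categorical features (publishers, designers, mechanics, ...)
        step_dummy(all_nominal_predictors()) |>
        # remove features that never vary
        step_zv(all_predictors()) |>
        # put numeric features on a common scale for the penalized regression
        step_normalize(all_numeric_predictors())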

What Predicts A Collection?

A predictive model gives us more than just predictions. We can also ask, what did the model learn from the data? What predicts the outcome? In the case of predicting a boardgame collection, what did the model find to be predictive of games a user has in their collection?

To answer this, I examine the coefficients from a logistic regression model with ridge regularization (which I will refer to as a penalized logistic regression).

Positive values indicate that a feature increases a user’s probability of owning/rating a game, while negative values indicate a feature decreases the probability. To be precise, the coefficients indicate the effect of a particular feature on the log-odds of a user owning a game.
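As a quick worked example (the coefficient value here is hypothetical), a feature with a coefficient of 0.7 multiplies the odds that the user owns a game by roughly 2, all else equal:

# converting a log-odds coefficient into an odds multiplier
exp(0.7)
#> [1] 2.013753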

The following visualization shows the path of each feature as it enters the model, with highly influential features tending to enter the model early with large positive or negative effects. The dotted line indicates the level of regularization that was selected during tuning.

Show the code
model_glmnet |> 
        pluck("wflow", 1) |>
trace_plot.glmnet(max.overlaps = 30) +
        facet_wrap(~params$username)

Partial Effects

What are the effects of individual features?

Use the buttons below to examine the effects that different types of predictors had in predicting the user’s collection.
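Those effects come straight from the fitted coefficients. As a rough sketch (glmnet_fit and best_penalty are assumed names for the fitted model object and the tuned penalty), the largest effects can be pulled out and ranked like this:

library(dplyr)
library(glmnet)
library(tibble)

# coefficients at the tuned penalty, largest effects first
coefs <- coef(glmnet_fit, s = best_penalty)
tibble(
        feature = rownames(coefs),
        estimate = as.numeric(as.matrix(coefs))
) |>
        filter(feature != "(Intercept)") |>
        arrange(desc(abs(estimate)))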

Assessment

How well did the model do in predicting the user’s collection?

This section contains a variety of visualizations and metrics for assessing the performance of the model(s). If you’re not particularly interested in predictive modeling, skip down further to the predictions from the model.

The following displays the model’s performance on the training set (via resampling), on a validation set, and on a holdout set of upcoming games.

Show the code
metrics |>
        mutate_if(is.numeric, round, 3) |>
        pivot_wider(
                names_from = c(".metric"),
                values_from = c(".estimate")) |>
        gt::gt() |>
        gt::sub_missing() |>
        gt_options()
username wflow_id type .estimator mn_log_loss roc_auc pr_auc
Gyges glmnet resamples binary 0.070 0.870 0.147
Gyges glmnet test binary 0.041 0.829 0.048
Gyges glmnet valid binary 0.053 0.882 0.120

An easy way to visually examine the performance of a classification model is to view a separation plot.

I plot the predicted probabilities from the model for every game (during resampling), sorted from lowest to highest. I then overlay a blue line for any game that the user does own. A good classifier is one that separates the blue (games owned by the user) from the white (games not owned by the user), with most of the blue concentrated at the highest predicted probabilities.
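A minimal version of that plot can be built directly from the predictions with ggplot2 (the .pred_yes and own columns follow tidymodels naming conventions and are assumptions here):

library(dplyr)
library(ggplot2)

preds |>
        filter(type == 'resamples') |>
        # sort games from lowest to highest predicted probability
        arrange(.pred_yes) |>
        mutate(position = row_number()) |>
        ggplot(aes(x = position)) +
        # one thin bar per game, colored by whether the user owns it
        geom_col(aes(y = 1, fill = own), width = 1) +
        # overlay the predicted probability curve
        geom_line(aes(y = .pred_yes)) +
        scale_fill_manual(values = c(no = "white", yes = "blue")) +
        theme_void()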

Show the code
preds |>
        filter(type %in% c('resamples', 'valid')) |>
        plot_separation(outcome = params$outcome)

I can more formally assess how well each model did in resampling by looking at the area under the ROC curve (roc_auc). A perfect model would receive a score of 1, while a model that cannot predict the outcome will default to a score of 0.5. What counts as a good score depends on the setting, but generally anything in the .8 to .9 range is very good, while the .7 to .8 range is perfectly acceptable.
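With the yardstick package this amounts to a single call per data split (again assuming a truth column own and a probability column .pred_yes; event_level may need adjusting depending on the factor level order):

library(dplyr)
library(yardstick)

# area under the ROC curve for each split of the data
preds |>
        group_by(type) |>
        roc_auc(truth = own, .pred_yes, event_level = "second")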

Show the code
preds |>
        nest(data = -c(username, wflow_id, type)) |>
        mutate(roc_curve = map(data, safely( ~ .x |> safe_roc_curve(truth = params$outcome)))) |>
        mutate(result = map(roc_curve, ~ .x |> pluck("result"))) |>
        select(username, wflow_id, type, result) |>
        unnest(result) |>
        plot_roc_curve()

Top Games in Training

What were the model’s top games in the training set?

Show the code
preds |>
        filter(type == 'resamples') |>
        prep_predictions_datatable(
                games = games, 
                outcome = params$outcome
        ) |>
        predictions_datatable(outcome = params$outcome,
                remove_description = T, 
                remove_image = T, 
                pagelength = 15)

Top Games in Validation

What were the model’s top games in the validation set?

Show the code
preds |>
        filter(type %in% c("valid")) |>
        prep_predictions_datatable(
                games = games,
                outcome = params$outcome
        ) |>
        predictions_datatable(
                outcome = params$outcome,
                remove_description = T, 
                remove_image = T, 
                pagelength = 15)

Top Games by Year

The following table displays the model’s top-ranked games for each of the most recent years.

Show the code
preds |>
        filter(type %in% c('resamples', 'valid')) |>
        top_n_preds(
                games = games,
                outcome = params$outcome,
                top_n = 15,
                n_years = 15
        ) |>
        gt_top_n(collection = collection |> prep_collection())
Rank 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022
1 Le Havre A Brief History of the World Labyrinth: The War on Terror, 2001 – ? Ora et Labora Terra Mystica Concordia La Granja Trans-Siberian Railroad Agricola (Revised Edition) Spirit Island The Edge: Dawnfall Tainted Grail: The Fall of Avalon Guild Master Experior Existencial Agricola 15
2 Space Alert Endeavor Earth Reborn Ascending Empires Archipelago BattleCON: Devastation of Indines Roll for the Galaxy Pandemic Legacy: Season 1 One Deck Dungeon This War of Mine: The Board Game Nemesis Western Empires Pandemic Legacy: Season 0 The Great Wall Horizons of Spirit Island
3 Battlestar Galactica: The Board Game Vasco da Gama Dust Tactics A Game of Thrones: The Board Game (Second Edition) Tzolk'in: The Mayan Calendar Disc Duelers Fields of Arle Pixel Tactics Deluxe Exceed Fighting System Folklore: The Affliction Concordia Venus Pax Pamir: Second Edition Etherfields Bloodborne: The Board Game Frosthaven
4 Byzanz Kuhhandel Master Navegador The New Era Among the Stars Gearworld: The Borderlands Evolution Grimslingers Terraforming Mars Pandemic Legacy: Season 2 Cosmic Encounter: 42nd Anniversary Edition Bios: Origins (Second Edition) Eclipse: Second Dawn for the Galaxy Ankh: Gods of Egypt Dire Alliance: Horror
5 Prussian Rails Hansa Teutonica 20th Century Dungeon Petz We Didn't Playtest This: Legacies Eight-Minute Empire: Legends Dogs of War Pixel Tactics 4 Scythe Gloomhaven Orc-lympics Core Space Gloomhaven: Jaws of the Lion Dirge: The Rust Wars ISS Vanguard
6 Ghost Stories We Didn't Playtest This Either Catacombs King of Tokyo Keyflower Glass Road A Fistful of Kung Fu: Hong Kong Movie Wargame Rules Through the Ages: A New Story of Civilization Reign of Cthulhu Gaia Project Newton Aftermath Rush M.D. Blitzkrieg!: World War Two in 20 Minutes Gateway Island
7 Sorry! Sliders Win, Lose, or Banana Cadwallon: City of Thieves Risk Legacy BattleCON: War of Indines Tash-Kalar: Arena of Legends Pixel Tactics 3 Meow The Others 878 Vikings: Invasions of England Lords of Hellas Living Planet: Deluxe Edition Switch & Signal Kemet: Blood and Sand Undaunted: Stalingrad
8 Duck Dealer Imperial 2030 Glen More Dreadfleet Pixel Tactics Pixel Tactics 2 Blue Moon Legends Pixel Tactics 5 The Manhattan Project: Energy Empire One Deck Dungeon: Forest of Shadows Dungeon Alliance Dungeon Universalis Cosmic Encounter Duel Nicaea Libertalia: Winds of Galecrest
9 Okko: Era of the Asagiri Chaos in the Old World Sneaks & Snitches Belfort The Great Zimbabwe 7-Card Slugfest Power Grid Deluxe: Europe/North America BattleCON: Fate of Indines Arkham Horror: The Card Game Lazer Ryderz Tsukuyumi: Full Moon Down Cloudspire Unmatched: Little Red Riding Hood vs. Beowulf Canvas アンドーンテッド:ノルマンディー・プラス (Undaunted: Normandy Plus)
10 Sixis Revolution! 51st State Colonial: Europe's Empires Overseas Android: Netrunner Northern Pacific Doomtown: Reloaded Star Wars: X-Wing Miniatures Game – The Force Awakens Core Set Millennium Blades Zpocalypse 2: Defend the Burbs Mage Knight: Ultimate Edition HATE Gatefall Assassin's Creed: Brotherhood of Venice Squadron Leader
11 Combat Commander: Pacific Eden: Survive the Apocalypse Zombie in My Pocket Dungeon Fighter Ginkgopolis Eldritch Horror Thunderstone Advance: Worlds Collide 7 Wonders Duel Game of Thrones: The Iron Throne Startups Kick-Ass: The Board Game Middara: Unintentional Malum – Act 1 New York Zoo Gutterfall: Bounties Marvel Zombies: Heroes' Resistance
12 Mutants and Death Ray Guns Claustrophobia The Hobbit Lancaster Libertalia Going, Going, GONE! Arcadia Quest The King Is Dead Mansions of Madness: Second Edition Wasteland Express Delivery Service Critical Mass: Patriot vs Iron Curtain Ancient Civilizations of the Inner Sea Dwellings of Eldervale Steamwatchers Nemesis: Lockdown
13 Roll Through the Ages: The Bronze Age Dungeon Lords Warhammer: The Island of Blood Last Will Grimoire Shuffle Caverna: The Cave Farmers Antike II Bottom of the 9th Mechs vs. Minions Fallout Champions of Hara Dungeon Brawler Europe Divided Core Space: First Born One Deck Galaxy
14 Bushido: Der Weg des Kriegers Dungeon Twister 2: Prison Sid Meier's Civilization: The Board Game Puerto Rico Galaxy Trucker: Anniversary Edition This Is Not a Test: Post-Apocalyptic Skirmish Rules The Witcher Adventure Game Empires: Age of Discovery Iberia Flick 'em Up!: Dead of Winter Shadowrun: Crossfire – Prime Runner Edition Blitzkrieg!: World War Two in 20 Minutes BattleCON: Wanderers of Indines Mint Bid Sniper Elite: The Board Game
15 Magnifico Shipyard Firenze The Ares Project Uchronia Francis Drake Galaxy Defenders Piratoons A Feast for Odin First Martians: Adventures on the Red Planet Trapwords EXO: Mankind Reborn Lost Ruins of Arnak Eastern Empires Pisces: A High-Stakes Fishing Competition

Predictions

New and Upcoming Games

What were the model’s top predictions for new and upcoming board game releases?

Show the code
new_preds |>
        filter(type == 'upcoming') |>
        # imposing a minimum threshold to filter out games with no info
        filter(usersrated >= 1) |>
        # removing a boxing game that has every mechanic listed
        filter(game_id != 420629) |>
        prep_predictions_datatable(
                games = games_new,
                outcome = params$outcome
        ) |>
        predictions_datatable(outcome = params$outcome)

Older Games

What were the model’s top predictions for older games?