TL;DR: A relatively benign communication problem in a virtual TTRPG game sent me down memory lane and into the pits of the Theory of Fairness, and I saw no good reason to spare you that.
Years ago, I taught an introduction to moral philosophy and ethics to aspiring civil servants at a French University. The philosophy department had strongly suggested a standard French approach—Plato, some historical stuff, no math, and an open word salad buffet. Instead, I thought: “Civil service, eh? What about fairness?”
I did my homework, learned some game theory, and the rest is history. Little did I expect that, fifteen-odd years later, I’d use the experience in a TTRPG game, to accommodate players’ preferred playstyles. Now, if the last sentence makes you go “what the f…?”, please bear with me. It’s all going to come together in the end.
Trust me, I got this.
Preliminary: A Fairness Story
The Queer Clockpunk Fantasy campaign started as a real-life Fate game, migrated online—Fari as VTT, Discord for voice chat—and switched to Cortex Prime after a few months. It was primarily a gameplay choice (see that post), but Cortex was also, in some respects, friendlier to the players’ thinking styles.
Then, we welcomed a new player—let’s call them Alice—who quickly found their Cortex bearings. After a session or two, we began to split role-play (RP) between text and voice, which accommodated Alice’s pacing preferences, because everybody agreed Alice had a point about me:
- when I struggle with word choice, I bogart the voice channel, beating around the bush;
- with text, I can polish my phrasing while freeing up the channel and speeding the overall pace.
Then, we welcomed a new player. Let’s call them Bob. They listened in a couple of sessions, then joined in. Their first session was somewhat of a letdown, my bad—poor choices of game mechanics and a poor job at allocating speaking time.
A discussion ensued, with a mix of DMs and public discussion on the game’s Discord server. Bob suggested publicly that the game would run smoother if we switched back to voice-only RP and put mechanics in the back seat. In private, though, Bob made a more substantial suggestion, that I could:
- simply read the situation: “Alice wants text, Bob wants voice. Different player needs.”
- interpret needs based on Bartle’s taxonomy of player types.
(1) is spot-on, so I did my homework on (2), pondered it, canceled the next game, and began drafting this post: the reading is accurate, but Bartle’s taxonomy does not cut it. So instead of needs, let’s consider preferences. Alice’s and Bob’s preferences are not on the same level—but we’ll see they’re still comparable.
- Bob prefers voice: Bob’s much more comfortable with voice RP-ing, which suits their play style better; still, they’re comfortable with one-on-one play-by-post RP-ing—we had a smooth parallel PbP run.
- Alice prefers text: Alice is autistic; they can follow spoken conversation, but cannot express themselves in it as easily. Voice games hurt their RP, and processing other folks’ chatter for too long is straining, limiting how long they can play in one sitting.
Now, and to be fair (because that’s the point, innit), when Bob suggested rolling back text chat and downplaying mechanics, they didn’t know much about the game’s history (The Fine Prints). So, I thought about how to bring that up constructively and remembered my old introductory course.
A Bargaining Model
The game theory for the Alice-Bob problem is non-technical and was introduced by philosopher John Rawls (1921-2002). Below is an exposition (my favorite) by British economist, game theorist, moral philosopher (he’d probably object to that one), and overall awesome bloke, Kenneth Binmore.
If you had one hour to learn about fairness, there’d be worse ways to spend it than watching Binmore’s lecture. But, after the fuss I made about speech processing, I can’t merely drop a YT link. So here’s a transcript I made for Alice when discussing the post.
Here are Adam and Eve, and they’re not completely naked, though you probably can’t tell from where you’re sitting. And, err, they’re discussing—and here I’m taking Rawls’ position—they’re discussing a marriage contract. What would be a fair marriage contract?
Rawls says that if they should decide on a question like that, they should imagine themselves behind a veil of ignorance, and what the veil of ignorance conceals is who they are. So when they go behind the veil of ignorance or imagine themselves behind the veil of ignorance, Adam…
(I’m calling Adam John here, for John von Neumann, and Eve I’m calling Oskar, for Oskar Morgenstern, the two founders of game theory.)
And in this state of ignorance, we have to imagine them, err, we have to imagine them bargaining about what social contract, what marriage contract to operate when they come out from behind the veil of ignorance.
And the idea is that if neither of them knows who is going to be whom, neither will want to agree to a situation where one of the parties is disadvantaged because they themselves might be the disadvantaged party.
And usually, when people hear this for the first time, they are quite enthusiastic. The younger they are, the more enthusiastic they are.
Modulo one major simplification—I’m abstracting away the other players’ preferences, a real-world issue I can ignore here—Rawls’ Veil of Ignorance (VoI) applies to the Alice-Bob problem, once reframed as below.
(VoI) What game set-up would Alice and Bob agree upon if they didn’t know whether they would play as Alice or as Bob?
(VoI) is answerable if Alice and Bob can put themselves in one another’s shoes. It may seem a stretch with someone farther from neurotypicity (Alice) and someone closer to it (Bob). Only, it’s not. All that matters is that Alice and Bob know one another’s preferences, not the reasons for them or “what it is like to be Alice (Bob).”
(If you watch Binmore’s lecture: that’s what Harsanyi’s theory of interpersonal comparison proved possible, earning him the 1994 Nobel Memorial Prize in Economics.)
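To make the (VoI) question concrete, here’s a minimal sketch of the maximin reasoning behind the veil: assign each set-up a payoff per player, then pick the set-up whose worst-off player fares best. The set-ups mirror our table; the payoff numbers are my own illustrative assumptions, not anything measured at the table.

```python
# Rawlsian maximin behind the veil of ignorance: not knowing whether
# you'll play as Alice or Bob, you rank each set-up by its minimum
# payoff. Payoff numbers are illustrative assumptions only.
setups = {
    "voice-only": {"Alice": 1, "Bob": 5},
    "text-only":  {"Alice": 5, "Bob": 3},
    "mixed":      {"Alice": 4, "Bob": 4},
}

def rawlsian_choice(setups):
    # Maximize the minimum payoff across players.
    return max(setups, key=lambda s: min(setups[s].values()))

print(rawlsian_choice(setups))  # -> mixed
```

With these (made-up) numbers, the mixed text-and-voice set-up wins, because voice-only leaves a possible “you” (Alice) far worse off than either alternative.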
What ifs and what abouts
Here I leave the real world and get into hypotheticals. I’ll answer two what if…? and one what about…?
What if Bob doesn’t agree to a text-based game?
Would that be neurotypical ableism? Well, no. If in good faith, Bob didn’t agree to a text-based game under a Veil of Ignorance, we would learn more about Bob’s preferences. In particular:
- that Bob’s preferences for one-on-one and multiplayer RP are different, and expectations about one should not be based on the other;
- that Bob’s preference for voice-based multiplayer RP is strong enough that going against it would degrade Bob’s gaming experience as much as a voice-based game degrades Alice’s.
Again, this assumes good faith. But I always assume good faith from real-world players until proven wrong. And in the real-life case, I have all the evidence I need to back my good-faith assumption.
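The same maximin sketch shows what a good-faith disagreement would reveal. If Bob only mildly prefers voice, a text-leaning set-up survives the veil; if Bob’s aversion to text mirrors Alice’s difficulty with voice, the bargain shifts toward a compromise. All payoff numbers below are hypothetical, chosen purely to illustrate the flip.

```python
# How the strength of Bob's preference changes the maximin outcome.
# All payoff numbers are hypothetical, for illustration only.

def rawlsian_choice(setups):
    # Maximin: pick the set-up whose worst-off player does best.
    return max(setups, key=lambda s: min(setups[s].values()))

# Bob mildly prefers voice: text-only passes the maximin test.
weak = {"voice-only": {"Alice": 1, "Bob": 5},
        "text-only":  {"Alice": 5, "Bob": 4},
        "mixed":      {"Alice": 3, "Bob": 3}}

# Bob's aversion to text mirrors Alice's difficulty with voice:
# the maximin choice shifts to the compromise set-up.
strong = {"voice-only": {"Alice": 1, "Bob": 5},
          "text-only":  {"Alice": 5, "Bob": 1},
          "mixed":      {"Alice": 3, "Bob": 3}}

print(rawlsian_choice(weak))    # -> text-only
print(rawlsian_choice(strong))  # -> mixed
```

The point is not the numbers but the structure: a rejection behind the veil is information about how strong a preference is, not a verdict on whose preference counts.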
What if a gaming group does not have time for all this bargaining?
As long as the group plays in good faith, Binmore and Harsanyi have them covered. Or rather, evolution has: we are genetically wired for Rawls’ bargaining solution. That is, if we start close enough to it, we can find it through trial and error.
Play long enough with a TTRPG group, pay attention to folks’ preferences, and you’ll converge to a Rawlsian solution (massive caveat incoming, though). It’s pretty much what happened in real life (The Fine Prints). Bargaining is mostly for newcomers whose preferences did not shape a game’s history (as with Bob), because they’re stumbling upon a subculture.
What about the caveat?
Well, (sub)cultural norms deeply affect Rawlsian bargaining. Even under the Veil of Ignorance, folks tend to give preferences from their (sub)culture more weight. Some because they think they know better; others without thinking. That’s not unique to preferences: it’s how privilege works in general.
That’s a serious topic worthy of a serious discussion, but it’s an issue even in “light” cases like TTRPGs. It’s also why I did not look too deeply into Bartle’s taxonomy—or any other existing model of player needs or playstyles—but I’ll leave that for The Fine Prints.
The Fine Prints
A history of adjustments. Before Alice joined, we had switched from Fate to Cortex to ease storytelling, because of how we had played Fate (not what Fate is designed for). Alice adapted to our Cortex by developing a particular playstyle. If asked about their dice pool in voice chat, they’d risk losing their train of thought mid-building and making suboptimal choices. So instead, they used trait selections as anchors, spontaneously developing a writing technique I had formalized in my first Fate hack and described to QCF game players, but neither shared explicitly nor enforced in the game. So, Alice’s playstyle reinforced implicit norms. The discussion with Bob made those norms explicit—and from there, we had to decide which ones we’d stick to. The actual process was more gradual than the (VoI) reframing suggests.
Made-up Norms. Bartle proposed his taxonomy in the mid-1990s for online MUDs (Multi-User Dungeons). It’s heavy on analysis and light on data, and Bartle was skeptical about any application beyond MUDs, even to MMORPGs. So applying it to TTRPGs is a stretch—and the recipe for a self-fulfilling prophecy. The short of it: test enough players on it, let it go viral, and before you know it, everybody expects everybody else to define themselves in those terms. You end up with a subculture built on dubious norms (remember Bartle’s own caveat). That’s how questionable intuitions become groupthink—and how Trolley Problems, rather than the Veil of Ignorance, end up on Netflix’s The Good Place. And before you get too far down that rabbit hole, there’s a 15 sec. solution on Twitter.
Wrapping Up: Rawls, the dice
Fifteen-odd years ago, I picked Rawls’ bargaining solution as the second-most important topic for my introductory course—the first was Hume’s Guillotine. I didn’t expect that choice to send ripples through my TTRPG life. Then again, it was my first brush with game theory, and I didn’t expect to get that much into it in general, either.
Fifteen-odd years later, a relatively benign communication problem made me realize two things. First, I solved the hardest theoretical problem I ever tackled with essentially the same reasoning as Rawls’ bargaining solution. Second, being fair at a TTRPG table is a much harder problem.
So I canceled a game session to put my thoughts in order and wrote this post instead of playing. Now it’s time to head back to that virtual table and solve that problem. At least now it’s well-posed—and a well-posed problem is usually half-solved.
And that will be all for today, folks.