I've been brainstorming how to make the spell system more customizable, and one of the ideas I had was to explore using gestures to cast spells.
So let's see how that went down!
Why Explore Gestures
The current spell system design (see also first implementation) is based on pressing 2 element-buttons to select a spell from a lookup table.
I've been noodling on what a system closer to Magicka's "combine as many elements as you like¹ to cast a spell" would look like - casting spells you've designed could be more fun, and could possibly take better advantage of the game's atom physics.
Reason One: Free Up Buttons
This game is supposed to be fast-paced, so ideally spell-casting should be "spontaneous": you should never be stuck in a menu waiting for John - your mate who over-optimizes everything - to tweak his spell loadout for the Nth time.
Without menus, any combining of spell components requires a good control-scheme to choose which components you want to combine - for example, how does a player input "mix Fire and Water and Force and then cast it as a Ranged Attack in that direction"?
It's a surprisingly hard design problem to solve for both keyboard and gamepad at once. And unfortunately I can't just copy Magicka's controls because this game is not top-down perspective.²
So maybe a good gesture system could help the player input how to cast their spell or which elements to include - and thereby free up some buttons.
Reason Two: Support Unique Spells
There are (or rather, will be!) necessary "unique" spells that don't fit neatly into a gamepad-friendly procedural spellcasting system, such as Revive Your Friend.
Magicka has a separate "Magicks" mechanism for these, but viewing available Magicks is very fiddly.³
Maybe a gesture mechanism would be a more fun way to allow players to cast those unique spells?
Dollar Q
There is a family of easy-to-implement-but-surprisingly-decent gesture recognition algorithms called The $-family, so I started there.
Of those, $Q (interactive demo) seemed to be the newest, fastest, shiniest and most heroic, so I implemented it⁴:
You'll notice that most of the gestures are more circular than square - that's so that they can also be drawn on a gamepad.
The accuracy of drawing gestures on gamepads was initially terrible because I had bad gamepad-stick-deadzone handling,⁵ but after a fix there it worked surprisingly well:
But, after experimenting a bit I realized that I should have read the fine print in the research paper...
Turns out $Q is "stroke direction invariant", which is fancy speak for "it can't tell the difference between a pen moving left and a pen moving right".
I was hoping to do things like "draw a circle to shield around yourself", "draw down to cast on yourself", "draw left/right to cast rightwards", etc - but $Q can't differentiate right from left or up from down.⁶
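To make that concrete: $Q compares gestures as unordered point clouds. Here's a minimal sketch of a greedy cloud-distance in that spirit - my simplification, not the real $Q, which adds resampling, weighting, lookup tables and early abandoning. Because the points are treated as a set, reversing a stroke produces the exact same cloud, and so the same distance:

```rust
#[derive(Clone, Copy)]
struct Point {
    x: f32,
    y: f32,
}

fn dist(a: Point, b: Point) -> f32 {
    ((a.x - b.x).powi(2) + (a.y - b.y).powi(2)).sqrt()
}

// Greedy cloud distance: match each template point to its nearest
// not-yet-used candidate point and sum the distances. The points are
// an unordered set, so a left-to-right stroke and its right-to-left
// reversal score identically against any template.
fn cloud_distance(template: &[Point], candidate: &[Point]) -> f32 {
    let mut used = vec![false; candidate.len()];
    let mut total = 0.0;
    for &t in template {
        let mut best = None;
        let mut best_d = f32::INFINITY;
        for (i, &c) in candidate.iter().enumerate() {
            if !used[i] && dist(t, c) < best_d {
                best_d = dist(t, c);
                best = Some(i);
            }
        }
        if let Some(i) = best {
            used[i] = true;
            total += best_d;
        }
    }
    total
}
```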
Dollar One
So I turned to $1 (interactive demo), the very first $-family algorithm,⁷ which specifically does support remembering stroke direction.
This was gonna be better, right?
Unfortunately, while $1 is stroke direction variant, it is also "rotation invariant", meaning it will rotate your gestures until they best match a predefined gesture - so you still can't distinguish "stroke to the left" from "stroke to the right". And it stretches your gestures to a square shape, so it is really unreliable for non-square gestures.⁸
Combine those three properties, and the result is that $1 matches some totally unexpected gestures, as you saw in the demo above.
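To illustrate where the rotation invariance comes from: part of $1's preprocessing rotates each gesture so its "indicative angle" (from the centroid to the first point) is zero. Here's a hedged sketch of just that step - my simplification, and the real $1 also resamples, scales to a square, translates to the origin, and golden-section-searches over extra rotations when matching:

```rust
#[derive(Clone, Copy, Debug)]
struct Pt {
    x: f32,
    y: f32,
}

fn centroid(pts: &[Pt]) -> Pt {
    let n = pts.len() as f32;
    Pt {
        x: pts.iter().map(|p| p.x).sum::<f32>() / n,
        y: pts.iter().map(|p| p.y).sum::<f32>() / n,
    }
}

// Rotate the gesture about its centroid so that the angle from the
// centroid to the FIRST point becomes zero. A pure left stroke and a
// pure right stroke normalize to the same shape after this step -
// which is exactly why absolute direction can't be recovered.
fn rotate_to_zero(pts: &[Pt]) -> Vec<Pt> {
    let c = centroid(pts);
    let theta = (pts[0].y - c.y).atan2(pts[0].x - c.x);
    let (sin, cos) = theta.sin_cos();
    pts.iter()
        .map(|p| {
            let (dx, dy) = (p.x - c.x, p.y - c.y);
            Pt {
                // rotation by -theta around the centroid
                x: dx * cos + dy * sin + c.x,
                y: -dx * sin + dy * cos + c.y,
            }
        })
        .collect()
}
```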
Actually, as I played with both options more, I realized I had bigger problems:
- You don't get any feedback about which gesture (if any) you're matching as you draw.
- The dollar family almost always claims some gesture was matched, even if you just scribble randomly.⁹
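For that second problem, one mitigation I'd consider (pure speculation on my part, not something the $-family papers prescribe) is to only accept a match when the best score clears an absolute threshold and beats the runner-up by a margin, since random scribbles tend to score several templates similarly:

```rust
// Hypothetical acceptance gate over recognizer scores in [0.0, 1.0].
// `min_score` and `min_margin` are made-up tuning knobs.
fn accept(scores: &mut Vec<(String, f32)>, min_score: f32, min_margin: f32) -> Option<String> {
    // Sort best-first by score.
    scores.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    match scores.as_slice() {
        [] => None,
        [(name, s)] if *s >= min_score => Some(name.clone()),
        [(name, s), (_, s2), ..] if *s >= min_score && s - s2 >= min_margin => {
            Some(name.clone())
        }
        _ => None,
    }
}
```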
What I really wanted was an "incremental" gesture recognizer, which gives feedback as you go.
Maybe something like this?
But I've already written enough for this week, so we'll cover that another time!
Playable web build
Here is the game, configured to let you experiment with the $Q recognizer using the mouse.
If you want to try them on a gamepad, press F4 to exit edit mode, then hold LB (the top-left shoulder button) and move the right stick.
Notes:
- Gestures don't correspond to spells or anything yet! This is just experimentation to see if that sort of thing might be good.
- These gestures are the default gestures from the $Q recognizer, so they might be difficult to draw on a gamepad! The actual gestures would be drawn to work well on desktop and gamepad.
Footnotes
1. As long as you only like to combine up to 5 elements (or at least, that's the max for Magicka).
2. Magicka has 8 elements, so I'm assuming I'd need 8 too.
   Magicka's keyboard controls use QWER/ASDF to summon elements, which doesn't work for us because we use WASD to move (in Magicka, you move by clicking with the mouse, since it's top-down).
   Magicka 1's gamepad controls use a too-slow gesture-like system for selecting elements with the right gamepad stick.
   Magicka 2's gamepad controls use face buttons (ABXY) and a shoulder button (LB) to choose from 8 elements - but our jump button has to be on "A" by long-standing gamer convention, so we're short a button.
   And that's not even getting into the "cast mechanism", which is 4-ish other buttons.
3. You use the gamepad's directional pad or the mouse wheel to cycle through known Magicks one by one, and the "recipe" for that Magick is shown under your character in the form of the elements you need to summon. It's slow!
   And while I do like that it allows players to memorize Magicks and reduces the reliance on that "spell book", I don't think it's well suited to "new player joins the game partway through".
   I've finished Magicka several times plus the DLC, and I only ever memorized 2 spells: Thunderbolt, and then Revive to apologize for my thunderstrikes.
4. Or rather, Claude Opus transcribed the core algorithm from JavaScript to Rust, and then I tediously debugged and fixed all its mistakes.
5. I had unknowingly implemented cross-shaped deadzones, which gave terrible precision when the gamepad sticks were only just engaged. Now we use radial deadzones plus precision scaling. This fix applies to the whole game, not just gestures - so it's much easier to move at low speeds now. See Doing Thumbstick Dead Zones Right.
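For the curious, that radial-deadzone-plus-rescaling idea can be sketched like this - a hedged approximation with made-up numbers, not the game's actual code; see the linked article for the full treatment:

```rust
/// Hypothetical radial deadzone with precision rescaling. Stick axes
/// are assumed to be in [-1.0, 1.0].
fn apply_radial_deadzone(x: f32, y: f32, deadzone: f32) -> (f32, f32) {
    let magnitude = (x * x + y * y).sqrt();
    if magnitude < deadzone {
        // Inside the circular dead region: treat as no input. A naive
        // per-axis (cross-shaped) deadzone instead zeroes x and y
        // independently, which mangles small diagonal inputs.
        return (0.0, 0.0);
    }
    // Rescale so output magnitude ramps smoothly from 0.0 at the
    // deadzone edge up to 1.0 at full deflection, preserving direction.
    let scaled = ((magnitude - deadzone) / (1.0 - deadzone)).min(1.0);
    (x / magnitude * scaled, y / magnitude * scaled)
}
```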
6. You can sort of work around this by putting a recognizable feature on one end; e.g. ⌐ and ¬ let you tell left and right apart. But that felt a bit unnatural - and for most spells I'm aiming for a quick-to-use casting system, so additional friction isn't great.
7. Which someone else had fortunately already ported to Rust for me, in the form of Guessture - which is possibly the best name ever for a gesture recognition library.
8. There is a Google-developed refinement to $1 called Protractor, which does have a "rotation variant" version and also doesn't require $1's stretch-to-square (and, fun fact: Protractor is what Android's gesture recognition API uses).
   However, the Guessture library doesn't implement Protractor, and the JavaScript reference implementation of $1-with-Protractor (including the online demo where you tick "Use Protractor") incorrectly scales and rotates gestures.
   I suspect $1-with-Protractor would have been the best of the 3 options.
9. In theory you get a score from 0.0 to 1.0, where closer to 1.0 means a closer match. But my best drawings of some gestures still only scored 0.4, and some scribbles scored 0.5, so setting a sane threshold seems tricky. (Maybe this could have been improved by providing multiple samples of each gesture, along with adding a special "the user is scribbling" gesture... but that approach seemed like whack-a-mole.)