All posts by boo

A Better Gamepad Keyboard: Part 2, Slashcard’s Dual-Stick Input

(This article is also available as a devblog on Gamasutra.)

Slashcards: Learn Japanese! (other languages forthcoming) is a local co-op language-learning action game. This is a game that asks gamepad-using players to respond to free-response language prompts (among other types of prompts) — and that demands a faster input method than the onscreen keyboard you configure a PS4 with.

This design blog follows part 1, an exploration of extant gamepad text input designs.

After researching extant gamepad text input methods, I found that none of them would be fast enough for typing even a few letters in a real-time action situation. (Moreover, I’m not counting on accelerometers or IR sensors for any Wii or PS4 fanciness.) If Slashcards is to be workable as an action game, I need a better solution. I was sure I’d at least need a dual-stick input method, so I began with the design described in the Microsoft paper.

Iteration 0: +Dual-stick, hunt-and-peck, divided keyboard

Dual stick keyboard

Dual stick input seemed a natural extension from single hunt-and-peck keyboard input, and I was glad that someone had given it a shot over the past thirty years. But I was disappointed to see that their results showed such a modest increase in input speed.

Mean WPM from Microsoft study

This design consists of dividing the keyboard in two and putting the cursors where your respective middle fingers would rest on the keyboard — the left cursor defaulting to “d” and the right to “k”. To type “q” would mean pressing left, left, up on the left stick and then a left shoulder button. “m” would be down and then left on the right stick, and then pressing a right shoulder button.

Like the single-cursor, hunt-and-peck keyboard, cursor positions are persistent. So once the left cursor is on “q”, it takes seven inputs to type “b” (left-stick-right, right, down, right, right, down, left-shoulder.)

Iteration 1: +Elastic cursor

I played with having the cursor positions reset after each input. On the one hand, this behavior is less obvious than leaving the cursor where you left it. But in exchange for a slightly steeper learning curve, it gives the user a chance to develop muscle memory for every letter: “w” is then left-stick-up, left-stick-left, left-shoulder every time, regardless of the previous letter. After a few minutes of use, I could feel myself getting faster. But typing whole words still felt like a tedious amount of input. If every letter is two to five inputs, a five-letter word ends up being around sixteen inputs.
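For the curious, here’s a minimal sketch of the split-keyboard cursor logic described above, with a flag that toggles the elastic reset. The row strings, home positions, and names are illustrative assumptions, not the actual Slashcards code.

```csharp
using UnityEngine;

// Minimal sketch of a split hunt-and-peck keyboard cursor (Iterations 0 and 1).
// Row layout and home positions are assumptions from the article:
// the left half homes on "d", the right half on "k".
public class SplitKeyboardCursor
{
    static readonly string[] LeftRows  = { "qwert", "asdfg", "zxcvb" };
    static readonly string[] RightRows = { "yuiop", "hjkl;", "nm,." };

    readonly string[] rows;
    readonly Vector2Int home;
    Vector2Int pos;                  // (column, row), row 0 at the top
    public bool ElasticReset;        // Iteration 1: snap back to home after each letter

    public SplitKeyboardCursor(bool leftHalf)
    {
        rows = leftHalf ? LeftRows : RightRows;
        home = new Vector2Int(2, 1); // "d" on the left half, "k" on the right
        pos  = home;
    }

    // One stick/d-pad step in a cardinal direction, e.g. (0, 1) for up.
    public void Step(Vector2Int dir)
    {
        int row = Mathf.Clamp(pos.y - dir.y, 0, rows.Length - 1);
        int col = Mathf.Clamp(pos.x + dir.x, 0, rows[row].Length - 1);
        pos = new Vector2Int(col, row);
    }

    // Shoulder press: commit the letter under the cursor.
    public char Type()
    {
        char c = rows[pos.y][pos.x];
        if (ElasticReset) pos = home;   // every letter becomes a fixed gesture
        return c;
    }
}
```

With ElasticReset off you get the persistent cursor of Iteration 0 (“q” then “b” is the seven-input sequence above); with it on, “w” is always up, left, shoulder.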

When I tried mocking it up myself, I found that my own performance was already a bit better than that of Microsoft Research’s test group. But the hunt-and-peck approach with its concomitant repeated directional inputs — even with the keyboard divided — won’t reliably work in any kind of time-constrained, action game context.

Iteration 2: +Free selection (non-hunt-and-peck)

Wouldn’t it be nice if I could choose a letter with a single stick-movement? Dispensing with the stateful cursor position, I yoked cursor position directly to the stick position.

Under this system, leaving the left stick idle would select “d”, pressing all the way to the left would select “a”, and pressing to the upper-right would select “t”. The cardinal directions and their diagonals are easy enough…

Naive stick->keyboard mapping

…but that leaves the intermediate letters tied to somewhat tricky intermediate stick positions.
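To make the naive mapping concrete, here’s a rough sketch: a small dead zone keeps the home key, and the rest of the stick’s travel is carved into equal angular slices, one per letter. The key ordering and the dead-zone size are illustrative assumptions (and the home key’s nearest neighbors, “s” and “f”, are omitted for brevity).

```csharp
using UnityEngine;

// Naive free selection for the left half of a split keyboard:
// a dead zone selects the home key, and the stick angle is divided
// into equal slices, one key per slice. Illustrative only.
public static class NaiveFreeSelect
{
    // Left-half keys arranged clockwise starting from straight up ("e"),
    // with the home key "d" and its near neighbors "s"/"f" left out.
    static readonly char[] Ring =
        { 'e', 'r', 't', 'g', 'b', 'v', 'c', 'x', 'z', 'a', 'q', 'w' };

    public static char Select(Vector2 stick)
    {
        if (stick.magnitude < 0.25f)          // idle stick: home key
            return 'd';

        // Clockwise angle from straight up, in [0, 360).
        float angle = Mathf.Atan2(stick.x, stick.y) * Mathf.Rad2Deg;
        if (angle < 0f) angle += 360f;

        float slice = 360f / Ring.Length;     // 30-degree slice per key
        int index = (int)((angle + slice * 0.5f) / slice) % Ring.Length;
        return Ring[index];
    }
}
```

Up gives “e”, full left gives “a”, and upper-right gives “t”, just as described above; the trouble is that every slice is the same size, regardless of how awkward it is to hit.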

Iteration 3: +Optimized free selection

The difficulty here can largely be addressed by optimizing the mapping between stick and virtual keyboard. The guiding principle is to give each option as much selection space as possible on the stick. So if we map out the space on the stick that can be selected, we can see that the original, naive mapping is obviously, needlessly difficult:

Naive mapping (perception)

And while it’s tempting to contrive a mapping where each option gets literally equal area, as above, that ignores obvious and massive optimizations the controller affords in practice. For one, the neutral position needs zero area — the stick reliably snaps back to (0.00, 0.00). The cardinal directions are also essentially zero-thickness lines, since each pegs one coordinate at 1.00 (north, or “e” on the left stick, always has y = 1.00).

Naive mapping with minimized central sector

In practice, I found that the other coordinate of the cardinal directions was always very close to 0.00. And indeed, all the keys around the edge of the selection area (QWERT-G-BVCXZ-A, to go around clockwise) had a distance from the center that was reliably greater than 0.9.

Improved mapping

Testing and experimentation gave me the final values I used to divide up the mapping, and the end result looks something like this:

Perceptually optimal mapping
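In code, that carving-up might look roughly like the sketch below: radius first (a tiny neutral zone, a thin middle band, an outer ring for the edge keys), then angle within the band. The thresholds and the middle-band handling are illustrative guesses, not the tuned values Slashcards actually uses.

```csharp
using UnityEngine;

// Sketch of the optimized free selection for the left stick: classify by
// radius first, then by angle. Threshold values are illustrative; the real
// numbers came from testing and experimentation.
public static class OptimizedFreeSelect
{
    // The edge keys (QWERT-G-BVCXZ-A), rotated to start from north ("e").
    static readonly char[] EdgeRing =
        { 'e', 'r', 't', 'g', 'b', 'v', 'c', 'x', 'z', 'a', 'q', 'w' };

    public static char Select(Vector2 stick)
    {
        float r = stick.magnitude;

        if (r < 0.2f)                  // neutral: the stick snaps back to ~(0, 0)
            return 'd';

        if (r > 0.9f)                  // edge keys reliably read r > 0.9
        {
            // Clockwise angle from straight up, in [0, 360).
            float angle = Mathf.Atan2(stick.x, stick.y) * Mathf.Rad2Deg;
            if (angle < 0f) angle += 360f;

            float slice = 360f / EdgeRing.Length;
            int index = (int)((angle + slice * 0.5f) / slice) % EdgeRing.Length;
            return EdgeRing[index];
        }

        // Middle band: only "s" and "f" remain once the edge ring and the
        // neutral key are accounted for.
        return stick.x >= 0f ? 'f' : 's';
    }
}
```

The dead zone and the edge ring cost almost nothing, which leaves nearly the whole face of the stick for the letters that actually need room.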

The result is a massively improved input rate, provided the user is willing to put up with the initial learning curve of this new interaction. I encountered testers who, having spent countless hours hunting-and-pecking over Xbox Live and whatnot, were initially quite frustrated by this system. But even in their ire, they were nevertheless typing faster than they had been.

Free selection demo

Iteration 4: +Quadrants

Far and away the most common errors were in the corners, such as NNE and NE. My anecdotal suspicion is that gamers are used to gross input, be it little taps to line up sniper sights or the directional inputs for fighting game power-moves. Therefore I wanted to accommodate an input gesture less precise but more reliably repeatable.

My solution for this was to divide each split-keyboard half into quadrants.

The user would first choose a quadrant by pressing in one of the cardinal directions and then (optionally) turn to another direction to select the key within the quadrant.

Pressing right on the left stick would select the “FTGB” quadrant, and moving up would select “T”; moving down, “B”, pressing all the way to the right would select “G”, and a stick position not on the right edge of the joystick would be “F”.

Free selection with quadrants demo

Therefore typing “B” reliably becomes one motion: flick the left stick right, then down. In practice, that basically feels like rubbing just below the center of the right side of the stick edge, and it’s endlessly, reliably repeatable.
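Here’s a sketch of how that quadrant latching might work for the one quadrant spelled out above; the other quadrants would follow the same pattern, and the thresholds are placeholders rather than the tuned values.

```csharp
using UnityEngine;

// Sketch of the quadrant mechanic for the left stick, covering only the
// "FTGB" quadrant described above (right flick). Thresholds are illustrative.
public class QuadrantCursor
{
    const float DeadZone = 0.3f;
    const float EdgeZone = 0.9f;

    int latchedQuadrant = -1;   // -1 = stick relaxed, no quadrant chosen yet

    // Call once per frame; returns the highlighted key, or '\0' when relaxed.
    public char Update(Vector2 stick)
    {
        if (stick.magnitude < DeadZone)
        {
            latchedQuadrant = -1;     // must relax back to centre to change quadrant
            return '\0';
        }

        if (latchedQuadrant < 0)
            latchedQuadrant = DominantCardinal(stick);   // first flick picks the quadrant

        if (latchedQuadrant == 1)     // right flick latched the "FTGB" quadrant
        {
            // Tilt above/below due right, in degrees.
            float tilt = Mathf.Atan2(stick.y, stick.x) * Mathf.Rad2Deg;
            if (tilt > 20f)  return 't';                  // rolled upward
            if (tilt < -20f) return 'b';                  // rolled downward
            if (stick.magnitude > EdgeZone) return 'g';   // pinned to the right edge
            return 'f';                                   // anywhere else in the quadrant
        }

        return '\0';   // other quadrants omitted in this sketch
    }

    static int DominantCardinal(Vector2 stick)
    {
        if (Mathf.Abs(stick.x) > Mathf.Abs(stick.y))
            return stick.x > 0f ? 1 : 3;   // 1 = right, 3 = left
        return stick.y > 0f ? 0 : 2;       // 0 = up, 2 = down
    }
}
```

The latch is what makes the gesture repeatable, and it’s also the source of the friction described below: once a quadrant is chosen, the stick has to come back through the dead zone before a different quadrant can be picked.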

typing with the quadrant keyboard

Fighting the quadrants can be frustrating, however. If you accidentally select the right quadrant and want to get down to the bottom, sliding down from the right edge won’t work; the stick needs to be relaxed, or the cursor needs to otherwise return to the center first. Therefore some users, whose grip on the gamepad sticks is lighter and more precise, will prefer iteration 3.

That said, my limited test group performed best on the quadrant keyboard. And here’s the split quadrant elastic dual stick on-screen keyboard (SQuEDSOSK) in context:

Slashcards typing in game

(You can see how I’ve taken steps to keep the on-screen keyboards from occluding too much — more work to be done, there, to be sure.)

Try it out!

If you’d like to try Slashcards: Learn Japanese, you can give the pre-release preview a spin by downloading a build from this itch.io page: https://bigblueboo.itch.io/slashcards-learn-japanese.

Why stop at English? Keep an eye out for part 3, where I’ll talk about the development of the Japanese-specific kana keyboard.

(TIGsource forum post version)

A Better Gamepad Keyboard: Part 1, A Survey of Extant Gamepad Keyboards

(This article is also available as a devblog on Gamasutra.)

Slashcards: Learn Japanese! Gameplay

Slashcards is a game about learning language. And part of rehearsing new language knowledge is responding with English equivalents of pronunciation. The most thorough way of testing players is for them to input that pronunciation letter by letter. So, one interface challenge I embraced for Slashcards’ design was to make a game-friendly gamepad-compatible onscreen keyboard.

Part 1: A survey of extant gamepad typing interfaces.

In an effort to avoid re-inventing the wheel — and falling into old ruts — I did some research into typing on consoles. Perhaps the most naive approach of all was to make a matrix of buttons where each button corresponded to a key on the keyboard. The user would then navigate through the matrix like any kind of on-screen GUI, going from GUI element to GUI element by pressing the d-pad or left stick in the corresponding direction. This has remained ubiquitous ever since 8-bit consoles. Here’s The Legend of Zelda (NES, 1986):

The Legend of Zelda

…and here’s the Xbox 360, 20 years later (around its launch, ~2006):

XBox 360 ABC Keyboard

The Xbox 360 keyboard originally defaulted to an ABC layout. I guess ABC-order is defensible insofar as we all know the alphabet, so, given a letter, we would intuit the neighboring left and right letters. But what about the letters above and below? It’s frustrating that, for example, P is to the left of L. Moreover, using the wrap-around topology of the keyboard — pressing left on the left edge to go to the right edge, and vice versa — would similarly be a convenience available only to those who carefully studied this unnatural layout.

Eventually — perhaps from release, I don’t recall — you could change the layout of the Xbox 360 on-screen keyboard to QWERTY and leverage the familiarity every modern person has from thousands of hours of typing. Now you could hunt-and-peck as if you were typing one-handed on your laptop while eating cereal or nudging some stir-frying onions with the other hand. I believe this is now the default layout on the Xbox One.

Xbox Keyboard QWERTY

Sony has offered a bizarre multi-step interface, whereby users hunt-and-peck a letter family on the left. Then a number of options including auto-complete suggestions are presented on the right. Finally a user selects one of those options by again navigating across them. The result is a mixed bag. Letters that were far from each other in the above keyboards are closer in terms of button press-count, but previously-adjacent letters might be five or six button-presses in the PSP system. Maybe this compromise plus the autocomplete suggestions turns out to give more words per minute — I remember my time typing on the PSP as an exercise in frustration where I had to relearn the system every time I encountered it. Either way, the bottom line is that any design that relies on autocomplete suggestions is not going to be appropriate for Slashcards.

PSP keyboard

Fortunately for PSP fans (and for the PSP’s successor, the Vita), the PSP eventually also offered a full keyboard layout (known as the “fullscreen keyboard”).

A far better performing option is the approach taken by the Wii and the PS4 on-screen keyboards. They offer a QWERTY keyboard whereby the player can more or less point to a key to select it. (The PS4 requires players to tilt, not exactly point, but the result feels responsive and intuitive.)

PS4 accelerometer keyboard

Steam Big Picture mode has an interesting hierarchical keyboard. The user presses a direction with the left stick and then presses one of the four action buttons to select a character.

Steam Big Picture Keyboard

The more you use this keyboard, the faster you’ll type — and the skill ceiling is far higher than the hunt-and-peck keyboards above. It also has the virtue of being alphabetical but without the compromise of arbitrary rows that computer-keyboard-lookalike layouts have. At every letter you can easily see if your next letter is 1) in the current button group, 2) counter-clockwise (before the current letter) or 3) clockwise (after the current letter). This is the first system that really tempted me towards implementation in Slashcards. I struggled mightily to compress it to a size that wouldn’t be so demanding of screen real estate. I couldn’t come up with a workable solution, but you’ll see that it has something in common with the Japanese input solution I devised.

My first thought was to take the extant QWERTY hunt-and-peck keyboard that we see on the Xbox/PS and add another cursor. Players could hunt-and-peck with the left stick and right stick. This seemed obvious enough that I wondered if this had been attempted before — sure enough, a Google search hit showed me that the approach is explored to some extent in a Microsoft research paper.

Their study found that participants went from 5.8 words per minute on the single-stick hunt-and-peck QWERTY keyboard to 6.4 words per minute with a dual-stick split hunt-and-peck QWERTY keyboard. A 10% gain is still far from the kind of quantum leap I’m looking for.

In the end, I found that none of these options would work for typing even a few letters in a real-time action game situation. Slashcards demands better! Or rather, it demands a more precisely designed solution.

In the next part, I’ll show how I mixed and matched the best of the above approaches to iterate towards a functional, consistent, and radically more efficient on-screen gamepad keyboard.

Slashcards Unity Quick Tip 2: Bringing a face to life

This is a continuation of Quick Tip 1, which discusses the coding approach to the faces in Slashcards: Learn Japanese.

Character still.

Let’s bring our character to life.

Plop her in your fantasy world and what would she be doing? She’d be looking around, thinking and reacting to what she saw. That means she’s blinking. She’s raising her eyebrows or furrowing them as she thinks about this or that. She’s pursing her lips and relaxing them. And none of this is happening in a rigid, repeating pattern. It’s all semi-random. Sometimes she blinks three times in three seconds. Sometimes once in ten seconds. Something will worry her for just the briefest moment…etc.

For my character, I’ve broken facial activity into four processes:

  • Blinking
  • Looking-at
  • Eyebrow movement/shape
  • Eye shape

Moreover, the character has a Mood and a PointOfInterest.

Blinking is a coroutine that blinks every so often (from .5 to 5 seconds). It’ll blink more often if the mood is Afraid or Sad.

LookingAt is a coroutine that aims the pupils at the PointOfInterest if there is one. Otherwise it just picks a different thing/point in space to gaze at every several seconds.

Eyebrow movement/shape shifts the eyebrows up and down at random intervals. The eyebrows’ shapes are determined by the mood.

And eye shape — angry, relaxed, afraid, and so on…

Eye shapes by deviantartist sharkie19.  You can imagine how you might break out eyebrow, pupil, and eye shape.

…are controlled by a coroutine that varies them according to the mood. Certain moods use different sets of eyeshapes. For example, in Mood.Relaxed, the character might momentarily be pensive/worried — an eyeshape also used in Mood.Afraid — but it won’t use that expression as often.
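As a taste of what these processes can look like in Unity, here’s a minimal sketch of a blinking coroutine along the lines described above. The Mood values, timing numbers, and the SetEyesClosed hook are assumptions for illustration, not the shipped code.

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the blink process: wait a semi-random interval, blink, repeat,
// blinking more often in nervous moods. Names and numbers are illustrative.
public class FaceBlinker : MonoBehaviour
{
    public enum Mood { Relaxed, Afraid, Sad, Angry }
    public Mood mood = Mood.Relaxed;

    void OnEnable()
    {
        StartCoroutine(BlinkLoop());
    }

    IEnumerator BlinkLoop()
    {
        while (true)
        {
            float wait = Random.Range(0.5f, 5f);      // semi-random, not a rigid pattern
            if (mood == Mood.Afraid || mood == Mood.Sad)
                wait *= 0.5f;                         // nervous moods blink more often

            yield return new WaitForSeconds(wait);

            SetEyesClosed(true);                      // hypothetical hook into the eye visuals
            yield return new WaitForSeconds(0.1f);    // eyes stay shut for a beat
            SetEyesClosed(false);
        }
    }

    void SetEyesClosed(bool closed)
    {
        // In Slashcards this would drive the eye shape / material parameters
        // (see the Fast Faces post below for how those are set cheaply).
    }
}
```

The LookingAt, eyebrow, and eye-shape processes can follow the same coroutine-plus-random-interval pattern, each reading the current Mood and PointOfInterest.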

Slashcards facial animation

Crucially, this approach shows the player that the character has an inner life. Keeping the control in code makes it easier to vary. And the varied behavior means the designer/animator doesn’t have to handcraft every little reaction.

Slashcards Unity Quick Tip 1: Fast Faces

One thing I wanted to achieve in Slashcards (check out the playable preview at itch) was characters that had some character. I wanted them to feel alive. One game that does that superbly well is The Legend of Zelda: Wind Waker.

Because of the specific artistic choices in that game, the characters’ faces were basically drawn on, as if they were cartoons. That permitted a ton of expression with a bit less work than, say, modeling a humanoid face, rigging it, and using motion capture or hand-crafted animation for each little expression.

To make expressive faces, you first and foremost need eyes and eyebrows. Eye shape and eyebrow position tell you just about everything you need to know about someone’s emotional state. I wanted to be able to control the pupil position by script, so rather than have canned images for looking left, looking right, etc., I chose to have the pupil be a separate image from the white of the eye. And that meant masking the eye, which means shaders, which means more draw calls…

Two eyebrows, two eyes with two pupils, and we’re already up to six draw calls per character, and we haven’t even gotten to the mouth yet. The trick to keeping this cheap is (in Unity) using MaterialPropertyBlocks.

The drawback of MPBs is that you have to set them in code. And if you want to see the results in the editor, you’ll need to have a script that sets their values in OnValidate.

So the recipe is: use the same texture, use the same shader, and to vary parameters across different facial elements, use MaterialPropertyBlocks. Next up, I’ll get into how they’re animated.
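Here’s a minimal sketch of that recipe; the “_PupilOffset” property is a placeholder for whatever the eye shader actually exposes, and the rest is standard Unity MaterialPropertyBlock usage.

```csharp
using UnityEngine;

// Sketch: every facial element shares one texture/shader, and per-renderer
// parameters are set through a MaterialPropertyBlock instead of per-instance
// materials. "_PupilOffset" is a placeholder property name.
[ExecuteInEditMode]
public class FacialElement : MonoBehaviour
{
    public Vector2 pupilOffset;

    static readonly int PupilOffsetId = Shader.PropertyToID("_PupilOffset");
    MaterialPropertyBlock block;

    void OnValidate() { Apply(); }   // so tweaks show up in the editor
    void Update()     { Apply(); }   // and keep applying as values animate

    void Apply()
    {
        var rend = GetComponent<Renderer>();
        if (rend == null) return;

        if (block == null) block = new MaterialPropertyBlock();
        rend.GetPropertyBlock(block);
        block.SetVector(PupilOffsetId, pupilOffset);   // Vector2 widens to Vector4
        rend.SetPropertyBlock(block);
    }
}
```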

Hm, looks painful…

Slashcards: Learn Japanese! Alpha Preview

Learning your first kana

So, the Slashcards 7DRL morphed away from a roguelike RPG and more into an action game with some light RPG progression elements.  Slashcards: Learn Japanese!: 一緒に勉強しよう! is an action-RPG where your mastery of reading (and writing!) Japanese is your character’s power. You’ve been summoned to go on an epic journey that will have you battling monsters, mastering the power of the Slashcards, and rescuing the denizens of a land accursed.

The focus of this project is to make the otherwise tedious task of rote learning (memorization) a seamless and central part of an action game.

Even if you plow through the dialog and introductions, and go in blind, the game will still give you hints, let you review in-game, and you can finish a level having really learned something.

Enemies have characters or kana right on their bodies. If you scope out enemies you “know” well, you can prioritize your attack — or step back to find some safety so you can review in-game.


(in-game review – you’re seeing 上手 being written, mid-stroke animation)

Or just barrel in and give it a shot — a miss will give you a hint to the right answer, if you survive being stunned (and you probably will.)

Attack an enemy and the faster and more accurately you respond, the more damage you’ll do, the more experience you’ll earn, and the better you’ll be able to evade the monsters that are still chasing you! And by “responding”, I mean choosing from multiple choices, typing in English or Japanese, or even drawing a character stroke-by-stroke.

I’m a learner myself. So I’ve gotten help putting together the lessons and the level order, and I’ve even got male and female voice recordings for the words and letters in the game. They’re not all in there yet, but most of them are.

On top of that, Slashcards has been built for multiplayer co-op and competitive modes from the very beginning. So you can adventure across the land with your friends…


some co-op gameplay

…or try one of the hectic versus modes (with CPU opponents, if you want!)


Bingo Battle!

There’s tons more for me to talk about (and to work on), but I’m hoping I’ll be able to update this regularly.

I’m anxious to get this rolling on Greenlight, but first I’d like to get a working demo going. Do me a favor: give the preview a try and let me know what you think!

>> Download the pre-release demo: Slashcards: Learn Japanese. <<

You can play the first couple levels and try one of the versus modes, Bingo Battle. You’ll be treated to three original music tracks and one recycled one from an old game of mine.

7DRL 2017: Slashcards [Jam Mini Post-Mortem]

My attempt at the Seven Day Roguelike competition (7DRL) this year was to take another crack at an old project of mine, Slashcards.

The concept of Slashcards is to use the variety and depth of roguelike mechanics to teach language.  The thousands of monsters, the varieties of loot, and the freshness of randomized levels offer a great way to get through the rote learning.

The action-y nature of Slashcards is a response to the existing language-learning games.  This is a game that keeps your eyes on the action and, except for equipping items, your hands on the keyboard.  Going from exploring to battle to unlocking treasure to casting spells is seamless.  Maybe it’s even possible for the language challenges to be more exciting than the exploration.

My ambitions were a bit too much to pack into the 7DRL this year — I did give it a good try, spending time each day of the week on the project.  Unfortunately, the game isn’t really playable yet.  But the bones of a really exciting project are there, and I’m definitely thinking about what to do with it next!

I got lost down various non-roguelike rabbit holes.  I drew some facial features, wrote an eye mask shader, and whipped up a FSM for facial expressions:

I also messed around with an interface-free inventory.

I designed and built a first take on the eponymous collectible character/vocab cards…

I worked up a shader that (ahem) uses the entity’s object space to anchor a billboard that is then masked by the fresnel of the model.  It only kinda works in context given inevitable and difficult occlusion issues.

The idea is that you can eventually prioritize enemies by whether or not you know the character/word they’re associated with — among other possibilities.

Anyway, it’s notable that I modeled, textured and rigged everything above, so there was plenty of time not spent on 7DRLish things.   One entire day was spent on a multi-story dungeon generator, but that went nowhere.

Whew… Next year I’m sticking to ASCII, I swear!

 

How to Make Throwing in VR Better

Throwing–it’s one of the first things you do when you try out VR.  You take that virtual coffee cup and chuck it.   The cup, donut or ball goes off spinning crazily.  Before you know it you’re trying to hurl potted plants at the tutorial-bot.

Here, I’m trying to throw the soda cans at the blank screen facing me.

Some of your throws go hilariously far.  Some go pathetically low.  One or two actually ding the beleaguered NPC.  Maybe our first lesson in VR is that it’s actually kind of hard to throw accurately. When I first had this experience, I assumed it was because I’m bad at VR. We accept that mastering a control scheme is part of a game’s learning curve.  But when throwing with the same motions yields wildly different results, you’ve got a recipe for frustration.

A hard overhand pitch falls short…

Trying to chuck a water bottle over-hand.

…while a flick of the wrist might send things flying.

Just moving my wrist…

Rescuties: a catch-and-kiss baby tossing game

Over the summer, I’ve been working on a casual action game called Rescuties. It’s a VR game about throwing — and catching — babies and other cute animals.

In VR, if the world reacts to us in a way that contradicts our intuition, we’re either breaking immersion or frustrating the player. As a game that’s all about throwing, Rescuties couldn’t rely on a mechanic that was inherently frustrating.

The frustration is built into the design of these games. In a nutshell, it’s hard to throw in VR because you can’t sense the weight of the virtual object. Approaches vary — but most games have tried to respect the physics of the virtual object you’re holding as faithfully as they can. You grab an object, apply some virtual momentum to it in the game, and off it goes.

Here’s the problem: there’s a disconnect between what you feel in your real-life hand and what’s going on in the virtual world. When you pick up a virtual object, the center of mass of that object and the center of mass you feel in your hand have some real separation. Your muscles are getting bad info.

If you were just pushing your hands in this or that direction when you threw, this separation wouldn’t matter much. But you bend your arm and rotate your wrist when you throw. (The key to a good throw? “It’s all in the wrist!”) As you rotate your real hand, you end up applying excessive momentum to the virtual object, as if you were flinging it with a little spoon.

This is a very unforgiving phenomenon. In many VR games, you can “pick up” an object up to a foot away from your hand. With a press of a button, the object is attached to your hand, but stays at that fixed distance, turning your hand into a catapult. The difference between three inches and twelve inches can mean a flick of the wrist will send an object across the room or barely nudge it forward.
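For a rough sense of scale (my numbers, not measured data): the speed imparted by a pure wrist rotation grows linearly with the distance from the rotation axis to the object’s attach point, v = ω·r.

```csharp
using UnityEngine;

// Back-of-the-envelope: a pure wrist rotation flings an attached object at
// v = omega * r, so the attach distance scales the throw linearly.
public static class LeverArm
{
    // omegaDegPerSec: wrist angular speed; attachDistance: offset in metres.
    public static float FlungSpeed(float omegaDegPerSec, float attachDistance)
    {
        return omegaDegPerSec * Mathf.Deg2Rad * attachDistance;   // metres per second
    }
}
// A 400 deg/s wrist flick: roughly 0.5 m/s at a 3-inch (0.08 m) offset,
// but over 2 m/s at a 12-inch (0.30 m) offset. Same motion, about four
// times the throw.
```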

Wrist movement-as-catapult.

In Rescuties, you’re catching fast-moving babies — and you’re in a rush to send them on to safety. The old approach resulted in overly finicky controls and a frustrating, inconsistent experience. Why can’t I just throw the way it feels like I should be able to?

Physical versus virtual weight

The crux of a more successful throwing strategy is to respect how the controls feel to the user over what the game’s physics engine suggests.

Rather than measuring throwing velocity from the virtual object you’re holding, you measure it from the *object you’re actually holding* — the real-life object in your hand, i.e., the HTC Vive or Oculus Touch controller. That’s the weight and momentum you feel in your hand. That’s the weight and momentum your muscle memory–the physical skill and instinct you’ve spent a lifetime developing–is responding to.

This center of mass that you’re feeling doesn’t change, no matter what virtual object you’re lifting.

The way you bridge the physical weight to the virtual is to use the center of mass of the physical controller to determine in-game velocity.   First, find where that real center-of-mass point is in-game.  The controllers are telling you where they are in game-space; it’s up to you to peek under the headset and try to calibrate just where the center-of-mass is.   Then you track that point relative to the controller, calculating its velocity as it changes position.
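In Unity terms, that might look something like the sketch below; the local offset is the kind of eyeballed calibration described above, not a manufacturer figure.

```csharp
using UnityEngine;

// Sketch: estimate throw velocity from the controller's physical centre of
// mass rather than from the held virtual object.
public class ControllerCentreOfMass : MonoBehaviour
{
    public Transform controller;    // the tracked controller pose in game-space

    // Hand-calibrated guess at where the controller's mass sits, in the
    // controller's local space (peek under the headset and tweak).
    public Vector3 localComOffset = new Vector3(0f, -0.03f, 0.05f);

    public Vector3 Velocity { get; private set; }    // world-space, m/s
    Vector3 previousCom;

    void Start()
    {
        previousCom = WorldCom();
    }

    void FixedUpdate()
    {
        // Frame-to-frame velocity of the felt centre of mass.
        Vector3 com = WorldCom();
        Velocity = (com - previousCom) / Time.fixedDeltaTime;
        previousCom = com;
    }

    Vector3 WorldCom()
    {
        return controller.TransformPoint(localComOffset);
    }
}
```

When the player lets go, it’s this velocity (after the timing and smoothing below) that gets handed to the thrown object, rather than whatever the physics engine thinks the virtual object was doing.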

Once I made that change, my testers performed much better at Rescuties — but I was still seeing and feeling a lot of inconsistency.

(See this article for an interesting discussion on trying to convey virtual weight to players — an opposite approach that skips leveraging our physical sense of the controllers’ weight in favor of visual cues showing the player how virtual objects behave.)

Timing

When precisely does the player intend to throw an object?

In real life, as we throw something, our fingers loosen, the object starts leaving our grasp and our fingertips continue to push it in the direction we want it to go until it’s out of reach completely. Maybe we roll the object through our fingers or spin it in that last fraction of a second.

In lieu of that tactile feedback, most VR games use the trigger under the index finger.   It’s better than a button — in Rescuties, you’ll see that squeezing the trigger at 20% makes your VR glove close 20% of the way, 100% makes a closed fist, etc. — but you’re not going to be able to feel the object leaving your grasp and rolling through your fingers. The opening-of-the-fingers described above is simply the (possibly gradual) release of the trigger.  I found that I wanted to detect that throw-signal from the player as soon as the player starts uncurling their fingers. Throws are detected when the trigger pressure eases up–not all the way to 0% or by the slightest amount, but by an amount set experimentally.

sharpie + inkjet paper

The chart above plots the trigger pressure over time for a grab-hold-throw cycle.  In this case, the user is not pressing the trigger all the way to 100% — a common occurrence, as the HTC controllers’ triggers travel to ~80% and then you have to squeeze significantly harder to make them click up to 100%. First the trigger is squeezed to pick up an object. Then pressure holds more or less steady as the player grips the object and winds up for a throw — here you’ll see the kind of noise you get from the trigger sensor.  The player releases the trigger as the object is thrown or dropped.

The signal noise and the heartbeat of the player can make the trigger strength jitter.   That calls for a threshold approach for detecting player action.  Specifically, that means the game detects a drop when the trigger pressure is (for example) 20% less than the peak trigger pressure detected since the player picked up the object.  The threshold has to be high enough that the player never accidentally drops a baby — a value I found through trial and error with my testers.  Similarly, if you detect a grab at too low a pressure, you won’t have the headroom to detect a reliable throw/release. You’ll get what I got during one of my many failed iterations: freaky super-rapid grab-drop-grab-drop behavior.
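A minimal sketch of that peak-relative threshold (grab hard enough, track the hardest squeeze, release when pressure drops a set amount below that peak); the specific numbers here are placeholders, since the real values came out of playtesting.

```csharp
using UnityEngine;

// Sketch of peak-relative release detection for a 0..1 trigger axis.
// Both thresholds are placeholders; too low a grab threshold leaves no
// headroom and you get rapid grab-drop-grab-drop behavior.
public class TriggerGrip
{
    const float GrabThreshold = 0.55f;  // squeeze at least this hard to grab
    const float ReleaseDrop   = 0.20f;  // release when pressure falls this far below the peak

    bool holding;
    float peakPressure;

    public enum Action { None, Grab, Release }

    // Call once per frame with the raw trigger value.
    public Action Update(float trigger)
    {
        if (!holding)
        {
            if (trigger < GrabThreshold) return Action.None;
            holding = true;
            peakPressure = trigger;
            return Action.Grab;
        }

        // Track the hardest squeeze since the grab; sensor noise and the
        // player's pulse make the raw value jitter around this.
        peakPressure = Mathf.Max(peakPressure, trigger);

        if (trigger <= peakPressure - ReleaseDrop)
        {
            holding = false;               // fingers started to uncurl: throw/drop now
            return Action.Release;
        }
        return Action.None;
    }
}
```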

Velocity Noise

Measuring the right velocity and improving the timing mitigate throwing inconsistency quite well. But our source data itself–the velocity measurements coming from the hardware–are quite noisy. The noise is particularly pronounced when the headset or controllers are moving quickly. (Like, say, when you’re making a throwing motion!)

Dealing with noise calls for smoothing.

I tried smoothing the velocity with a moving average (a simple low-pass filter) — but this just averages the slow part of the throw (the wind-up) with the fastest part (the release), at least to some extent. My testers found themselves throwing extra-hard as if they were underwater. (This is what I tend to feel in Rec Room.)

Averaging controller velocity gives you reliable, but too-slow results — contrast with Job Simulator at the top.

I tried taking the peak of the recently measured velocities, so my testers saw their babies flying at least as fast as they intended — but often not in exactly the direction they intended, because the last measured direction was still subject to the noise issue.

What you really want to do is take the last few frames of measurements and observe what they suggest — i.e., draw a trendline. A simple linear regression through the measurements gave us a significantly more reliable result. Finally, I could throw babies where I wanted to, when I wanted to!
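Concretely, that trendline can be a per-component least-squares fit over the last few frames of velocity samples, evaluated at the moment of release. Here’s a sketch; the window size is illustrative (it’s one of the things exposed in the Labs menu).

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: fit a straight line through the last few velocity samples and read
// the trendline off at the release time, instead of averaging or taking a peak.
public class VelocityTrend
{
    const int Window = 4;                     // illustrative; tweakable in "Labs"

    struct Sample { public float time; public Vector3 velocity; }
    readonly List<Sample> samples = new List<Sample>();

    public void Add(float time, Vector3 velocity)
    {
        samples.Add(new Sample { time = time, velocity = velocity });
        if (samples.Count > Window) samples.RemoveAt(0);
    }

    // Velocity predicted by the trendline at time t (e.g. the moment of release).
    public Vector3 Estimate(float t)
    {
        int n = samples.Count;
        if (n == 0) return Vector3.zero;

        float sumT = 0f, sumTT = 0f;
        Vector3 sumV = Vector3.zero, sumTV = Vector3.zero;
        foreach (Sample s in samples)
        {
            sumT  += s.time;
            sumTT += s.time * s.time;
            sumV  += s.velocity;
            sumTV += s.time * s.velocity;
        }

        float denom = n * sumTT - sumT * sumT;
        if (n < 2 || Mathf.Approximately(denom, 0f))
            return sumV / n;                  // not enough spread: fall back to the mean

        // Per-component least-squares line v(t) = intercept + slope * t.
        Vector3 slope     = (n * sumTV - sumT * sumV) / denom;
        Vector3 intercept = (sumV - slope * sumT) / n;
        return intercept + slope * t;
    }
}
```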

Debug velocity visualization showing the last four frames of measured velocity in red, and the regression result the game uses in yellow.

How I improved throwing in Rescuties (tl;dr)

  • Measure throwing velocity from where the user actually feels the center of mass — i.e., the controller.
  • Detect throwing at the precise moment the user intends to throw — i.e., on the fractional release of the trigger.
  • Make the most of the velocity data you’ve measured — take a regression for a better estimate of what the player intends.

If you’re curious about testing these approaches and/or challenging them, there’s a “Labs” menu in Rescuties where you can toggle these various throwing modes on and off, switch how velocity is measured, and control how many frames of measurements are used in the regression/smoothing.

This problem is by no means solved. There’s a remarkable diversity among VR games in how throwing feels, and their respective players have been happy enough.  When it came to throwing babies in Rescuties, I wanted to make sure the physical expectation of our muscle memory matched the virtual reality of the arcing infants as best I could.  It’s better than when I started — but I’d like to make it even better.

So suggestions and criticisms of the above approach are welcome: you can always hit me @bigblueboo or at cj@modeofx.com.

blog.Awake()

A quick intro to start things off —

My name is Charlie.  I’ve been independently making apps, games, and software toys for almost ten years.

Most recently, as of this post, I made a VR game about rescuing babies, Rescuties VR.

In 2015 I made a mobile game about learning to read and write Chinese, Japanese, Korean, or Hebrew.  It’s called Word Fireworks and you can get it on Android and iOS.

I’ve also made a lot of looping gifs, either in code with Processing or with Cinema 4D.  I made one a day at bigblueboo.tumblr.com.  I made it just past a thousand before taking a break.

I enjoyed sharing the gifs so much that I’ve decided to start sharing some of the ideas and techniques I use, both from the gif stuff and the software I’ve made.  I may try some of the other various platforms, but at the very least this blog is where I will collate all the posts I write.

Enjoy + feedback is always welcome.