Random Thought 0001 – Mobile Gaming

Being a recent Apple convert, I’m still coming to terms with the idea that gaming on a Mac isn’t quite what it used to be in the Windows world. I’ve spent so many hours in the past on Steam – including with a considerable number of independent games that are truly Windows exclusives – and this makes it painful when I open Steam on my Mac only to see that maybe 10% of my library is available. As discouraging as this is, I’ve come to consider it a tempering of sorts. With my family growing in size, any time I can dedicate to gaming is fast dwindling. So I’ve been spending more time gaming on my iPad, which has proven a rather stalwart companion in this regard. One that I’m quickly growing rather fond of.

None of that is to say that Apple doesn’t have good games at its disposal. But if you’re a classic gamer such as myself who desires a specific set of games, you need to be prepared to walk away disappointed most of the time. Apple Arcade is a pretty good service for those who are willing to pay for it and are invested in the Apple ecosystem. There are some gems on there, and the fact that you can seamlessly transition from iPhone to iPad to Mac and – even in some cases – Apple TV makes this a worthwhile service in itself. However, the severe drought of Triple-A titles and the overwhelming sense that most of these games were created by frustrated art students leaves an uncomfortable itch in your mind. I can say with honesty that there have only been maybe four games in the total lot that I’ve been interested in, and even in that set only two that I’ve been able to spend any serious time with: Fantasian and Shantae and the Seven Sirens. I’ve long been bored with the traditional mobile gaming experience, and most of these games trend toward that scheme: controls that a toddler can master with ease, and shallow gameplay that either can’t escape the confines of said control scheme or requires a micro-transaction to expand upon them.

All these years later, and to my amazement, some of the publishers that I’ve come to know and love on consoles and PC have started to put some of their greatest titles on the mobile platforms. I had fair reservations when hearing that Square Enix would be releasing Chrono Trigger on Android, but there it was right in the Play Store. Konami has Castlevania: SOTN up there. And these titles are available in the App Store too. Several years ago, when I was still playing largely in the Android space, I ponied up for Secret of Mana, one of my all-time favorite games. Thinking I’d stumbled upon a way to relive my childhood with this amazing title, I was soon completely and utterly disheartened. This was my first experience with on-screen digital controls, and man, did they fail to map onto the clean experience that the analog version once had. A game like Secret of Mana requires a certain amount of precision in its controls to truly enjoy, and these touchscreen controls failed in every way to live up to that basic expectation. Map navigation felt slippery, and handling platforming sections was dubious at best, especially when trying to avoid floor hazards. Combat required an entirely new approach. Instead of advancing confidently, either with gusto or tactically, each action now started with reservation. Not because the opponent demanded a change in default tactics, but because there was a very real chance that you’d miss your target because the controls were grossly inaccurate. And this says nothing about the action buttons. Because you get no feedback from the “buttons” on a touchscreen, I’ve oftentimes found my right thumb drifting away from where it should be to press a button. In the middle of combat, I’ll be frantically tapping on the screen thinking I should be executing an action, only to see my avatar doing nothing but standing idle, getting clobbered. It’s only when I’m certain that the game is ignoring me that I realize I’ve been tapping in the void this whole time. It’s one thing to feel like an idiot because you made the wrong move or poorly planned for the next section. It’s an entirely different thing to know that you never stood a chance because you tapped in the wrong place.

Shortly after jumping into the “Walled Garden,” I learned that the iPad could support pairing with either an Xbox or PlayStation 4 controller. Not only that, but some games actually support in part, or outright require, a paired controller to play. Finally! I thought. This is how games are meant to be played! A controller in hand. Not some goofy DPI-scaled tap range with no way to tell you “You missed that button!” This is when I started diving deep into Shantae and the Seven Sirens. It’s also when I learned to love the ability to transition between my several Apple devices for a truly seamless gaming experience. At home, I could pair my Xbox controller with my Apple TV and play Shantae on the big screen. With the data always synchronizing to the cloud, I could take my iPad with me to work, pair my Xbox controller to it, and play during my lunch break, picking up from the exact place I left off at home on the Apple TV. And if that weren’t enough, if I wanted to cut loose and play a little on my Mac while taking a break, I could easily pair my Xbox controller to it and play Shantae there with the same data availability as before. WayForward seriously hit the nail on the head with this title in Apple Arcade. They took advantage of all the technologies Apple has to offer and made one hell of an experience. It made me realize that this is what Triple-A gaming should be like, if not everywhere in principle, then at least on the Apple platform. My games should be playable on all the devices I own, especially the Apple TV; should support a controller as either an option or a strict requirement; and, if a multiplayer mode is available, should certainly have an online option but also support a local option, reminiscent of the days of yore when two or more people huddled around the TV, each with a controller.

To my dismay, it seems like many game developers and publishers are missing this aspect horrendously, and it bothers me. First of all, there’s incredibly limited support for Apple TV versions of games, and I’m not quite sure why. I’ve recently started developing on Apple platforms, but have been doing so with SpriteKit and Metal – and maybe this is the reason why – and my demos seem to work quite well on any device I deploy them to. Further, Square Enix didn’t seem to have any issues making Chrono Trigger work on the Apple TV as well as the iPad and iPhone (though oddly enough, no Mac). So why does the Apple TV get neglected so much? I just recently purchased Trials of Mana for iOS, and not only does it not support a controller, it also doesn’t support Apple TV. Why? It’s unplayable on my iPhone and barely passable on my iPad. The whole experience would be so much better if a controller were supported and the Apple TV were a viable host to play on.

Second, and I’ve already hinted at this, is the lack of controller support out of the gate. As I mentioned previously, the newly released Trials of Mana for iOS doesn’t currently have controller support, meaning you’re relying entirely on touchscreen controls. For a game of this sort, you absolutely need a controller. I have plenty of recorded video where moving the camera with my right thumb is needlessly complicated and cumbersome because there’s no way to wrap the action. Genshin Impact, which I absolutely love, didn’t launch with controller support either; it only came to iOS several updates after launch. That game features fast-paced, high-adrenaline action, and you’re supposed to handle that with touchscreen controls? One game that had it right, I feel, was Call of Duty Mobile. It had touchscreen controls to be sure, and they’re pretty solid if still a little clumsy, but it also supported controllers right out of the gate, changing the whole experience for the better. The thing that baffles me is that you have publishers and developers whose lifespans are well older than these mobile platforms, meaning that within their veins flows the blood of a controller-based interaction scheme, and yet when porting their legacy titles to newer platforms, it’s almost as if they’ve forgotten completely about having a controller in the mix.

Third, and finally, why does it seem like most of these game publishers and developers can’t handle cloud-synchronized data? This is one of the aspects of Apple Arcade that I love the most, especially because transitioning between devices is a necessity. The landscape seems to be a bit of a mess still. A few games I own flat out don’t support synchronization of any kind, requiring you to manually back the files up to your computer and move them to another device – or back onto the same one should you need to DR the thing. Others will only synchronize with a third party like Facebook, usually the result of not supporting synchronization in the first place and then tacking it on several updates down the line (and who the hell wants to use Facebook these days?). Bizarrely, some games do support synchronization, but it’s a manual action relegated to the title screen, which is inaccessible in-game; you need to close the app and reopen it to have the option to trigger the action.

Even as my relationship with gaming metamorphoses from a serious one to a casual one, I still feel a need to demand a bit of quality out of the games I play. This is doubly true of publishers I have a tremendous amount of history with and of titles that I have respect for. These recent ports to mobile platforms don’t seem to do them justice, whether the justification is preservation for a new era of gamers or a quick cash grab. If the latter, at least make it seem like you’re making an honest effort, especially when you’re selling a game whose source material is at or over twenty years old. I’m starting to buy into the memes that publishers and developers can grow tone-deaf to their audience, and images of the infamous Blizzard Diablo Immortal fail-conference creep in. I truly don’t mind paying a premium for software, especially since I know what it’s like to try to live off $1 per download for an app you spent months of man-hours creating. But is it too much to ask for a little more care and attention to your products?

Quick C++/SFML Tips

While I’m writing a series on working with SFML and C++, I thought I’d share some quick-and-dirty tips for working with SFML that I’ve run into lately. Some of these emerged while branching out into development contexts I’m normally not entrenched in – so you’ll forgive me if they seem axiomatic to you – and others simply failed to make the transition from mind to paper (or screen, in this case).

Getting Started with SFML and Visual Studio

It’s evident after seeing some posts on the SFML forums that people don’t RTFM. TL;DR isn’t a thing to worry about here, so be sure to check out the page linked below. Visual Studio doesn’t require counter-intuitive thought concerning environment configurations – a compiler is a compiler – but the way one configures the compiler is measurably convoluted, especially if you’re used to programming in the Linux world. These steps are also valid if you’re considering creating a DLL to leverage shared code.

SFML on Visual Studio – https://www.sfml-dev.org/tutorials/2.5/start-vc.php
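Once the include and library paths are in place, a quick smoke test in the spirit of the linked tutorial’s green-circle demo will confirm that both are being found:

#include <SFML/Graphics.hpp>

int main ()
{
    // If this compiles, links, and shows a green circle, the include and
    // library paths are wired up correctly.
    sf::RenderWindow window (sf::VideoMode (640, 480), "SFML works!");
    sf::CircleShape shape (100.f);
    shape.setFillColor (sf::Color::Green);

    while (window.isOpen ()) {
        sf::Event event;
        while (window.pollEvent (event)) {
            if (sf::Event::Closed == event.type)
                window.close ();
        }
        window.clear ();
        window.draw (shape);
        window.display ();
    }
    return 0;
}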

DLL Woes

Speaking of creating DLLs, there’s a nasty little caveat with the default Windows header file. Evidently, the min and max macros it defines are grossly incompatible with the min and max function templates in the STL. While not an SFML issue per se, it’s important to be aware of because it’ll likely creep in when you least expect it, and trying to determine the root cause from the output of the compiler is going to require several witchdoctors and an irrefutable, globally-accepted proof of String Theory. The red herring for this typically comes in the form of error C2589: ‘(‘ illegal token on right side of ‘::’ (a.k.a. the go-f-yourself error).

The fix for this is the NOMINMAX preprocessor definition. You can either add it as a file-level define at the head of the file (before Windows.h is included), or you can use the Project Properties dialog and add it to All Configurations and All Platforms by navigating to C/C++->Preprocessor and appending NOMINMAX to the Preprocessor Definitions field. If you ever come back to this dialog to ensure that the value was set, you’ll need to drill down into each configuration and platform to see that the value was applied.
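If you opt for the file-level route, the define has to land before Windows.h is pulled in; a minimal sketch:

#define NOMINMAX // must precede the first inclusion of Windows.h
#include <Windows.h>

#include <algorithm>

int clampToByte (int value)
{
    // With NOMINMAX set, these resolve to the STL templates, not the macros.
    return std::max (0, std::min (value, 255));
}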

Deleted Copy Constructors, sf::NonCopyable

A core component of a game engine that I wrote has an Asset Manager that’s very similar to the one used in MonoGame, except that it doesn’t use the Pipeline concept. Assets are loaded into memory via PhysicsFS, and they’re translated into SFML Asset Constructs that are stored in an STL Container, specifically std::unordered_map. Some SFML Asset Constructs, specifically sf::Music, inherit from classes that leverage sf::Thread, and, of crucial note, sf::Thread inherits from sf::NonCopyable. While this utility class doesn’t explicitly delete the copy constructor and assignment operator, it marks them as private. Children of this class will, if you’re using C++11 or greater, likely have these functions implicitly deleted, since the inaccessible base versions make the defaults invalid. In the absence of STL Containers, this isn’t too much of an issue, especially since attempts at copies or assignments would result from explicit statements that you yourself wrote. When STL Containers are around and you encounter an error from implicitly deleted function calls, we’ve traipsed into another arena where compiler output is infamously horrid to the degree of being near useless.

To give my exposition some concrete footing, the offending statement was this:

...
typedef sf::Music sfmusic;
typedef std::unordered_map < std::string, sfmusic > ab_bgm;
...

std::unordered_map leverages std::pair to join the key to the value, and while I haven’t been able to dissect the issue much deeper than this, any operation that copies or assigns that pair must copy or assign the contained value as well. Because there is no accessible copy constructor or assignment operator for an object that inherits from sf::Thread, and because the container is attempting to leverage one of those functions in some way, the compiler is going to throw up in the most flamboyant of ways.

Although what follows is likely not the cleanest or most efficient way to mitigate this, I’ve found that it works. For starters, the declaration changes slightly:

...
typedef sf::Music sfmusic;
typedef std::unordered_map < std::string, sfmusic* > ab_bgm;
...

Next, the member function of the Asset Manager that is responsible for copying the asset data from raw bytes into live SFML Asset Constructs takes an extra step of manually allocating the memory for it before using the sf::Music openFromMemory function:

...
case targetloader::bgm:
    bgmb [file] = new sfmusic ();
    bgmb [file]->openFromMemory (d, f.length ());
...

Of course, because we’re now wandering down the path of explicit memory allocation, we’ve got to be responsible for cleaning it up, so the intermediate destructor does some work to delete the allocations in this bank, if there were any, before removing the bank itself.
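To sketch that cleanup concretely – the class name AssetManager here is hypothetical, and this assumes the bank is a member named bgmb as in the snippets above:

AssetManager::~AssetManager ()
{
    // Free every manually-allocated sf::Music before the map itself goes away.
    for (auto& entry : bgmb) {
        delete entry.second;
    }
    bgmb.clear ();
}

Swapping the raw pointer in the typedef for std::unique_ptr<sfmusic> would make this cleanup automatic, though the allocation sites would need adjusting to match.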


MonoGame – Working with Touch

*NOTE – This may work on other touch-capable platforms as well, but it hasn’t been tested on anything other than Android at the moment.

One fundamental aspect to understand about touch and gestures is that the code needs to be structured around the idea of continuous input. Thus, any touch recognition code should be encapsulated somewhere within the main game loop; where exactly depends on how one elects to parse input. The reason for this is that all touch processing is, by nature, a sequence of touch events fired in succession; there are only a few exceptions to this rule.

The other aspect to understand clearly is nomenclature mapping. What you think you want to achieve in your design may, unless you’re already familiar with either the MonoGame or Xamarin frameworks, not necessarily be what those frameworks call it in their API parlance. So be prepared to step outside your normal wheelhouse and, perhaps, discard some long-standing assertions about what a touch or gesture really is.

First, a decision needs to be made as to whether you’re going to handle touches or gestures – don’t get too caught up on thinking that a touch is a single press, because this will cause you some unnecessary grief later. If your program ebbs toward the former, there’s usually no requisite configuration needed, other than to check for the existence of a touch-capable input device and adjust your engine accordingly (the only other exception here would be if one wishes to perform touch-to-cursor mappings, but that is outside the immediate scope of this article). Conversely, gesture handling requires some configuration before it can be used, and this is where the MonoGame documentation, as well as a majority of the information available on various forums, falls fatally short. We will focus on this once we start addressing gesture handling, which follows a look at touch handling.

Checking for the existence of hardware with touch capabilities can be done by querying the TouchPanelCapabilities object, retrieved from the TouchPanel static object as such:

TouchPanel.GetCapabilities ().IsConnected;

The IsConnected property is a boolean that indicates the presence of a touch-capable device. Obviously, the absence of one suggests either that the device is intermittently available, or that other sources of input are necessary. In this case, neither the touch nor gesture APIs would be valid.
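As a small sketch of gating input handling on that check (the fallback branch is an assumption about your engine’s design, not anything MonoGame mandates):

// Query once during initialisation; re-query if devices can come and go.
TouchPanelCapabilities caps = TouchPanel.GetCapabilities ();
if (caps.IsConnected) {
 // Safe to poll TouchPanel.GetState () and, later, read gestures.
} else {
 // Fall back to gamepad or keyboard input instead.
}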

Concerning touches, the current state of the touch panel holds a collection, called TouchCollection, of TouchLocation objects. Each of these corresponds to a single touch event generated by the user when a press is determined to have occurred on the sensitive surface. Multiple points of contact yield multiple TouchLocation instances within the TouchCollection: one finger generates one instance, two fingers generate two instances, and so on. (This is best observed by debugging a program that uses this technique and paying attention to the Id property of the TouchLocation instances.) In order to properly ascertain the most accurate state of the touch panel, we’ll need to do the following:

  1. Obtain the current state of the touch panel
  2. Determine if there are any TouchLocation instances in the TouchCollection collection
  3. If there are, take relative action based on any of the properties of the current TouchLocation instance

This can be accomplished using the following general template:

TouchCollection tc = TouchPanel.GetState ();
foreach (TouchLocation tl in tc) {
 // Do something here
}

Fortunately, TouchLocation instances are very simple to use. There are only four properties of any practical significance, and one utility function for trying to determine the previous location (which, at least during my testing, wasn’t useful for anything). They are as follows:

  • Id – An Integer that uniquely identifies a single TouchLocation instance within a TouchCollection. It’s unknown how its potential values are determined, or if orphaned values are subject to recycling after the host TouchLocation has fallen out of scope.
  • Position – A Vector2 that provides the X and Y coordinate serving as the source for the event. Effectively, this is the location on the sensitive surface where contact was made.
  • Pressure – A floating-point number indicating the pressure of the touch. I’m unclear on how exactly this works, since my tests always reported a zero in this property. The only conclusions I can come up with here are either that my device doesn’t support touch sensitivity of this kind, or I missed a configuration step to enable this functionality.
  • State – An instance of TouchLocationState that determines what kind of touch we’re dealing with. This property can have one of four values:
    • Invalid – The press has been somehow deemed invalid; I’ve never seen this occur in testing, and am left to think that either I failed to satisfy conditions to make this occur, or that it’s simply a dump case for an exceptional condition.
    • Moved – Seen when a press has either remained in the same position or has moved. This is a very important state as it has a great deal of practical application, so do try to keep it in mind.
    • Pressed – Observed when a point of contact on the sensitive surface is newly minted. This is only fired once, and will not be seen again, regardless of whether the contact remains afterward. This could potentially be a source of confusion for a number of programmers.
    • Released – Likely the last state that a TouchLocation would be in before falling out of scope, it will be fired when the source of contact which generated this particular instance is no longer available.

Having said all of that, you now have enough information to properly utilise touches within your programs. As was stated before, simply ensure that your code is contained somewhere within the main game loop, since the data will likely change every frame, or at least as often as it can. Examples implementing this logic are illustrated at the end of this discussion.

Gestures, insofar as MonoGame is concerned, are a bit of a hassle to work with, mostly due to the lack of upstream documentation. We will seek to correct this imbalance, if not for the official documentation, then at least for programmers who wish to learn more about them.

As was stated previously, although MonoGame performs a considerable amount of work to make the touch and gesture API available to programs deployed on capable hardware, gestures are left a bit open-ended compared to their touch counterparts. What this means is that as a programmer, you’re required to provide some configuration before the gesture API can be utilised. Here, we assume that you’re only interested in gestures, and not touch-to-cursor mappings, hence we will only discuss the former.

Before proceeding, some basic information should be given as to the nature of gestures, and how they’re procedurally handled.

Although there is some very minor overlap in how a touch and a gesture are expressed, they are two discrete entities. Gestures can be composed of one or more points of contact, and it’s expected that the locations of these contacts will change, in a particular way, over an indeterminate amount of time (ergonomic measurements would likely dispute this generalisation, but it is, for this conversation, a generalisation and not a scientific claim). The ways in which these contacts change, or at least the resultant shape the change yields, as well as the number of contacts involved in the measurement, hints at the particular kind of gesture. In other words, a single point of contact with a gradual change along its X-axis, positive or negative, yielding a ray (or a series of small rays), is generally considered a horizontal drag gesture. Applying the same principle to the Y-axis instead, we find ourselves dealing with a vertical drag gesture. Pinch and zoom gestures typically involve two or more points of contact that move nearly in concert away from or toward each other along the same logical axis. Perhaps paradoxically, at least when contrasting touches and gestures, taps, double taps, and long-presses are registered as gestures as well; these are more concerned with the sustainment of a single point of contact relative to the time when it was first recognised.

From a stock perspective, MonoGame provides eleven types of gestures, referred to as GestureTypes. These types effectively determine how gesture detection is performed (it’s unclear if the GestureType framework can be extended to facilitate custom gesture types, but that is a considerably advanced topic which will not be discussed here). However, MonoGame will not automatically read the touch panel for gestural input. Instead, it needs to be instructed on which kinds of gestures to detect, and this instruction is provided by the programmer. In any non-looping block of code, preferably during the initialisation routines, you’ll need to set the EnabledGestures property of the TouchPanel static object. Multiple gestures can be configured by OR’ing one or more of the types together in the assignment statement. For example, if I wanted to parse for both a HorizontalDrag and a DragComplete gesture, I would write the following statement:

TouchPanel.EnabledGestures = GestureType.HorizontalDrag | GestureType.DragComplete;

Once this is complete, you’ll have done enough to get Monogame to start playing nice with at least these two kinds.

Parsing gestural input is, in essence, no different from parsing touch input, but there are some minor differences to the process. To start, we must first determine whether there are any gestures to read data from; if we don’t, attempts to read directly from the gesture store will generate fatal exceptions. Fortunately, the TouchPanel static object provides a boolean property called IsGestureAvailable, which informs clients of the availability of queued gesture data. If we have data, we read it into a sample, packaged as a GestureSample instance, via TouchPanel.ReadGesture. As with the TouchLocation object, the GestureSample object contains several properties of practical interest to the programmer, especially when making contextual decisions that respond to this kind of input. GestureSamples include the following properties:

  • Delta – A Vector2 instance which provides the delta, or change in position, of the first touch point in the gesture. This will change over time as the gesture progresses.
  • Position – A Vector2 instance which contains the current coordinates of the first touch point in the gesture.
  • Timestamp – A TimeSpan instance that indicates the time when the gesture was first recognised.
  • GestureType – A GestureType instance that indicates what type of gesture was determined based off several criteria.

Additionally, GestureSample contains properties called Delta2 and Position2, which are used to track a second point of contact measured as part of the current gesture. What this implies is that, insofar as the stock gestures are concerned, MonoGame will only handle gestures involving no more than two points of contact; a pinch, sketched below, is the canonical two-contact example.
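To make the two-contact case concrete, here’s a hedged sketch of reading a pinch. It assumes GestureType.Pinch was enabled during initialisation, and backing out the deltas to recover the previous positions is one common idiom rather than the only approach:

while (TouchPanel.IsGestureAvailable) {
 GestureSample gs = TouchPanel.ReadGesture ();
 if (GestureType.Pinch == gs.GestureType) {
  // Recover each contact's previous position by backing out its delta.
  Vector2 previousA = gs.Position - gs.Delta;
  Vector2 previousB = gs.Position2 - gs.Delta2;
  float previousDistance = Vector2.Distance (previousA, previousB);
  float currentDistance = Vector2.Distance (gs.Position, gs.Position2);
  if (previousDistance > 0f) {
   float scale = currentDistance / previousDistance; // > 1 spreads, < 1 pinches
   // Apply scale to a camera zoom or similar here.
  }
 }
}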

My advice here is to experiment with the data through debugging until you’re comfortable with how these gestures are read, because there are some nuances in how the data polls for different gesture kinds. For example, a HorizontalDrag gesture will, while the drag is occurring, constantly emit the HorizontalDrag signal until the contact source is released, terminating the gesture. At that point, if one is also checking for the DragComplete signal, releasing the contact source will cause the touch panel to emit the DragComplete signal, as sketched below.
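A minimal sketch of that hand-off, assuming both types were enabled as in the earlier assignment statement:

while (TouchPanel.IsGestureAvailable) {
 GestureSample gs = TouchPanel.ReadGesture ();
 switch (gs.GestureType) {
  case GestureType.HorizontalDrag:
   // Emitted continuously while the drag is in motion.
   break;
  case GestureType.DragComplete:
   // Emitted once, after the contact source is released.
   break;
 }
}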

Examples:

To determine if a single press has been made:

TouchCollection tc = TouchPanel.GetState ();
foreach (TouchLocation tl in tc) {
 if (TouchLocationState.Pressed == tl.State) {
  // Execute your domain-specific code here
 }
}

To determine if a press was made and is still being held (note that the Moved state also fires while the contact is moving):

TouchCollection tc = TouchPanel.GetState ();
foreach (TouchLocation tl in tc) {
 if (TouchLocationState.Moved == tl.State) {
  // Execute your domain-specific code here
 }
}

To track the position of a horizontal drag:

(1) During game initialisation:

TouchPanel.EnabledGestures = GestureType.HorizontalDrag;

(2) During game loop:

while (TouchPanel.IsGestureAvailable) {
 GestureSample gs = TouchPanel.ReadGesture ();
 if (GestureType.HorizontalDrag == gs.GestureType) {
  // gs.Position holds the drag's current coordinates, and gs.Delta its
  // change in position. Execute your domain-specific code here.
 }
}