Quick C++/SFML Tips

While I’m writing a series on working with SFML and C++, I thought I’d share some quick and dirty tips for working with SFML that I’ve picked up lately. Some of these emerged while branching out into development contexts I’m normally not entrenched in – so you’ll forgive me if they seem axiomatic to you – and others simply failed to make the transition from mind to paper (or screen, in this case).

Getting Started with SFML and Visual Studio

It’s evident from some posts on the SFML forums that people don’t RTFM. TL;DR isn’t a thing to worry about here, so be sure to check out the page linked below. Visual Studio doesn’t require counter-intuitive thought concerning environment configuration – a compiler is a compiler – but the way one configures the compiler is measurably convoluted, especially if you’re used to programming in the Linux world. These steps are also valid if you’re considering creating a DLL to leverage shared code.

SFML on Visual Studio – https://www.sfml-dev.org/tutorials/2.5/start-vc.php

DLL Woes

Speaking of creating DLLs, there’s a nasty little caveat with the default Windows header. The min and max macros it defines are grossly incompatible with the std::min and std::max function templates in the STL. While not an SFML issue per se, it’s important to be aware of because it’ll likely creep in when you least expect it, and trying to determine the root cause from the output of the compiler is going to require several witchdoctors and an irrefutable, globally-accepted proof of String Theory. The red herring for this typically comes in the form of error C2589: '(' : illegal token on right side of '::' (a.k.a. the go-f-yourself error).

The fix for this is the NOMINMAX preprocessor definition. You can either add it as a file-level define at the head of the file, or you can use the Project Properties dialog and add it to All Configurations and All Platforms by navigating to C/C++->Preprocessor and appending NOMINMAX to the Preprocessor Definitions field. If you ever come back to this dialog to ensure that the value was set, you’ll need to drill down into each configuration and platform to see that the value was applied.
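If you opt for the file-level define, it must appear before any include that pulls in the Windows header, otherwise the macros will already be in place. A minimal sketch:

#define NOMINMAX            // suppress the min/max macros in the Windows header
#include <windows.h>
#include <algorithm>

int smaller = std::min (3, 7); // now resolves to the STL template as expected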

Deleted Copy Constructors, sf::NonCopyable

A core component of a game engine that I wrote has an Asset Manager that’s very similar to the one used in MonoGame, except that it doesn’t use the Pipeline concept. Assets are loaded into memory via PhysicsFS, and they’re translated into SFML Asset Constructs that are stored in an STL Container, specifically std::unordered_map. Some SFML Asset Constructs, specifically sf::Music, inherit from classes that leverage sf::Thread, and, of crucial note, sf::Thread inherits from sf::NonCopyable. While this utility class doesn’t explicitly delete the copy constructor and assignment operator, it marks them as private. Under C++11 or greater, classes that derive from or contain sf::NonCopyable types will have their own copy operations implicitly deleted, since the inherited ones are inaccessible. In the absence of STL Containers, this isn’t too much of an issue, especially since attempts at copies or assignments would result from explicit statements that you yourself wrote. When STL Containers are around and you encounter an error from implicitly deleted functions, we’ve traipsed into another arena where compiler output is infamously horrid to the degree of being near useless.

To make this concrete, the offending statement was this:

...
typedef sf::Music sfmusic;
typedef std::unordered_map < std::string, sfmusic > ab_bgm;
...

std::unordered_map leverages std::pair to join the key to the value, and while I haven’t been able to dissect the issue deeper than this, it would appear that std::pair subsumes the lifecycle of the objects it contains. Because there is no accessible copy constructor or assignment operator for an object that contains an sf::Thread, and because std::pair attempts to leverage one of those functions in some way, the compiler is going to throw up in the most flamboyant of ways.

Although what’s next is likely not the cleanest or most efficient way to mitigate this, I’ve found that it works. For starters, the declaration changes slightly, storing pointers rather than the objects themselves:

...
typedef sf::Music sfmusic;
typedef std::unordered_map < std::string, sfmusic* > ab_bgm;
...

Next, the member function of the Asset Manager that is responsible for copying the asset data from raw bytes into live SFML Asset Constructs takes an extra step of manually allocating the memory before calling the sf::Music openFromMemory function:

...
case targetloader::bgm:
    // Allocate manually; the container now only stores the pointer
    bgmb [file] = new sfmusic ();
    // d holds the raw bytes read from the archive; the buffer must outlive
    // the sf::Music, which streams from it on demand
    bgmb [file]->openFromMemory (d, f.length ());
...

Of course, because we’re now wandering down the path of explicit memory allocation, we’ve got to be responsible for cleaning it up, so the intermediate destructor does some work to delete the allocations in this bank, if there were any, before removing the bank itself.
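A minimal sketch of that cleanup, assuming the bank is a member of the Asset Manager (the class and member names here are illustrative, not lifted verbatim from my engine):

...
assetmanager::~assetmanager () {
    for (auto& entry : bgmb)
        delete entry.second; // free each manually allocated sfmusic
    bgmb.clear ();
}
...

Alternatively, declaring the mapped type as std::unique_ptr<sfmusic> would buy the same lifetime management without the manual deletes.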


MonoGame – Working with Touch

*NOTE – This may work on other platforms with touch as well, but it hasn’t been tested on anything other than Android at the moment.

One fundamental aspect to understand about touch and gestures is that the code needs to be structured around the idea of continuous input. Thus, any touch recognition code should be encapsulated somewhere within the main game loop; exactly where depends upon how one elects to parse input. The reason for this is that all touch processing is, by nature, a sequence of touch events that fire in succession; there are only a few exceptions to this rule.

The other aspect to understand clearly is nomenclature mapping. What you think you want to achieve in your design may, unless you’re already familiar with either the MonoGame or Xamarin frameworks, not necessarily be what either of those frameworks calls it in its API parlance. So be prepared to step outside your normal wheelhouse and, perhaps, discard some long-standing assertions about what a touch or gesture really is.

First, you need to consider whether you’re going to handle touches or gestures – don’t get too caught up on thinking that a touch is a single press, because this will cause you some unnecessary grief later. If your program ebbs toward the former, there’s usually no requisite configuration needed, other than to check for the existence of a touch-capable input device and adjust your engine accordingly (the only other exception here would be if one wishes to perform touch-to-cursor mappings, but this is outside the immediate scope of this article). Conversely, gesture handling requires some configuration before it can be used, and this is where the MonoGame documentation, as well as a majority of the information available on various forums, falls fatally short. We will focus on this once we start addressing gesture handling, which follows a look at touch handling.

Checking for the existence of hardware with touch capabilities can be queried through the TouchPanelCapabilities object, triaged through the TouchPanel static object as such:

TouchPanel.GetCapabilities ().IsConnected;

The IsConnected property is a boolean that indicates the presence of a touch-capable device. The absence of one suggests either that the device is only intermittently available or that other sources of input are necessary. In either case, neither the touch nor gesture APIs would be valid.

Concerning touches, the current state of the touch panel holds a collection, called TouchCollection, of TouchLocation objects. Each of these corresponds to a single touch event generated by the user when a press is determined to have occurred on the sensitive surface. Multiple points of contact yield multiple TouchLocation instances within the TouchCollection – one finger generates one instance, two fingers generate two instances, and so on. This can best be observed by debugging a program that uses this technique and paying attention to the Id property of the TouchLocation instance(s). In order to properly ascertain the most accurate state of the touch panel, we’ll need to do the following:

  1. Obtain the current state of the touch panel
  2. Determine if there are any TouchLocation instances in the TouchCollection collection
  3. If there are, take relative action based on any of the properties of the current TouchLocation instance

This can be accomplished using the following general template:

TouchCollection tc = TouchPanel.GetState ();
foreach (TouchLocation tl in tc) {
 // Do something here
}

Fortunately, TouchLocation instances are very simple to use. There are only four properties of any practical significance, and one utility function for trying to determine the previous location (which, at least during my testing, wasn’t useful for anything). They are as follows:

  • Id – An Integer that uniquely identifies a single TouchLocation instance within a TouchCollection. It’s unknown how its potential values are determined, or if orphaned values are subject to recycling after the host TouchLocation has fallen out of scope.
  • Position – A Vector2 that provides the X and Y coordinate serving as the source for the event. Effectively, this is the location on the sensitive surface where contact was made.
  • Pressure – A floating-point number indicating the pressure of the touch. I’m unclear on how exactly this works, since my tests always reported a zero in this property. The only conclusions I can come up with here are either that my device doesn’t support touch sensitivity of this kind, or I missed a configuration step to enable this functionality.
  • State – An instance of TouchLocationState that determines what kind of touch we’re dealing with. This property can have one of four values:
    • Invalid – The press has been somehow deemed invalid; I’ve never seen this occur in testing, and am left to think that either I failed to satisfy conditions to make this occur, or that it’s simply a dump case for an exceptional condition.
    • Moved – Seen when a press has either remained in the same position or has moved. This is a very important state as it has a great deal of practical application, so do try to keep it in mind.
    • Pressed – Observed when a point of contact on the sensitive surface is newly minted. This is only fired once, and will not be seen again, regardless of whether the contact remains afterward. This could potentially be a source of confusion for a number of programmers.
    • Released – Likely the last state that a TouchLocation would be in before falling out of scope, it will be fired when the source of contact which generated this particular instance is no longer available.

Having said all of that, you now have enough information to properly utilise touches within your programs. As was stated before, simply ensure that your code is contained within the main game loop somewhere, since the data will likely change per frame, or at least as often as it can. An example of how to implement code based off this logic will be illustrated at the end of this discussion.

Gestures, insofar as MonoGame is concerned, are a bit of a hassle to work with, mostly due to the lack of upstream documentation. We will seek to correct this imbalance, if not for the official documentation, then at least for programmers who wish to learn more about them.

As was stated previously, although MonoGame performs a considerable amount of work to make the touch and gesture API available to programs deployed on capable hardware, gestures are left a bit open-ended compared to their touch counterparts. What this means is that as a programmer, you’re required to provide some configuration before the gesture API can be utilised. Here, we assume that you’re only interested in gestures, and not touch-to-cursor mappings, hence we will only discuss the former.

Before proceeding, some basic information should be given as to the nature of gestures, and how they’re procedurally handled.

Although there is some very minor overlap in how a touch and a gesture are expressed, they are two discrete entities. Gestures can be composed of one or more points of contact, and it’s expected that the locations of these contacts will change, in a particular way, over an indeterminate amount of time (ergonomic measurements would likely dispute this generalisation, but it is, for this conversation, a generalisation and not a scientific claim). The ways in which these contacts change, or at least the resultant shape the change yields, as well as the number of contacts involved in the measurement, hints at the particular kind of gesture. In other words, a single point of contact with a gradual change along its X-axis, positive or negative, yielding a ray (or a series of small rays), is generally considered a horizontal drag gesture. Apply the same principle to the Y-axis instead, and we’re dealing with a vertical drag gesture. Pinch and zoom gestures typically involve two or more points of contact that move almost concertedly away from or toward each other along the same logical axis. Perhaps paradoxically, at least when contrasting touches with gestures, taps, double taps, and long presses are registered as gestures as well; these are more concerned with the sustainment of a single point of contact relative to the time from when it was first recognised.

From a stock perspective, MonoGame provides eleven types of gestures, referred to as GestureTypes. These types effectively determine how gesture detection is performed (it’s unclear if the GestureType framework can be extended to facilitate custom gesture types, but that is a considerably advanced topic which will not be discussed here). However, MonoGame will not automatically read the touch panel for gestural input. Instead, it needs to be instructed on which kinds of gestures to detect, and this instruction is provided by the programmer. In any non-looping block of code, preferably during the initialisation routines, you’ll need to assign what are called EnabledGestures, a property of the TouchPanel static object. Multiple gestures can be configured by OR’ing one or more of the types together in the assignment statement. For example, if I wanted to parse for both a HorizontalDrag and a DragComplete gesture, I would write the following statement:

TouchPanel.EnabledGestures = GestureType.HorizontalDrag | GestureType.DragComplete;

Once this is complete, you’ll have done enough to get MonoGame to start playing nice with at least these two kinds of gestures.

Parsing gestural input is, in essence, no different than parsing touch input, but there are some minor differences in the process. To start, we must first determine if there are any gestures to read data from. If we do not do this, attempts to read directly from the gesture store will generate fatal exceptions. Fortunately, the TouchPanel static object provides a boolean property called IsGestureAvailable, which informs clients of the availability of queued gesture data. If we have data, we read it into a sample via TouchPanel.ReadGesture, which packages it in the GestureSample class. As with the TouchLocation object, the GestureSample object contains several properties of practical interest to the programmer, especially when making contextual decisions that respond to this kind of input. GestureSamples include the following properties:

  • Delta – A Vector2 instance which provides the delta, or difference, data for the first touch point in the gesture. This will change over time, and will always be relative to the coordinates where the touch was first recognised.
  • Position – A Vector2 instance which contains the current coordinates of the first touch point in the gesture.
  • Timestamp – A TimeSpan instance that indicates the time when the gesture was first recognised.
  • GestureType – A GestureType instance that indicates what type of gesture was determined based off several criteria.

Additionally, GestureSample contains properties called Delta2 and Position2, which are used to track a second point of contact that’s being measured as part of the current gesture. What this implies is that, insofar as the stock gestures are concerned, MonoGame will only be able to handle gestures where no more than two points of contact are involved.

My advice here is to experiment with the data through debugging until you’re comfortable with how these gestures are read, because there are some nuances in how the data continuously polls for different gesture kinds. For example, a HorizontalDrag gesture will, while the drag is occurring, constantly emit the HorizontalDrag signal until the contact source is released, terminating the gesture. At that point, if one is also checking for the DragComplete signal, releasing the contact source will cause the touch panel to emit the DragComplete signal.

Examples:

To determine if a single press has been made:

TouchCollection tc = TouchPanel.GetState ();
foreach (TouchLocation tl in tc) {
 if (TouchLocationState.Pressed == tl.State) {
  // Execute your domain-specific code here
 }
}

To determine if a press was made, and has been held in the same position for an arbitrary period of time:

TouchCollection tc = TouchPanel.GetState ();
foreach (TouchLocation tl in tc) {
 if (TouchLocationState.Moved == tl.State) {
  // Execute your domain-specific code here
 }
}

To track the position of a horizontal drag:

(1) During game initialisation:

TouchPanel.EnabledGestures = GestureType.HorizontalDrag;

(2) During game loop:

while (TouchPanel.IsGestureAvailable) {
 GestureSample gs = TouchPanel.ReadGesture ();
 if (GestureType.HorizontalDrag == gs.GestureType) {
  // Execute your domain-specific code here
 }
}

PhysicsFS/PhysFS++ Tutorial

This is a follow-up to a promise that I made in my tutorial video on designing an asset manager using SFML. That video can be found here.

This tutorial is centred around PhysFS++, a C++ wrapper for PhysicsFS. For the sake of the tutorial, it’s assumed that the reader is familiar with PhysicsFS and what it’s capable of. Described here is a workflow for simple use of the library. Anything more granular is beyond the scope and will have to be ascertained by the reader on their own.

Lastly of note, PhysFS++ encapsulates PhysicsFS calls in a PhysFS namespace. The functions are global within this namespace and there are only a few classes that are provided. PhysFS++ further encapsulates key PhysicsFS constructs, notably those corresponding to archived files, into said classes that are derivatives of STL stream classes (a huge boon).

Like some other libraries, PhysicsFS requires explicit initialisation before it can be used. This is facilitated by a function named init. It takes one argument of type const char*, semantically the program’s invocation name as triaged through the much-loved argv. However, this can be an empty string (or null), especially if you’re not expecting to handle terminal invocations. Thus, a very general call to init can be performed as such:

PhysFS::init (nullptr);

Right off the bat we have to mention a caveat. From PhysFS++, there’s no way to assert the initialisation process of PhysicsFS. This is counter-intuitive relative to the upstream library, which relies on the tried-and-true zero-on-failure, non-zero-on-success return values as sanity checks. Furthermore, while PhysFS++ implements C++ exceptions, the init function doesn’t throw any at all. Ostensibly, what one is left with is an unchecked call to init. Because one needs to compile PhysFS++ from source, it is possible to modify the code, as I have done, to add a check to this call.
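Short of patching the wrapper, one alternative – a sketch, assuming you also link against the upstream C API (physfs.h) – is to verify the result yourself with PHYSFS_isInit:

PhysFS::init (nullptr);
if (!PHYSFS_isInit ()) // upstream sanity check; returns zero when init failed
    throw std::runtime_error ("PhysicsFS failed to initialise");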

Once init has successfully completed, the next step is to mount an archive file into the virtual filesystem created by init. This is performed with the mount function. mount expects three arguments: the archive file on disk, a string specifying a mount point in the virtual filesystem, and a boolean which, when true, appends the mount point to the search path rather than prepending it. One should place the archive file in the working directory of the executable’s binary so PhysFS++ can see it. The second argument can be an empty string, which forces root mounting, and the third can be true. Thus, for our example, if we assume we have an archive file named funphotos.zip on disk in the binary’s working directory, we can issue a call to mount as such:

PhysFS::mount ("funphotos.zip", "", true);

Unfortunately, as was the case with init, mount wraps an upstream function that adheres to the zero-or-nonzero return value paradigm, which is outright dropped by PhysFS++ with no exceptions catering otherwise. Thus, if mount fails to mount the archive for whatever reason, it’ll be a little difficult to ascertain why; plan accordingly.
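As with init, the upstream call can be made directly so the return code survives – again a sketch assuming physfs.h is included (PHYSFS_getLastError is deprecated in newer PhysicsFS releases, but it remains the simplest way to get a reason):

if (PHYSFS_mount ("funphotos.zip", "", 1) == 0) // zero means failure upstream
    std::cerr << "mount failed: " << PHYSFS_getLastError () << std::endl;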

Assuming mount has returned properly, one can start to work with the files contained in the archive. At this point, you should begin working in the mentality of filesystem calls. It’s possible to have archive files with complex directory layouts, which would require one to perform recursive searches. That being said, everything should be considered a file – even a directory. An analysis of the PhysicsFS and PhysFS++ APIs will provide the full breadth of your available capabilities, so for the sake of brevity, only a select number of those will be touched on here.
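As an aside, because a directory is just another file, a recursive walk over the virtual filesystem composes neatly from the enumeration call covered next plus a directory test. A sketch, assuming your PhysFS++ build exposes isDirectory as a wrapper over the upstream PHYSFS_isDirectory (the C call behaves identically if it doesn’t):

void walk (const std::string& dir) {
    for (const std::string& name : PhysFS::enumerateFiles (dir)) {
        // Avoid a doubled slash when enumerating from the root
        std::string path = (dir == "/" ? dir : dir + "/") + name;
        if (PhysFS::isDirectory (path))
            walk (path); // recurse into the subdirectory
        else
            std::cout << path << std::endl; // a regular file; handle as needed
    }
}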

Enumeration of files in a directory can be performed with the enumerateFiles function. It takes as an argument a string indicating the directory to use as the root of the enumeration, and it returns a vector of strings containing the names of the files found. For ease of use, a call to enumerate the files in root can be performed as such:

auto rootfiles = PhysFS::enumerateFiles ("/");

To actually work with a file in the filesystem, one needs to use the PHYSFS_file ADT. This upstream ADT is encapsulated by one of three classes: PhysFS::ifstream, PhysFS::ofstream, or PhysFS::fstream. Each of these is fantastic in that it wraps a PHYSFS_file in a standard stream, which means that one can perform established stream operations on the file itself. Each class has its own use: ifstream for reading files, ofstream for writing to files, and fstream for bidirectional file handling. In any case, any of these classes can be instantiated with a string argument containing a fully qualified pathname to a file in the virtual filesystem that one wants to work with. For example, given that our virtual filesystem has a file named goofycats.png located at the path /textures/cats, we could instantiate a file for reading as such:

PhysFS::fstream gc ("/textures/cats/goofycats.png");

Again, the underlying upstream call to open the file goes unchecked.

What you do from here is largely subjective relative to your program’s context. Say, for example (and this is actually pretty specific to my own use case), that you have a series of files of various formats in an archive that are being used in a video game program. The multimedia library you’re using should have ADTs that represent types of assets such as textures, fonts, etc. One could do some work to transpose the raw data of these streams into the aforementioned ADTs for use in the multimedia library. The following snippet, borrowed from one of my own projects, illustrates this. A PhysFS::fstream instance named f is created with a certain file. Following up, a char array named d is instantiated with the size of f. The read function of f is called to transpose the bytes from f into d. Then, d is used to instantiate an instance of sf::Music from SFML contained in a std::unordered_map:

// Reconstructed from the description above; the path and map key are illustrative.
PhysFS::fstream f ("/music/overworld.ogg");
char* d = new char [f.length ()];
f.read (d, f.length ());

// d must outlive the sf::Music, which streams from the buffer on demand
bgmb ["overworld"] = new sf::Music ();
bgmb ["overworld"]->openFromMemory (d, f.length ());

This covers most of the basic and general needs of users of PhysicsFS. Should you require anything else from it, you’d do well to read the Doxygen documentation for PhysicsFS or the source for PhysFS++. Lastly, when you’re done with the library, you’ll need to call its deinitialisation function, deinit:

PhysFS::deinit ();

Configuring Apache Cordova with JetBrains WebStorm

Stepping back into the mobile arena after a bit of an absence, I decided to take a bit of advice from a gentleman who sat in on my 2014 Ohio Linux Fest lecture “Android Development on Linux” (generously, and anonymously, curated on the Internet Archive) and look not just at Android but at cross-platform development. For the longest time, this has been a topic of considerable consternation. One has to think, upon reflection of the history of technology, that we’ve in some ways regressed to the days when cross-platform concerns littered the landscape and were, in certain respects, paralyzing. Fortunately for myself and others, we weren’t the only ones who recognized this. Some have taken action to help ensure that we have a shim solution for these cases, and this is where Cordova enters the fray. Derived from PhoneGap and adopted by Apache, Cordova attempts to give mobile developers the ability to write a program using web languages such as HTML, CSS, and JavaScript and deploy it to multiple platforms, including but not limited to Android, iOS, and Windows Phone.

As I mentioned before, the focus this time around is on cross-platform development. I had a little taste of PhoneGap a few years ago and it never really stuck, but there seems to be some community acquiescence toward Cordova. Not to say that I’m just going with the grain here, because there is a genuine personal interest in the framework on my part, but I want to be able to help people as well.

Now, the development environment that I use is detailed below; you’ll want to keep it in mind proceeding forward.

  • OS: Linux
    • Distribution: Ubuntu 15.10
    • Only the default upstream APT repositories are used
  • Arch: 64-bit
  • IDE: JetBrains WebStorm (I have a license for this; the unlicensed version only works for thirty days).

As the title indicates, this tutorial is specific to configuring WebStorm to be used with Cordova. As such, there will be a lot of WebStorm-specific information here that, if you were using an alternative IDE, may not be applicable. In that case, you’ll want to follow your IDE’s integration instructions or deduce the steps on your own.

Prerequisite – NodeJS/NPM

I’m going to be frank here – there is very little that I know about NodeJS other than what it is, that it’s been the subject of quite a lot of hype in the web development community since its conception, and that it’s a requirement for Cordova. While there is still quite a bit of homework to be done on my part, I have been able to successfully install and configure it so that it works for the purposes of Cordova. That is the topic covered here.

Node can either be installed by downloading a pre-compiled package from the Node website or through your distribution’s default repositories. If you elect to download from the Node website, you’ll be responsible for manual maintenance of the package and for extracting it to a location that you have access to via your permissions or ACLs (if your file system supports them). The method I used was the latter, since updates are automatic and installation is performed in the appropriate directories. This can be done with the following command:

sudo apt-get install nodejs

Once this command completes, you’ll have Node installed on your computer. To test this, you’ll want to start a Node interpreter by issuing the command nodejs at your terminal. If you’re brought to a new prompt lacking the traditional user and host name information, Node is all set to go. You’ll need to press Ctrl+C twice to end the program and return to your traditional prompt.

NPM, a pseudo package manager for JavaScript libraries, is required since it is the preferred method for installing Cordova. It will also help with obtaining Cordova plugins and other JavaScript libraries that you might want to use later on for development. It can be installed in the same way as Node was, with the distribution’s package manager:

sudo apt-get install npm

To test the installation, you can simply issue npm at the terminal. If you get back a page of text indicating the usage syntax, NPM has been installed successfully.

Installing Cordova

When you’re using WebStorm, it’s apparently possible to install Cordova entirely through the IDE once you have configured it to locate both Node and NPM. These steps, however, are a little convoluted to follow, especially with all of the potential pitfalls you’re going to encounter, so we’re going to avoid this entirely and install Cordova with NPM on the terminal.

NPM has two install modes: local and global. A local installation will create a node_modules directory in the working directory where the command was issued and install the module there. A global installation places all of the modules in a consolidated directory and makes them available to the system through environment variables; Cordova is best installed in global mode (as recommended by the official install documentation). The installation can be performed with a single NPM command:

sudo npm install -g cordova

The installation, when in the global scope, will need to be run as root, hence the use of sudo.

Caveats… Already

To test Cordova, you’d do it in the same way as with both Node and NPM. However, when you type cordova into the terminal and press enter, you will most likely, though certainly not always, get in response an error that looks like this:

/usr/bin/env: node: not found

The issue here is that Cordova is looking for the Node binary under a specific identifier, node. However, when Node is installed through the package manager, the identifier of the binary is nodejs. Despite several tutorials on the Internet offering advice such as aliasing nodejs to node in your .bashrc file, the solution that needs to be implemented here is to create a symlink in /usr/bin named node that links back to nodejs.
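Assuming the package manager placed the binary at /usr/bin/nodejs (worth verifying on your own system first), the link can be created like so:

sudo ln -s /usr/bin/nodejs /usr/bin/node

What your directory tree looks like then is similar to this: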

[Screenshot: a listing of /usr/bin with the node symlink and the nodejs binary highlighted]

If the highlighting gets in the way, what you should be paying attention to here is that the bottom highlighted line shows the actual Node binary, nodejs, and the top line shows the node symlink pointing straight to the nodejs binary. They’re both in the same directory, but Cordova is looking specifically for the node file. I’m unsure if this can be configured in Cordova in some way, so if someone knows, please share how to do this. Either way, once this symlink exists in /usr/bin, you should be able to issue cordova from the terminal and get syntax help printed out. If this happens, Cordova can see Node and is ready to go.
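The same check can be made from the terminal; both entries should appear, with the symlink pointing at the binary:

ls -l /usr/bin/node /usr/bin/nodejs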

Prerequisite – Platform SDKs

As great as Cordova may be, it needs the platform SDKs in order to build for each platform specifically. While capable of bridging the gap between platforms, you still need the platforms themselves to actually accomplish anything. The good thing is that the Cordova wiki hosts a plethora of information on acquiring the corresponding SDKs. Being on a Linux system, you can install the Android, BlackBerry, and Ubuntu SDKs without a serious amount of labor. For the sake of this tutorial, we’re only going to focus on the Android SDK. If there is further interest in setting up any of the other SDKs, I’ll cover them later.

Downloading and installing the Android SDK should be a relatively straightforward process at this stage. I’m going to assume that you either know how to do this or can follow the instructions outlined on the Cordova wiki. Post-installation, you’ll want to ensure that you’ve added an environment variable called ANDROID_HOME that points to the root directory of the SDK, and that you’ve appended the SDK’s tool directories to your PATH environment variable; again, I’m assuming that you know how to set persistent environment variables on your Linux computer.
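A typical entry in ~/.profile might look like the following, assuming the SDK was extracted to ~/Android/Sdk (the path is an assumption; adjust it to match your installation):

export ANDROID_HOME="$HOME/Android/Sdk"
export PATH="$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools"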

As a secondary caveat, if you’re starting WebStorm from a desktop link or a link in the Unity Launcher, there is a bit of a catch: the invocation context will be such that the program won’t see the user-modified PATH variable that contains ANDROID_HOME. What this means is that the IDE won’t be able to see the location of the Android SDK (I’m assuming this would be the case for other SDKs as well). The way to fix this is to modify the Exec field in the .desktop file to preface the issuing command with bash -ic.
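For example, assuming WebStorm lives at /opt/WebStorm (an illustrative path), the modified line might read:

Exec=bash -ic "/opt/WebStorm/bin/webstorm.sh" %f

bash -ic forces an interactive shell, which sources your profile and thereby picks up the modified PATH.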

[Screenshot: the WebStorm .desktop file with the Exec line prefixed by bash -ic]

Keep in mind that traditional desktop icons are found in ~/.local/share/applications while Unity Launcher icons are located in /usr/share/applications.

So long as all of these conditions are met, you should be able then to start WebStorm and create a Cordova project. Let’s step through that process next.

WebStorm – Creating a Cordova Project

[Screenshot: the WebStorm greeting window, with previous projects listed]

As you can tell, I’ve been at this for quite some time.

The first thing you’ll want to do is click on Configure at the bottom-right and then select Settings, which will be the first item in the subsequent menu.

[Screenshot: the Settings dialog showing the Node.js configuration page]

First, you’ll want to ensure that WebStorm knows the location of Node. It’s highly unlikely that it has automatically determined the location, so you may have to set it. Keeping in mind the location of the Node binary that was installed through your system’s package manager, you’ll want to get this fully qualified path name, binary included, into the Node interpreter field. Code Assistance may not be enabled for you by default. Frankly, I’m unsure what this feature is, but I have it enabled because… reasons.

[Screenshot: the Settings dialog showing the PhoneGap/Cordova configuration page]

Next, you’ll want to instruct WebStorm as to where to find the Cordova binary. Again, there’s a very high chance that it won’t be automatically detected by your installation, so you’ll have to specify it manually. As we did when specifying the Node installation, you’ll need to do that here in the first field, labeled PhoneGap/Cordova executable. Note that WebStorm still retains the PhoneGap label for all things Cordova, even though Cordova has absorbed PhoneGap. Once the installation directory has been specified and WebStorm sees the Cordova binary, the PhoneGap/Cordova version field should populate automatically. The third field, PhoneGap/Cordova working directory, shouldn’t be filled out at this point; ignore my entry here. This field is specific to your current project, which also explains the error that you’re seeing at the bottom of the window in the screenshot.

Once those are set, you can click OK and go back to the WebStorm Greeting Window. Here, you can click on Create New Project.

[Screenshot: the New Project dialog with PhoneGap/Cordova App selected]

Once the New Project dialog appears, you’ll select PhoneGap/Cordova App on the left side and then fill out the Location field. The PhoneGap/Cordova field is simply the location of the Cordova binary; we set this previously in the global settings. Click Create and WebStorm should take care of the work to generate the files necessary for your project.

[Screenshot: the newly generated Cordova project open in WebStorm]

So long as the steps above were followed to the letter, you shouldn’t have any errors thrown from WebStorm, barring something simply blowing up. Now you can start working on your program.

Deployment

[Screenshot: a Run Configuration targeting an Android device]

Testing your Cordova program can be done by creating multiple Run Configurations, each distinguished by the value in the Name field. The value in the Command field determines whether Cordova will attempt to delegate to the emulator for the target platform, and the value in the Platform field determines which platform is being targeted by this Run Configuration. I prefer to test on actual hardware, unless I’m constrained by a lack of it, so for Android deployments I’ll deploy strictly to the device.
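For reference, these Run Configurations map onto plain Cordova CLI invocations; deploying to attached Android hardware, for instance, can also be done from the terminal:

cordova run android --device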

This should be enough to get you started. You can always read more on the Cordova Wiki to get a primer on the Cordova specifics. Otherwise, you can start hacking away using HTML, CSS, and JavaScript.