RHEL/CentOS – Configuring a Local Repository Server

I haven't made a decent technology post in some time, let alone one about my beloved Linux. To rectify that, this post ends the drought and shows my Linux friends how to accomplish the goal outlined in the title. I often see a fair number of questions about how to get a local repository set up, but the contexts always vary a little by distribution and by delivery mechanism. The objective here is to get up and running with as little hassle as possible. Our target distribution will be RHEL/CentOS 7, with delivery facilitated via FTP.

Getting right to it, there are six key points we need to address:

  1. Configuration of the FTP server
  2. Location for the software packages that will compose the repository(-ies)
  3. Creation of the repository metadata
  4. Firewall configuration
  5. SELinux configuration
  6. Exposure

FTP Server

Both RHEL and CentOS will by default install vsftpd as the FTP server of choice. It can be installed during the package selection phase of Anaconda by first selecting the Infrastructure Server profile and then the FTP Server group. You can install several other groups as well, but we only focus on this one here. Please note that if you intend to use another FTP package, you'll want to skip some parts of this tutorial, since it's assumed that you are going to use vsftpd. If you didn't install this package during the installation process, simply install it afterward with yum:

yum install vsftpd

The nice thing about installing it from Anaconda is that the default vsftpd configuration file (/etc/vsftpd/vsftpd.conf) is geared toward a public anonymous context, with perhaps only slight modifications required. With all of the comments removed for brevity, these are the options used on the server I've configured:
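The exact option list varies from setup to setup; as a reference sketch (not my literal file), a minimal anonymous read-only configuration looks something like this — see the vsftpd.conf man page for what each option does:

```
anonymous_enable=YES
local_enable=NO
write_enable=NO
no_anon_password=YES
anon_root=/var/ftp
xferlog_enable=YES
listen=YES
listen_ipv6=NO
pam_service_name=vsftpd
```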


If you have to make any changes to this file, be sure to restart the daemon after writing said changes:

systemctl restart vsftpd

File Location

The repository files will be kept in a custom location, not in /var/ftp/pub – the default landing directory for anonymous users. For this example, the files will be stored in /opt/repos. This makes it easier to host multiple repositories on the same server. In other words, we could have one directory for the base CentOS packages – copied from the resource DVD – and another directory for your company's custom packages. Hypothetically, this leaves us with directory structures like the following (the custom name is just an example):

/opt/repos/centos7/dvd/Packages
/opt/repos/custom/Packages

Finally, you’ll want to change the permissions for the /opt/repos directory and its contents:

chmod -R 755 /opt/repos

Create Repositories

Next you'll want to populate your newly created directory structures with the packages that will comprise the repository. Keeping with our example directories, the first will need to be copied from the CentOS Resource DVD. This can be accomplished by mounting the ISO/DVD and copying everything out of the Packages directory in the root of the mount. If you have the packages stored somewhere else, you'll need to get them onto the server by other means (SCP, NFS, SMB, etc.). The same principle applies for the custom packages that you'll be putting in the second directory. Ultimately, your storage and security strategy will dictate where the files are actually stored and how they're placed in those two aforementioned directories.
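As an illustration of the DVD route, the copy boils down to a loop mount and a recursive copy. The ISO file name here is a placeholder for whatever media you actually have, and the commands need root:

```shell
# Mount the install media read-only and copy its packages into the repo tree
mkdir -p /mnt/cdrom /opt/repos/centos7/dvd/Packages
mount -o loop,ro CentOS-7-x86_64-DVD.iso /mnt/cdrom
cp /mnt/cdrom/Packages/*.rpm /opt/repos/centos7/dvd/Packages/
umount /mnt/cdrom
```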

If you haven't done so already, you'll want to install the createrepo package. This package automates the creation of the metadata files required for both identifying and advertising the manifest of a repository. Once you have the package installed, you'll use it to create the repository for each of these directories. Createrepo takes a path as an argument. It also takes other options, but for this example it's enough to omit them and provide only the directory. When you do this, do not include the Packages portion of the path; provide the path up to that point. Obviously, your current working directory will determine what you have to pass to createrepo. It'll most likely look like one of the following forms:

createrepo /opt/repos/centos7/dvd
createrepo .
createrepo ../

Use whichever one you want, so long as the path is correct. The more packages createrepo has to parse, the longer it'll take. As you might guess, parsing all of the CentOS Resource DVD packages will take some time (there are almost 10,000 packages).

When you create the repositories in this manner, you're essentially creating a static repository. If any of the packages in there change, you'll need to run createrepo again to update the repository manifest. Alternatively, you can periodically sync against an upstream mirror (with a tool like reposync) and regenerate the metadata, but those configurations are outside the immediate scope of this document.
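Updating the manifest after package changes is a one-liner; the --update flag reuses the existing metadata for packages that haven't changed, which saves considerable time on large trees:

```shell
# Re-read only new/changed RPMs and rewrite the repodata
createrepo --update /opt/repos/centos7/dvd
```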

Once the packages are in place and the repositories created, you’ll need to get them to the point of being exposed by the FTP server. Because the default public anonymous location for vsftpd is /var/ftp/pub, and because vsftpd doesn’t handle following symlinks very well (or not at all), you’ll want to use a bind mount to get the directory over there. The basic gist of a bind mount is to mount a portion of the filesystem to another portion of the filesystem. So if you perform a listing command on /var/ftp/pub prior to the bind mount, you’ll likely get nothing back since that directory doesn’t (or at least shouldn’t) contain anything after a fresh installation. If you perform a bind mount:

mount --bind /opt/repos /var/ftp/pub

and then list directory contents:

ls /var/ftp/pub

you'll get back the two repository directories. Note that this doesn't survive reboots unless you add the mount to fstab:

/opt/repos /var/ftp/pub none bind 0 0

If you don’t do this, don’t be shocked if after a reboot you can’t see anything on your FTP server since the mount will be disconnected.

Firewall Configuration

There’s a pretty good chance that your network interface is going to have the public zone from firewalld applied to it during the installation process. Even so, this zone likely won’t have the FTP server whitelisted, so you’ll need to check to see if it is and react accordingly.

First, check what name was given to the interface (henceforth referred to as ifname). Hopefully, it wasn’t given something unpredictable. You won’t have to do this if you explicitly name your interfaces during installation.

ip addr sh

Once you have the interface name, check to see which zone was applied to it through firewalld (henceforth referred to as zname):

firewall-cmd --get-zone-of-interface=ifname

Now, use this zone name to see what kinds of traffic are permitted or disallowed:

firewall-cmd --zone=zname --list-all

Most likely, if this is the default public zone, you’ll only see the services ssh and dhcpv6-client. The key here though is that if you don’t see ftp in the list of services, you’re going to need to add it.

firewall-cmd --permanent --zone=zname --add-service ftp

If during the initial configuration of vsftpd you elected to run FTP through a non-standard port, you’ll need to make the appropriate accommodations:

firewall-cmd --permanent --zone=zname --add-port port#/protocol

Once that’s done, reload firewalld rules:

firewall-cmd --reload

SELinux Configuration

At this point, you should be able to run an external port scan and see your FTP port open. However, it's unlikely that you'll be able to access your files, because of SELinux rules. SELinux has a reputation for getting in the way, and since this is a local server, you may be tempted to shut it off and go about your business. This post encourages you to leave it on and simply reconfigure it to work with your server.

Basically, you'll need to change the SELinux type on the entire directory structure, starting with the root /opt/repos, to public_content_t.

chcon -R -t public_content_t /opt/repos
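One caveat worth knowing: changes made with chcon can be lost when the filesystem is relabeled. To make the labeling persistent, record it in the policy with semanage (provided by the policycoreutils-python package on RHEL/CentOS 7) and then apply it with restorecon:

```shell
# Store the file-context rule in SELinux policy, then apply it to the tree
semanage fcontext -a -t public_content_t "/opt/repos(/.*)?"
restorecon -R -v /opt/repos
```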

Testing and Exposure

Run a few simple tests before giving the green flag for clients to start consuming your packages. What I’ll usually do is run a port scan with nmap to verify that the FTP port is open, which effectively means that vsftpd is running and listening on that port. Next, I’ll open a browser and navigate to the address of the server with the FTP protocol to see if I can view the contents. You can also test this in two other ways:

  • Install an FTP client on the repository server and attempt to establish an FTP connection to localhost (I don't recommend this, for probably a foolish reason).
  • Use an FTP client on a remote machine and attempt to establish an FTP connection to the repository server.
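Either way, the checks from a remote machine boil down to a port probe and a directory listing; the server address below is a placeholder for your own:

```shell
# Is the FTP port open, and can we see the repo directories?
nmap -p 21 repo.example.com
curl ftp://repo.example.com/pub/
```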

Two really common issues at this point are access and visibility. If access is the problem, make sure that vsftpd is running on your server and that firewalld is configured to permit ingress/egress traffic for FTP, especially if you're using a non-standard port (egress traffic should be open by default, but filtering it requires direct rules through firewall-cmd, since firewalld's principal focus is ingress). If you're still having issues, make sure that your clients are configured correctly too (firewall, routing, etc.). Visibility problems can usually be traced back to one of three things: a bind mount that wasn't added to fstab (the mount disappears on restart), incorrect DAC permissions (755 should be sufficient; FTP wants the execute bit set), or an incorrect SELinux type. The only other major rub is that if your storage strategy puts the files on a NAS or other file server, you need to ensure that those mounts are established at system start (i.e. the remote SMB share is mounted correctly).

To add the repository to a client, you can either package the repository definition into an RPM and have clients install it (much the same way that RPM Fusion does theirs), or they can manually add a file under /etc/yum.repos.d. The file would look similar to this:

[custom]
name=Custom Repo
baseurl=ftp://your-server-address/pub/custom
enabled=1
gpgcheck=0

(The server address and the custom path segment are placeholders for your environment. If you sign your packages, set gpgcheck=1 and point gpgkey at your public key.)

Helpful Links

VSFTPD Online Manpages
Firewalld Homepage
Configuring a Yum Repository File

Configuring Apache Cordova with JetBrains WebStorm

Stepping back into the mobile arena after a bit of an absence, I decided to take some advice from a gentleman who sat in on my 2014 Ohio Linux Fest lecture “Android Development on Linux” (generously, and anonymously, curated on the Internet Archive) and look not just at Android but at cross-platform development. For the longest time, this has been a topic of considerable consternation. One has to think, reflecting on the history of technology, that we've in some ways regressed to the days when cross-platform fragmentation littered the landscape and was, in certain respects, paralyzing. Fortunately for myself and others, we weren't the only ones who recognized this. Some have taken action to provide a shim solution for these cases, and this is where Cordova enters the fray. Derived from PhoneGap and adopted by Apache, Cordova attempts to give mobile developers the ability to write a program using web languages such as HTML, CSS, and JavaScript and deploy it to multiple platforms, including but not limited to Android, iOS, and Windows Phone.

As I mentioned before, the focus this time around is on cross platform development. I had a little taste of PhoneGap a few years ago and it never really stuck but there seems to be a little community acquiescence toward Cordova. Not to say that I’m following the grain too much here because there is a genuine personal interest in the framework on my part, but I want to be able to help people as well.

The development environment that I use is detailed below; you'll want to keep it in mind going forward.

  • OS: Linux
    • Distribution: Ubuntu 15.10
    • Only the default upstream repositories are used in APT
  • Arch: 64-bit
  • IDE: JetBrains WebStorm (I have a license for this; the unlicensed version only works for thirty days).

As the title indicates, this tutorial is specific to configuring WebStorm for use with Cordova. As such, there will be a lot of WebStorm-specific information here that may not be applicable if you're using an alternative IDE. In that case, you'll want to follow your IDE's integration instructions or adapt these steps on your own.

Prerequisite – NodeJS/NPM

I'm going to be frank here – there is very little that I know about NodeJS other than what it is, that it's been the subject of quite a lot of hype in the web development community since its inception, and that it is a requirement for Cordova. While there is still quite a bit of homework to be done on my part, I have been able to successfully install and configure it so that it works for the purposes of Cordova. That is what's covered here.

Node can either be installed by downloading a pre-compiled package from the Node website or through your distribution's default repositories. If you download from the Node website, you'll be responsible for manually maintaining the package and for extracting it to a location that you can access via your permissions or ACLs (if your file system supports them). The method I used was the latter, since updates are automatic and installation is performed in the appropriate directories. This can be done with the following command:

sudo apt-get install nodejs

Once this command completes, you’ll have Node installed on your computer. To test this, you’ll want to start a Node interpreter by issuing the command nodejs at your terminal. If you’re brought to a new prompt lacking the traditional user and host name information, Node is all set to go. You’ll need to press Ctrl+C twice to end the program and return to your traditional prompt.
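If you'd rather test non-interactively, a one-liner works just as well; substitute nodejs for node below if that's the name your distribution installed the binary under:

```shell
# Prints 4 if the runtime can evaluate JavaScript
node -e 'console.log(2 + 2)'
```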

NPM, a pseudo package manager for JavaScript libraries, is required since this is the preferred method for installing Cordova. It will also help with obtaining Cordova plugins and other JavaScript libraries that you might want to use later on for development. It can be installed in the same way as Node, with the distribution's package manager.

sudo apt-get install npm

To test the installation, you can simply issue npm at the terminal. If you get back a page of text indicating the usage syntax, NPM has been installed successfully.

Installing Cordova

When you’re using WebStorm, it’s apparently possible to install Cordova entirely through the IDE once you have configured it to locate both Node and NPM. These steps, however, are a little convoluted to follow, especially with all of the potential pitfalls you’re going to encounter, so we’re going to avoid this entirely and install Cordova with NPM on the terminal.

NPM has two install modes: local and global. A local installation will create a node_modules directory in the working directory where the command was issued and install the module there. A global installation places all of the modules in a consolidated directory and makes that available to the system through environment variables. Cordova is best installed in global mode (as recommended by the official install documentation). The installation can be performed with a single NPM command:

sudo npm install -g cordova

A global installation needs to be run as root, hence the use of sudo.

Caveats… Already

To test Cordova, you'd do it the same way as with Node and NPM. However, when you type cordova into the terminal and press Enter, you will most likely (though not always) get back an error that looks like this:

/usr/bin/env: node: not found

The issue here is that Cordova looks for the Node binary under a specific name, node. When Node is installed through the package manager, however, the binary is named nodejs. Despite several tutorials on the Internet offering advice such as aliasing nodejs as node in your .bashrc file, the solution that needs to be implemented here is to create a symlink in /usr/bin named node that links back to nodejs, so that /usr/bin contains both the real nodejs binary and a node symlink pointing at it.
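The symlink itself is one command, assuming the package manager put the binary in /usr/bin:

```shell
# Give Cordova the `node` name it expects
sudo ln -s /usr/bin/nodejs /usr/bin/node
```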


They're both in the same directory, but Cordova is looking specifically for the node file. I'm unsure if this can be configured in Cordova in some way, so if someone knows, please share how. Either way, once this symlink exists in /usr/bin, you should be able to issue cordova from the terminal and get syntax help printed out. If this happens, Cordova can see Node and is ready to go.

Prerequisite – Platform SDKs

As great as Cordova may be, it needs the platform SDKs in order to build for each platform specifically. While Cordova is capable of bridging the gap between platforms, you still need the platforms themselves to actually accomplish anything. The good thing is that the Cordova wiki hosts a plethora of information on acquiring the corresponding SDKs. Being on a Linux system, you can install the Android, BlackBerry, and Ubuntu SDKs without a serious amount of labor. For the sake of this tutorial, we're only going to focus on the Android SDK. If there is further interest in setting up any of the other SDKs, I'll write those up later.

Downloading and installing the Android SDK should be a relatively straightforward process at this stage. I'm going to assume that you either know how to do this or can follow the instructions outlined on the Cordova wiki. Post-installation, you'll want to set an environment variable called ANDROID_HOME that points to the root directory of the SDK and include its tool directories in your PATH environment variable; again, I'm assuming that you know how to set persistent environment variables on your Linux computer.
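As a sketch of what that looks like in ~/.profile (the SDK path here is an example; use wherever you actually extracted it):

```shell
# Persist the SDK location and expose its command-line tools
export ANDROID_HOME="$HOME/android-sdk-linux"
export PATH="$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools"
```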

As a secondary caveat, if you're starting WebStorm from a desktop link or a link in the Unity Launcher, there is a bit of a catch in that the invocation context will be such that the program won't see your user-modified PATH variable or the ANDROID_HOME variable. What this means is that the IDE won't be able to see the location of the Android SDK (I assume this would be the case for other SDKs as well). The way to fix this is to modify the Exec field in the launcher's .desktop file to preface the issuing command with bash -ic.
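A trimmed example of what the modified entry ends up looking like; the WebStorm install path here is hypothetical:

```
[Desktop Entry]
Type=Application
Name=WebStorm
Exec=bash -ic "/opt/webstorm/bin/webstorm.sh"
Icon=webstorm
Terminal=false
```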


Keep in mind that traditional desktop icons are found in ~/.local/share/applications while Unity Launcher icons are located in /usr/share/applications.

So long as all of these conditions are met, you should be able then to start WebStorm and create a Cordova project. Let’s step through that process next.

WebStorm – Creating a Cordova Project


As you can tell, I’ve been at this for quite some time.

The first thing you’ll want to do is click on Configure at the bottom-right and then select Settings which will be the first item in the subsequent menu.


First you'll want to check that WebStorm knows the location of Node. It's highly unlikely that it has automatically determined the location, so you may have to set it. Keeping in mind the location of the Node binary that was installed through your system's package manager, you'll want to put this fully qualified path, binary included, into the Node interpreter field. Code Assistance may not be enabled for you by default. Frankly, I'm unsure what this feature is, but I have it enabled because… reasons.


Next you'll want to tell WebStorm where to find the Cordova binary. Again, there's a very high chance that it's not automatically detected, so you'll have to specify it manually in the first field, labeled PhoneGap/Cordova executable, just as we did with the Node path. Note that WebStorm still retains the PhoneGap label for all things Cordova, even though Cordova has absorbed PhoneGap. Once the path has been specified and WebStorm sees the Cordova binary, the PhoneGap/Cordova version field should populate automatically. The third field, PhoneGap/Cordova working directory, shouldn't be filled out at this point; ignore my entry here. This field is specific to your current project, which also explains the error at the bottom of the window in the screen shot.

Once those are set, you can click OK and go back to the WebStorm Greeting Window. Here, you can click on Create New Project.


Once the New Project dialog appears, you’ll select PhoneGap/Cordova App on the left side and then fill out the Location field. The PhoneGap/Cordova field is simply the location of the Cordova binary; we set this previously in the global settings. Click Create and WebStorm should take care of the work to generate the files necessary for your project.


Now, so long as the steps above were followed to the letter, you shouldn't have any errors thrown from WebStorm (other than something simply blowing up). At this point you can start working on your program.



Testing your Cordova program can be done by creating multiple Run Configurations. Each of these would be distinguished by the value in the Name field. The value in the Command field will determine if Cordova will attempt to delegate to the emulator for the target platform, and the value in the Platform field will determine which platform is being targeted with this Run Configuration. I prefer to test on actual hardware, unless I’m constrained by the lack of, so for Android deployments I’ll deploy strictly to the device.

This should be enough to get you started. You can always read more on the Cordova Wiki to get a primer on the Cordova specifics. Otherwise, you can start hacking away using HTML, CSS, and JavaScript.

Projekt Vagabond Isn’t Dead. I Swear.

I’m starting to get back to a point where I can start working on this thing again and it’s like flipping through a kindergarten yearbook. Every now and then I’ll find something that makes me think I had a stroke of genius. Other times it’s like seeing a photo of that one kid you hated more than bees. Like you’d prefer taking nails hammered into your ears than listen to that twat for a single second more than you had to.

Tonight I had more of the latter instead of the former.

This thing has taken several forms starting out as a PoC Bash Script that was around sometime shortly before Ohio Linux Fest 2014 to a full-blown 15,000+ line C++ program which would have worked but I didn’t realize how insanely asinine packaging software has to be (I don’t have that kind of time, especially these days). But then I had the bright idea to simply make a Vagrant Box (I was already using Vagrant in the backend for handling a lot of things) and just distribute that instead of all this rigmarole. Funny thing here is that there are still quite a few snags.

As I’m typing this, the box is uploading to a cloud store that I’ll make available to the public tomorrow. I ran into some problems during this process.

  • I wanted to be a somewhat normal person and upload the Box to Atlas, HashiCorp's repository of Base Boxes. I thought that would have been a great way to make this available to people. But nope! I don't know if there's a restriction on Box size or what, but it just wouldn't take it. FYI – the Box is about 780MB. The goal was that someone would have simply been able to issue “vagrant up gregfmartin/vagabond” and get the VM. Man, that would have been nice…
  • Google pisses me off to no end these days. Tonight was no exception. It still blows my mind that I can’t update or configure an Android SDK Installation from the terminal without either (A) getting bitched at for some ridiculous reason or (B) having to press ‘yes’ to accept fifty licenses for these libraries instead of just being able to use an option that will opt-in to any license requests that would come up. The former makes it literally impossible to automate an installation. Some people on GitHub have described a work around for this but it’s really hacky and I’m a little concerned about the platform portability of solutions like those so I’m avoiding them like the flu.
  • This all led to my idea of just making the Box and distributing the Box that I've manually tweaked. This has issues in and of itself, in that the bundled software is static unless the user (A) manually updates it or (B) waits for me to update and republish the Box (which I REALLY don't want to do, if I'm being honest). This leads into supplementary tutorial material that will be on the project's website.

In case anyone is wondering, the reason the Box is so huge is that it contains a fully updated Ubuntu 14.04 64-bit base image, the prerequisite software required to use the Android SDK and associated tools/IDE, the Android SDK installation with all 5.0.1 components as well as all Support Libraries that are compatible with Linux (important to note), and the recent version of IntelliJ Community Edition. So yeah, it's a little fat. That's the size it would be on your disk anyway.

Tomorrow I’ll get all of the stuff up on the website like documentation and how to do things with it and what you can expect by using it as well.

Something tells me I’m going to have to make some changes to this before too long. 🙂

Never Underestimate the Role of the Sysadmin

The past few weeks have been a rather interesting adventure in my technology career. Not only has it been an eye-opening experience, but it's been a humbling one as well.

Ever since I chose IT as a profession eleven years ago, I've been one of the following at any point in time:

  • Software Engineer (I still actively participate in this)
  • Tech Support Monkey
  • Cog-in-the-Machine Glorified Maintenance Guy
  • Consultant
  • Independent Contractor
  • Evangelist
  • Public Speaker (Still do this and want to do more)

These days, I'm getting the unadulterated taste of what it's like to be a manager/system administrator/network architect. If I'm being honest, I wasn't totally prepared to be put in a position like that, but what's life if not an opportunity to learn new things?

Learning is what I've been doing since starting this new job almost a month ago. I wasn't totally unprepared for the networking side of things, but it certainly was not my forte. So when we started having all sorts of issues with DNS, routing different kinds of traffic between two ISPs, a modest but improper implementation of VLANs, and investment in monitoring and NACs, my life became this seemingly endless cycle of banging my head against the wall, reading technical manuals until the wee hours of the morning, working on maybe four hours of sleep a day, and loving every second of it.

One thing I picked up on though is that system administrators deal with way more acronyms than programmers ever do in their entire careers. In three weeks I may have learned more acronyms with regard to networking than I have in ten years of software engineering.

The really cool thing about all of this is that I can remember back in high school when I started taking the CCNA courses and I seriously was falling asleep during them. As it would turn out, that’s not an uncommon thing to happen as most every other sysadmin I’ve talked to about CCNA says the same damn thing. But I never thought in a million years that I’d use any of that and guess what? I am now. What a world of difference your life becomes when you completely understand the OSI/DoD Networking Models and what each piece of networking hardware does. I love building Nagios and pfSense boxes and configuring them. It makes my nerd giggles happy when I start configuring a switch/router/firewall. DNS? I’m up to my neck in BIND and it’s great. DR and Failover? Let’s do it!

Speaking of switches, can we just start making it a point to boycott further production of Dell PowerConnect switches and find the existing ones and toss them in a lake somewhere on Mars? I mean if we were ever afraid of an alien invasion of some sort, just show them that those monstrosities were made and they’ll turn right the fuck around. They’re proof that intelligent life was not found on this planet.

The really great thing about being a sysadmin, I think, is that there are more chances to be faced with do-or-die situations, and that's where I work best: when things are so hard that you're the only person to turn to and even you don't have anything close to an answer. Not only that, but being able to orchestrate all of these technologies to work smoothly in tandem is a bit exhilarating. AND I GET TO USE AS MUCH OPEN-SOURCE SOFTWARE AS MY LITTLE HEART DESIRES!!! :)))) Save for Windows Server. Oh, and by the way Microsoft, your CALs can go rot in hell.

Vagabond 1.0 – Nearly Here

Yes that’s right. After much toil, and missing my self-imposed deadline for a release, I’m nearly complete with the program and can give it a 1.0 which means that it’ll go into General Availability.

I’m not posting Vagabond News on its website because I want that strictly to be a place to get the program from and a reference for information that developers would be interested in. Just wanted to get that out of the way as well. 🙂

Right now the program is about 98% done. All that remains is to test two additional features, push to the repository (assuming they work, which they will), and then build Debian, RPM, and source tarballs for deliverables. I haven't built Debian packages for quite some time, so I'm going through several refreshers to bring myself back up to speed on the workflow.

After this release, I have some additional features I want to work in so development will continue alongside general maintenance coding. I want to be able to add support for choosing to use either the official Android SDK or to use the Replicant SDK. This is more of an ethical decision as I feel like Replicant is a great choice for developing in an open-source manner but might not be the best idea if you’re writing Android to be published on Play. But the option should be there. I’m also considering the idea of implementing SDK Administrative Task Shortcuts. These are commands that can be passed to Vagabond that will inflate to more robust tasks passed to the VM that maintain the SDK.

If all goes as planned, and work and life leave me alone long enough, I can potentially have this ready to go by the end of the week.

Another Vagabond Update

I wanted to take some time out from working on it to provide you with an update to the Vagabond project.

Recently the project piqued some interest on Twitter, and that's lit a little more of a fire under my arse to get it going. I'd continued to work on it, but I was putting it on the back burner in favor of other projects that I've been working on. It's safe to say that I've got enough interest now to get it done.

If you need another synopsis on it, please check out my previous post about what Vagabond is.


My PoC code was a shell script that did the whole thing for me. I’ve since migrated the project to C++ and am going to build both Debian and RedHat Packages and also provide a source tarball. After some consideration, I may actually take this and eventually migrate to a web-interface in Perl but I’m still not too sure yet. I’ve got too much work going into the C++ conversion that I need to stick with that and see where that takes me. Additionally, I’ve put the development branch up on my GitHub. The project is under the BSD 3-Clause License so you can do what you want with it so long as it carries the license and you don’t use my name on derived works without pinging me first.

Right now the project is sitting at about 3011 lines of code (tallying both insertions and deletions; the graph is laid out on GitHub). This may seem inflated, but I'm adhering to the 80-column gutter since I'm editing exclusively in vim and I hate the word wrap there. Rightly this could be consolidated, and I'd probably manage to shave off a hundred or so lines. A compilation is possible as it stands now, and the help feature is 100% complete. The create option is responsive to tests but doesn't do anything useful at this point, and the Vagrant Propagation command isn't coded yet. Most of the code might seem redundant compared to the PoC, but I wanted to use some of the more robust features of C++ like exceptions and namespacing for proper encapsulation.

I’m still looking to be able to have this done by the end of next week. At least a rev one. If anything comes up that keeps that from happening, I’ll change the date but I’m pouring nearly all of my free time into the project so I should be able to hit it.