Gist Publishing

I’m digging my way through the SFML.NET bindings, and creating a series of GitHub Gists along the way to illustrate both how to use the binding and patterns that I may employ whilst using them. These will all lead up to some shareware thing, eventually.

At a minimum, I figured I’d get these out there. It’s not clear to me what the user base of the .NET binding for SFML is, but I’ve been having some fun with it.

The gists on my page are all condensed into one file, but the solution file has them all broken out into individual class files. Further, SFML.NET is incorporated into the solution via NuGet. I have two VS Templates: one that serves as a really quick demo base, and the other that uses my Game Core object as the launchpad for future endeavours. If desired, I can share these templates as well.

The gists can be found at my GitHub Gists page.


VMware Workstation Pro and Device/Credential Guard Incompatibility Fix

Just wanted to throw up a quick distillation of how to fix this, since it seems to crop up more often than not.

The observable issue is outlined in this KB from VMware. It usually happens if one had Hyper-V installed on their computer, or any feature that relies on modern virtualization features such as Windows Sandbox (I think WSL leverages these features as well), and one wants to use any of the VMware Type-2 hypervisors.

The aforementioned KB references a KB from Microsoft that’s pretty convoluted and glosses over a lot of different scenarios for local and enterprise contexts. The local context is the one I want to distill here since I feel it’s buried.

The gist here is this: There is a PowerShell script you can download and run that will disable Device/Credential Guard automatically. You can download the PowerShell script here. Once downloaded, you’ll want to run the script with the following switches:

.\DG_Readiness_Tool_v3.6.ps1 -Disable -AutoReboot

Why Year of the Linux Desktop is Bullshit

The title of this alone speaks volumes greater than the exposition that’s to follow, and I’m sure that some of my peers are already bounding from the sheds with pitchforks and torches in hand, but I’ve never been one to not voice a concern even when the house is burning down.

Yet again, we in the Linux users community find ourselves at an interesting juncture. Microsoft has as of 14 January 2020 officially EOL’d Windows 7. As with XP before it, this will likely be a major issue for the immediate future considering how prevalent its use is in the desktop market (Gartner estimates still have Windows penetration at greater than 90%). As expected, most of the podcasts and reporting sources are cobbling together pieces to launch yet another slew of volleys, perhaps to rally the uninitiated to take another look at Linux if they haven’t already done so.

But, who listens to or reads these sources? Non-Linux users?

This nonsense of the Year of the Linux Desktop has been going on for as long as I can remember. And from working with those who have vastly longer tenures than myself in the Linux world, it seems as if it caught on well before my dive nearly fifteen years ago. From the gate, I too shared this sentiment. I had been a Windows user from the start, and although Linux was a monstrous beast to handle back then, I still loved it with all my silicon heart. And I, like every other witness, wanted to espouse my love to the world in the most compelling and boisterous method I could imagine. I’d rake myself through the coals of Hell to learn all I could about Linux, and while still salving my burns thought there was no way anyone else wouldn’t want to be a part of this. The computing revolution was here! Or was it?

Truth be told, the premises of either open source or free software were entirely lost to me until about seven years after I’d taken the swan dive. The fact that I was using something that wasn’t Windows, and that I could be as hardcore of a programmer as I wanted were the only things I cared about. None of this liberty crap mattered. Software wasn’t a first class citizen having attributes of sovereignty. I just wanted to be as nerdy as I could because I wanted to. The politics of software, which would come in much later, seemed to dissolve this juvenile yearning in me to a large extent. It wasn’t about being a nerd anymore. All of a sudden, it was all patents and licenses and codes of conduct and ruination abound.

So if it weren’t for being able to divorce the creations from their Gods, why would I want others to use Linux? To know and embrace it the way I had?

Two words: technological empiricism. I was a fucking God amongst men, a wolf amongst sheep when it came to sheer computing skill, and I knew it, and I was more than happy to get everyone else onboard, their readiness for the transition be damned.

Turns out, this wasn’t too dissimilar of a position for young bucks in my pool at the time. The early 2000s were rife with young technologists who, after having done whatever it was they did to make the Internet the cultural platform it ended up being, just wanted to flex on the boys and girls as much as possible. We didn’t skip leg day at the gym, because we made the gym.

Even then, it seemed as if using Linux was a horrific experience. Most all of my peers were still using Windows 2000 and Windows XP, especially those of us who fancied ourselves programmers. Hell, the first commercial software I wrote was in VB 6 and was a team effort (which was hilarious in its own right). But I ported it to Linux on my own using the customary tools of the day: glibc, GTK, Glade, MySQL (MariaDB wasn’t even a thing then), etc. And guess what? It worked! But guess what, again? The damned customer was running Windows 2000, and that silly little abstraction layer, GTK for Windows, wasn’t going to be running this shit anytime soon. Besides, who statically links?

The horror of this for programmers was one thing, but the stage was entirely different for the pedestrian user. Games? Well, if you like spending hours playing AisleRiot, then Linux was the platform for you. I mean, why wouldn’t you want to trade playing WoW or Diablo for good old GNOME Mahjong? Total no-brainer. Office? Come on! That’s easy! OpenOffice (LibreOffice wasn’t a thing back then) was the killer app that could do EVERYTHING that MS Office did (this turned out to be a colossal lie then and still is now, despite everything that The Document Foundation does with LO; don’t believe me? Try converting an SMB to LibreOffice from MSO). Internet Browser? Oh, just use Mozilla, because that was super compatible back in the day. Need to install some software? Just use the terminal!

Oh… wait. THAT thing.

We’ve already stumbled upon what is potentially the largest issue with Desktop Environments on Linux. What started as a detailed thesis in the form of a borderline anthropological analysis of why the current landscape of Desktop Environments sucks was boiled down to three quintessential matters, this being the first and perhaps the largest.

You know what’s unattractive to the pedestrian computer user? Terminals. You know what contemporary operating systems do a bang up job of getting those out of the way? Windows and virtually every OS that Apple produces for their product line. You know what operating system practically begs to be used nearly exclusively by the terminal? Linux. You know what Linux software doesn’t do at the terminal? Provide a consistent or coherent interface. You know what pedestrian users are most afraid of concerning their computers? Breaking them. You know what breaks Linux computers? Using the right command in the wrong context by accident. And without the aid of a witchdoctor who bears the scars from having mutilated themselves to possess such knowledge, those people are fucked.

Short: Every single Desktop Environment, then and now, does an absolutely piss poor job of abstracting away the need for a casual user to ever whip out the terminal.

But Greg, that’s all bullshit because there’s plenty of cases where a DE does what you’re saying it doesn’t! Yeah? Let’s throw out a few use cases here that the casual user takes for granted in other operating systems and environments, and that require a terminal regardless of the DE:

  1. Installing a group of related software. In RHEL-derivatives, this is usually handled through groups in either YUM or DNF (modules in the contemporary sense). Guess what doesn’t show up in GNOME Software? Groups or Modules. In Debian-derivatives, this isn’t even a concept that APT knows about without first getting tasksel. And even after installing it, guess what doesn’t show up in GNOME Software? tasksel groups.
  2. Installing software. Despite the Windows Store being the hip place to get Windows 10 software from, it’ll never quite be the thing that Microsoft wants it to be because developers can’t distribute traditional binaries through it; they’re required to be repackaged in a fairly unintuitive manner. Ergo, with Windows having the lion’s share of the desktop market, pedestrian users have been habituated to installing software in the legacy fashion. And let’s face it, the Windows 10 Store is full of garbage in the same way that each mobile app store is. Guess how you can’t install software on Linux? The legacy method that every user is accustomed to. BUT WAIT, GREG! WHAT ABOUT APPIMAGE OR SNAPS OR FLATPAKS!? Bullshit, each of them. They’re great for those of us who’ve abused ourselves for years to get these programs to work otherwise, but you still run into issues with dependencies. Try installing a Flatpak on CentOS 8 that requires H.264, only to find out that you can’t upgrade the base Flatpak installer because it’s about nine versions behind. Or try all of the manual hacking required to get certain Snaps permission to break out of their cells and access otherwise inaccessible resources on a system (virtually anything that installs with the --classic switch). But surely, anything you install from the software store that comes with the DE you’re using is good enough, right? Yeah, it usually is, for the most part. Until one realizes that you can’t get a piece of software that you would otherwise have obtained on either Windows or Mac without incident. You want Google Chrome? Not available without downloading a DEB or RPM. Need some AV codecs because you can’t view DRM content or Flash content? Guess you’re adding some repos (you know, because EVERYONE keeps their AV in OGG or Theora, because those are solid and x-plat formats).
  3. Troubleshooting DE. But wait! Didn’t know that the DE runs atop a Window Manager? Didn’t know the Window Manager runs atop a Display Server? Don’t know how to change VTYs when the whole thing goes down the shitter? Oh well.
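To make point 1 concrete, here’s what “installing a group” actually looks like at the terminal, which is exactly the thing GNOME Software never surfaces. A sketch: the group and task names here are illustrative, and yum is analogous to dnf.

```shell
# RHEL/Fedora land: groups live in dnf, invisible to GNOME Software
dnf group list hidden                       # list every group, not just the curated ones
sudo dnf group install "Development Tools"  # pull an entire toolchain in one shot

# Debian land: APT has no group concept until you bolt on tasksel
sudo apt install tasksel
sudo tasksel install lamp-server            # task names belong to tasksel, not APT
```

And none of that has a graphical equivalent a casual user would ever find.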

The next point: the DE themselves miss virtually all the targets they need to hit to render a casual user experience viable. I get so sick and tired of hearing people claim that the current landscape of choices in not just Linux Distributions but DE are such a good thing because, as one Matt Hartley put it, “One man’s perfect distribution isn’t another man’s perfect distribution.” Bullshit. Total fucking bullshit. I don’t even know where to start with this.

I can’t resist the urge to address the elephant in the room here that trespasses onto the larger, slightly relevant, topic of Linux Distribution Saturation. The question is this: if I download Ubuntu and make some minor tweaks to user land, does that actually mean I have a different distribution, further that it warrants creating a whole new entity for consideration and download within the Distribution Space? The crux here? Most Ubuntu-based derivatives are still using the fucking Ubuntu repositories. I can’t stress that point enough. If a distribution is supposed to be, at its core, Linux with a swath of software available to make the user experience more concrete, and you’re not offering any different software than what’s available upstream, why the fuck are you creating a distribution? Changing a handful of packages, or forking because you’re butthurt about the init system used, isn’t a new distribution. Why do we think it’s okay to do this? Why do we have over fifty Linux Distributions to choose from? How is this indicative of offering clear choice to outside users? Plainly, it isn’t. It’s offensive to not just those in the community, but mostly to those outside.

Off the top of my head, these are the DE that I can think of that are available to choose from: GNOME, KDE, MATE, Cinnamon, Unity (if you’re still using slightly older versions of Ubuntu), XFCE, LXDE, LXQt, CDE, Budgie, Enlightenment, Razor-Qt, Pantheon, Lumina, and that one that Deepin uses, which I think is just called Deepin. Each of these expresses colloquial desktop metaphors in different ways, each has its own quirks about customization, each has its own methods for enterprise considerations (actually, most don’t even consider this, and if they do, the implementation is fucking horrible and unmanageable at best), each comes with its own suite of tools that functions differently from the next, etc. I can’t go any further here without wanting to throw up. The fact that only a select few of these come close to being inviting to a casual user is appalling, and even these fall fatally short of the root objective, amongst many others.

It’s worth pointing out that although most desktop metaphors aren’t codified in any way, that doesn’t mean that casual users are malleable to the point of wanting to abuse themselves endlessly to use their computers, or that you can foist whatever you want in front of them. For fuck’s sake, people, we’ve had traditional metaphors being expressed since GEM, and although it might be time for some change, tell me how well that’s worked out for you on the desktop form factor? Why take something that works, something that people are accustomed to, and not just break it, but irreparably obliterate it?

Oh, and here’s the other hilarious spin sold as a positive about the Choice Paradox concerning DEs: if you don’t like the one you have, just get another one! Blech.

  1. This is an unbelievably moot thing to say to a casual user. None of them view this as a benefit. Being accustomed to just using the computer, it doesn’t dawn on any of them that Explorer is a shell as much as it is a file manager, and that it can be customized or even replaced (unsure if this is true for Mac). Whatever they see first is what they’re stuck with. End of story.
  2. Even if you manage to get someone beyond this point, just how do you get another DE? Do you install it alongside your existing one? Do you get a fresh distribution flavor? If the latter, you better hope you partitioned your system correctly, otherwise your shit gets blown to kingdom come.
  3. Having multiple DE running parallel on a single installation is a fucking nightmare.
    1. The only real safe way to install an alternative DE is to get it through a group in your default repositories. See the aforementioned point about installing software groups. If you don’t see it there, you’re already about two-thirds of the way to jumping the shark.
    2. Once you get it installed, you’ll likely have to compete with the idea that your distribution will have a strong preference for the Display Manager. Wait, what’s a Display Manager? Oh yeah. Forgot to mention that little bit. The program that logs you into the computer? That’s the Display Manager, which is yet again an entirely different component. Anyway, for example, CentOS has a strong preference for GNOME Display Manager (GDM). Even if one installs the KDE flavor of CentOS, you’re still going to be using GDM rather than SDDM. To put the icing on the cake, let’s say you’ve got a system with both GNOME and KDE on it with GDM as your DM. You have to know enough about GDM to know that you need to change the session type to KDE Plasma instead of GNOME Shell, because if you don’t change it, you’re going to keep using GNOME Shell.
    3. If something goes wrong, or if you just decide on a DE you like and wish to exorcise the alternate beast, good fucking luck. Pulling a DE out of a running system is like getting a steak out of the throat of a lion. Sometimes it can be easy, others run you the risk of crippling your system if you’re not paying attention to how the package manager is resolving dependencies for removal. Guess what can’t pull these out safely in some cases? Graphical software managers like GNOME Software or Discover.
    4. If you decide you want to be like Lois and Clark and go deep in on running multiple DE in parallel, you’ve now got an issue where you’ll have multiple programs that do the same damn thing. There’s nothing cooler than looking for a terminal emulator and seeing Konsole, XTerm, and GNOME Terminal all at the same time. Sometimes I want to look for files using Dolphin, but other times Nautilus gets my gutchies that day.
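For the curious, here’s roughly what swapping the Display Manager by hand involves on a systemd distribution. A sketch only: package and unit names vary by distro, and this is exactly the kind of surgery no DE settings panel exposes.

```shell
# Which Display Manager does this box actually boot into? On systemd
# distros it's a symlink, not anything a settings panel will tell you.
readlink /etc/systemd/system/display-manager.service

# Swap GDM for SDDM (takes effect on the next boot)
sudo dnf install sddm
sudo systemctl disable gdm.service
sudo systemctl enable sddm.service
```

And even after all that, you still have to pick the right session type at the login screen, per point 2 above.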

This had to be obvious to someone at some point, because we have distribution flavors. Ubuntu has several: Ubuntu, Kubuntu, Xubuntu, Ubuntu MATE, and Ubuntu Budgie (Kylin sort of doesn’t count here). The reason why these flavors exist? Ubuntu gives you GNOME Shell, Kubuntu gives you KDE, Xubuntu gives you XFCE, Ubuntu MATE gives you MATE, and Ubuntu Budgie gives you Budgie. This. Is. The. Only. Reason. These. Exist: to isolate the DE from each other for a hopefully more gooder experience than if one were to use Ubuntu and install KDE inside it.

How is this feasible? How do you attract users to this? How is it EVER going to be the Year of the Linux Desktop? Can we please stop this nonsense madness of blindly repeating ourselves about dominating the desktop space? It isn’t going to happen when things look like this. Not. Fucking. Ever. All we’re doing is circle jerking with ourselves in a fantasy where we can finally say we came out on top. If we want desktop dominance, which may never happen, we should at least attempt to start with these goals (IMHO):

  • Standardize. There’s nothing more annoying to a casual user than too much choice; Choice Paralysis is a real thing, think buying toothpaste. Maybe this means consolidation. Maybe this means a new project whose focus is on these things.
  • User Focus. Make a product whose core philosophy is the user and their experience rather than an experiment with cool code. Software shouldn’t abuse users or require them to abuse themselves.
  • Ease of Use. This should’ve been a no-brainer, but methinks the horse died some time ago and we just agreed to leave it be and not replace it. Anything that could be done at a terminal should be able to be performed through the UI, no exceptions. Metaphors are not play things. We have established ones that work considering the form factor, so fucking use them. They work for a reason: they don’t assault the user.
  • Customization. It should be EASY to customize your environment, and it should also be EASY to sell to an enterprise for adoption. It shouldn’t be like selling a fucking nuclear power plant to get people to use this technology.

RHEL 8… Y U No Werk Bruh? (Again)

Yet again I’ve stumbled onto a workflow breaking issue with RHEL 8.

RDP is a major component of a lot of workflows for engineers, and Remmina has traditionally been a great solution for these situations. That is, until getting Remmina through Flatpak became the only reasonable method for obtaining it, at which point RDP connections stopped working: RHEL/CentOS 8 ships a version of Flatpak that’s several releases behind current, and OpenH264 refuses to install on any version of Flatpak lower than 1.4.

So let’s try and update Flatpak through native repos:
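Roughly, that attempt looks like this (a reconstruction, since the original output is abridged; your mirror output will differ):

```shell
# Ask the native RHEL 8 repos for a newer flatpak
sudo dnf update flatpak
# Dependencies resolved.
# Nothing to do.
# Complete!
```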

Right. So let’s just remind ourselves of the version here:
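A quick check makes the mismatch concrete. A sketch: the 1.0.9 fallback reflects what RHEL 8’s repos carried at the time, and the 1.4.0 floor is what OpenH264 demands.

```shell
required="1.4.0"                                   # minimum the OpenH264 runtime will accept
installed="$(flatpak --version 2>/dev/null | awk '{print $2}')"
installed="${installed:-1.0.9}"                    # assumption: RHEL 8's repo build at the time
# sort -V puts the lower version first; if the required version sorts
# first, the installed one is at least as new
if [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
  echo "flatpak $installed is new enough"
else
  echo "flatpak $installed is too old; need >= $required"
fi
```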

And the official Flatpak git repo has a release tag at 1.6.0. So why isn’t this in the repos? Let’s add a few more repos to see if we get any joy there:
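EPEL is the obvious first candidate. A sketch of the attempt; there may have been other repos in the mix that I no longer recall:

```shell
sudo dnf install epel-release
sudo dnf --enablerepo='epel*' update flatpak   # still resolves to the old 1.0.x build
```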

Can we update Flatpak now?

Nope. Brilliant.

So let’s build from source. Now we may want to remove the existing Flatpak installation since it may conflict with our manual build, so let’s try to remove that.
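The removal attempt, sketched from memory; the dependent-package commentary below is illustrative, not an exact transcript:

```shell
sudo dnf remove flatpak
# the transaction wants to drag desktop components (gnome-software and
# friends) along for the ride -- not something to wave through casually
```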

Yikes. We probably don’t want to do this. Some of this seems benign, but we may end up with some issues afterward. So let’s proceed as if everything is normal and we’ll leave this alone for the time being. Let’s grab the Flatpak source and go to town.
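Grabbing the source looks about like this (assuming the 1.6.0 tag mentioned above; Flatpak uses an autotools build):

```shell
git clone https://github.com/flatpak/flatpak.git
cd flatpak
git checkout 1.6.0
./autogen.sh    # configure step -- this is where the dependency hunt starts
```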

Missing dependencies from the start.

  • libcap-devel
  • libarchive-devel
  • libsoup-devel
  • gpgme-devel
  • polkit-devel
  • fuse-devel
  • ostree-devel…
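Chasing those down is the usual dance. A sketch, using the package names as they appear in the RHEL 8 repos:

```shell
sudo dnf install libcap-devel libarchive-devel libsoup-devel \
                 gpgme-devel polkit-devel fuse-devel ostree-devel
```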

So it turns out that the version of ostree-devel that is available through @System is .1 of a build version off from what Flatpak wants…

Back to 7 I go…

How to Learn Linux, Addendum

I swear, I’m not exclusively picking on CompTIA lately. I just happen to be really interested in what they’re doing, especially within the context of Linux. Also, since my last post, I’m suddenly receiving emails from their mailing list even though I never explicitly signed up for one. Weeeee.

One such email included a list of recent blog posts from their official blog, which appears to be a planet aggregate of sorts. The headline article was titled “How to Learn Linux” by Priyanka Sarangabany. It’s a well-written, if perfunctory, piece that blends advice given within the last twenty years with some minor contemporary flavor added. Whilst reading, I tried hard to demarcate between the objective of the article – as laid out by the title – and this nagging feeling of being grossly out of touch with reality. Despite my best urges to jettison the aforementioned intuitions, it got the better of me.

It might be just this article in particular, but most How do I Learn Linux articles lack a certain “in reality, professionals encounter this.” I think this bears some discussion, even if it falls outside the confines of where the article points. This piece in particular doesn’t actually get to the How To part until right near the end.

There’s no doubt that Linux is quickly becoming a powerful force in the IT industry. In fact, you’re probably using Linux without even knowing it! From smartphones and home media centers to smart thermostats and in-car GPS systems, this open-source operating system is quietly running nearly all supercomputers and cloud servers that power our daily lives.

Priyanka Sarangabany

One very common complaint you’ll hear lobbed from the Free Software Community, especially those who rabble-rouse with RMS, is that it’s a travesty when people don’t truly understand that when you’re using Linux, you’re actually using a complete suite of GNU software tools alongside the Linux kernel. Their vain efforts to correct the misnomer of simply Linux were to address it as GNU/Linux (along with several other strident misnomers). Regardless, the point remains that people running Linux are in fact taking advantage of a complete set of GNU tools developed by the Free Software Foundation way back in the day. The Linux community, however, is rife with all sorts of misnomers such as the one illustrated here. Free Software/Open Source is quite muddy in terms of who uses what, and more importantly, who cares specifically.

A similar phenomenon was at one time witnessed when Android first exploded onto the scene compliments of Google (it wasn’t originally a Google product ;)). The Android OS is running Linux as its kernel. Consequently, most in the Linux community saw this as a striking win for our cause. Long had we waited for the day when Linux saturation was prevalent enough in the user-space to render it a contender worthy of use cases the likes of which only Windows and OSX seemed to garner. However, hardly any of these smartphone users are taking advantage of Linux itself, explicitly. Furthermore, the smartphone space as it pertains to Android is an absolute shithole. Polluted by countless dumpster bin devices with all sorts of malicious software on them, privacy-raping middleware compliments of Google’s nefarious growth trajectory, and an overall exhaustion from being trained to ante up for a new device every six months, the fact that anyone is using Linux at all is both a non-sequitur and buried under the morass.

Some of the truth here is that the misnomers aren’t just about calling a duck a duck; they mean more than correcting bad speech, for better or worse. Not all Linux jobs are glorious administrative escapades where the objective of reformation in the user space is going to earn you badges of honor. It’s not an accident that Linux finds itself reserved for the infrastructure roles. Linux is mostly far too technical for 90% of so-called users, and the fact that Android runs atop it doesn’t mean that you’ve accomplished much other than distributing shadow copies. Emphasis here should be placed on the “quietly running” remark. You’d do well to keep this in mind.

Why Is Linux So Prevalent?

There are multiple reasons why Linux is considered one of the most diverse and powerful operating systems in the world. To understand why Linux is loved by many, it is important to identify its defining characteristics.

Open Source: As Denise Dumas, the vice president of software engineering and operating systems at Red Hat, said in a recent CompTIA webinar about Linux, “Open source is a place where innovation ferments and happens.” When software is released under an open source license, people can view and build upon the software’s original source code. This feature encourages software developers to adopt Linux and apply their own improvements to the code. As result, Linux’s public domain drives constant evolution and advancement.

UNIX-Like System: Linux behaves in a similar manner to a Unix system. This means that the operating system relies on multiple parts/programs that carry out specific jobs collectively. This is a fundamental principle of good system design and is at the core of what makes Linux so great.

Stable: As a public domain that is constantly evolving, Linux remains an incredibly secure operating system. In the words of Eric S. Raymond, “Given enough eyeballs, all bugs are shallow.” Linux’s general public license allows a plethora of software developers to rapidly identify issues in code and just as quickly respond to fix the errors.

Free: Linux is priceless. Literally! The underlying software of Linux has been free to download and install since its creation. For this reason, Linux remains one of the most accessible, diverse operating systems to this day.

Priyanka Sarangabany

All of this is 100% true. But it also 100% only panders to programmers or people looking for software to do something that doesn’t cost them a thing in terms of material price.

Flagshipping Linux’s success in contemporary terms as simply its adherence to Free Software and Open Source ideologies is missing the target just a bit. It’s an attractive aspect only if you’re a software developer or belong to a software engineering group specializing in Linux itself or creating software to run on it. By extension, an end-user benefits from this in that they have some assurance, as ESR puts it, that bugs are simply squashed faster than with alternative monolithic or bureaucratic projects. But end-users most likely don’t care about the fact that the source code for their favorite programs, let alone the entire OS, is available to them whenever. Concurrently, most IT management doesn’t care either. The questions of can and how are the servers going to be supported are the real tests, and we’re so far down the line from the days when there was real competition between IIS and Apache that the lines aren’t as clear as they once were. The fact that Linux is open-source matters only to the kernel team, its contributors, and upstream distributions that repackage the kernel and a collection of software. Your garden-variety sysadmin isn’t going to fondle this too much, at least for billable hours. In general contexts, management presented with the proposition of dedicating resources to retrofitting an open-source project to meet internal needs usually falls out of their chair laughing, and simply resorts to searching for another hopefully complete package. Of course, this says nothing of the emergence of IoT and cloud technologies. Many major industrial vendors are leveraging Linux as a second-class citizen in customer-facing equipment, a handful of specialized server vendors are selling products that are possible only because of Linux, and a vast majority of the cloud-focused architecture is built on or is exploiting Linux in a non-trivial capacity. 
Although the cut here between administration/architect/engineer is obvious, it’s mostly either this or programming.

Another thing: Implementing Linux isn’t free. While you can download the software and, depending upon the license, run it in your enterprise without legal incident, you most certainly had better have the internal support available to complement it. Most SMBs are in a position where they could benefit substantially from the use of Linux and derivative technologies. But most SMBs are woefully ill-equipped to float the administrative overhead that running Linux actually entails. The work of Canonical and Red Hat has made employing Linux easier over the years, but it hasn’t yet given people the Windows-feel that they hopelessly crutch against. Yes, it costs money as well to administer Windows systems. However, there’s no doubt that a more technical skillset is required for Linux.

One other thing: the use of the term public domain here is inaccurate. RMS, ESR, and Bruce Perens – amongst many others – have historically been cited as having railed against the claim that Linux transacts in this specific realm.

Over the years, companies such as Red Hat have put effort toward making system administration and development easier to master. In turn, today’s Linux graphical user interfaces (GUIs) are highly functional and significantly less intimidating.

Priyanka Sarangabany

This is, unfortunately, false. At least the final statement is. While Canonical, Red Hat, and SUSE have done a tremendous amount of work to streamline new technologies and shore up existing ones, these efforts have very little influence over the GUI/DE projects. These things fly free at their own pace and, frankly, it’s one of the most toxic components of the modern Linux user experience IMHO aside from the stupid number of distributions to choose from. Some insight:

  • Hardly any of these DE are completely functional. Some of them are close to highly functional, but not quite what’s available from traditional Windows/OSX. The very flexibility that these projects benefit from is the same aspect that ultimately undermines their acceptance. The divergence from traditional – but more importantly established – desktop metaphors witnessed in most DE are entirely unacceptable in an enterprise space; they’re barely passable in the user space. For the two or three that still look like they care about helping users rather than hindering them, they’re either too watered down or too full of flourish, coupled with programs that are too convoluted.
  • Consequently, the intimidation factor remains a plague, and it’s more real than what the author of this post or perhaps others would proselytize. Take another look at the DE roundup above.
  • Not only are there a wealth of choices, but they all express the usual metaphors in different ways which are sometimes really non-intuitive. It’s not a pedestrian user that’s going to find any safe haven here. And if the DE isn’t delivered as a first-class citizen in the DE roundup from a given distribution, it likely isn’t going to be given the time of day; shoehorning a DE into a distribution flavor that didn’t ship native is a bit of a gamble. This all sounds great for a Linux user who’s chomping at the bit to learn the new shiny, but imagine yourself as an IT Manager. Who in their right mind is going to look at this and think they’ve got a snowball’s chance in hell at adoption? What should a budding sysadmin learn? The intimidation factor here is real for both users and prospects, similar to what one finds in the realm of “Which JavaScript framework should I use to develop my web program?” All religion, no substance.

To begin your journey through the Linux space, you will have to make a few choices:

Choose a Linux Distribution: Linux is not developed by a single entity, so there are multiple different distributions (distros) that can take code from Linux open-source projects and compile it for you. Since these distros choose your default software (desktop environment, browser, etc.), all that’s left for you to do is boot up and install.

Choose a Virtualization Solution: Linux virtualization is used to isolate your operating systems so you can run multiple virtual machines on one physical machine, and in turn save time, money and energy on maintaining multiple physical servers. Some popular selections include VMWare, VirtualBox (Oracle) and Hyper-V (Microsoft).

Set Up Your Linux Play Space and Explore: Once you log in to your virtualization environment, you can start learning and practicing. The best way to become comfortable with Linux is to jump in and get your hands dirty.

Priyanka Sarangabany

Choosing a Linux distribution shouldn’t be a cavalier decision. CompTIA Linux+ is, like its LPI contemporary, a vendor-agnostic certification track. Essentially, passing this exam requires knowledge of not just the general administrative topics of Linux itself, but a selection of the more esoteric differences in the major distributions (Debian-based, Red Hat-based, or SUSE). The effort, I suspect, is to suggest or imply that certified individuals are capable of handling virtually anything thrown at them. There’s nothing wrong with this in theory or practice, since you’re not guaranteed to be working for/with an organization that has landed solely in one camp or another. The problem here is that you need to spend some time in all three, to at least some extent. I’ll cover more on this later, but there should be a bit of consideration before downloading. Learning Linux can certainly be accelerated by distro-hopping, but this behavior should dramatically slow as time goes forward.

Selecting a virtualization technology isn’t as trivial as this section would lead users to believe. VMware has historically been quite difficult to install and run on various distributions. Legacy versions of the software may work on older kernels, but newer kernels are hit-and-miss. Furthermore, VMware has a fairly lackadaisical approach to supporting Linux as a viable platform to run its software on. More often than not, you’ll be scouring the support forums to find that not only are most other people experiencing difficulty installing the software, but they’re either not finding good solutions for their problems or they’re running into other issues that inhibit a good user experience. VirtualBox is an okay Type-2 hypervisor, but anyone working seriously with virtualization technologies isn’t going to be deploying it any time soon. The implication here is that if you’re not committed to running Linux on bare metal, you’re likely running macOS or Windows and should virtualize it via a hypervisor or two. This may work well, but some of the exam content for Linux+ requires a subset of knowledge that you’ll get only by installing on bare metal.

But wait a minute… what does any of this have to do with actually learning Linux?

I’m trying to help set realistic expectations here. Despite the work to push forward, things still aren’t as crystal clear as the author of this blog post would have you believe. Allow me, then, to offer what I think are the best ways to learn Linux.

Manage Your Expectations

Linux is hard. Remember to separate the kernel from the DE, because it’s important. So long as the DE you choose provides an adequate terminal emulator, you can get away with focusing exclusively on the kernel interface and nothing else. Be sure not to get lost in the convoluted nature of the DE, otherwise it’ll add another layer of complexity that you’ll likely want to avoid.

Understand also that doing Linux professionally isn’t the same as doing it for a hobby. Swapping DE every five seconds, or advocating for the use of the flashy or nuanced one isn’t going to get you anywhere. This matters more than you think. Learn GNOME and KDE, and fiddle with the rest in your spare time if interested.

Distribution Selection

Pay attention to the leading distribution vendors out there, and try not to get lost too much in the new shiny that comes out of left field. Take a look at this image and try not to throw up. We in the Linux community say we’re welcoming, and options are great, but this is nauseatingly asinine. The major players here are Canonical, Red Hat (CentOS), and SUSE. Distro-hopping is okay if you’re just looking to have fun, but that should be relegated to virtualization. Run Ubuntu, RHEL/CentOS, or SUSE on bare metal, and leverage KVM through virt-manager or Cockpit, or VirtualBox, to run VMs locally.
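If you land on a box and aren’t sure which family it belongs to, the standard /etc/os-release file will tell you. Here’s a minimal sketch – the `distro_family` helper is a made-up name, not a standard tool:

```shell
# Map an os-release ID to its distribution family (hypothetical helper).
distro_family() {
  case "$1" in
    ubuntu|debian)       echo "Debian-based"  ;;
    rhel|centos|fedora)  echo "Red Hat-based" ;;
    sles|opensuse*)      echo "SUSE-based"    ;;
    *)                   echo "unknown"       ;;
  esac
}

# On a live system, feed it the ID from /etc/os-release:
#   . /etc/os-release && distro_family "$ID"
distro_family ubuntu   # Debian-based
```

Knowing the family tells you most of what you need up front: the package manager, the config file layout, and which vendor docs to reach for.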


Books

Despite the notion that Linux iterates quickly, the widespread adoption of newer kernels is left to a select group of distributions. Most are running kernels that are a few versions behind for the sake of sanity. That said, a handful of books exist that help you learn Linux itself (not the DE) in ways that will matter for the majority of versions in the common arena. A few of my recommendations:

Online Documentation

I don’t mean the manpages here, although some of them are useful. I’m talking about wikis, forums, and upstream documentation from distribution vendors. The Arch Wiki is an unbelievable treasure trove of highly technical information for all kinds of software that doesn’t necessarily peg itself to Arch (most of the time). Red Hat/CentOS publish a wealth of documents to give all kinds of administrative information. LinuxQuestions is a great forum for getting help with nearly all matters. Of course, if you’re feeling up to it, you could always get in touch with the developers of the software you’re using directly and get advice or help from them. I’ve talked to a few people from the GNOME team occasionally to get help on certain matters, and it’s proven quite valuable.

Taking Classes

I’ve personally never attended a Linux training course, but that doesn’t mean I haven’t heard wonderful things about them. Some certification authorities like CompTIA, LPI, and Red Hat, will offer both e-Learning and instructor-led courses that will accelerate your learning track right up to the day of examination.

Banging Your Head Against the Wall

I started with Linux in 2004 with Red Hat 9, which was given to a friend of mine who was attending ITT Tech at the time. All I had was the book it came with, the installation media, and a lot of time on my hands (I didn’t even have access to the internet then). The best way to learn, albeit the hardest, is to simply rake yourself over the coals. Grab a shitbox, abuse it, abuse yourself. Plain and simple.


Community

Get involved with a community. Don’t let the rumors about the Linux Kernel Mailing List scare you away. Most mere mortals are more than willing to discuss Linux, especially if you’re willing to put yourself out there.


Podcasts

Although the landscape is far too saturated, podcasts are still a viable source of information. I miss Linux Outlaws terribly, but shows like Destination Linux, SMLR, and Late Night Linux are great for getting the latest 411 on the happenings and hearing from people who’re incredibly skilled in what they do with Linux.

CompTIA Linux+ XK0-004 Thoughts

Lately I’ve been seeing a lot of steam about the CompTIA Linux+ exam. Evidently they’re separating from the LPI partnership that’s long been in place – not sure if that has anything to do with the brouhaha – but I thought I’d dig into the exam outline to see what the competency focuses were and offer some of my opinions about them. Bear in mind that I’m not a proctor or advisor of any kind, and that opinions are strictly that. I’m going to run down the objectives in the same order they appear in the official outline document, so nothing comes out of order.

You can view the outline here:

1.0 Hardware and System Configuration

1.1 Linux Boot Process Concepts

Man, am I happy to see that someone finally understands that not a single person on this planet uses LILO any longer. Say what you will about technical merit, the clear winner here was GRUB. Any mention of the former has been wiped clear from the objective list. Hopefully this isn’t one of those Cisco-style documents where what’s on the exam isn’t anywhere near close to the outline, unless of course your abstract thinking expands to the realm of what’s par for LSD abuse. Also happy to see that there’s a focus on UEFI/EFI rather than BIOS. Having deployed more than my fair share of contemporary computers both manually and via PXE, it feels dirty to reconfigure a system to run BIOS. Practically speaking, I don’t think UEFI/EFI is as big of a monster as it was several years ago. We in the Linux community have already crossed this bridge, so let’s stop taking a piss on the side with wilting grass here.

1.2 Kernel Modules

Part of me feels as if this section is gratuitous filler on every entry-level Linux exam. Why? There have been maybe a handful of times I’ve had to manhandle modules, and it’s come in user space on workstations rather than servers. Dealing with Type-2 hypervisors that don’t play nice with Linux (looking at you, VMware) or Nvidia graphics drivers seems to be the only real play here. For the most part, the kernel does a good job of taking care of what you need for common use cases, and this is especially true if you’re deploying any enterprise distribution whose philosophy is that users shouldn’t have to eat the skin off their own arms to get these systems to work in the 21st century. That said, it’s still valuable knowledge. I’m just unsure that it requires a point allocation on an exam.

1.3 Network Connectivity Configuration

Not really too much to comment on here, except for the inclusion of Netplan configuration. Along with Gradle, YAML is one of those technologies that was likely written by some hipster and is just a dumpster fire of epic proportions. Since that’s all dandy, we get to change from semi-palatable traditional network configuration scripts that look much like an INI file – which is well understood – to some arcane indent-based copulation between Python-like syntax (because, you know, Python is the greatest thing since sliced bread) and the never-ending ML-based projects that seek to change the world. No thanks. Learn it for the exam, learn to hate it, and move back to better things.
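For the uninitiated, here’s roughly what a static-address Netplan file looks like; the interface name and addresses below are made up, and the exact keys vary a bit between Netplan releases:

```yaml
# /etc/netplan/01-netcfg.yaml -- example interface and addresses
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```

Compare that to the flat key=value ifcfg scripts it replaces, and you’ll see why the whitespace sensitivity rubs people the wrong way: one misplaced indent and `netplan apply` falls over.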

1.4 Linux Storage Management

RIP btrfs. Not really.

I’m not sure I understand the migration to XFS over EXT4. In my deployment contexts, especially with M.2 drives, XFS has caused all sorts of problems that I can’t really explain away. The result, however, was a revert to EXT4 after several FS-level repair attempts were made to fix corruption on the root partition. One instance I chalked up to a silently botched install, but the other five I couldn’t attribute to much of anything. Still, XFS seems to sit in the first-class citizen spot, with EXT4 not too far behind it.

Glad to see that there are some subtle hints at RAID management here. It’s never a huge factor in entry-level exams, but still worth mentioning.

1.5 Cloud and Virtualization Concepts

YAML makes yet another appearance. Yay…

With as long as virtualization has been around, I’m a bit shocked that it’s taken this long for it to appear in entry-level exams. Most enterprises these days are at a minimum leveraging Type-1 hypervisors, usually in the form of VMware. The focus here, however, is on KVM. It looks as if there may be a little bit of a touch on containers as well, although I seriously doubt they’d be a heavy hitter in comparison to the contemporary content.

An aside, I’m not aware of many enterprises that leverage KVM explicitly for virtualization needs. This mostly gets passed off to VMware or Citrix. I usually find KVM in a Type-2 context on workstations.

That said, there appears to be more here that serves a general-purpose understanding of virtualization technologies. Definitely worth taking a look at if you’re unfamiliar.

1.6 Localization Options

Most people don’t really pay attention to these sorts of configurations, but they’re important, especially those concerned with keeping accurate time on a computer. If not for the workstation, then at least be sure that you’re familiar with these commands, especially in the context of virtualized guests. Time drift here can be a pretty common problem.

2.0 Systems Operations and Maintenance

2.1 Software Management

As with many vendor-neutral exams, this one appears to target the most common installation methods for three types of distributions: Debian-based, RHEL-based, and openSUSE (Zypper is an explicit target here, for some reason). I’m not sure why there’s no mention of Flatpak or Snap; both are emerging as pretty common ways to install user-space programs on a Linux computer.
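The mapping across the three families is mostly mechanical once you know which camp you’re in. A tiny sketch – the `pkg_install_cmd` helper is a made-up name for illustration:

```shell
# Rough install-command equivalents across the three families (run as root on a real box).
pkg_install_cmd() {
  case "$1" in
    debian)  echo "apt-get install"  ;;  # dpkg underneath
    redhat)  echo "dnf install"      ;;  # rpm underneath (yum on older releases)
    suse)    echo "zypper install"   ;;  # rpm underneath as well
    *)       return 1 ;;
  esac
}

pkg_install_cmd redhat   # dnf install
```

The exam’s interest is in knowing these families side by side, not in memorizing one vendor’s flags.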

2.2 User and Group Management

Run-of-the-mill stuff here. The only addition I would’ve added would be domain-based local user management. I believe there’s a section later in the Security topic that covers LDAP integration, but there are some user-space tools that go along with this and I don’t personally consider these to be mid-level knowledge points.

2.3 File Management

These sections should be renamed Grep/Sed/Awk 101. At least you’ll get exposure to some of the more esoteric commands for file management like wc and tee, but again, there’s nothing here that’s off kilter.
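A toy pipeline that touches the usual suspects in one go (the passwd-style input is made up):

```shell
# grep filters, sed rewrites, awk extracts, tee forks a copy to disk, wc counts.
printf 'alice:x:1000\nbob:x:1001\ndaemon:x:2\n' \
  | grep -v '^daemon' \
  | sed 's/:x:/ uid=/' \
  | awk '{print $1}' \
  | tee /tmp/names.txt \
  | wc -l                 # 2
```

Two user names land in /tmp/names.txt while the count comes out on stdout; chaining small tools like this is exactly the skill the objective is after.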

2.4 Service Management

I thought we were beyond the point where SysV was still a major player, but evidently it remains more pervasive than I estimated. Most enterprise-focused distributions will focus only on Systemd, and it’s more than adequate enough for even the prevalent Debian-based distributions (unless of course you think running Devuan is a good idea, to which I’d say you need clinical help). In these situations, most of the SysV commands translate to Systemd commands anyway.

2.5 Summarize Server Roles

Not much to mention here. Just know the roles.

2.6 Job Automation and Scheduling

If you don’t know the five-finger mnemonic for remembering how to configure cron jobs, take a look at this post:–timing-your-cron-jobs.html
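For quick reference, the five fields read left to right like so (the backup script path is made up, and the field-counting helper is just an illustration):

```shell
# The five cron fields, left to right:
#  minute (0-59)  hour (0-23)  day-of-month (1-31)  month (1-12)  day-of-week (0-6, Sun=0)
# e.g. run a backup every day at 02:30:
#   30 2 * * * /usr/local/bin/backup.sh

# Tiny helper: count the whitespace-separated fields in a schedule string.
cron_field_count() {
  set -f          # disable globbing so '*' survives word splitting
  set -- $1
  set +f
  echo $#
}

cron_field_count '30 2 * * *'   # 5
```

If your count isn’t five (before the command), cron will reject the entry.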

2.7 Linux Devices

You’d be surprised how little most people know about udev, and it’s critical to understand when talking about managing devices on contemporary Linux computers. My recommendation would be to read through the Arch Wiki article on udev to get a better understanding of it if you’re unfamiliar:

2.8 Graphical User Interfaces

In the wake of recent events with my attempts at deploying Linux to workstations in the enterprise I manage, I’ve since developed a substantial amount of beef with sections like these. Without getting too much into detail, because honestly it could warrant its own post, I’ll say the following concerning the exam outline:

No serious enterprise professional is going to leverage anything other than GNOME in their environment, because it’s easily the best supported in terms of contractual support from the major enterprise distribution vendors. Anything outside of that is going to require internal support capabilities which may or may not exist. Furthermore, Unity as a DE was officially deprecated by Canonical within the last few releases of Ubuntu, and it was so jarring to begin with that supporting it is completely out of the question. In my opinion, requesting that a prospective student be familiar with DE like Unity, Cinnamon, or MATE is just an absolute waste. This isn’t a game. Managers will have a hard enough time selling the idea of getting Linux on workstations to begin with. Along with that decision comes which DE to standardize on, and this is frankly more contentious than the predicate matter of getting Linux installed at all. Rolling the dice on every single option out there is an incredibly insane notion. Finally, X11 forwarding via SSH isn’t as common a function as it may once have been; almost all servers run headless, ergo there’s no need for it.

My advice here is to understand at least what the DE arena looks like, familiarize yourself with how each expresses various UX metaphors, and then move on with your life.

3.0 Security

3.1 User/Group Permissions

The focus here is on traditional DAC concepts as well as MAC through both SELinux and AppArmor, with the lion’s share being the former. There appears to be some concern with ACLs, which both EXT4 and XFS support, but most people don’t realize that ACLs are entirely optional in these file systems, and that their translation to other file systems is generally unclean in the sense that they just get clobbered. Furthermore, you can have several EXT4/XFS mounts on a system, one of them supporting ACLs and the other not. The point here is that because they’re not first-class citizens, honouring ACLs in Linux has been and continues to be an odd conversation.
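The DAC side is the part you can see everywhere without any optional machinery. A quick demo of the classic owner/group/other bits (the `setfacl` line is only a comment because the acl package is exactly the optional layer in question):

```shell
# Classic DAC: three permission triplets, no ACLs involved.
f=$(mktemp)
chmod 640 "$f"           # rw- for owner, r-- for group, --- for other
stat -c '%a %A' "$f"     # 640 -rw-r-----

# The optional POSIX ACL layer would look like:
#   setfacl -m u:alice:r "$f"    # needs the acl package and an acl-mounted FS
rm -f "$f"
```

Note that `ls -l` marks ACL-bearing files with a trailing `+`; if you copy such a file to a filesystem mounted without ACL support, that extra metadata is exactly what gets clobbered.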

The fact that the bulk of the weight appears to be on SELinux isn’t an accident. Again, in the arena it has emerged largely victorious despite Canonical’s need to be different. As arcane as SELinux seems to be, the truth is that there’s a tremendous amount of enterprise support behind it.

3.2 Access and Authentication Methods

Not too much to comment on here. One thing worth mentioning, however, is the part that focuses on LDAP integration. In most cases, Linux servers/workstations will integrate with AD rather than an LDAP implementation like IPA, regardless of the benefits. Most tests will operate under the latter context, unfortunately, and may focus exclusively on pure OpenLDAP, which to my knowledge is hardly ever deployed by itself.

3.3 Security Best Practices

Not too much to comment on here either. These are things that most everyone should be doing if they’re serious about getting Linux secure, even in the server environment.

3.4 Logging Services

Another section without much going on. Garden-variety things here.

3.5 Linux Firewalls

Here’s another one of those fun sections where cross-vendor technologies come into play. Most people are familiar with iptables and Netfilter, but when we’re talking about firewalld vs. ufw, the former is the clear victor in the enterprise space, and that doesn’t appear to be changing any time soon.

3.6 Backups

I’m glad to see some focus on this for entry-level exams. This still seems to be the last thing anyone thinks about concerning their computing architecture. Three techs are covered here: SFTP, SCP, and rsync. I still maintain that rsync is the winner here, even for off-site. SCP has noted performance concerns, and SFTP has FTP in it, so we don’t want to touch it.

4.0 Troubleshooting and Diagnostics

4.1 System Analysis and Remediation

In general, I feel as if this section is one that most Linux users gloss over, especially since in the day-to-day, a reinstall combined with smart partitioning will usually cure all serious ails.

Some of the network diagnostics here are a bit odd, since they’ll almost always end up at the network level rather than at the host. For example, unless you’ve been modifying your network interfaces, routing issues hardly ever emerge at the host level. Further, some of the network diagnostic commands aren’t trivial, like the use of nmap or tshark. Sure, you could stumble your way through these, but you might not realize half of what you’re looking at with an untrained eye.

Root password recovery has shifted a bit over the years. Even select contemporary enterprise distributions are shipping with the root-account-disabled model, instead relying exclusively on sudo for escalation. The techniques for recovery are still valid, however.

EDIT: Reading over this some time in the future, I realised that I omitted here that although the root-account-disabled model is becoming prevalent, systems without the proper configuration can be vulnerable when booting into single-user mode, since the root account will just log in by default with no password. There are provisions for this in your boot configuration files. Look them up for your distribution.
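For the curious, the recovery dance usually looks something like the following; the exact parameters vary by distribution, so treat this as a sketch rather than gospel:

```shell
# At the GRUB menu, press 'e' on the boot entry and append to the linux line:
#   rd.break            (RHEL/CentOS: break into the initramfs)
# or
#   init=/bin/bash      (generic: run a shell as PID 1)
# Then remount root read-write and reset the password:
mount -o remount,rw /sysroot     # rd.break path; plain / for init=/bin/bash
chroot /sysroot
passwd root
touch /.autorelabel              # SELinux systems: trigger a relabel on next boot
```

This is also why the EDIT above matters: a GRUB password or restricted boot entries are the provisions that stop anyone at the console from doing the same thing.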

4.2 Optimize Process Performance

Again, another aspect where users might get a taste but not dive too deeply. Being able to dynamically adjust process priority is crucial when diagnosing system performance issues. Furthermore, identifying a process is a bit of an art; moving between top, ps, lsof, and pgrep is important.
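Niceness is the usual entry point for priority adjustment. A small demo that reads the value straight out of /proc rather than trusting any one tool’s formatting:

```shell
# Start a low-priority job and inspect its niceness (field 19 of /proc/PID/stat).
nice -n 10 sleep 30 &
pid=$!
awk '{print $19}' "/proc/$pid/stat"   # 10

# An unprivileged owner may only raise niceness further, e.g.:
#   renice -n 15 -p "$pid"            # lowering it requires root
kill "$pid"
```

The same /proc file backs what top and ps display, which is worth knowing when the two tools seem to disagree.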

4.3 Troubleshoot User Issues

If you’ve understood topics from previous sections concerning SELinux, DAC/MAC, and file systems, you’ve pretty much got this section in the bag.

4.4 Troubleshoot Application and Hardware Issues

Most of this is garden variety, with the caveat on select storage points such as the focus on HBAs and degraded storage in a RAID context. Not very common problems encountered by junior admins, but still worth mentioning.

5.0 Automation and Scripting

5.1 Deploy and Execute Bash Scripts

I think the title here is a bit misleading, as the content seems to be a Bash primer more than anything else. If you’re already familiar with Bash, this should be a breeze.
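If “primer” is the right read, expect nothing deeper than this sort of thing – variables, a test, a loop, and a function (the service names are just placeholders):

```shell
#!/usr/bin/env bash
# The usual primer fare: a function, a loop with a counter, and a conditional.
greet() {
  local name="$1"
  echo "hello, ${name}"
}

count=0
for svc in sshd crond chronyd; do   # placeholder service names
  count=$((count + 1))
done

if [ "$count" -eq 3 ]; then
  greet "linux"                     # hello, linux
fi
```

If quoting rules, `$(...)` substitution, and exit-status checks are second nature to you already, this objective costs you nothing.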

5.2 Git

Very basic git usage is covered here. You’re not going to be doing cherry-picking, rebasing, or blaming here.
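In practice the handful of commands in scope looks about like this, assuming git is installed (the file name and identity are made up):

```shell
# init, add, commit, log: the entry-level loop and nothing more.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "hello" > notes.txt
git add notes.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "first commit"
git log --oneline | wc -l    # 1
```

Throw in `git clone`, `git pull`, and `git status` and you’ve likely covered the whole objective.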

5.3 Orchestration Concepts

It’s not really clear what they mean here. General principles are one thing, but are they hinting at any specific implementation such as Puppet, Chef, or Ansible? Orchestration also occurs in the virtualization space, and it means something a little different. Methinks some ambiguity is here simply because of the aforementioned virtualization section not being exclusive to Linux itself.

Overall, I think this looks like a pretty good vendor-agnostic exam, despite my personal opinions on the matter. There’s a nice effort to blend rudimentary enterprise concepts with general knowledge, which seems to be a trend, and I think exam takers would get a lot out of it. It’s unclear to me what the industry adoption would be, especially since there’s a split between them and LPI.