Why Year of the Linux Desktop is Bullshit

The title of this alone speaks volumes greater than the exposition that’s to follow, and I’m sure that some of my peers are already bounding from the sheds with pitchforks and torches in hand, but I’ve never been one to not voice a concern even when the house is burning down.

Yet again, we in the Linux user community find ourselves at an interesting juncture. Microsoft has, as of 14 January 2020, officially EOL’d Windows 7. As with XP before it, this will likely be a major issue for the immediate future considering how prevalent its use is in the desktop market (Gartner estimates still have Windows penetration at greater than 90%). As expected, most of the podcasts and reporting sources are cobbling together pieces to launch yet another slew of volleys, perhaps to rally the uninitiated to take another look at Linux if they haven’t already done so.

But, who listens to or reads these sources? Non-Linux users?

This nonsense of the Year of the Linux Desktop has been going on for as long as I can remember. And from working with those who have vastly longer tenures than myself in the Linux world, it seems as if it caught on well before my dive nearly fifteen years ago. From the gate, I too shared this sentiment. I had been a Windows user from the start, and although Linux was a monstrous beast to handle back then, I still loved it with all my silicon heart. And I, like every other witness, wanted to espouse my love to the world in the most compelling and boisterous method I could imagine. I’d rake myself through the coals of Hell to learn all I could about Linux, and while still salving my burns thought there was no way anyone else wouldn’t want to be a part of this. The computing revolution was here! Or was it?

Truth be told, the premises of either open source or free software were entirely lost to me until about seven years after I’d taken the swan dive. The fact that I was using something that wasn’t Windows, and that I could be as hardcore of a programmer as I wanted were the only things I cared about. None of this liberty crap mattered. Software wasn’t a first class citizen having attributes of sovereignty. I just wanted to be as nerdy as I could because I wanted to. The politics of software, which would come in much later, seemed to dissolve this juvenile yearning in me to a large extent. It wasn’t about being a nerd anymore. All of a sudden, it was all patents and licenses and codes of conduct and ruination abound.

So if it weren’t for being able to divorce the creations from their Gods, why would I want others to use Linux? To know and embrace it the way I had?

Two words: technological empiricism. I was a fucking God amongst men, a wolf amongst sheep when it came to sheer computing skill, and I knew it, and I was more than happy to get everyone else onboard, their readiness for the transition be damned.

Turns out, this wasn’t too dissimilar a position for young bucks in my pool at the time. The early 2000s were rife with young technologists who, after having done whatever it was they did to make the Internet the cultural platform it ended up being, just wanted to flex on the boys and girls as much as possible. We didn’t skip leg day at the gym, because we made the gym.

Even then, it seemed as if using Linux was a horrific experience. Most all of my peers were still using Windows 2000 and Windows XP, especially those of us who fancied ourselves programmers. Hell, the first commercial software I wrote was in VB 6 and was a team effort (which was hilarious in its own right). But I ported it to Linux on my own using the customary tools of the day: glibc, GTK, Glade, MySQL (MariaDB wasn’t even a thing then), etc. And guess what? It worked! But guess what, again? The damned customer was running Windows 2000, and that silly little abstraction layer, GTK for Windows, wasn’t going to be running this shit anytime soon. Besides, who statically links?

The horror of this for programmers was one thing, but the stage was entirely different for the pedestrian user. Games? Well, if you like spending hours playing AisleRiot, then Linux was the platform for you. I mean, why wouldn’t you want to trade playing WoW or Diablo for good old GNOME Mahjong? Total no-brainer. Office? Come on! That’s easy! OpenOffice (LibreOffice wasn’t a thing back then) was the killer app that could do EVERYTHING that MS Office did (this turned out to be a colossal lie then and still is now, despite everything that The Document Foundation does with LO; don’t believe me? Try converting an SMB to LibreOffice from MSO). Internet Browser? Oh, just use Mozilla, because that was super compatible back in the day. Need to install some software? Just use the terminal!

Oh… wait. THAT thing.

We’ve already stumbled upon what is potentially the largest issue with Desktop Environments on Linux. What started as a detailed thesis in the form of a borderline anthropological analysis of why the current landscape of Desktop Environments sucks was boiled down to three quintessential matters, this being the first and perhaps the largest.

You know what’s unattractive to the pedestrian computer user? Terminals. You know what contemporary operating systems do a bang up job of getting those out of the way? Windows and virtually every OS that Apple produces for their product line. You know what operating system practically begs to be used nearly exclusively by the terminal? Linux. You know what Linux software doesn’t do at the terminal? Provide a consistent or coherent interface. You know what pedestrian users are most afraid of concerning their computers? Breaking them. You know what breaks Linux computers? Using the right command in the wrong context by accident. And without the aid of a witchdoctor who bears the scars from having mutilated themselves to possess such knowledge, those people are fucked.

Short: Every single Desktop Environment, then and now, does an absolutely piss poor job of abstracting away the need for a casual user to ever whip out the terminal.

But Greg, that’s all bullshit because there are plenty of cases where a DE does what you’re saying it doesn’t! Yeah? Let’s run through a few use cases here that the casual user takes for granted in other operating systems and environments that require a terminal regardless of the DE:

  1. Installing a group of related software. In RHEL-derivatives, this is usually handled through groups in either YUM or DNF (modules in the contemporary sense). Guess what doesn’t show up in GNOME Software? Groups or Modules. In Debian-derivatives, this isn’t even a concept that APT knows about without first getting tasksel. And even after installing it, guess what doesn’t show up in GNOME Software? tasksel groups. (There's a quick sketch of what this looks like at the terminal just after this list.)
  2. Installing software. Despite the Windows Store being the hip place to get Windows 10 software from, it’ll never quite be the thing that Microsoft wants it to be because developers can’t distribute traditional binaries through it; they’re required to be repackaged in a fairly unintuitive manner. Ergo, with Windows having the lion’s share of the desktop market, pedestrian users have been habituated to installing software in the legacy fashion. And let’s face it, the Windows 10 Store is full of garbage in the same way that each mobile app store is. Guess how you can’t install software on Linux? The legacy method that every user is accustomed to. BUT WAIT, GREG! WHAT ABOUT APPIMAGE OR SNAPS OR FLATPAKS!? Bullshit, each of them. They’re great for those of us who’ve abused ourselves for years to get these programs to work otherwise, but you still run into issues with dependencies (try installing a Flatpak on CentOS 8 that requires H.264, only to find out that you can’t upgrade the base Flatpak installer because it’s about nine versions behind, or try doing all of the manual hacking required to get certain Snaps permission to break their cells and access otherwise inaccessible resources on a system (virtually anything that installs with the --classic switch)). But surely, anything you install from the software store that comes with the DE you’re using is good enough, right? Yeah, it usually is, for the most part. Until one realizes that you can’t get a piece of software that you would otherwise have obtained on either Windows or Mac without incident. You want Google Chrome? Not available without downloading a DEB or RPM. Need some AV codecs because you can’t view DRM content or Flash content? Guess you’re adding some repos (you know, because EVERYONE keeps their AV in OGG or Theora, because those are solid and x-plat formats).
  3. Troubleshooting the DE. But wait! Didn’t know that the DE runs atop a Window Manager? Didn’t know the Window Manager runs atop a Display Server? Don’t know how to change VTYs when the whole thing goes down the shitter? Oh well.
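
For the group-install point above, here's a minimal sketch of the terminal dance in question (the group and task names are just examples; swap in whatever your repos actually offer):

# RHEL/CentOS/Fedora: list and install package groups via DNF
dnf group list
sudo dnf group install "Development Tools"

# Debian/Ubuntu: groups aren't a native APT concept; you need tasksel first
sudo apt install tasksel
sudo tasksel install lamp-server

None of which, naturally, is anything a casual user should ever have to see.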

The next point: the DEs themselves miss virtually all of the targets they’d need to hit to render a casual user experience viable. I get so sick and tired of hearing people claim that the current landscape of choices in not just Linux distributions but DEs is such a good thing because, as one Matt Hartley put it, “One man’s perfect distribution isn’t another man’s perfect distribution.” Bullshit. Total fucking bullshit. I don’t even know where to start with this.

I can’t resist the urge to address the elephant in the room here that trespasses onto the larger, slightly relevant, topic of Linux Distribution Saturation. The question is this: if I download Ubuntu and make some minor tweaks to userland, does that actually mean I have a different distribution, and further, that it warrants creating a whole new entity for consideration and download within the Distribution Space? The crux here? Most Ubuntu-based derivatives are still using the fucking Ubuntu repositories. I can’t stress that point enough. If a distribution is supposed to be, at its core, Linux with a swath of software available to make the user experience more concrete, and you’re not offering any different software than what’s available upstream, why the fuck are you creating a distribution? Changing a handful of packages, or forking because you’re butthurt about the init system used, isn’t a new distribution. Why do we think it’s okay to do this? Why do we have over fifty Linux Distributions to choose from? How is this indicative of offering clear choice to outside users? Plainly, it isn’t. It’s offensive to not just those in the community, but mostly to those outside.

Off the top of my head, these are the DEs that I can think of that are available to choose from: GNOME, KDE, MATE, Cinnamon, Unity (if you’re still using slightly older versions of Ubuntu), XFCE, LXDE, LXQt, CDE, Budgie, Enlightenment, Razor-Qt, Pantheon, Lumina, and that one that Deepin uses, which I think is just called Deepin. Each of these expresses colloquial desktop metaphors in different ways, each has its own quirks about customization, each has its own methods for enterprise considerations (actually, most don’t even consider this, and if they do, the implementation is fucking horrible and unmanageable at best), each comes with its own suite of tools that functions differently from the next, etc. I can’t go any further here without wanting to throw up. The fact that only a select few of these come close to being inviting to a casual user is appalling, and even these fall fatally short of the root objective, amongst many others.

It’s worth pointing out that although most desktop metaphors aren’t codified in any way, that doesn’t mean that casual users are malleable to the point of wanting to abuse themselves endlessly just to use their computers, or that you can foist whatever you want in front of them. For fuck’s sake people, we’ve had traditional metaphors being expressed since GEM, and although it might be time for some change, tell me: how well has that worked out on the desktop form factor? Why take something that works, something that people are accustomed to, and not just break it, but irreparably obliterate it?

Oh, and here’s the other hilarious spin sold as a positive about the Choice Paradox concerning DEs: if you don’t like the one you have, just get another one! Blech.

  1. This is an unbelievably moot thing to say to a casual user. None of them view this as a benefit. Being accustomed to just using the computer, it doesn’t dawn on any of them that Explorer is a shell as much as it is a file manager, and that it can be customized or even replaced (unsure if this is true for Mac). Whatever they see first is what they’re stuck with. End of story.
  2. Even if you manage to get someone beyond this point, just how do you get another DE? Do you install it alongside your existing one? Do you get a fresh distribution flavor? If the latter, you’d better hope you partitioned your system correctly, otherwise your shit gets blown to kingdom come.
  3. Having multiple DE running parallel on a single installation is a fucking nightmare.
    1. The only really safe way to install an alternative DE is to get it through a group in your default repositories. See the aforementioned point about installing software groups. If you don’t see it here, you’re already about two-thirds of the way to jumping the shark.
    2. Once you get it installed, you’ll likely have to contend with the fact that your distribution will have a strong preference for the Display Manager. Wait, what’s a Display Manager? Oh yeah. Forgot to mention that little bit. The program that logs you into the computer? That’s the Display Manager, which is yet again an entirely different component. Anyway, for example, CentOS has a strong preference for GNOME Display Manager (GDM). Even if one installs the KDE flavor of CentOS, you’re still going to be using GDM rather than SDDM. To put the icing on the cake, let’s say you’ve got a system with both GNOME and KDE on it with GDM as your DM. You have to know enough about GDM to know that you need to change the session type to KDE Plasma instead of GNOME Shell, because if you don’t change it, you’re going to keep using GNOME Shell.
    3. If something goes wrong, or if you just decide on a DE you like and wish to exorcise the alternate beast, good fucking luck. Pulling a DE out of a running system is like getting a steak out of the throat of a lion. Sometimes it can be easy; other times you run the risk of crippling your system if you’re not paying attention to how the package manager is resolving dependencies for removal. Guess what can’t pull these out safely in some cases? Graphical software managers like GNOME Software or Discover.
    4. If you decide you want to be like Lois and Clark and go all-in on running multiple DEs in parallel, you’ve now got an issue where you’ll have multiple programs that do the same damn thing. There’s nothing cooler than looking for a terminal emulator and seeing Konsole, XTerm, and GNOME Terminal all at the same time. Sometimes I want to look for files using Dolphin, but other times Nautilus gets my gutchies that day.

This had to be obvious to someone at some point, because we have distribution flavors. Ubuntu has several: Ubuntu, Kubuntu, Xubuntu, Ubuntu MATE, and Ubuntu Budgie (Kylin sort of doesn’t count here). The reason why these flavors exist? Ubuntu gives you GNOME Shell, Kubuntu gives you KDE, Xubuntu gives you XFCE, Ubuntu MATE gives you MATE, and Ubuntu Budgie gives you Budgie. This. Is. The. Only. Reason. These. Exist: to isolate the DEs from each other for a hopefully more gooder experience than if one were to use Ubuntu and install KDE inside it.

How is this feasible? How do you attract users to this? How is it EVER going to be the Year of the Linux Desktop? Can we please stop this nonsense madness of blindly repeating ourselves about dominating the desktop space? It isn’t going to happen when things look like this. Not. Fucking. Ever. All we’re doing is circle jerking with ourselves in a fantasy where we can finally say we came out on top. If we want desktop dominance, which may never happen, we should at least attempt to start with these goals (IMHO):

  • Standardize. There’s nothing more annoying to a casual user than too much choice; Choice Paralysis is a real thing, think buying toothpaste. Maybe this means consolidation. Maybe this means a new project whose focus is on these things.
  • User Focus. Make a product whose core philosophy is the user and their experience rather than an experiment with cool code. Software shouldn’t abuse users or require them to abuse themselves.
  • Ease of Use. This should’ve been a no-brainer, but methinks the horse died some time ago and we just agreed to leave it be and not replace it. Anything that could be done at a terminal should be able to be performed through the UI, no exceptions. Metaphors are not play things. We have established ones that work considering the form factor, so fucking use them. They work for a reason: they don’t assault the user.
  • Customization. It should be EASY to customize your environment, and it should also be EASY to sell to an enterprise for adoption. It shouldn’t be a fucking sell of a nuclear power plant to get people to use this technology.

RHEL 8… Y U No Werk Bruh? (Again)

Yet again I’ve stumbled onto a workflow breaking issue with RHEL 8.

RDP is a major component of a lot of workflows for engineers, and Remmina has traditionally been a great solution for these situations. That is, up until getting Remmina through Flatpak became the only reasonable method for obtaining it, and RDP connections stopped working because RHEL/CentOS 8 ships with a version of Flatpak that’s several releases behind current, and OpenH264 refuses to install on any version of Flatpak lower than 1.4.

So let’s try and update Flatpak through native repos:
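
On a stock CentOS 8 box, that attempt amounts to something like the following (spoiler: the resolver has nothing newer to offer):

sudo dnf update flatpak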

Right. So let’s just remind ourselves of the version here:
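
Either of these will do it:

flatpak --version
rpm -q flatpak
# a stock RHEL/CentOS 8 install at the time was sitting somewhere in the 1.0.x series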

And the official Flatpak git repo has a release tag at 1.6.0. So why isn’t this in the repos? Let’s add a few more repos to see if we get any joy there:
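
The exact list hardly matters; EPEL is the obvious first candidate:

sudo dnf install epel-release
sudo dnf makecache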

Can we update Flatpak now?

Nope. Brilliant.

So let’s build from source. Now we may want to remove the existing Flatpak installation since it may conflict with our manual build, so let’s try to remove that.
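
The attempt itself is a one-liner, though pay attention to what the resolver wants to drag out with it:

sudo dnf remove flatpak
# depending on the install, this may want to take a surprising amount of the desktop stack along for the ride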

Yikes. We probably don’t want to do this. Some of this seems benign, but we may end up with some issues afterward. So let’s proceed as if everything is normal and we’ll leave this alone for the time being. Let’s grab the Flatpak source and go to town.
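
Roughly the usual autotools routine, pinned to the tag we actually want (1.6.0, per the upstream repo):

git clone https://github.com/flatpak/flatpak.git
cd flatpak
git checkout 1.6.0
./autogen.sh
make
sudo make install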

Missing dependencies from the start.

  • libcap-devel
  • libarchive-devel
  • libsoup-devel
  • gpgme-devel
  • polkit-devel
  • fuse-devel
  • ostree-devel…
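
Pulling those in is simple enough, assuming your repos actually carry them (a few of the -devel packages live in the PowerTools repo on CentOS 8):

sudo dnf config-manager --set-enabled PowerTools
sudo dnf install libcap-devel libarchive-devel libsoup-devel gpgme-devel polkit-devel fuse-devel ostree-devel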

So it turns out that the version of ostree-devel that is available through @System is .1 of a build version off from what Flatpak wants…

Back to 7 I go…

How to Learn Linux, Addendum

I swear, I’m not exclusively picking on CompTIA lately. I just happen to be really interested in what they’re doing, especially within the context of Linux. Also, since my last post, I’m suddenly receiving emails from their mailing list even though I never explicitly signed up for one. Weeeee.

One such email included a list of recent blog posts from their official blog, which appears to be a planet aggregate of sorts. The headline article was titled “How to Learn Linux” by Priyanka Sarangabany. It’s a well-written, if perfunctory, piece that blends advice given within the last twenty years with some minor contemporary flavor. Whilst reading, I tried hard to demarcate between the objective of the article – as laid out by the title – and this nagging feeling of being grossly out of touch with reality. Despite my best urges to jettison the aforementioned intuitions, it got the better of me.

It might be just this article in particular, but most “How Do I Learn Linux” articles lack a certain ubi we vera, or “In reality, professionals encounter this.” I think this bears some discussion, even if it strays from the article’s stated direction. This piece in particular doesn’t actually get to the How To part until right near the end.

There’s no doubt that Linux is quickly becoming a powerful force in the IT industry. In fact, you’re probably using Linux without even knowing it! From smartphones and home media centers to smart thermostats and in-car GPS systems, this open-source operating system is quietly running nearly all supercomputers and cloud servers that power our daily lives.

Priyanka Sarangabany

One very common complaint you’ll hear lobbed from the Free Software Community, especially those who rabble-rouse with RMS, is that it’s a travesty when people don’t truly understand that when you’re using Linux, you’re actually using a complete suite of GNU software tools alongside the Linux kernel. Their vain efforts to correct the misnomer of simply Linux were to address it as GNU/Linux (along with several other strident misnomers). Regardless, the point remains that people running Linux are in fact taking advantage of a complete set of GNU tools developed by the Free Software Foundation way back in the day. The Linux community, however, is rife with all sorts of misnomers such as the one illustrated here. Free Software/Open Source is quite muddy in terms of who uses what, and more importantly, who cares specifically. A similar phenomenon was at one time witnessed when Android first exploded onto the scene compliments of Google (it wasn’t originally a Google product ;)). The Android OS is running Linux as its kernel. Consequently, most in the Linux community saw this as a striking win for our cause. Long had we waited for the day when Linux saturation was prevalent enough in the user-space to render it a contender worthy of use cases the likes of which only Windows and OSX seemed to garner. However, hardly any of these smartphone users are taking advantage of Linux itself, explicitly. Furthermore, the smartphone space as it pertains to Android is an absolute shithole. Polluted by countless dumpster bin devices with all sorts of malicious software on them, privacy-raping middleware compliments of Google’s nefarious growth trajectory, and an overall exhaustion from being trained to ante up for a new device every six months, the fact that anyone is using Linux at all is both a non-sequitur and buried under the morass.

Some of the truth here is that the misnomers aren’t just about calling a duck a duck; they mean more than correcting bad speech, for better or worse. Not all Linux jobs are glorious administrative escapades where the objective of reformation in the user space is going to earn you badges of honor. It’s not an accident that Linux finds itself reserved for the infrastructure roles. Linux is mostly far too technical for 90% of so-called users, and the fact that Android runs atop it doesn’t mean that you’ve accomplished much other than distributing shadow copies. Emphasis here should be placed on the “quietly running” remark. You’d do well to keep this in mind.

Why Is Linux So Prevalent?

There are multiple reasons why Linux is considered one of the most diverse and powerful operating systems in the world. To understand why Linux is loved by many, it is important to identify its defining characteristics.

Open Source: As Denise Dumas, the vice president of software engineering and operating systems at Red Hat, said in a recent CompTIA webinar about Linux, “Open source is a place where innovation ferments and happens.” When software is released under an open source license, people can view and build upon the software’s original source code. This feature encourages software developers to adopt Linux and apply their own improvements to the code. As result, Linux’s public domain drives constant evolution and advancement.

UNIX-Like System: Linux behaves in a similar manner to a Unix system. This means that the operating system relies on multiple parts/programs that carry out specific jobs collectively. This is a fundamental principle of good system design and is at the core of what makes Linux so great.

Stable: As a public domain that is constantly evolving, Linux remains an incredibly secure operating system. In the words of Eric S. Raymond, “Given enough eyeballs, all bugs are shallow.” Linux’s general public license allows a plethora of software developers to rapidly identify issues in code and just as quickly respond to fix the errors.

Free: Linux is priceless. Literally! The underlying software of Linux has been free to download and install since its creation. For this reason, Linux remains one of the most accessible, diverse operating systems to this day.

Priyanka Sarangabany

All of this is 100% true. But it also 100% only panders to programmers or people looking for software to do something that doesn’t cost them a thing in terms of material price.

Flagshipping Linux’s success in contemporary terms as simply its adherence to Free Software and Open Source ideologies is missing the target just a bit. It’s an attractive aspect only if you’re a software developer or belong to a software engineering group specializing in Linux itself or creating software to run on it. By extension, an end-user benefits from this in that they have some assurance, as ESR puts it, that bugs are simply squashed faster than with alternative monolithic or bureaucratic projects. But end-users most likely don’t care about the fact that the source code for their favorite programs, let alone the entire OS, is available to them whenever. Concurrently, most IT management doesn’t care either. The real questions are whether and how the servers are going to be supported, and we’re so far down the line from the days when there was real competition between IIS and Apache that the lines aren’t as clear as they once were. The fact that Linux is open-source matters only to the kernel team, its contributors, and upstream distributions that repackage the kernel and a collection of software. Your garden-variety sysadmin isn’t going to fondle this too much, at least for billable hours. In general contexts, management presented with the proposition of dedicating resources to retrofitting an open-source project to meet internal needs usually falls out of their chair laughing, and simply resorts to searching for another hopefully complete package. Of course, this says nothing of the emergence of IoT and cloud technologies. Many major industrial vendors are leveraging Linux as a second-class citizen in customer-facing equipment, a handful of specialized server vendors are selling products that are possible only because of Linux, and a vast majority of the cloud-focused architecture is built on or is exploiting Linux in a non-trivial capacity. Although the cut here between administration/architect/engineer is obvious, it’s mostly either this or programming.

Another thing: Implementing Linux isn’t free. While you can download the software and, depending upon the license, run it in your enterprise without legal incident, you most certainly had better have the internal support available to complement it. Most SMBs are in a position where they could benefit substantially from the use of Linux and derivative technologies. But most SMBs are woefully ill-equipped to float the administrative overhead that running Linux actually entails. The work of Canonical and Red Hat has made employing Linux easier over the years, but it hasn’t yet given people the Windows-feel that they hopelessly crutch against. Yes, it costs money as well to administer Windows systems. However, there’s no doubt that a more technical skillset is required for Linux.

One other thing: the use of the term public domain here is inaccurate. RMS, ESR, and Bruce Perens – amongst many others – have historically been cited as having railed against the claim that Linux transacts in this specific realm.

Over the years, companies such as Red Hat have put effort toward making system administration and development easier to master. In turn, today’s Linux graphical user interfaces (GUIs) are highly functional and significantly less intimidating.

Priyanka Sarangabany

This is, unfortunately, false. At least the final statement is. While Canonical, Red Hat, and SUSE have done a tremendous amount of work to streamline new technologies and shore up existing ones, these efforts have very little influence over the GUI/DE projects. These things fly free at their own pace and, frankly, it’s one of the most toxic components of the modern Linux user experience IMHO aside from the stupid number of distributions to choose from. Some insight:

  • Hardly any of these DEs are completely functional. Some of them are close to highly functional, but not quite what’s available from traditional Windows/OSX. The very flexibility that these projects benefit from is the same aspect that ultimately undermines their acceptance. The divergence from traditional – but more importantly established – desktop metaphors witnessed in most DEs is entirely unacceptable in an enterprise space; it’s barely passable in the user space. For the two or three that still look like they care about helping users rather than hindering them, they’re either too watered down or too full of flourish, coupled with programs that are too convoluted.
  • Consequently, the intimidation factor remains a plague, and it’s more real than the author of this post or perhaps others would have you believe. Take another look at the DE projects I rattled off earlier:
  • Not only are there a wealth of choices, but they all express the usual metaphors in different ways which are sometimes really non-intuitive. It’s not a pedestrian user that’s going to find any safe haven here. And if the DE isn’t delivered as a first-class citizen in the DE roundup from a given distribution, it likely isn’t going to be given the time of day; shoehorning a DE into a distribution flavor that didn’t ship native is a bit of a gamble. This all sounds great for a Linux user who’s chomping at the bit to learn the new shiny, but imagine yourself as an IT Manager. Who in their right mind is going to look at this and think they’ve got a snowball’s chance in hell at adoption? What should a budding sysadmin learn? The intimidation factor here is real for both users and prospects, similar to what one finds in the realm of “Which JavaScript framework should I use to develop my web program?” All religion, no substance.

To begin your journey through the Linux space, you will have to make a few choices:

Choose a Linux Distribution: Linux is not developed by a single entity, so there are multiple different distributions (distros) that can take code from Linux open-source projects and compile it for you. Since these distros choose your default software (desktop environment, browser, etc.), all that’s left for you to do is boot up and install.

Choose a Virtualization Solution: Linux virtualization is used to isolate your operating systems so you can run multiple virtual machines on one physical machine, and in turn save time, money and energy on maintaining multiple physical servers. Some popular selections include VMWare, VirtualBox (Oracle) and Hyper-V (Microsoft).

Set Up Your Linux Play Space and Explore: Once you log in to your virtualization environment, you can start learning and practicing. The best way to become comfortable with Linux is to jump in and get your hands dirty.

Priyanka Sarangabany

Choosing a Linux distribution shouldn’t be a cavalier decision. CompTIA Linux+ is, like its LPI contemporary, a vendor-agnostic certification track. Essentially, passing this exam requires knowledge of not just the general administrative topics of Linux itself, but a selection of the more esoteric differences in the major distributions (Debian-based, Red Hat-based, or SUSE). The effort, I suspect, is to suggest or imply that certified individuals are capable of handling virtually anything thrown at them. There’s nothing wrong with this in theory or practice since you’re not guaranteed to be working for/with an organization that has landed solely in one camp or the other. The problem here is that you need to spend at least some time in all three. I’ll cover more on this later, but there should be a bit of consideration before downloading. Learning Linux can certainly be accelerated by distro-hopping, but this behavior should dramatically slow as time goes forward.

Selecting a virtualization technology isn’t as trivial as this section would potentially lead users to believe. VMware has historically been quite difficult to install and run on various distributions. Legacy versions of the software may work on older kernel versions, but newer kernels are hit-and-miss. Furthermore, VMware has a fairly lackadaisical approach to supporting Linux as a viable platform to run its software on. More often than not, you’ll be scouring the support forums to find that not only are most other people experiencing difficulty installing the software, but they’re either not finding good solutions for their problems or they’re running into other issues that inhibit a good user experience. VirtualBox is an okay Type-2 Hypervisor, but anyone working seriously with virtualization technologies isn’t going to be deploying this any time soon. The implication here is that if you’re not committed to running Linux on bare metal, you’re likely running Mac OSX or Windows and should virtualize it via a hypervisor or two. This may work well, but some of the exam content for Linux+ requires a subset of knowledge that you’ll get only through installing on bare metal.

But wait a minute… what does any of this have to do with actually learning Linux?

I’m trying to help set realistic expectations here. Despite the work to push forward, things still aren’t as crystal clear as the author of this blog post would have you believe. Allow me, then, to offer what I think are the best ways to learn Linux.

Manage Your Expectations

Linux is hard. Remember to separate the kernel from the DE, because it’s important. So long as the DE you choose provides an adequate terminal emulator, you can get away with focusing exclusively on the kernel interface and nothing else. Be sure not to get lost in the convoluted nature of the DE, otherwise it’ll add another layer of complexity that you’ll likely want to avoid.

Understand also that doing Linux professionally isn’t the same as doing it for a hobby. Swapping DE every five seconds, or advocating for the use of the flashy or nuanced one isn’t going to get you anywhere. This matters more than you think. Learn GNOME and KDE, and fiddle with the rest in your spare time if interested.

Distribution Selection

Pay attention to the leading distribution vendors out there, and try not to get lost too much in the new shiny that comes out of left field. Take a look at the sheer number of distributions on offer and try not to throw up. We in the Linux community say we’re welcoming, and options are great, but this is nauseatingly asinine. The major players here are Canonical, Red Hat (CentOS), and SUSE. Distro-Hopping is okay if you’re just looking to have fun, but that should be relegated to virtualization. Run Ubuntu, RHEL/CentOS, or SUSE on bare metal, and leverage KVM through virt-manager or Cockpit, or VirtualBox, to run VMs locally.

Books

Despite the notion that Linux iterates quickly, the widespread adoption of newer kernels is left to a select group of distributions. Most are running kernels that are a few versions behind for the sake of sanity. That said, a handful of books exist that help you learn Linux itself (not the DE) and that will matter for a majority of the versions in the common arena. A few of my recommendations:

Online Documentation

I don’t mean the manpages here, although some of them are useful. I’m talking about wikis, forums, and upstream documentation from distribution vendors. The Arch Wiki is an unbelievable treasure trove of highly technical information for all kinds of software that doesn’t necessarily peg itself to Arch (most of the time). Red Hat/CentOS publish a wealth of documents to give all kinds of administrative information. LinuxQuestions is a great forum for getting help with nearly all matters. Of course, if you’re feeling up to it, you could always get in touch with the developers of the software you’re using directly and get advice or help from them. I’ve talked to a few people from the GNOME team occasionally to get help on certain matters, and it’s proven quite valuable.

Taking Classes

I’ve personally never attended a Linux training course, but that doesn’t mean I haven’t heard wonderful things about them. Some certification authorities, like CompTIA, LPI, and Red Hat, offer both e-learning and instructor-led courses that will accelerate your learning track right up to the day of examination.

Banging Your Head Against the Wall

I started with Linux in 2004, with a copy of Red Hat 9 that was given to a friend of mine who was attending ITT Tech at the time. All I had was the book it came with, the installation media, and a lot of time on my hands (I didn’t even have access to the internet at that time). The best way to learn, albeit the hardest way, is to simply rake yourself through the coals. Grab a shitbox, abuse it, abuse yourself. Plain and simple.

Community

Get involved with a community. Don’t let the rumors about the Linux Kernel Mailing List scare you away. Most mere mortals are more than willing to discuss Linux, especially if you’re willing to put yourself out there.

Podcasts

Although the landscape is far too saturated, podcasts are still a viable source of information. I miss Linux Outlaws terribly, but shows like Destination Linux, SMLR, and Late Night Linux are great for getting the latest 411 on the happenings and hearing from people who’re incredibly skilled in what they do with Linux.

CompTIA Linux+ XK0-004 Thoughts

Lately I’ve been seeing a lot of steam about the CompTIA Linux+ exam. Evidently they’re separating from the LPI partnership that’s long been in place – not sure if that has anything to do with the brouhaha – but I thought I’d dig into the exam outline to see what the competency focuses were, and issue some of my opinions about them. Bear in mind that I’m not a proctor or advisor of any kind, and that my opinions are strictly that. I’m going to run down the objectives in the same order they appear in the official outline document, so nothing comes out of order.

You can view the outline here: https://certification.comptia.org/docs/default-source/exam-objectives/comptia-linux-xk0-004-exam-objectives.pdf

1.0 Hardware and System Configuration

1.1 Linux Boot Process Concepts

Man, am I happy to see that someone finally understands that not a single person on this planet uses LILO any longer. Say what you will about technical merit, the clear winner here was GRUB. Any mention of the former has been wiped clear from the objective list. Hopefully this isn’t one of those Cisco-style documents where what’s on the exam isn’t anywhere near close to the outline document, unless of course your abstract thinking expands to the realm of what’s par for LSD abuse. Also happy to see that there’s a focus on UEFI/EFI rather than BIOS. Having deployed more than a fair share of contemporary computers both manually and via PXE, it feels dirty to reconfigure the system to run BIOS. Practically speaking, I don’t think UEFI/EFI is as big of a monster as it once was several years ago. We in the Linux community have already crossed this bridge, so let’s stop taking a piss on the side with wilting grass here.

1.2 Kernel Modules

Part of me feels as if this section is a gratuitous inclusion on every entry-level Linux exam. Why? There have been maybe a handful of times I’ve had to manhandle modules, and it’s come in the user space on workstations rather than servers. Dealing with Type-2 Hypervisors that don’t play nice with Linux (looking at you, VMware) or Nvidia graphics drivers seems to be the only real play here. For the most part, the kernel does a good job of taking care of what you need for common use cases, and this is especially true if you’re deploying any enterprise distribution whose philosophy is that users shouldn’t have to eat their own skin off their arms to get these systems to work in the 21st century. That said, it’s still valuable knowledge. I’m just unsure that it requires a point allocation on an exam.
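
For what it's worth, the handful of module commands you'll actually be asked about fit on an index card (the module names here are just examples):

lsmod                          # list currently loaded modules
modinfo kvm_intel              # show details about a specific module
sudo modprobe -r pcspkr        # unload a module
sudo modprobe pcspkr           # load it back
echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf   # prevent autoloading (the classic Nvidia-driver dance)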

1.3 Network Connectivity Configuration

Not really too much to comment on here, except for the inclusion of NetPlan configuration. Along with Gradle, YAML is one of those technologies that was likely written by some hipster and is just a dumpster fire of epic proportions. Since that’s all dandy, let’s change from semi-palatable traditional network configuration scripts that look much like an INI file – which is well understood – to some arcane indent-based copulation between Python-like syntax (because, you know, Python is the greatest thing since sliced bread) and the never ending ML-based projects that seek to change the world. No thanks. Learn it for the exam, learn to hate it, and move back to better things.
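
For reference, the indent-based configuration in question looks something like this (the interface and file names are examples; yours will differ):

# /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: true

Then a sudo netplan apply and you hope for the best.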

1.4 Linux Storage Management

RIP btrfs. Not really.

I’m not sure I’ve understood the migratory path to XFS over EXT4. In my deployment contexts, especially with M.2 drives, XFS has caused all sorts of problems that I can’t really explain away. The result, however, was a revert to EXT4 after several FS-level repair attempts were made to fix the corruption on the root partition. One instance I chalked up to a silently botched install, but the other five I couldn’t really attribute to anything. But this FS seems to sit in the first-class citizen spot with EXT4 not too far behind it.

Glad to see that there are some subtle hints at RAID management here. It’s never a huge factor in entry-level exams, but still worth mentioning.
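
If you've never touched software RAID, the mdadm basics below are the sort of thing the outline is nodding at (device names are examples):

cat /proc/mdstat                                   # quick health overview of md arrays
sudo mdadm --detail /dev/md0                       # detailed state of a specific array
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc   # build a simple mirror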

1.5 Cloud and Virtualization Concepts

YAML makes yet another appearance. Yay…

For as long as virtualization has been around, I’m a bit shocked that it’s taken this long for it to appear in entry-level exams. Most enterprises these days are at a minimum leveraging Type-2 Hypervisors, but this comes in the form of VMware. The focus here, however, is on KVM. Looks as if there may be a little bit of a touch on containers as well, although I seriously doubt it’d be a heavy hitter in comparison to the contemporary content.

As an aside, I’m not aware of many enterprises that leverage KVM explicitly for virtualization needs. This mostly gets passed off to VMware or Citrix. I usually find KVM in a Type-2 context on workstations.

That said, there appears to be more here that serves a general-purpose understanding of virtualization technologies. Definitely worth taking a look at if you’re unfamiliar.

1.6 Localization Options

Most people don’t really pay attention to these sorts of configurations, but they’re important, especially those concerned with keeping accurate time on a computer. If not for the workstation, then at least be sure that you’re familiar with these commands, especially in the context of virtualized guests. Time drift here can be a pretty common problem.
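
The commands themselves are mercifully small; a quick tour (the timezone and keymap are examples):

timedatectl                                        # current time, timezone, NTP sync status
sudo timedatectl set-timezone America/New_York     # change the timezone
sudo timedatectl set-ntp true                      # let the NTP daemon keep the clock honest
localectl status                                   # current locale and keymap
sudo localectl set-keymap us                       # change the console keymap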

2.0 Systems Operations and Maintenance

2.1 Software Management

As with many vendor-neutral exams, this one appears to target the most common installation methods for three types of distributions: Debian-based, RHEL-based, and OpenSUSE (Zypper is an explicit target here, for some reason). Not sure why there’s no mention of Flatpak or Snap. Both are emerging as pretty common ways to install user-space programs on a Linux computer.
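
If you're coming at this vendor-agnostic, the rough mapping you'll want in your head looks like this (package names are placeholders):

sudo apt install <package>        # Debian/Ubuntu
sudo dnf install <package>        # RHEL/CentOS/Fedora
sudo zypper install <package>     # openSUSE/SLES
dpkg -l | grep <package>          # what's installed, Debian style
rpm -qa | grep <package>          # what's installed, RPM style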

2.2 User and Group Management

Run-of-the-mill stuff here. The only addition I would’ve made would be domain-based local user management. I believe there’s a section later in the Security topic that covers LDAP integration, but there are some user-space tools that go along with this and I don’t personally consider these to be mid-level knowledge points.
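
The run-of-the-mill bits, for completeness (the user and group names are examples):

sudo useradd -m -G wheel alice     # create a user with a home dir and admin group membership
sudo passwd alice                  # set their password
sudo groupadd developers           # create a group
sudo usermod -aG developers alice  # append the user to it
id alice                           # sanity-check the result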

2.3 File Management

These sections should be renamed Grep/Sed/Awk 101. At least you’ll get exposure to some of the more esoteric commands for file management like wc and tee, but again, there’s nothing here that’s off kilter.
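
A taste of the sort of plumbing the exam is after (the file names are examples):

grep -ci "error" /var/log/messages                          # count matching lines, case-insensitive
awk -F: '{ print $1 }' /etc/passwd | sort | tee users.txt   # pull usernames, save and display them
sed -i.bak 's/old/new/g' config.txt                         # in-place substitution with a backup copy
wc -l /etc/passwd                                           # and yes, wc still earns its keep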

2.4 Service Management

I thought we were beyond the point where SysV was still a major player, but evidently it remains more pervasive than I estimated. Most enterprise-focused distributions will focus only on Systemd, and it’s more than adequate for even the prevalent Debian-based distributions (unless of course you think running Devuan is a good idea, to which I’d say you need clinical help). In these situations, most of the SysV commands translate to Systemd commands anyway.
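
The systemd commands you'll lean on daily, with their SysV ancestors for the exam's benefit (httpd is just an example unit):

sudo systemctl enable --now httpd    # enable at boot and start immediately
systemctl status httpd               # state, recent log lines, PID
sudo systemctl restart httpd
sudo systemctl disable --now httpd
# SysV equivalents you may still be quizzed on:
sudo service httpd restart
sudo chkconfig httpd on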

2.5 Summarize Server Roles

Not much to mention here. Just know the roles.

2.6 Job Automation and Scheduling

If you don’t know the five finger mnemonic for remembering how to configure cron jobs, take a look at this post: https://www.networkworld.com/article/2709784/unix–timing-your-cron-jobs.html
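
For the impatient, the five fields are minute, hour, day of month, month, and day of week, in that order (the script path is an example):

# m  h  dom  mon  dow   command
30   2   *    *   1-5   /usr/local/bin/backup.sh    # 02:30, Monday through Friday
# edit your own table with:
crontab -e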

2.7 Linux Devices

You’d be surprised how little most people know about udev, and it’s critical to understand when talking about managing devices on contemporary Linux computers. My recommendation would be to read through the Arch Wiki article on udev to get a better understanding of it if you’re unfamiliar: https://wiki.archlinux.org/index.php/Udev
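
A trivial rule, just to show the shape of the thing (the vendor/product IDs and symlink name are made up):

# /etc/udev/rules.d/99-backup-disk.rules
SUBSYSTEM=="block", ATTRS{idVendor}=="abcd", ATTRS{idProduct}=="1234", SYMLINK+="backup_disk"

# then reload the rules and watch events as you plug things in
sudo udevadm control --reload-rules
udevadm monitor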

2.8 Graphical User Interfaces

In the wake of recent events with my attempts at deploying Linux to workstations in the enterprise I manage, I’ve since developed a substantial amount of beef with sections like these. Without getting too much into detail, because honestly it could warrant its own post, I’ll say the following concerning the exam outline:

No serious enterprise professional is going to leverage anything other than GNOME in their environment because it’s easily the most supported in terms of contractual support from major enterprise distribution vendors. Anything outside of that is going to require internal support abilities which may or may not exist. Furthermore, Unity as a DE was officially deprecated by Canonical within the last few releases of Ubuntu, and it was so jarring to begin with that supporting it is completely out of the question. In my opinion, requesting that a prospective student be familiar with DEs like Unity, Cinnamon, or MATE is just an absolute waste. This isn’t a game. Managers will have a hard enough time selling the idea of getting Linux on workstations to begin with. Along with that decision comes which DE to standardize on, and this is frankly more contentious than the predicate aspect of getting Linux installed. Rolling the dice on every single option out there is an incredibly insane notion. X11 forwarding via SSH isn’t as common a function as it may have once been. Nearly all servers run headless, ergo there’s no need for this.

My advice here is to understand at least what the DE arena looks like, familiarize yourself with how each expresses various UX metaphors, and then move on with your life.

3.0 Security

3.1 User/Group Permissions

The focus here is on traditional DAC concepts as well as MAC through both SELinux and AppArmor, with the lion’s share being the former. There appears to be some concern with ACLs, which both EXT4 and XFS support, but most people don’t realize that ACLs are entirely optional in these file systems, and that their translation to other file systems is generally unclean in the sense that they just get clobbered. Furthermore, you can have several EXT4/XFS mounts on a system, one of them supporting ACLs and the other not. The point here is that because they’re not first-class citizens, honouring ACLs in Linux has been and continues to be an odd conversation.
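
If you've never poked at ACLs, the tooling is small (the user and file names are examples):

getfacl report.txt                    # show the ACL on a file
setfacl -m u:alice:rw report.txt      # grant a specific user read/write
setfacl -x u:alice report.txt         # revoke it again
# and remember: cp/rsync/tar won't necessarily carry these along unless told to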

The fact that the bulk of the weight appears to be on SELinux isn’t an accident. Again, in the arena it has emerged largely victorious despite Canonical’s need to be different. As arcane as SELinux seems to be, the truth is that there’s a tremendous amount of enterprise support behind it.

3.2 Access and Authentication Methods

Not too much to comment on here. One thing worth mentioning, however, is the part that focuses on LDAP integration. In most cases, Linux servers/workstations will integrate with AD rather than an LDAP implementation like IPA, regardless of the benefits. Most tests will operate under the latter context, unfortunately, and may focus exclusively on pure OpenLDAP, which to my knowledge is hardly ever deployed on its own.
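
In the AD-integration case, the realmd path is the one you'll actually see in the wild (the domain and account names are placeholders):

sudo dnf install realmd sssd adcli oddjob oddjob-mkhomedir
realm discover example.com
sudo realm join --user=Administrator example.com
id someuser@example.com               # confirm domain accounts resolve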

3.3 Security Best Practises

Not too much to comment on here either. These are things that most everyone should be doing if they’re serious about getting Linux secure, even in the server environment.

3.4 Logging Services

Another section without too much going on. Garden-variety things here.

3.5 Linux Firewalls

Here’s another one of those fun sections where cross-vendor technologies come into play. Most people are familiar with iptables and Netfilter, but when we’re talking about firewalld vs. ufw, the former is the clear victor in the enterprise space, and that doesn’t appear to be changing any time soon.

3.6 Backups

I’m glad to see some focus on this for entry-level exams. This still seems to be the last thing anyone thinks about concerning their computing architecture. Three techs are covered here: SFTP, SCP, and rsync. I still maintain that rsync is the winner here, even for off-site. SCP has noted performance concerns, and SFTP has FTP in it, so we don’t want to touch it.
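
A sketch of the rsync flavor of this, since that's the one I keep defaulting to (the paths and host are examples):

rsync -avz --delete /srv/data/ backup@backuphost:/backups/data/
# -a preserves permissions/ownership/timestamps, -z compresses in transit,
# and --delete keeps the destination an honest mirror of the source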

4.0 Troubleshooting and Diagnostics

4.1 System Analysis and Remediation

In general, I feel as if this section is one that most Linux users gloss over, especially since in the day-to-day, a reinstall combined with smart partitioning will usually cure all serious ills.

Some of the network diagnostics here are a bit odd since they’ll almost always end up at the network level rather than at the host. For example, unless you’ve been modifying your network interfaces, routing issues hardly ever emerge at the host level. Further, some of the network diagnostic commands aren’t trivial, like the use of nmap or tshark. Sure, you could stumble your way through these, but an untrained eye won’t realize half of what it’s looking at.
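
The usual first-pass host-side checks, for what they're worth (the addresses are examples):

ip addr show                      # what interfaces and addresses do I actually have?
ip route show                     # where does my traffic think it's going?
ss -tlnp                          # what's listening, and which process owns it?
ping -c 4 192.168.1.1             # basic reachability
nmap -sT 192.168.1.10             # what does the far side expose? (needs interpretation)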

Root password recovery has shifted a bit over the years. Even select contemporary enterprise distributions are shipping with the root-account-disabled model, instead relying exclusively on sudo for escalation. The techniques for recovery are still valid, however.

EDIT: Reading over this some time in the future, I realised that I omitted to mention that although the root-account-disabled model is becoming prevalent, systems without the proper configurations can be vulnerable when booting into single-user mode, since the root account will just log in by default with no password. There are provisions for this in your boot configuration files. Look them up for your distribution.
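
For reference, the widely documented RHEL-style recovery dance looks roughly like this, and it's exactly why an unprotected boot loader is a liability:

# 1. At the GRUB menu, edit the kernel entry and append: rd.break
# 2. From the resulting emergency shell:
mount -o remount,rw /sysroot
chroot /sysroot
passwd root
touch /.autorelabel     # so SELinux relabels on the next boot
exit
exit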

4.2 Optimize Process Performance

Again, another aspect where users might get a taste but not dive too deeply. Being able to dynamically adjust process priority is crucial when diagnosing system performance issues. Furthermore, being able to identify a process is a bit of an art. Being able to move between top, ps, lsof, and pgrep is important.
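
The core loop here is small (the PID and command names are examples):

top                                             # live view; press r to renice interactively
ps -eo pid,ni,pcpu,comm --sort=-pcpu | head     # what's actually eating the CPU
sudo renice -n 10 -p 4321                       # lower the priority of a running process
nice -n 19 ./batch_job.sh                       # start something politely in the first place
pgrep -a rsync                                  # find processes by name
lsof -p 4321                                    # what files does that process have open?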

4.3 Troubleshoot User Issues

If you’ve understood topics from previous sections concerning SELinux, DAC/MAC, and file systems, you’ve pretty much got this section in the bag.

4.4 Troubleshoot Application and Hardware Issues

Most of this is garden variety, with the caveat on select storage points such as the focus on HBAs and degraded storage in a RAID context. Not very common problems encountered by junior admins, but still worth mentioning.

5.0 Automation and Scripting

5.1 Deploy and Execute Bash Scripts

I think the title here is a bit misleading, as the content seems to be focused on being a Bash primer more than anything else. If you’re already familiar with Bash, this should be a breeze.
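
If you can read something like this without squinting, you're most of the way there (the paths are examples):

#!/usr/bin/env bash
# tiny_backup.sh - tar up a directory with a dated filename
set -euo pipefail

src="${1:-$HOME/Documents}"
dest="${2:-/tmp/backups}"

mkdir -p "$dest"
tar -czf "$dest/backup-$(date +%F).tar.gz" "$src"
echo "Wrote $dest/backup-$(date +%F).tar.gz"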

5.2 Git

Very basic git usage is covered here. You’re not going to be doing cherry picking, rebasing, or blaming here.
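
Basic, in this context, means roughly this and not much more (the repo URL and branch name are examples):

git clone https://github.com/example/project.git
cd project
git checkout -b fix-readme
git add README.md
git commit -m "Fix typo in README"
git push origin fix-readme
git log --oneline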

5.3 Orchestration Concepts

It’s not really clear what they mean here. General principles are one thing, but are they hinting at any specific implementation such as Puppet, Chef, or Ansible? Orchestration also occurs in the virtualization space, and it means something a little different. Methinks some ambiguity is here simply because of the aforementioned virtualization section not being exclusive to Linux itself.

Overall, I think this looks like a pretty good vendor-agnostic exam, despite my personal opinions on the matter. There’s a nice effort to blend rudimentary enterprise concepts with general knowledge, which seems to be a trend, and I think exam takers would get a lot out of it. It’s unclear to me what the industry adoption would be, especially since there’s a split between them and LPI.

Basic firewalld

As someone who dreaded having to interact with the esoteric networking gatekeeper that was iptables, firewalld presented an opportunity for mere mortals to feel like more of a badass when crafting ingress rules. Although firewalld manages iptables, some abstraction is most welcome, if incomplete. For example, playing in the firewalld arena will only handle ingress traffic. If you need granular control over egress traffic, you’ll still need to dive into iptables, but you’ll triage these through firewalld’s so-called rich rules.

firewalld sees fragmented adoption across various distributions, maybe because firewalld isn’t the only netfilter abstraction in town, or maybe because we all want to be different. Most distributions offer firewalld through their default repositories even if it’s not the existing sheriff, so if you want to run it instead of whatever else was on offer, you’ll want to remove the original program first. Most every RedHat-based distribution will be running firewalld as a default. Ubuntu-based shite will likely have ufw or something else to that effect (ergo, if you’re forced at gunpoint to use any of that garbage, get yourself firewalld immediately).

Contained within firewalld is the concept of zones. Each zone encapsulates a different set of rules that are logically associated with the zone itself. Not only are there a decent handful of default zones – which are more than sufficient for garden-variety use cases – but you have the ability to create and delete other zones (You’re unable to delete any of the stock options. I tried about ninety times.). Each zone can be applied to a particular interface, be it physical or virtual. The rules within each of these zones will dictate how ingress traffic is handled. For example, you can configure a zone to disallow ICMP traffic to the host, or drop all traffic other than a select handful of services.

As with zones, firewalld offers a plethora of default services that can be used. Services are a collection of colloquial protocol/port mappings consolidated under an easy to understand identifier. They’re intended to save time with building zones, by being readily available to any zone that wants them. You can also add or delete custom services, just as you can with zones. For example, the firewalld service http will map to tcp/80, https will map to tcp/443, ssh will map to tcp/22, and so on.

And this is essentially all you’ll need to know in order to get some reliable mileage out of your firewalld installation. This says nothing about the details of rich rules, IPSets, or Helpers, but these are more advanced topics that can be understood by reading the official firewalld documentation. Think of this document as a way to whet your appetite and help you play with a tool. Note that going forward, all commands displayed will be assuming that you’re running a RedHat-based distribution that leverages systemd.

To start, you can ensure that firewalld is running by querying systemd:

systemctl status firewalld

And obviously, you can toggle the state of the daemon by using one of the following:

systemctl start firewalld
systemctl stop firewalld

You can use the reload command as well for forcing configuration changes, but there’s an alternative method to this which we’ll cover momentarily.

Interfacing with firewalld is facilitated by either the terminal command firewall-cmd or by the GUI client firewall-config (which can also partner with firewall-applet, assuming you’re running a GUI). This document will focus only on the terminal interface, especially since most enterprise production servers will be operating headless.

The obligatory commands are available for your typing pleasure:

firewall-cmd --version
firewall-cmd --help
man firewall-cmd

Trust me, the man pages for this program are very good.

Now, although the daemon may be running, the firewall may be in a state where it’s not enforcing. You can query the current state of the firewall using the following:

firewall-cmd --state

You can determine which zones are active (i.e. a binding to an interface that has an active connection).

firewall-cmd --get-active-zones

If you wish to see the zone that’s associated with a particular interface:

firewall-cmd --get-zone-of-interface=<ifname>

You can get the names of your interfaces by using either of the following:

nmcli c
ip addr sh

A list of all the zones known to firewalld can be obtained.

firewall-cmd --get-zones

The same can be done to get a complete listing of all hardcoded services that can be used in zone configurations.

firewall-cmd --get-services

Now that you know how to see zones, active or otherwise, you’ll want to see the configuration of the zone itself.

firewall-cmd --zone=<zonename> --list-all

Again, the name of a zone can be obtained by either listing all of the zones or determining which zone is associated with your active network interface.
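
For reference, the stock public zone on a fresh install will produce output roughly like the following (the interface name here is just a placeholder, and the exact fields vary a little between firewalld versions):

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp0s3
  sources:
  services: ssh dhcpv6-client
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules: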

A similar breakdown for services is available. Sometimes a service can encapsulate multiple ports or other targets, so knowing what the service identifier is referencing is important. For example, if you want to know what the service ssh contains, you’ll issue the following command:

firewall-cmd --info-service=ssh
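
Assuming the stock service definition, the output will look roughly like this:

ssh
  ports: 22/tcp
  protocols:
  source-ports:
  modules:
  destination: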

Now that we can obtain some rudimentary information about both zones and their services, we can move forward modifying existing zones. However, there is still a bit more to know before going too far down the rabbit hole.

Aside from services, there are a few other basic properties of zones that you need to pay attention to, especially when considering which zone to use or if you’re designing your own.

Every zone has a target. The target is effectively the default action for any packet that doesn’t match a rule in the zone – a next-hop of sorts once the filter rules have been applied. There are three explicit targets available, and any given zone can only have one target. (Most of the stock zones actually ship with a target of default, which for our purposes behaves much like %%REJECT%%.)

ACCEPT – Any packet not matching any rule is permitted.
%%REJECT%% – Any packet not matching any rule is rejected.
DROP – Any packet not matching any rule is dropped.

In practice, what this means is that if a zone has a target of ACCEPT, virtually all packets are permitted. %%REJECT%% and DROP will both deny packets based on rules, but a denial under the former triggers an ICMP response back to the source, whereas the latter simply discards the packet with no response. Ergo, under a DROP target, it might not be obvious to clients that something is amiss, and the absence of diagnostic messages can make troubleshooting for lower-tier support more difficult than it needs to be.

Next are ICMP Blocks. ICMP provides a few neat features for querying devices on your network. One of the most common ICMP functions is ping, which is used to determine host visibility (which, in reality, is a somewhat shaky inference, since a host can be perfectly alive and simply not answering pings). However, being able to obtain this kind of information may not be desirable in certain contexts. For example, while you may want certain ports on an infrastructure server exposed, you may also not want the server to be pingable by any random associate. And while there are definitely more robust and reliable ways of achieving this goal, for the sake of this discussion, we’ll say that we simply want to disallow pinging.

ICMP Blocks under firewalld come in two flavours: individual blocks and a block inversion. To understand this, one need look no further than the zone information for the default zone public. By default, icmp-block-inversion is no, and there are no individual icmp-blocks. Effectively, this permits all ICMP traffic. Now, we have two options for blocking ICMP traffic. We can either add individual ICMP services to the zone, or we can use a combination of individual blocks and a block inversion. The block inversion simply takes the configured ICMP Blocks and flips them around, or inverts them. Thus, if we add no individual ICMP services but add an ICMP Block Inversion, we are now blocking all ICMP services. If we add an ICMP Block Inversion as well as specific ICMP services, we are now permitting ONLY the specified services.
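
As a quick sketch of the first option – blocking just ping on the stock public zone while leaving the rest of ICMP alone – the commands would look something like this:

firewall-cmd --permanent --zone=public --add-icmp-block=echo-request
firewall-cmd --reload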

That sounds like quite a bit, but we can summarise it thus:

Basic building blocks of zones are targets, services, ICMP services, and ICMP Block Inversions. Knowing how to manipulate these will go a long way.

This is a gross over-simplification, but knowledge here can make all the difference in most cases.

One last thing regarding changes to firewalld. Any changes issued are by default memory-resident only. Unless explicitly committed, changes will be wiped when the system goes down. Adding the --permanent option to your commands will ensure that modifications survive power cycles (note that permanent changes don’t touch the running firewall until you reload).
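
To make the distinction concrete – and this is just a sketch against the stock public zone – the first command below only changes the running firewall, the second only changes the stored configuration (picked up at the next reload), and the third, available on reasonably recent firewalld builds, commits whatever is currently live to disk:

firewall-cmd --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --runtime-to-permanent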

Let’s walk through the process of creating a new zone called ZONE_OF_POWER. Its target will be %%REJECT%%, it’ll permit SSH, HTTP, HTTPS, and NTP traffic, and it’ll deny all ICMP except for ping. We can accomplish this with the following:

firewall-cmd --permanent --new-zone=ZONE_OF_POWER
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-service=ssh
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-service=http
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-service=https
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-service=ntp
firewall-cmd --permanent --zone=ZONE_OF_POWER --set-target=%%REJECT%%
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-icmp-block={echo-request,echo-reply}
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-icmp-block-inversion
firewall-cmd --reload

Time for a breakdown.

First, notice how all of the statements issued have the --permanent option in them. This is to ensure that our changes are rendered gospel by the firewalld overlords.

The first statement creates a new zone called ZONE_OF_POWER. Zones in firewalld are actually structured XML files, but we’re not going to dive into those here.

The following four statements add the ssh, http, https, and ntp services to our new zone. This means that ingress traffic matching these services will be permitted to pass. Everything else falls through to the zone’s %%REJECT%% target.

Next, we assign the %%REJECT%% target to our new zone.

Following that, we add two ICMP services, echo-request and echo-reply. These two form the foundation of a ping, and if we stopped here, we’d be instructing firewalld to block pings and permit everything else, which is not precisely what we set out to do.

Finally, we add an ICMP Block Inversion. This means that we take our current ICMP Blocks and flip them. With this added, we’re now permitting only ping requests and denying everything else.
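
If you’re curious what firewalld actually writes to disk once these permanent changes land, the zone ends up as a structured XML file (something along the lines of /etc/firewalld/zones/ZONE_OF_POWER.xml). A rough sketch, not guaranteed byte-for-byte:

<?xml version="1.0" encoding="utf-8"?>
<zone target="%%REJECT%%">
  <service name="ssh"/>
  <service name="http"/>
  <service name="https"/>
  <service name="ntp"/>
  <icmp-block name="echo-request"/>
  <icmp-block name="echo-reply"/>
  <icmp-block-inversion/>
</zone>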

By the way, as was mentioned before about both zones and services, you can obtain a full list of ICMP types that are stock to firewalld, so you know what to add or remove when dealing with them:

firewall-cmd --get-icmptypes

It’s also possible to add your own ICMP types, but this is beyond the scope here.

The very last statement will force firewalld to reload its configurations. This will permit you to assign ZONE_OF_POWER to an available interface. Speaking of which, if you want to add an interface to this new zone, you’d do it like this:

firewall-cmd --permanent --zone=ZONE_OF_POWER --add-interface=<ifname>

Note that this may throw an error, depending upon how angry DBus is on that particular day. I actually still don’t know why it happens, but occasionally you’ll get a quark error when attempting to place an interface into a new zone, requiring you to reboot the host to resolve it (at least my current understanding makes this the path of least resistance). If anyone has any ideas, filling me in would be great.
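
One alternative worth trying, assuming NetworkManager is managing the interface, is to set the zone on the connection profile itself rather than poking firewalld directly (the connection name below is a placeholder):

nmcli connection modify <connection-name> connection.zone ZONE_OF_POWER
nmcli connection up <connection-name>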

Finally, let’s talk about custom services. Custom services are useful if you plan on using custom ports or migrating existing services to non-standard ports. For example, if you decide to have SSH operating on port 2500 instead of 22, you’ll likely want to create a new service. While you might be able to modify the existing service definition, it’s probably best to create a whole new service for the sake of clarity and maintenance.

The following statements will create a new service called CUSTOM_SSH and add TCP port 2500 to it. Then, we’ll remove the existing ssh service from our custom zone from above and replace it with the new CUSTOM_SSH service.

firewall-cmd --permanent --new-service=CUSTOM_SSH
firewall-cmd --permanent --service=CUSTOM_SSH --add-port=2500/tcp
firewall-cmd --reload
firewall-cmd --permanent --zone=ZONE_OF_POWER --remove-service=ssh
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-service=CUSTOM_SSH
firewall-cmd --reload

The first statement will tell firewalld that we want to create a new service definition called CUSTOM_SSH. Then we want to add the TCP port 2500 to that service definition. We’ll then reload the daemon so that we have the service available for distribution to other objects. Next, we’ll remove the existing ssh service, and then add the new CUSTOM_SSH service. Once we reload, the firewall should be ready to start permitting SSH traffic on TCP port 2500.*

* There are several peripheral caveats with this particular example. First, sshd needs to be configured to listen on port 2500. Second, if your computer is running SELinux, you’ll need to manipulate it to permit SSH traffic on a non-standard port. Both of these configurations are beyond the scope of this document.
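
That said, the short version looks roughly like this (assuming the stock sshd_config with its commented-out Port line, and the semanage tool from policycoreutils-python):

semanage port -a -t ssh_port_t -p tcp 2500
sed -i 's/^#Port 22/Port 2500/' /etc/ssh/sshd_config
systemctl restart sshd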

Having finished this document, you should be able to start using firewalld in a basic, if not isolated, sense.


RHEL/CentOS – Configuring a Local Repository Server

I haven’t made a decent technology post in some time, let alone one about my beloved Linux. To rectify that travesty, this post comes to end said drought and to share with my Linux friends how to accomplish the goal outlined by the title. I do often see a fair number of questions as to how to get a local repository set up, but the contexts always vary a little by virtue of distribution as well as delivery mechanism. The objective here is to get up and running with as little hassle as possible. Our target distribution will be RHEL/CentOS 7 with delivery facilitated via FTP.

Getting right to it, there are six key points we need to address:

  1. Configuration of the FTP server
  2. Location for the software packages that will compose the repository(-ies)
  3. Creation of the repository metadata
  4. Firewall configuration
  5. SELinux configuration
  6. Exposure

FTP Server

Both RHEL and CentOS will by default install vsftpd as the FTP server of choice. This can be installed during the package selection phase of Anaconda by first selecting the Infrastructure Server profile and then the FTP Server group. You can install several other groups as well, but we’ll only focus on this one here. Please note that if you intend to use another FTP package, you’ll want to skip some parts of this tutorial, since it’s assumed that you’re using vsftpd. If during the installation process you fail to install this package, simply install it post-installation with yum:

yum install vsftpd

The nice thing about installing it from Anaconda is that the default vsftpd configuration file (/etc/vsftpd/vsftpd.conf) is geared toward a public anonymous context, with maybe only slight modifications required. Removing all of the comments from the file for brevity, these are the options that are used on the server I’ve configured:

anonymous_enable=YES
local_enable=YES
write_enable=NO
local_umask=022
xferlog_enable=YES
connect_from_port_20=YES
xferlog_std_format=YES
listen=YES
listen_ipv6=NO
pam_service_name=vsftpd
tcp_wrappers=YES

If you have to make any changes to this file, be sure to restart the daemon after writing said changes:

systemctl restart vsftpd

File Location

The repository files will be kept in a custom location, not in /var/ftp/pub – the default landing directory for anonymous users. For this example, the files will be stored in /opt/repos. This will make it easier if we want to have multiple repositories on the same server. In other words, we could have a directory for the base CentOS packages – copied from the resource DVD – and custom packages for your company under another directory. Hypothetically, this leaves us with the following directory structures:

/opt/repos/centos/7/dvd/packages
/opt/repos/company/packages

Finally, you’ll want to change the permissions for the /opt/repos directory and its contents:

chmod -R 755 /opt/repos

Create Repositories

Next you’ll want to populate your newly created directory structures with the packages that will comprise the repository. Keeping with our example directories, the first will need to be copied from the CentOS Resource DVD. This can be accomplished by mounting the ISO/DVD and copying everything out of the Packages directory in the root of the mount. If you have them stored somewhere else, you’ll need to get them onto the server by other means (SCP, NFS, SMB, etc.). The same principle applies for the custom packages that you’ll be putting in the second directory. Ultimately, your storage and security strategy will dictate where the files are actually stored and how they’re placed in those two aforementioned directories.

If you haven’t done so already, you’ll want to install the createrepo package. This package automates the creation of the metafiles required for both identifying and advertising the manifest of a repository. Once you have the package installed, you’ll want to use it to create the repository for each of these directories. Createrepo takes a path as an argument. It also takes other options, but for this example it’s enough to omit them and provide only the directory. When you do this, do not include the packages portion of the path. Instead, you’ll provide the path up to that point. Obviously, your current working directory will determine what you have to provide to createrepo. It’ll most likely look like one of the following forms:

createrepo /opt/repos/centos/7/dvd
createrepo .
createrepo ../

Use whichever one you want, so long as the path is correct. The larger the number of packages createrepo has to parse, the longer it’ll take to process. As you might guess, parsing all of the CentOS Resource DVD packages will take some time (there are almost 10,000 packages).

When you create the repositories in this manner, you’re essentially creating a static repository. If any of the packages in there change, you’ll need to issue a command to createrepo to update the repository manifest. Or you can make createrepo occasionally sync with a mirror, but these configurations are outside the immediate scope of this document.
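
For the static case, re-running createrepo against the same directory with --update after packages are added or removed is usually all it takes:

createrepo --update /opt/repos/centos/7/dvd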

Once the packages are in place and the repositories created, you’ll need to get them to the point of being exposed by the FTP server. Because the default public anonymous location for vsftpd is /var/ftp/pub, and because vsftpd doesn’t handle following symlinks very well (or not at all), you’ll want to use a bind mount to get the directory over there. The basic gist of a bind mount is to mount a portion of the filesystem to another portion of the filesystem. So if you perform a listing command on /var/ftp/pub prior to the bind mount, you’ll likely get nothing back since that directory doesn’t (or at least shouldn’t) contain anything after a fresh installation. If you perform a bind mount:

mount --bind /opt/repos /var/ftp/pub

and then list directory contents:

ls /var/ftp/pub

you’ll get two directories back. Note that this doesn’t survive reboots unless you add the mount to fstab:

/opt/repos /var/ftp/pub xfs defaults,bind 0 0

If you don’t do this, don’t be shocked if after a reboot you can’t see anything on your FTP server since the mount will be disconnected.
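
A quick way to sanity-check the fstab entry without waiting for a reboot (paths as per the example above):

umount /var/ftp/pub
mount -a
findmnt /var/ftp/pub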

Firewall Configuration

There’s a pretty good chance that your network interface is going to have the public zone from firewalld applied to it during the installation process. Even so, this zone likely won’t have the FTP server whitelisted, so you’ll need to check to see if it is and react accordingly.

First, check what name was given to the interface (henceforth referred to as ifname). Hopefully, it wasn’t given something unpredictable. You won’t have to do this if you explicitly name your interfaces during installation.

ip addr sh

Once you have the interface name, check to see which zone was applied to it through firewalld (henceforth referred to as zname):

firewall-cmd --get-zone-of-interface=ifname

Now, use this zone name to see what kinds of traffic are permitted or disallowed:

firewall-cmd --zone=zname --list-all

Most likely, if this is the default public zone, you’ll only see the services ssh and dhcpv6-client. The key here though is that if you don’t see ftp in the list of services, you’re going to need to add it.

firewall-cmd --permanent --zone=zname --add-service=ftp

If during the initial configuration of vsftpd you elected to run FTP through a non-standard port, you’ll need to make the appropriate accommodations:

firewall-cmd --permanent --zone=zname --add-port=port#/protocol

Once that’s done, reload firewalld rules:

firewall-cmd --reload

SELinux Configuration

At this point, you should be able to run an external port scan and see your FTP port open. However, it’s unlikely that you’re going to be able to access your files, because of SELinux rules. Even though people hate it, and given too that this is a local server, you may be tempted to shut it off and go about your business. This post encourages you to leave it on and simply reconfigure it to work with your server.

Basically, you’ll need to change the SELinux type on the entire directory structure starting with the root /opt/repos to public_content_t.

chcon -R -t public_content_t /opt/repos
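
Note that chcon labels can be lost if the filesystem is ever relabelled. If you want the context to stick around, record it in the local policy and then restore it (semanage ships in policycoreutils-python on RHEL/CentOS 7):

semanage fcontext -a -t public_content_t "/opt/repos(/.*)?"
restorecon -R -v /opt/repos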

Testing and Exposure

Run a few simple tests before giving the green flag for clients to start consuming your packages. What I’ll usually do is run a port scan with nmap to verify that the FTP port is open, which effectively means that vsftpd is running and listening on that port. Next, I’ll open a browser and navigate to the address of the server with the FTP protocol to see if I can view the contents. You can also test this in two other ways:

  • Install an FTP client on the repository server and attempt to establish an FTP connection to localhost (I don’t recommend this for probably a foolish reason).
  • Use an FTP client on a remote machine and attempt to establish an FTP connection to the repository server.
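
For the impatient, a couple of quick checks from a remote box look something like this (the hostname and path are placeholders matching the example layout):

nmap -p 21 name-or-ip-of-ftp-server
curl ftp://name-or-ip-of-ftp-server/pub/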

Two really common issues at this point are either access or visibility to the repository contents. If access is an issue, make sure that vsftpd is running on your server and that firewalld is configured to permit ingress/egress traffic for FTP, especially if you’re using a non-standard port for FTP (egress traffic should be open by default, but filtering it requires direct rules through firewall-cmd, since firewalld’s principal focus is ingress). If you’re still having issues, make sure that your clients are configured correctly too (firewall, routing, etc.). Visibility of content can usually be traced back to one of three matters: a bind mount that wasn’t added to fstab (which would cause the mount to disappear on restart), incorrect DAC (755 should be sufficient; FTP wants the execute bit set on directories), or an incorrect SELinux type. The only other major rub is that if your storage strategy places your files on a NAS or other file server, you need to ensure that those mounts are established at system start (e.g. a remote SMB share is mounted correctly).

To add the repository to a client, you can either package the repo definition (and any signing keys) into an RPM and have clients install it (much the same way that RPM Fusion does theirs), or they can manually add a file to the listing under /etc/yum.repos.d. The file would look similar to this:

[reponame]
name=Custom Repo
baseurl=ftp://name-or-ip-of-ftp-server/pub/centos/7/dvd
gpgcheck=0
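
Once the file is in place on a client, clearing the cache and listing repositories should confirm that yum can see it (the repo id below matches the example stanza):

yum clean all
yum repolist
yum --disablerepo="*" --enablerepo="reponame" list available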

Helpful Links

VSFTPD Online Manpages
Firewalld Homepage
SELinux FTPD
Configuring a Yum Repository File