How to Learn Linux, Addendum

I swear, I’m not exclusively picking on CompTIA lately. I just happen to be really interested in what they’re doing, especially within the context of Linux. Also, since my last post, I’m suddenly receiving emails from their mailing list even though I never explicitly signed up for one. Weeeee.

One such email included a list of recent blog posts from their official blog, which appears to be a planet-style aggregate of sorts. The headline article was titled “How to Learn Linux” by Priyanka Sarangabany. It’s a well-written but perfunctory piece that blends advice given within the last twenty years with some minor contemporary flavor added. Whilst reading, I tried hard to demarcate between the objective of the article – as laid out by the title – and this nagging feeling that the article is grossly out of touch with reality. Despite my best urges to jettison the aforementioned intuition, it got the better of me.

It might be just this article in particular, but most How Do I Learn Linux articles lack a certain ubi re vera – a sense of “in reality, professionals encounter this.” I think that bears some discussion, even if it falls outside the confines of the direction-pointing itself. This piece in particular doesn’t actually get to the How To part until right near the end.

There’s no doubt that Linux is quickly becoming a powerful force in the IT industry. In fact, you’re probably using Linux without even knowing it! From smartphones and home media centers to smart thermostats and in-car GPS systems, this open-source operating system is quietly running nearly all supercomputers and cloud servers that power our daily lives.

Priyanka Sarangabany

One very common complaint you’ll hear lobbed from the Free Software community, especially those who rabble-rouse with RMS, is that it’s a travesty when people don’t truly understand that when you’re using Linux, you’re actually using a complete suite of GNU software tools alongside the Linux kernel. Their vain effort to correct the misnomer of simply Linux was to address it as GNU/Linux (along with several other strident renamings). Regardless, the point remains that people running Linux are in fact taking advantage of a complete set of GNU tools developed by the Free Software Foundation way back in the day. The Linux community, however, is rife with misnomers such as the one illustrated here. Free Software/Open Source is quite muddy in terms of who uses what and, more importantly, who specifically cares.

A similar phenomenon was witnessed when Android first exploded onto the scene compliments of Google (it wasn’t originally a Google product ;)). The Android OS runs Linux as its kernel, so most in the Linux community saw this as a striking win for our cause. Long had we waited for the day when Linux saturation was prevalent enough in the user space to render it a contender for the use cases that only Windows and OSX seemed to garner. However, hardly any of these smartphone users are taking advantage of Linux itself, explicitly. Furthermore, the smartphone space as it pertains to Android is an absolute shithole. Polluted by countless dumpster-bin devices carrying all sorts of malicious software, privacy-raping middleware compliments of Google’s nefarious growth trajectory, and an overall exhaustion from being trained to ante up for a new device every six months, the fact that anyone is using Linux at all is both a non-sequitur and buried under the morass.

Some of the truth here is that the misnomers aren’t just about calling a duck a duck; they mean more than correcting loose speech, for better or worse. Not all Linux jobs are glorious administrative escapades where reforming the user space earns you badges of honor. It’s not an accident that Linux finds itself reserved for the infrastructure roles. Linux is far too technical for 90% of so-called users, and the fact that Android runs atop it doesn’t mean you’ve accomplished much beyond distributing shadow copies. Emphasis here should be placed on the “quietly running” remark. You’d do well to keep this in mind.

Why Is Linux So Prevalent?

There are multiple reasons why Linux is considered one of the most diverse and powerful operating systems in the world. To understand why Linux is loved by many, it is important to identify its defining characteristics.

Open Source: As Denise Dumas, the vice president of software engineering and operating systems at Red Hat, said in a recent CompTIA webinar about Linux, “Open source is a place where innovation ferments and happens.” When software is released under an open source license, people can view and build upon the software’s original source code. This feature encourages software developers to adopt Linux and apply their own improvements to the code. As a result, Linux’s public domain drives constant evolution and advancement.

UNIX-Like System: Linux behaves in a similar manner to a Unix system. This means that the operating system relies on multiple parts/programs that carry out specific jobs collectively. This is a fundamental principle of good system design and is at the core of what makes Linux so great.

Stable: As a public domain that is constantly evolving, Linux remains an incredibly secure operating system. In the words of Eric S. Raymond, “Given enough eyeballs, all bugs are shallow.” Linux’s general public license allows a plethora of software developers to rapidly identify issues in code and just as quickly respond to fix the errors.

Free: Linux is priceless. Literally! The underlying software of Linux has been free to download and install since its creation. For this reason, Linux remains one of the most accessible, diverse operating systems to this day.

Priyanka Sarangabany

All of this is 100% true. But it also 100% panders only to programmers, or to people looking for software that costs them nothing in material price.

Flagshipping Linux’s contemporary success as simply its adherence to Free Software and Open Source ideologies misses the target a bit. It’s an attractive aspect only if you’re a software developer, or belong to a software engineering group specializing in Linux itself or in creating software to run on it. By extension, an end user benefits in that they have some assurance, as ESR puts it, that bugs are squashed faster than in monolithic or bureaucratic projects. But end users most likely don’t care that the source code for their favorite programs, let alone the entire OS, is available to them whenever. Concurrently, most IT management doesn’t care either. The real tests are whether and how the servers are going to be supported, and we’re so far down the line from the days of real competition between IIS and Apache that the lines aren’t as clear as they once were.

The fact that Linux is open source matters chiefly to the kernel team, its contributors, and the downstream distributions that repackage the kernel with a collection of software. Your garden-variety sysadmin isn’t going to fondle this too much, at least not for billable hours. In general contexts, management presented with the proposition of dedicating resources to retrofitting an open-source project to meet internal needs usually falls out of their chairs laughing, and simply resorts to searching for another, hopefully complete, package. Of course, this says nothing of the emergence of IoT and cloud technologies. Many major industrial vendors quietly embed Linux in customer-facing equipment, a handful of specialized server vendors sell products that are possible only because of Linux, and a vast majority of cloud-focused architecture is built on or exploits Linux in a non-trivial capacity. Although the cut here between administrator/architect/engineer is obvious, it’s mostly either this or programming.

Another thing: implementing Linux isn’t free. While you can download the software and, depending upon the license, run it in your enterprise without legal incident, you most certainly had better have the internal support available to complement it. Most SMBs are in a position where they could benefit substantially from Linux and derivative technologies, but most SMBs are woefully ill-equipped to float the administrative overhead that running Linux actually entails. The work of Canonical and Red Hat has made employing Linux easier over the years, but it hasn’t yet delivered the Windows feel that people hopelessly crutch on. Yes, it costs money to administer Windows systems as well. However, there’s no doubt that a more technical skillset is required for Linux.

One other thing: the use of the term public domain here is inaccurate. Linux is licensed under the GPL, which is emphatically not public domain; RMS, ESR, and Bruce Perens – amongst many others – have historically railed against the claim that Linux transacts in that specific realm.

Over the years, companies such as Red Hat have put effort toward making system administration and development easier to master. In turn, today’s Linux graphical user interfaces (GUIs) are highly functional and significantly less intimidating.

Priyanka Sarangabany

This is, unfortunately, false – at least the final statement is. While Canonical, Red Hat, and SUSE have done a tremendous amount of work to streamline new technologies and shore up existing ones, those efforts have very little influence over the GUI/DE projects. These things fly free at their own pace and, frankly, IMHO it’s one of the most toxic components of the modern Linux user experience, aside from the stupid number of distributions to choose from. Some insight:

  • Hardly any of these DEs are completely functional. Some of them come close to highly functional, but not quite what’s available from traditional Windows/OSX. The very flexibility these projects benefit from is the same aspect that ultimately undermines their acceptance. The divergence from traditional – but more importantly, established – desktop metaphors witnessed in most DEs is entirely unacceptable in an enterprise space; it’s barely passable in the user space. As for the two or three that still look like they care about helping users rather than hindering them, they’re either too watered down or too full of flourish, coupled with programs that are too convoluted.
  • Consequently, the intimidation factor remains a plague; it’s more real than the author of this post, or perhaps others, would have you believe. Take a look at the sheer number of DE projects out there: GNOME, KDE Plasma, Xfce, Cinnamon, MATE, LXQt, Budgie, and more.
  • Not only is there a wealth of choices, but they all express the usual metaphors in different, sometimes really non-intuitive ways. No pedestrian user is going to find safe haven here. And if a DE isn’t delivered as a first-class citizen in a given distribution’s lineup, it likely isn’t going to be given the time of day; shoehorning a DE into a distribution flavor that didn’t ship it natively is a bit of a gamble. This all sounds great for a Linux user who’s chomping at the bit to learn the new shiny, but imagine yourself as an IT manager. Who in their right mind is going to look at this and think they’ve got a snowball’s chance in hell at adoption? What should a budding sysadmin learn? The intimidation factor here is real for both users and prospects, similar to what one finds in the realm of “Which JavaScript framework should I use to develop my web program?” All religion, no substance.

To begin your journey through the Linux space, you will have to make a few choices:

Choose a Linux Distribution: Linux is not developed by a single entity, so there are multiple different distributions (distros) that can take code from Linux open-source projects and compile it for you. Since these distros choose your default software (desktop environment, browser, etc.), all that’s left for you to do is boot up and install.

Choose a Virtualization Solution: Linux virtualization is used to isolate your operating systems so you can run multiple virtual machines on one physical machine, and in turn save time, money and energy on maintaining multiple physical servers. Some popular selections include VMWare, VirtualBox (Oracle) and Hyper-V (Microsoft).

Set Up Your Linux Play Space and Explore: Once you log in to your virtualization environment, you can start learning and practicing. The best way to become comfortable with Linux is to jump in and get your hands dirty.

Priyanka Sarangabany

Choosing a Linux distribution shouldn’t be a cavalier decision. CompTIA Linux+ is, like its LPI contemporary, a vendor-agnostic certification track. Essentially, passing this exam requires knowledge of not just the general administrative topics of Linux itself, but a selection of the more esoteric differences between the major distributions (Debian-based, Red Hat-based, or SUSE). The effort, I suspect, is to suggest that certified individuals are capable of handling virtually anything thrown at them. There’s nothing wrong with this in theory or practice, since you’re not guaranteed to be working for or with an organization that has landed solely in one camp. The problem is that you need to spend at least some time in all three. I’ll cover more on this later, but give it some consideration before downloading. Learning Linux can certainly be accelerated by distro-hopping, but that behavior should slow dramatically as time goes forward.

Selecting a virtualization technology isn’t as trivial as this section might lead users to believe. VMware has historically been quite difficult to install and run on various distributions. Legacy versions of the software may work on older kernels, but newer kernels are hit-and-miss. Furthermore, VMware has a fairly lackadaisical approach to supporting Linux as a viable platform to run its software on. More often than not, you’ll be scouring the support forums to find that not only are most other people having difficulty installing the software, but they’re either not finding good solutions or they’re running into other issues that inhibit a good user experience. VirtualBox is an okay Type-2 hypervisor, but anyone working seriously with virtualization technologies isn’t going to deploy it any time soon. The implication here is that if you’re not committed to running Linux on bare metal, you’re likely running Mac OSX or Windows and should virtualize Linux via a hypervisor or two. This may work well, but some of the exam content for Linux+ requires a subset of knowledge that you’ll only get by installing on bare metal.

But wait a minute… what does any of this have to do with actually learning Linux?

I’m trying to help set realistic expectations here. Despite the work to push forward, things still aren’t as crystal clear as the author of this blog post would have you believe. Allow me, then, to offer what I think are the best ways to learn Linux.

Manage Your Expectations

Linux is hard. Remember to separate the kernel from the DE, because the distinction is important. So long as the DE you choose provides an adequate terminal emulator, you can get away with focusing exclusively on the terminal interface and nothing else. Be sure not to get lost in the convoluted nature of the DE; otherwise it’ll add another layer of complexity that you’ll likely want to avoid.

Understand also that doing Linux professionally isn’t the same as doing it for a hobby. Swapping DEs every five seconds, or advocating for the flashy or nuanced one, isn’t going to get you anywhere. This matters more than you think. Learn GNOME and KDE, and fiddle with the rest in your spare time if interested.

Distribution Selection

Pay attention to the leading distribution vendors out there, and try not to get lost in the new shiny that comes out of left field. Take a look at this image and try not to throw up. We in the Linux community say we’re welcoming, and options are great, but this is nauseatingly asinine. The major players here are Canonical, Red Hat (CentOS), and SUSE. Distro-hopping is okay if you’re just looking to have fun, but that should be relegated to virtualization. Run Ubuntu, RHEL/CentOS, or SUSE on bare metal, and leverage KVM through virt-manager or Cockpit, or VirtualBox, to run VMs locally.
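
If you want a quick sanity check before committing to that route, something like the following works on a RHEL-family install – a sketch, since package names vary by distribution:

grep -cE '(vmx|svm)' /proc/cpuinfo               # >0 means hardware virtualization is available
sudo dnf install qemu-kvm libvirt virt-manager   # RHEL-family package names
sudo systemctl enable --now libvirtd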

Books

Despite the notion that Linux iterates quickly, widespread adoption of newer kernels is left to a select group of distributions. Most run kernels a few versions behind for sanity’s sake. That said, a handful of books exist for learning Linux itself (not the DE) that will remain relevant for the majority of versions in common circulation. A few of my recommendations:

Online Documentation

I don’t mean the manpages here, although some of them are useful. I’m talking about wikis, forums, and upstream documentation from distribution vendors. The Arch Wiki is an unbelievable treasure trove of highly technical information for all kinds of software, much of it not pegged to Arch specifically. Red Hat/CentOS publish a wealth of administrative documentation. LinuxQuestions is a great forum for getting help with nearly all matters. Of course, if you’re feeling up to it, you could always get in touch with the developers of the software you’re using and get advice or help from them directly. I’ve occasionally talked to a few people from the GNOME team for help on certain matters, and it’s proven quite valuable.

Taking Classes

I’ve personally never attended a Linux training course, but that doesn’t mean I haven’t heard wonderful things about them. Certification authorities like CompTIA, LPI, and Red Hat offer both e-learning and instructor-led courses that will accelerate your learning track right up to the day of examination.

Banging Your Head Against the Wall

I started with Linux in 2004, with a copy of Red Hat 9 that had been given out to a friend of mine attending ITT Tech at the time. All I had was the book it came with, the installation media, and a lot of time on my hands (I didn’t even have access to the internet back then). The best way to learn, albeit the hardest, is to simply rake yourself over the coals. Grab a shitbox, abuse it, abuse yourself. Plain and simple.

Community

Get involved with a community. Don’t let the rumors about the Linux Kernel Mailing List scare you away. Most mere mortals are more than willing to discuss Linux, especially if you’re willing to put yourself out there.

Podcasts

Although the landscape is far too saturated, podcasts are still a viable source of information. I miss Linux Outlaws terribly, but shows like Destination Linux, SMLR, and Late Night Linux are great for getting the 411 on the latest happenings and hearing from people who are incredibly skilled at what they do with Linux.

CompTIA Linux+ XK0-004 Thoughts

Lately I’ve been seeing a lot of steam about the CompTIA Linux+ exam. Evidently they’re separating from the LPI partnership that’s long been in place – not sure if that has anything to do with the brouhaha – but I thought I’d dig into the exam outline to see what the competency focuses were, and offer some opinions about them. Bear in mind that I’m not a proctor or advisor of any kind, and that these opinions are strictly that. I’m going to run down the objectives in the same order they appear in the official outline document, so nothing comes out of order.

You can view the outline here: https://certification.comptia.org/docs/default-source/exam-objectives/comptia-linux-xk0-004-exam-objectives.pdf

1.0 Hardware and System Configuration

1.1 Linux Boot Process Concepts

Man, am I happy to see that someone finally understands that not a single person on this planet uses LILO any longer. Say what you will about technical merit; the clear winner here was GRUB, and any mention of the former has been wiped clean from the objective list. Hopefully this isn’t one of those Cisco-style documents where what’s on the exam isn’t anywhere near the outline, unless of course your abstract thinking expands to the realm of what’s par for LSD abuse. I’m also happy to see a focus on UEFI/EFI rather than BIOS. Having deployed more than my fair share of contemporary computers, both manually and via PXE, it feels dirty to reconfigure a system to run BIOS. Practically speaking, UEFI/EFI isn’t as big a monster as it was several years ago. We in the Linux community have already crossed this bridge, so let’s stop taking a piss on the side with wilting grass.

1.2 Kernel Modules

Part of me feels as if this section is gratuitous filler on every entry-level Linux exam. Why? There have been maybe a handful of times I’ve had to manhandle modules, and it’s come in user space on workstations rather than servers. Dealing with Type-2 hypervisors that don’t play nice with Linux (looking at you, VMware) or with Nvidia graphics drivers seem to be the only real plays here. For the most part, the kernel does a good job of taking care of what you need for common use cases, especially if you’re deploying any enterprise distribution whose philosophy is that users shouldn’t have to eat the skin off their own arms to get these systems working in the 21st century. That said, it’s still valuable knowledge. I’m just unsure it warrants a point allocation on an exam.
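
For the curious, the handful of commands you’ll actually touch look something like this (the module names are just examples):

lsmod | grep kvm          # list loaded modules, filtered
modinfo kvm_intel         # show metadata for a module
sudo modprobe -r pcspkr   # unload a module
sudo modprobe pcspkr      # load a module, resolving dependencies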

1.3 Network Connectivity Configuration

Not really too much to comment on here, except for the inclusion of Netplan configuration. Along with Gradle, YAML is one of those technologies that was likely written by some hipster and is just a dumpster fire of epic proportions. Since that’s all dandy, let’s change from semi-palatable traditional network configuration scripts that look much like an INI file – which is well understood – to some arcane indent-based copulation between Python-like syntax (because, you know, Python is the greatest thing since sliced bread) and the never-ending ML-based projects that seek to change the world. No thanks. Learn it for the exam, learn to hate it, and move back to better things.
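
For the exam’s sake, a minimal Netplan file looks something like this – the file name and interface name are illustrative – and gets applied with sudo netplan apply:

# /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      dhcp4: true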

1.4 Linux Storage Management

RIP btrfs. Not really.

I’m not sure I understand the migratory path to XFS over EXT4. In my deployment contexts, especially with M.2 drives, XFS has caused all sorts of problems that I can’t really explain away. The result, however, was a revert to EXT4 after several FS-level repair attempts failed to fix the corruption on the root partition. One instance I chalked up to a silently botched install, but the other five I couldn’t attribute to much of anything. Still, XFS seems to sit in the first-class citizen spot, with EXT4 not far behind.

Glad to see that there are some subtle hints at RAID management here. It’s never a huge factor in entry-level exams, but still worth mentioning.

1.5 Cloud and Virtualization Concepts

YAML makes yet another appearance. Yay…

Given how long virtualization has been around, I’m a bit shocked that it’s taken this long for it to appear in entry-level exams. Most enterprises these days are at a minimum leveraging virtualization, though this usually comes in the form of VMware. The focus here, however, is on KVM. It looks as if there may be a little touch on containers as well, although I seriously doubt they’d be a heavy hitter in comparison to the contemporary content.

As an aside, I’m not aware of many enterprises that leverage KVM explicitly for virtualization needs. This mostly gets passed off to VMware or Citrix. I usually find KVM in a Type-2-style context on workstations.

That said, there appears to be more here that serves a general-purpose understanding of virtualization technologies. Definitely worth taking a look at if you’re unfamiliar.

1.6 Localization Options

Most people don’t pay much attention to these sorts of configurations, but they’re important, especially the ones concerned with keeping accurate time on a computer. If not for the workstation, then at least be sure you’re familiar with these commands in the context of virtualized guests, where time drift can be a pretty common problem.
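
On systemd-based distributions, the relevant commands run along these lines (the timezone and locale values are just examples):

timedatectl                                   # show time, timezone, and NTP status
sudo timedatectl set-timezone Europe/London
sudo timedatectl set-ntp true
localectl status
sudo localectl set-locale LANG=en_US.UTF-8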

2.0 Systems Operations and Maintenance

2.1 Software Management

As with many vendor-neutral exams, this one appears to target the most common installation methods for three types of distributions: Debian-based, RHEL-based, and openSUSE (Zypper is an explicit target here, for some reason). I’m not sure why there’s no mention of Flatpak or Snap; both are emerging as pretty common ways to install user-space programs on a Linux computer.
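
The gist across the three families, using an arbitrary package as the example:

sudo apt update && sudo apt install nginx   # Debian/Ubuntu
sudo dnf install nginx                      # RHEL-based (yum on older releases)
sudo zypper install nginx                   # openSUSE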

2.2 User and Group Management

Run-of-the-mill stuff here. The only thing I would’ve added is domain-based local user management. I believe a later section in the Security topic covers LDAP integration, but there are user-space tools that go along with it, and I don’t personally consider these to be mid-level knowledge points.

2.3 File Management

This section should be renamed Grep/Sed/Awk 101. At least you’ll get exposure to some of the more esoteric file management commands like wc and tee; beyond that, there’s nothing here that’s off kilter.
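
A taste of what that looks like in practice – a contrived pipeline (the log file name is hypothetical) exercising grep, sed, awk, tee, and wc in one sitting:

# Count unique client IPs in a log, keeping the list around via tee:
grep -oE '^[0-9.]+' access.log | sort -u | tee ips.txt | wc -l

# Print the first field of /etc/passwd, previewing the output:
awk -F: '{print $1}' /etc/passwd | sed 's/ *$//' | head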

2.4 Service Management

I thought we were beyond the point where SysV was still a major player, but evidently it remains more pervasive than I estimated. Most enterprise-focused distributions will focus only on systemd, and it’s more than adequate even for the prevalent Debian-based distributions (unless of course you think running Devuan is a good idea, to which I’d say you need clinical help). In these situations, most SysV commands translate to systemd commands anyway.
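
The rough translations, using an arbitrary service name:

service httpd start    ->  systemctl start httpd
service httpd status   ->  systemctl status httpd
chkconfig httpd on     ->  systemctl enable httpd
chkconfig --list       ->  systemctl list-unit-files --type=service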

2.5 Summarize Server Roles

Not much to mention here. Just know the roles.

2.6 Job Automation and Scheduling

If you don’t know the five finger mnemonic for remembering how to configure cron jobs, take a look at this post: https://www.networkworld.com/article/2709784/unix–timing-your-cron-jobs.html
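
For the impatient, the five fields are minute, hour, day of month, month, and day of week. An illustrative entry (the script path is made up):

# m h dom mon dow command
# Run a backup at 02:30 every Monday:
30 2 * * 1 /usr/local/bin/backup.sh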

2.7 Linux Devices

You’d be surprised how little most people know about udev, and it’s critical to understand when talking about managing devices on contemporary Linux computers. My recommendation would be to read through the Arch Wiki article on udev to get a better understanding of it if you’re unfamiliar: https://wiki.archlinux.org/index.php/Udev
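
To make it concrete, here’s a trivial rule – the serial number and file name are made up – that gives a specific disk a stable symlink:

# /etc/udev/rules.d/99-backupdisk.rules
SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="1234ABCD", SYMLINK+="backupdisk"

Reload and re-trigger afterwards:

sudo udevadm control --reload
sudo udevadm trigger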

2.8 Graphical User Interfaces

In the wake of recent events with my attempts at deploying Linux to workstations in the enterprise I manage, I’ve developed a substantial amount of beef with sections like these. Without getting too deep into detail – honestly, it could warrant its own post – I’ll say the following concerning the exam outline:

No serious enterprise professional is going to leverage anything other than GNOME in their environment, because it’s easily the best supported in terms of contractual support from major enterprise distribution vendors. Anything outside of that requires internal support capabilities which may or may not exist. Furthermore, Unity as a DE was officially deprecated by Canonical within the last few releases of Ubuntu, and it was so jarring to begin with that supporting it is completely out of the question. In my opinion, requesting that a prospective student be familiar with DEs like Unity, Cinnamon, or MATE is just an absolute waste. This isn’t a game. Managers will have a hard enough time selling the idea of getting Linux on workstations to begin with. Along with that decision comes which DE to standardize on, which is frankly more contentious than the predicate question of getting Linux installed at all. Rolling the dice on every single option out there is an incredibly insane notion.

X11 forwarding via SSH isn’t as common a function as it may once have been. Almost all servers run headless, ergo there’s little need for it.

My advice here is to understand at least what the DE arena looks like, familiarize yourself with how each expresses various UX metaphors, and then move on with your life.

3.0 Security

3.1 User/Group Permissions

The focus here is on traditional DAC concepts as well as MAC through both SELinux and AppArmor, with the lion’s share going to the former. There appears to be some concern with ACLs, which both EXT4 and XFS support, but most people don’t realize that ACLs are effectively optional in these file systems, and that their translation to other file systems is generally unclean in the sense that they just get clobbered. Furthermore, you can have several EXT4/XFS mounts on a system, one supporting ACLs and another not. The point is that because they’re not first-class citizens, honouring ACLs on Linux has been, and continues to be, an odd conversation.
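
If you’ve never poked at ACLs, the moving parts look roughly like this (the user and paths are illustrative; on many modern setups the acl mount option is already a default):

mount -o acl /dev/sdb1 /srv/shared    # explicit, for file systems that need it
setfacl -m u:alice:rwX /srv/shared/project
getfacl /srv/shared/project           # shows the extended entries and mask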

The fact that the bulk of the weight appears to be on SELinux isn’t an accident. Again, in the arena it has emerged largely victorious despite Canonical’s need to be different. As arcane as SELinux seems to be, the truth is that there’s a tremendous amount of enterprise support behind it.

3.2 Access and Authentication Methods

Not too much to comment on here. One thing worth mentioning, however, is the part that focuses on LDAP integration. In most cases, Linux servers/workstations will integrate with AD rather than an LDAP implementation like IPA, regardless of the benefits. Most tests, unfortunately, will operate under the latter context, and may focus exclusively on pure OpenLDAP, which to my knowledge is hardly ever deployed by itself.

3.3 Security Best Practises

Not too much to comment on here either. These are things that most everyone should be doing if they’re serious about getting Linux secure, even in the server environment.

3.4 Logging Services

Another not-much-going-on section. Garden-variety things here.

3.5 Linux Firewalls

Here’s another one of those fun sections where cross-vendor technologies come into play. Most people are familiar with iptables and Netfilter, but when we’re talking about firewalld vs. ufw, the former is the clear victor in the enterprise space, and that doesn’t appear to be changing any time soon.

3.6 Backups

I’m glad to see some focus on this in entry-level exams; backups still seem to be the last thing anyone thinks about concerning their computing architecture. Three technologies are covered here: SFTP, SCP, and rsync. I still maintain that rsync is the winner, even for off-site. SCP has noted performance concerns, and SFTP has FTP in the name, so we don’t want to touch it.
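
Part of why rsync wins – one line, incremental, and trivially repeatable (the host and paths are made up; the trailing slash on the source means “contents of”):

rsync -avz --delete /srv/www/ backup@backuphost:/backups/www/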

4.0 Troubleshooting and Diagnostics

4.1 System Analysis and Remediation

In general, I feel this is a section most Linux users gloss over, especially since, in the day-to-day, a reinstall combined with smart partitioning will usually cure all serious ills.

Some of the network diagnostics here are a bit odd, since such problems usually surface at the network level rather than at the host. For example, unless you’ve been modifying your network interfaces, routing issues hardly ever emerge at the host level. Further, some of the network diagnostic commands aren’t trivial, like nmap or tshark. Sure, you could stumble your way through them, but you might not realize half of what you’re looking at with an untrained eye.

Root password recovery has shifted a bit over the years. Select contemporary enterprise distributions now ship with the root-account-disabled model, relying exclusively on sudo for escalation. The recovery techniques are still valid, however.

EDIT: Reading over this some time in the future, I realised that I omitted the following: although the root-account-disabled model is becoming prevalent, systems without the proper configuration can be vulnerable when booting into single-user mode, since the root account will just log in by default with no password. There are provisions for this in your boot configuration files; look them up for your distribution.
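
For the RHEL family specifically, the recovery dance is roughly this sketch (details vary by distribution and boot loader):

# At the GRUB menu, press 'e', append rd.break to the line starting
# with 'linux', and boot with Ctrl-x. From the emergency shell:
mount -o remount,rw /sysroot
chroot /sysroot
passwd root
touch /.autorelabel    # needed when SELinux is enforcing
exit
exit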

4.2 Optimize Process Performance

Again, another aspect where users might get a taste but not dive too deeply. Being able to dynamically adjust process priority is crucial when diagnosing system performance issues. Furthermore, identifying a process is a bit of an art; being able to move between top, ps, lsof, and pgrep is important.
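
A sampler of moving between those tools (the PID and process name are illustrative):

pgrep -a nginx                    # find PIDs along with their command lines
ps -o pid,ni,pcpu,comm -p 1234    # niceness and CPU for a specific PID
sudo renice -n 10 -p 1234         # lower the priority (raise the niceness)
lsof -p 1234 | head               # peek at what the process has open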

4.3 Troubleshoot User Issues

If you’ve understood topics from previous sections concerning SELinux, DAC/MAC, and file systems, you’ve pretty much got this section in the bag.

4.4 Troubleshoot Application and Hardware Issues

Most of this is garden variety, with caveats on select storage points such as the focus on HBAs and degraded storage in a RAID context. These aren’t problems junior admins commonly encounter, but they’re still worth mentioning.

5.0 Automation and Scripting

5.1 Deploy and Execute Bash Scripts

I think the title here is a bit misleading, as the content is focused on being a Bash primer more than anything else. If you’re already familiar with Bash, this should be a breeze.
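
If you want a litmus test, make sure nothing in a hedged sketch like this surprises you (the paths are arbitrary):

#!/usr/bin/env bash
# Variables, conditionals, loops, exit codes – the primer in miniature.
set -euo pipefail

target="${1:-/var/log}"

if [[ ! -d "$target" ]]; then
    echo "no such directory: $target" >&2
    exit 1
fi

for f in "$target"/*.log; do
    [[ -e "$f" ]] || continue
    printf '%s: %s lines\n' "$f" "$(wc -l < "$f")"
done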

5.2 Git

Very basic Git usage is covered here. You won’t be cherry-picking, rebasing, or blaming.
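
The level being tested is roughly this (the URL and branch name are placeholders):

git clone https://example.com/project.git
cd project
git checkout -b fix-typo
git add README.md
git commit -m "Fix typo in README"
git push origin fix-typo
git log --oneline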

5.3 Orchestration Concepts

It’s not really clear what they mean here. General principles are one thing, but are they hinting at a specific implementation such as Puppet, Chef, or Ansible? Orchestration also occurs in the virtualization space, where it means something a little different. Methinks some ambiguity exists here simply because the aforementioned virtualization section isn’t exclusive to Linux itself.

Overall, I think this looks like a pretty good vendor-agnostic exam, despite my personal opinions on the matter. There’s a nice effort to blend rudimentary enterprise concepts with general knowledge, which seems to be a trend, and I think exam takers would get a lot out of it. It’s unclear to me what the industry adoption would be, especially since there’s a split between them and LPI.

Basic firewalld

As someone who dreaded having to interact with the esoteric networking gatekeeper that was iptables, firewalld presented an opportunity for mere mortals to feel like more of a badass when crafting ingress rules. Although firewalld manages iptables underneath, some abstraction is most welcome, even if incomplete. For example, playing in the firewalld arena only handles ingress traffic out of the box; if you need granular control over egress traffic, you’ll still be reaching into iptables territory, though you can triage much of it through firewalld’s so-called rich rules.

firewalld sees fragmented adoption across distributions, maybe because it isn’t the only netfilter abstraction in town, or maybe because we all just want to be different. Most distributions offer firewalld through their default repositories even if it’s not the incumbent sheriff, so if you want to run it instead of whatever else was on offer, remove the original program first. Nearly every Red Hat-based distribution runs firewalld by default. Ubuntu-based shite will likely have ufw or something to that effect (ergo, if you’re forced at gunpoint to use any of that garbage, get yourself firewalld immediately).

Contained within firewalld is the concept of zones. Each zone encapsulates a set of rules logically associated with the zone itself. Not only is there a decent handful of default zones – more than sufficient for garden-variety use cases – but you can also create and delete your own (you’re unable to delete any of the stock options; I tried about ninety times). Each zone can be applied to a particular interface, be it physical or virtual. The rules within each zone dictate how ingress traffic is handled. For example, you can configure a zone to disallow ICMP traffic to the host, or to drop all traffic other than a select handful of services.

As with zones, firewalld offers a plethora of default services. Services are collections of colloquial protocol/port mappings consolidated under an easy-to-understand identifier. They’re intended to save time when building zones by being readily available to any zone that wants them. You can also add or delete custom services, just as you can with zones. For example, the firewalld service http maps to tcp/80, https maps to tcp/443, ssh maps to tcp/22, and so on.

And this is essentially all you’ll need to know in order to get some reliable mileage out of your firewalld installation. It says nothing about the details of rich rules, IPSets, or helpers, but those are more advanced topics best understood by reading the official firewalld documentation. Think of this document as a way to whet your appetite and help you play with a tool. Note that going forward, all commands shown assume you’re running a Red Hat-based distribution that leverages systemd.

To start, you can ensure that firewalld is running by querying systemd:

systemctl status firewalld

And obviously, you can toggle the state of the daemon by using one of the following:

systemctl start firewalld
systemctl stop firewalld

You can use the reload command as well to force configuration changes to take effect, but there’s an alternative method to this which we’ll cover momentarily.
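
Worth noting, since running-now and enabled-at-boot are separate things: if you want the daemon to come back after a reboot, enable it as well.

systemctl enable --now firewalld    # start it and enable it at boot in one shot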

Interfacing with firewalld is facilitated by either the terminal command firewall-cmd or by the GUI client firewall-config (which can also partner with firewall-applet, assuming you’re running a GUI). This document will focus only on the terminal interface, especially since most enterprise production servers will be operating headless.

The obligatory commands are available for your typing pleasure:

firewall-cmd --version
firewall-cmd --help
man firewall-cmd

Trust me, the man pages for this program are very good.

Now, although the daemon may be running, the firewall may be in a state where it’s not enforcing. You can query the current state of the firewall using the following:

firewall-cmd --state

You can determine which zones are active (i.e., bound to an interface that has an active connection):

firewall-cmd --get-active-zones

If you wish to see the zone that’s associated with a particular interface:

firewall-cmd --get-zone-of-interface=<ifname>

You can get the names of your interfaces by using either of the following:

nmcli c
ip addr sh

A list of all the zones known to firewalld can be obtained.

firewall-cmd --get-zones

The same can be done to get a complete listing of all hardcoded services that can be used in zone configurations.

firewall-cmd --get-services

Now that you know how to see zones, regardless of whether they’re active or passive, you’ll want to see the configuration of a zone itself.

firewall-cmd --zone=<zonename> --list-all

Again, the name of a zone can be obtained by either listing all of the zones or determining which zone is associated with your active network interface.

A similar breakdown for services is available. Sometimes a service can encapsulate multiple ports or other targets, so knowing what the service identifier is referencing is important. For example, if you want to know what the service ssh contains, you’ll issue the following command:

firewall-cmd --info-service=ssh

Now that we can obtain some rudimentary information about both zones and their services, we can move forward with modifying existing zones. However, there is still a bit more to know before going too far down the rabbit hole.

Aside from services, there are a few other basic properties of zones that you need to pay attention to, especially when considering which zone to use or if you’re designing your own.

Every zone has a target. The target is effectively the default fate of a packet after the zone’s filter rules have been applied – a so-called next-hop for anything that didn’t match. There are three targets available, and any given zone has exactly one.

ACCEPT – Any packet not matching any rule is permitted.
%%REJECT%% – Any packet not matching any rule is rejected.
DROP – Any packet not matching any rule is dropped.

In practise, what this means is that if a zone has a target of ACCEPT, virtually all packets are permitted. %%REJECT%% and DROP will deny packets that don’t match a rule, but a denial under the former triggers an ICMP response back to the source, whereas the latter simply discards the packet with no response. Ergo, under a DROP target it might not be obvious to clients that something is amiss, and the absence of diagnostic messages can make troubleshooting for lower-tier support more difficult than it needs to be.

Next are ICMP Blocks. ICMP provides a few neat features for querying devices on your network. One of the most common ICMP functions is ping, which is used to determine host visibility (which, in reality, is a somewhat erroneous assumption once you understand how the service is classed). However, being able to obtain this kind of information may not be desirable in certain contexts. For example, while you may want certain ports on an infrastructure server exposed, you may also not want the server to be pingable by any random associate. And while there are definitely more robust and reliable ways of achieving this goal, for the sake of this discussion, we’ll say that we simply want to disallow pinging.

ICMP blocks under firewalld come in two flavors: individual blocks and block inversion. To understand this, one need look no further than the zone information for the default zone public. By default, icmp-block-inversion is no and there are no individual icmp-blocks; effectively, all ICMP traffic is permitted. We have two options for blocking ICMP traffic: add individual ICMP types to the zone, or build a permutation of individual blocks and/or a block inversion. The block inversion simply takes the configured ICMP blocks and flips them around, or inverts them. Thus, if we add no individual ICMP types but enable the block inversion, we’re now blocking all ICMP. If we enable the block inversion along with specific ICMP types, we’re permitting ONLY the specified types.

That sounds like quite a bit, but we can summarise it thus:

Basic building blocks of zones are targets, services, ICMP services, and ICMP Block Inversions. Knowing how to manipulate these will go a long way.

This is a gross over-simplification, but knowledge here can make all the difference in most cases.

One last thing regarding changes to firewalld: any changes issued are, by default, memory-resident only. Unless explicitly committed, they’ll be wiped when the system goes down. Adding the --permanent option to your commands ensures that modifications survive power cycles.
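
There’s also a handy inverse workflow: make runtime-only changes, test them live, then commit whatever is currently in effect to disk.

firewall-cmd --add-service=http       # runtime only; gone after a reload or reboot
firewall-cmd --runtime-to-permanent   # persist the current runtime configuration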

Let’s walk through the process of creating a new zone called ZONE_OF_POWER. Its target will be %%REJECT%%; it’ll permit SSH, HTTP, HTTPS, and NTP traffic; and it’ll deny all ICMP except ping. We can accomplish this with the following:

firewall-cmd --permanent --new-zone=ZONE_OF_POWER
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-service=ssh
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-service=http
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-service=https
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-service=ntp
firewall-cmd --permanent --zone=ZONE_OF_POWER --set-target=%%REJECT%%
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-icmp-block={echo-request,echo-reply}
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-icmp-block-inversion
firewall-cmd --reload

Time for a breakdown.

First, notice how all of the statements issued have the --permanent option in them. This is to ensure that our changes are rendered gospel by the firewalld overlords.

The first statement creates a new zone called ZONE_OF_POWER. Zones in firewalld are actually structured XML files, but we’re not going to dive into those here.

The following four statements add the ssh, http, https, and ntp services to our new zone. This means that ingress traffic matching these services will be permitted to pass; everything else falls through to the %%REJECT%% target.

Next, we assign the %%REJECT%% target to our new zone.

Following that, we add two ICMP types, echo-request and echo-reply. These two form the foundation of a ping, and if we stopped here, we’d be instructing firewalld to block pings and permit all other ICMP – not precisely what we set out to do.

Finally, we add an ICMP Block Inversion. This means that we take our current ICMP Blocks and flip them. With this added, we’re now permitting only ping requests and denying everything else.

By the way, as was mentioned before about both zones and services, you can obtain a full list of ICMP types that are stock to firewalld, so you know what to add or remove when dealing with them:

firewall-cmd --get-icmptypes

It’s also possible to add your own ICMP types, but this is beyond the scope here.

The very last statement will force firewalld to reload its configurations. This will permit you to assign ZONE_OF_POWER to an available interface. Speaking of which, if you want to add an interface to this new zone, you’d do it like this:

firewall-cmd --permanent --zone=ZONE_OF_POWER --add-interface=<ifname>

Note that this may throw an error, depending upon how angry DBus is on that particular day. I still don’t know why it happens, but occasionally you’ll get a quark error when attempting to place an interface into a new zone, requiring a reboot of the host to resolve it (at least, my current understanding makes that the path of least resistance). If anyone has any ideas, filling me in would be great.

Finally, let’s talk about custom services. Custom services are useful if you plan on using custom ports or migrating existing services to non-standard ports. For example, if you decide to have SSH operating on port 2500 instead of 22, you’ll likely want to create a new service. While you might be able to modify the existing service definition, it’s probably best to create a whole new service for the sake of clarity and maintenance.

The following statements will create a new service called CUSTOM_SSH and add TCP port 2500 to it. Then, we’ll remove the existing ssh service from our custom zone from above and replace it with the new CUSTOM_SSH service.

firewall-cmd --permanent --new-service=CUSTOM_SSH
firewall-cmd --permanent --service=CUSTOM_SSH --add-port=2500/tcp
firewall-cmd --reload
firewall-cmd --permanent --zone=ZONE_OF_POWER --remove-service=ssh
firewall-cmd --permanent --zone=ZONE_OF_POWER --add-service=CUSTOM_SSH
firewall-cmd --reload

The first statement will tell firewalld that we want to create a new service definition called CUSTOM_SSH. Then we want to add the TCP port 2500 to that service definition. We’ll then reload the daemon so that we have the service available for distribution to other objects. Next, we’ll remove the existing ssh service, and then add the new CUSTOM_SSH service. Once we reload, the firewall should be ready to start permitting SSH traffic on TCP port 2500.*

  * There are several peripheral caveats with this particular example. First, sshd needs to be configured to listen on port 2500. Second, if your computer is running SELinux, you’ll need to tell it to permit SSH traffic on a non-standard port. Both are beyond the scope of this document, but a quick sketch follows.
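
On a Red Hat-family box, those two steps look roughly like this (same illustrative port as above; semanage lives in the policycoreutils python subpackage on most releases):

# /etc/ssh/sshd_config: change "Port 22" to "Port 2500", then:
sudo semanage port -a -t ssh_port_t -p tcp 2500
sudo systemctl restart sshd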

Having finished this document, you should be able to start using firewalld in a basic, if not isolated, sense.


Quick C++/SFML Tips

While I’m writing a series on working with SFML and C++, I thought I’d share some quick and dirty tips for working with SFML that I’ve run into lately. Some of these emerged while branching out to development contexts I’m normally not entrenched in – so you’ll forgive me if they seem axiomatic to you – and others simply failed to make the transition from mind to paper (or screen, in this case).

Getting Started with SFML and Visual Studio

It’s evident after seeing some posts on the SFML forums that people don’t RTFM. TL;DR isn’t a thing to worry about here, so be sure to check out the page linked below. Visual Studio doesn’t require counter-intuitive thought concerning environment configurations – a compiler is a compiler – but the way one configures the compiler is measurably convoluted, especially if you’re used to programming in the Linux world. These steps are also valid if you’re considering creating a DLL to leverage shared code.

SFML on Visual Studio – https://www.sfml-dev.org/tutorials/2.5/start-vc.php

DLL Woes

Speaking of creating DLLs, there’s a nasty little caveat with the default Windows header file. The min and max macros it defines are grossly incompatible with the std::min and std::max function templates in the STL. While not an SFML issue per se, it’s important to be aware of, because it’ll likely creep in when you least expect it, and trying to determine the root cause from the compiler’s output is going to require several witchdoctors and an irrefutable, globally-accepted proof of String Theory. The red herring typically comes in the form of error C2589: ‘(‘ illegal token on right side of ‘::’ (a.k.a. the go-f-yourself error).

The fix for this is the NOMINMAX preprocessor directive. You can either add it as a file-level define at the head of the file, or you can use the Project Properties dialog and add it to All Configurations and All Platforms by navigating to C/C++->Preprocessor, and appending the NOMINMAX option to the Preprocessor Definitions field. If ever you come back to this dialog to ensure that the value was set, you’ll need to drill-down into each configuration and platform to see that the value was applied.
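
The file-level variant is a one-liner, provided it appears before the header does:

// Define NOMINMAX before windows.h so its min/max macros never materialize.
#define NOMINMAX
#include <windows.h>

#include <algorithm>

int smaller = std::min(3, 5); // resolves to std::min – no C2589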

Deleted Copy Constructors, sf::NonCopyable

A core component of a game engine that I wrote is an Asset Manager very similar to the one used in MonoGame, except that it doesn’t use the Pipeline concept. Assets are loaded into memory via PhysicsFS and translated into SFML asset constructs stored in an STL container, specifically std::unordered_map. Some SFML asset constructs, notably sf::Music, inherit from classes that leverage sf::Thread, and, of crucial note, sf::Thread inherits from sf::NonCopyable. While this utility class doesn’t explicitly delete the copy constructor and copy assignment operator, it declares them private. Under C++11 or greater, children of this class effectively have these functions deleted, since the inherited versions are inaccessible. In the absence of STL containers this isn’t much of an issue, since attempted copies or assignments would come from explicit statements that you yourself wrote. When STL containers are involved and you encounter an error from an implicitly deleted function call, we’ve traipsed into another arena where compiler output is infamously horrid to the degree of being near useless.

To give some concrete to my exposition, the offending statement was this:

...
typedef sf::Music sfmusic;
typedef std::unordered_map<std::string, sfmusic> ab_bgm;
...

std::unordered_map leverages std::pair to join the key to the value, and while I haven’t been able to dissect the issue deeper than this, the container’s machinery ends up wanting to copy or move the mapped objects. Because an object that inherits from sf::Thread has no usable copy constructor or assignment operator, and because std::pair is attempting to leverage one of those functions in some way, the compiler is going to throw up in the most flamboyant of ways.

Although what follows is likely not the cleanest or most efficient mitigation, I’ve found that it works. For starters, the declaration changes slightly:

...
typedef sf::Music sfmusic;
typedef std::unordered_map<std::string, sfmusic*> ab_bgm;
...

Next, the member function of the Asset Manager responsible for copying asset data from raw bytes into live SFML asset constructs takes the extra step of manually allocating the object before using sf::Music’s openFromMemory function:

...
case targetloader::bgm:
    // Allocate manually; the map now stores only the pointer, never a copy of sf::Music.
    bgmb[file] = new sfmusic();
    bgmb[file]->openFromMemory(d, f.length());
...

Of course, because we’re now wandering down the path of explicit memory allocation, we’ve got to be responsible for cleaning it up, so the intermediate destructor does some work to delete any allocations in this bank before removing the bank itself.
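
As an aside, here’s a sketch of an alternative worth considering (not what the engine currently does): std::unique_ptr keeps the pointer-based layout but removes the manual delete, since the container only ever moves the pointer and never touches the sf::Music itself.

#include <SFML/Audio.hpp>
#include <memory>
#include <string>
#include <unordered_map>

typedef sf::Music sfmusic;
typedef std::unordered_map<std::string, std::unique_ptr<sfmusic>> ab_bgm;

// Loading then becomes (names mirror the snippet above):
//   auto m = std::make_unique<sfmusic>();
//   m->openFromMemory(d, f.length());
//   bgmb[file] = std::move(m);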