PC builder’s primer

I have helped a lot of people over the years. One thing I really enjoy is helping people learn to build PCs, and helping them pick out the best parts and components for their uses, wants, needs, and budgets. Along the way I’ve figured out some very basic principles that anyone can use to make decisions about a new PC. These apply to self-built or professionally custom-built PCs, and can also help decide whether an off-the-shelf computer would be better. Here I make little distinction between types of computers, be it a desktop, laptop, server, matchbook computer, etc.

The very first thing anyone looking to build or buy a new system must decide is the budget for it. Ideally, that budget should be cash in hand, or capable of being so. In enterprise situations, some companies allow for loans, pay-over-time plans, leases, etc. We’re not going to discuss those payment methods here – if you’re at that level, you should already have a good idea of what’s suitable for your company.

If you’re “cash strapped,” “broke,” or otherwise unable to afford all the parts, components, or systems right away, consider carefully whether it’s wise to spend your limited funds on something you won’t be able to use until you spend even more. With that said, it’s quite possible to piecemeal a system together over time. If you do, it’s very important to buy the best parts you can, as they will age, and by the time the system is bootable it may well be outdated. If you’re unsure whether you can afford this, continue reading for future knowledge – but I highly advise not ruining your life for any material luxury.

Overview of decisions:
1) Budget – as mentioned above. This is most important for most.
2) Form-factor – The physical size and aspects of the device.
3) Architecture – The type of system, specifically based on the CPU
4) Primary use – What does this system have to do?
5) Secondary use – What else should this system be able to do?
6) Environmental conditions – Where will this device live?
7) OS (Operating System) – There are three primary OS types, and many alternative OSes.
8) Number of displays / outputs – One of the more important decisions.
9) Storage – type, capacity, speed
10) Comfort – What is the usability of this system?

As I said, budget is probably the most important concern when building a new system. This is ultimately a personal issue, and what follows is just advice for consideration.

Everyone would love a $10,000 system – but not everyone needs one. Some only have $100 to spend at a time, and that’s OK too. Most basic systems can be built for less than $1,000, and these systems can be expanded and upgraded later. Depending on the upgrade or expansion, some items may need to be completely replaced; this should be considered when budgeting.

If piecemeal building, it is best to first purchase the components which are somewhat future-proof. One of those components is the case. If building a standard PC, cases used on PCs from the ’90s are still suitable for builds today, because they are designed around the ATX standards. More on that later. The power supply (PSU) is also more or less future-proof, provided a reasonable wattage is purchased and the future system will use less than that wattage. RAM specs change often; however, as long as the RAM purchased is the same generation (DDR3, DDR4, etc.) as what will be used in the future system, it can be used. RAM speed may be a limitation, but as long as it’s the same generation, it’ll work until you can afford to upgrade it. Hard drives, especially SATA drives, still have many years of usability ahead of them; M.2 NVMe drives, however, are the future. These are all things to keep in mind when building a system piecemeal – the wrong component purchased now can quickly become outmoded and unusable, and a waste of money.
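On the PSU point, a common rule of thumb is to keep sustained draw at or below roughly 80% of the PSU’s rated wattage, leaving headroom for upgrades. Here is a minimal sketch of that arithmetic – the component wattages and the 80% target are illustrative assumptions, not values from any formal specification:

```python
def psu_headroom(component_watts, psu_watts, target_load=0.8):
    """Estimate total draw and whether the PSU leaves headroom.

    target_load is a rule-of-thumb assumption (~80% sustained load),
    not a figure mandated by the ATX specs.
    """
    draw = sum(component_watts)
    load = draw / psu_watts
    return {"draw_w": draw, "load": round(load, 2), "ok": load <= target_load}

# Illustrative draws: CPU 105 W, GPU 220 W, board/RAM/drives/fans ~75 W
print(psu_headroom([105, 220, 75], psu_watts=650))
# → {'draw_w': 400, 'load': 0.62, 'ok': True}
```

A 650 W unit comfortably covers this hypothetical 400 W build, leaving room for a future GPU upgrade.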

There are many, many form-factors. Some are decades-old standards that won’t be going away any time soon; some are new standards that may or may not survive; and others are completely proprietary and won’t be usable with standard components.

Phones, phablets, tablets, wearables, etc. are all proprietary form-factors. There have been a few attempts at creating modular standards for phones, but these have not come to fruition.

Ancient, outmoded standards include the AT form-factor, as well as other forms such as IBM’s original PC, SPARC desktops, etc. Unless you’re an enthusiast or hobbyist, stay away from these.

Some of the newer standards, some more proprietary than others, are in the matchbook or palm-size computer field. This includes things like the Raspberry Pi. Many of these use the pico-ITX standard, either in full or in part, but compatibility isn’t guaranteed.

The more standard PC form-factors descend from the original AT standard. These include ITX (pico-ITX, nano-ITX, mini-ITX), DTX (mini-DTX, DTX), and BTX (pico-BTX, nano-BTX, micro-BTX, BTX). (Note: the DTX and BTX standards have mostly fallen out of favor; additionally, first-generation BTX cases were simply “upside down” ATX cases, while later BTX cases were completely different.)

Finally, we have ATX – the tried and true form-factor upon which most current (and more than likely future) desktops, workstations, workgroup servers, and DIY rack-mount servers are based. This includes Flex-ATX, micro-ATX (aka μATX, uATX, mATX), ATX, EATX, and many purpose-specific variations.
[note: All of the above standards are loosely listed by physical size]

These standards define motherboard and PC case dimensions and mounting-hole positioning. Nothing in these standards dictates the intended purpose of the end product. It is possible to build a high-availability server using a mini-ITX motherboard and case, or to build a gaming rig with an EATX board and case. The component quality, component function, and OS generally dictate the use of a system.

It is not only possible, but very common, for a smaller motherboard to be used in a larger case, such as a mini-ITX motherboard mounted in an EATX case. Conversely, a larger motherboard cannot fit within a smaller case. And though some cases may be the physical size of an EATX case, they may be designed as ATX cases, in which case an EATX motherboard is not usable – even if it can physically fit. Forcing it can create hazards for the components, even shock and fire hazards.

Generally, the vast majority of home PCs are based on μATX and ATX. Name-brand manufacturers are also building newer “SFF” (small form-factor) desktops based on Flex-ATX and mini-ITX.

Some of the major applicable differences between these standards are the number of RAM slots and the number of expansion slots (PCI, PCIe, and now M.2) available. In the more common standards, there is space allocated for 1 to 6 RAM slots, though not all slots may be installed, and for 1 to 7 expansion slots. With the advent of M.2 NVMe storage, some PCIe slots are removed to make way for M.2 slots. More niche standards include up to 10 PCIe slots and up to 16 (or more) RAM slots; those are out of scope for this discussion, mentioned only for completeness.
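To make those trade-offs concrete, here is a small lookup sketch of typical slot counts by form-factor. These are common board layouts, not numbers mandated by the standards – individual motherboards often populate fewer slots:

```python
# Typical maximum RAM and expansion slot counts per form-factor.
# Common board layouts, not values required by the specs.
FORM_FACTORS = {
    "mini-ITX": {"ram_slots": 2, "expansion_slots": 1},
    "microATX": {"ram_slots": 4, "expansion_slots": 4},
    "ATX":      {"ram_slots": 4, "expansion_slots": 7},
    "EATX":     {"ram_slots": 6, "expansion_slots": 7},
}

def fits_build(form_factor, ram_sticks, cards):
    """Check whether a planned build fits a typical board of this size."""
    ff = FORM_FACTORS[form_factor]
    return ram_sticks <= ff["ram_slots"] and cards <= ff["expansion_slots"]

print(fits_build("mini-ITX", ram_sticks=2, cards=1))  # → True
print(fits_build("mini-ITX", ram_sticks=4, cards=1))  # → False
```

A four-stick, three-card build simply cannot land on a typical mini-ITX board, no matter the budget.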

The form-factor standards do not necessarily specify other aspects of the case or motherboard. One of the more important components directly affected by these standards is the power supply unit (PSU) – with ATX PSUs being usable in most cases compatible with mATX or larger. ATX PSUs are the most standard, and as such are much more readily available and usually less expensive per watt.

Architecture:
This is the logical specification the system is based around. There’s some confusion on various media sites as to what constitutes an architecture. Here, we’re going to focus on the commonly implemented ISAs (instruction set architectures). There are two common, modern, in-use ISA types: CISC and RISC. CISC is “Complex Instruction Set Computer,” while RISC is “Reduced Instruction Set Computer.” There are several ISAs based on CISC and RISC designs. Unless you’re doing something very specific with a lot of knowledge, the decision boils down to x86 or Arm (previously written as ARM – forgive me if I mistakenly use “ARM” in lieu of “Arm”).

Arm (Advanced RISC Machine) is an ISA based on RISC, and is the most common processor type in use on the planet. Surprised? Arm CPUs are used in the lion’s share of tablets, phones, wearables, and embedded systems (such as cable boxes and DVRs) – and, if Apple holds true, will be used in their desktop computers via the A-series processors from their mobile devices. Other companies have additionally been producing Arm-based laptops and desktops for several years, though these systems have caught on only with hobbyists, in academia, and in other niche markets. There are many, many implementations of Arm CPUs, and not all of them are compatible with each other, due to proprietary extensions to the ISA which the OS or software may rely upon heavily. Very basic software written against the base Arm ISA can be used across Arm CPUs; however, the OS still needs to be able to load the software, so incompatibilities may persist.

The most common ISA in use in “standard” computers, from laptops to servers, is x86. This ISA started in the late ’70s with the Intel 8086. An early successor is the 32-bit i386 set, used in the Intel 80386, Pentium, Pentium II, and Pentium III processors. To this day, processors based on x86 are technically still compatible with the original 16-bit 8086 instructions, though in reality it is quite difficult to bootstrap 16-bit code on modern CPUs. Currently, the most common implementation of the x86 ISA is AMD64, named after the 64-bit extensions to the x86 specification that AMD pioneered (or at least brought to market before Intel’s 64-bit specs). Today, all 64-bit x86 processors are based on the same AMD64 extensions, including Intel CPUs. Intel, via its Itanium products, had a 64-bit CPU before AMD even endeavored into 64-bit land. Though Intel created those CPUs, they were not compatible with x86; their ISA is identified as IA-64, with classic 32-bit x86 referred to as IA-32. Over the years, some confusion has been created by the shorthand “x64,” which generally refers to 64-bit x86 – more properly written as x86-64 or AMD64.

The decision here is thus a three-way decision: x86-64, Arm, or go off the deep end with one of the plethora of alternatives. The most readily available stand-alone CPUs come from Intel and AMD. The most common Arm CPUs, as stand-alone products, require a LOT more work to build a system around, and so should be purchased as part of a mainboard. There are very few ATX-compatible Arm-based motherboards on the market, and those have the Arm CPU included, usually soldered on. The most common Arm-based systems are phones, tablets, and other mobile and embedded systems, including the Raspberry Pi, and (soon?) Apple Macintosh desktop computers.

The only real option for the everyday builder is x86-64. Thus, the decision becomes a choice between Intel and AMD CPUs. Anything else requires a large investment of money, time, or knowledge – and probably a fair amount of all three.
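If you’re curious which ISA the machine you’re reading this on uses, Python’s standard library can report it. A quick sketch – the exact strings returned vary by OS and hardware:

```python
import platform

# platform.machine() reports the ISA the running kernel exposes:
# typically "x86_64"/"AMD64" on 64-bit x86, "arm64"/"aarch64" on Arm.
machine = platform.machine().lower()

if machine in ("x86_64", "amd64"):
    print("64-bit x86 (x86-64 / AMD64)")
elif machine in ("arm64", "aarch64"):
    print("64-bit Arm (AArch64)")
else:
    print(f"Other/uncommon ISA: {machine}")
```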

Primary Use:
If you’re building a computer to run Windows or standard distributions of GNU/Linux, you’re probably going to want x86-64. However, if you’re building something more purpose-specific, such as a vehicle PC, a Raspberry Pi or similar may be more suitable. The primary use carries a large amount of weight in deciding upon architecture, CPU, form-factor, etc.

In an “expense is no issue” world, throwing all your money at this would give you a system you can use for almost anything. Almost anything – specific directions taken can eliminate other potential, specific uses. That’s likely not to be an issue, though.

There are several common primary uses for PCs. These include gaming, development, design, art and rendering, heavy office work (such as massive spreadsheets or stock trading), light office duty with single or limited application use (such as scheduling software), light home use, and more general-purpose / heavy home use. We can boil these down to “low end,” “high end,” “gaming,” “enthusiast,” and “scientific/academic,” with the variables being “low power,” “high reliability,” and “specialized.”

Low-power PCs are suitable for light office work, monitoring systems, or entry-level home use, or anywhere that low (or no) noise and low heat production matter more than high performance.

High-reliability PCs are intended to keep running and be available for use at any time. This may be a requirement for industrial, manufacturing, or school work, etc. Often, this term is applied to servers, with standard PCs generally having a reasonably high level of availability; it may be applied to desktop PCs which are never powered off as well.

Specialized PCs are those with an unbalanced focus on one or two specific aspects. File servers may have a very low-end CPU, a bare minimum of RAM, and no video, but tens or hundreds of TBs of storage. Digital art computers may have lots of RAM, very fast GPUs designed for art rather than gaming, a moderately fast CPU, and just enough storage for the finished art. Gaming computers, on the other hand, will often focus on faster CPUs with more cores, coupled with high-end gaming graphics cards and very fast storage I/O.

It may seem odd to mix these variables with certain use-case types. However, a secretary who only answers the phone to schedule appointments or relay communications may require a low-end, highly reliable PC. This PC may have a low-end CPU, on-board graphics (which, honestly, are getting better by the year), limited storage space on slower drives, and bare-minimum RAM – but all of the components are of higher quality, to ensure that no hardware issue prevents the secretary from working.

Low-end PCs are usually the least expensive options and build-outs. They will handle light-duty work, from Grandma checking for her grandson’s emails to Uncle Bill updating client websites. These systems are the first to be outmoded by technological advances, becoming useless with advances in web tech or unable to run new OSes. Generally, expect a 2-3 year life span, unless they are specialized systems where power and performance will never be an issue. These may have an Intel Core i3 or AMD Ryzen 3 CPU.

High-end PCs are generally more common, and more general-purpose. These are PCs designed for light gaming, heavy web use, developers who compile code on their PC, heavier office work, etc. These are often the $800-$1600 PCs sold by name brands, and include CPUs in the Core i5/Ryzen 5 series.

Gaming PCs often pull from both high-end and enthusiast PCs. A good gaming PC would qualify as a high-end PC, with the addition of a high-quality GPU and specific changes to give the user better performance in AAA-title and graphically demanding games. Often these PCs will also be used for digital art, audio/video rendering and streaming, CAD/CAM production, stock trading (mostly due to the capability of driving multiple high-resolution monitors), and similar situations where high-bandwidth, low-latency, heavy work is done. These often use Core i7/Ryzen 7 or better CPUs.

Enthusiast PCs take the gaming PC to the next level, often being used for gaming where very high resolution at very high FPS is desired, regardless of the complexity of the game. These systems are also used for render farms, cryptocurrency mining, high-level development, virtualization environments, home servers, and “just because.” These will have, at minimum, i7/Ryzen 7 CPUs, and often are based on higher-end CPUs, such as the i9/Ryzen 9 series, Ryzen Threadripper, Xeon workstation CPUs, or sometimes AMD EPYC and Intel Xeon server CPUs. They are often overkill for daily use, or where financial gain is not involved. These are often designed as higher-end gaming PCs with emphasis on some other aspect, relying on higher-tier components to ensure high reliability and availability.

All of these can be scaled to various levels, where the amounts of RAM, storage, CPUs/cores, GPUs, etc. are balanced to best suit the needs of use.

Secondary Use:
Not all computers are multi-use; some have a single use and will never see any other use until they’re tossed in the scrap pile. However, the great thing about PCs is that they are by nature much more general-use than purpose-built computers, such as those found in industrial controls, embedded systems, or even appliances and automobiles.

The secondary use may not even be how the system is usually used. However, if the computer will be used for gaming some of the time, and writing web pages the rest of the time, then its primary use should be gaming. It is much easier to under-utilize a computer than to use it for heavier loads than it was designed for.

There’s very little difference between primary and secondary uses. Let’s say your PC is used for light home use most of the time and occasionally for gaming, and that heat and noise levels are important, but availability is not. Finding a solution that balances all of these is important, and not really difficult. There are technologies built into modern PC components to put the CPU, GPU, and overall system into lower-power modes. This allows a decrease in noise and heat production when the system is lightly used, while it can still ramp up for gaming. On the inverse, a gaming PC that is occasionally used for web browsing and is off the rest of the time may be better built using higher-tier components, so more enjoyment and less hassle is encountered while gaming, at the potential cost of additional heat and noise.

Environmental Conditions:
This is one of the most important things to consider when building or buying a PC. These conditions include air quality, ambient air temperature, humidity, dew point, air flow, aesthetics, accessibility, safety, mobility, shelter, even power conditions and network access – and changes in any of these.

Ideal conditions are:
Air quality: Computers should always be in the cleanest air possible. There are specialized computers built in IP66-rated cases for heavily contaminated air, but these require external cooling. The dirtier the air, the more often the system will need to be powered down and cleaned.
Temperature: 40-70 degrees Fahrenheit ambient air, with 80+ being abusive.
Humidity: Under 60%. Above 60%, the system is in danger of various problems, including excessive dust collection. The lower, the better.
Dew Point: This is a continually changing variable; however, you never want a PC to be powered in an environment where dew can form in it, or even on it.
Air Flow: This is a two-sided condition: first, the amount of fresh air available to the computer, and second, the amount of air flowing through the computer. If either of these is hindered, the other will be adversely affected. A computer with no through-case air flow, even in a 40-degree room, will be unable to move heat away from the components properly. A computer with lots of through-case air flow, but which sits in a small or sealed space, may recycle the same hot air through the case – making heat dissipation from the components sub-optimal, or even non-existent.
Aesthetics: This is the most trivial of conditions; however, it’s important to consider aesthetics where they’re non-trivial. A cheap, ugly case may be inappropriate for a public area of a business with high aesthetic standards. Cases come in almost any color as well, and a black case may be an eyesore in a brightly colored and themed space.
Accessibility: No matter how awesome a computer is built, there will be times when cables need to be plugged in or removed, the power and reset buttons need to be pressed, and of course, the case needs to be opened for cleaning. Bolting a computer 9 feet up on a wall may look cool, and may even be great for cooling, but can it be reached easily? Conversely, security may be a concern as well: should it be accessible by just anyone?
Safety: Lots of people love to put their computers under their desks – where they’re vulnerable to being kicked, having drinks spilled on them, and being messed with by pets and children, which is potentially a hazard to both the pet/child and the PC. This is an electrical device, and all the hazards therein apply – not just electrocution hazards, but also flammable-atmosphere concerns: PCs should not be used near flammable chemicals (such as next to a gas can in a mechanic’s shop). Cases can have very sharp edges, and even those plastic fans can mangle a child’s finger.
Mobility: Will your computer be moved often? Would a high-end laptop be a better choice? In a time long ago, before the internet was fast enough to support large multiplayer games, there was a social gathering called a LAN party; gamers would build computers that could be set up and disconnected quickly. Something else to consider here is wired vs. wifi networking.
Shelter: The vast majority of PCs are installed and used indoors. However, there are occasions when a PC is to be used outdoors. Along with all of the previous environmental conditions, steps should be taken to ensure that the PC will not be rained on (or sprayed with a hose), is not left in direct sunlight in regions where the sun beats down hard, etc. The point is that a PC can be installed in a non-indoor, non-conditioned space, but precautions must be taken to protect it from the elements.
Power and Network: In some states (LOOKING AT YOU, FLORIDA!), one of the worst dangers to a PC is the electrical power it’s plugged into. From over-voltage spikes to under-voltage dips and brownouts, these power conditions can put undue strain on the PSU, and thus the whole computer. To combat these issues, an on-line battery backup unit (Uninterruptible Power Supply – UPS) should be used. The UPS should also be plugged into a good-quality surge protection unit, specifically to protect the UPS itself. In the old days, dial-up modem expansion cards would often take a surge through the phone line, and if the modem itself was not destroyed, it would pass the surge through the PCI (or ISA) slot into the motherboard. Now, with Ethernet much more prevalent in homes and offices, often with hundreds of feet or even miles of connected wiring, surge dangers are also much more prevalent. This can be remedied either by using wifi or by utilizing surge protection on the network. Note that wifi is much more vulnerable to other issues, including RF interference, which can cause connectivity problems.
Changes in these conditions: It’s possible that the perfect setup is achieved, and one day things need to change. How capable is this PC of surviving those changes? Honestly, this isn’t something one can properly plan for in all situations, but it’s something to keep in mind nonetheless.
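On the UPS point above, a back-of-the-envelope runtime estimate is battery watt-hours times inverter efficiency, divided by the load in watts. A sketch – the 85% efficiency figure is an assumption, and real runtime is shorter at high loads due to battery chemistry:

```python
def ups_runtime_minutes(battery_wh, load_w, inverter_efficiency=0.85):
    """Rough upper-bound UPS runtime estimate, in minutes.

    battery_wh: battery capacity in watt-hours
    load_w: sustained draw of the protected equipment in watts
    inverter_efficiency: assumed conversion loss (~0.85 is a guess)
    """
    return battery_wh * inverter_efficiency / load_w * 60

# A small 12 V, 9 Ah battery (~108 Wh) feeding a 300 W PC:
print(round(ups_runtime_minutes(108, 300), 1))  # → 18.4
```

Under twenty minutes – enough to ride out a brownout or shut down cleanly, not to keep gaming through an outage.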

Operating System (OS):
There are many, many OSes which have been produced, marketed, and made available over the years. Many of these are still in use and kept viable with updates and upgrades. Not all of these OSes are compatible with each other, or have support for general use. There are three types of OS: server OSes, desktop OSes, and niche OSes. Some lines are blurred between these, but we’ll keep away from those lines here.

First, it must be understood that an OS is simply software loaded onto a computer to provide some form of user interface to run and load other software. With the advent of virtualization, OSes can even be installed inside other OSes or software (hypervisors). We’re going to focus on OSes installed directly on hardware, however.

Server OSes are installed on hardware (traditionally on highly available, highly reliable, high-quality computers). These OSes are intended for continual operation and multiple-user access – either directly, through software services such as a web server or multiplayer game, or by hosting high-level services for other systems to access, such as SQL or file servers. Common, modern server OSes boil down to four categories: UNIX, Linux, Windows Server, and macOS Server (which is based in part on UNIX, but with many, many changes). These OSes have hardware and ISA requirements, and as such, not every server OS is capable of running on any given server host.

Niche OSes can be further organized into sub-categories for scientific, industrial, embedded, academic, and proof-of-concept systems. Some prominent niche OSes are MINIX, BlackBerry OS, Symbian, Windows Phone, ReactOS, and MenuetOS. Android and Apple’s iOS also fit here, as their intended and practical use is in embedded systems and phones.

Desktop OSes: This is more likely the route you will take, if you’re reading this blog post for your first bits of information. These are generally available as commercial products, but there are many freely available and even open-source OSes as well. Some of these OSes work only on specific computers, and some have ports available for many computers and ISAs. The most common OSes are Microsoft’s Windows, Apple’s macOS (with all of its name changes over the years), GNU/Linux (with all of its variant distros and specialties), and Chrome OS. Many distributions of GNU/Linux have ports available for Arm and x86-64. Windows has been ported to various systems in the past, but its primary architecture is x86, with heavy emphasis on x86-64 now. Aside from some legal hindrances, Apple’s macOS is available only for Apple-built computers, even though these computers are merely modified x86-64 PCs (so little modified that Windows and GNU/Linux can be installed on modern Macs). Chrome OS is also generally hardware-locked to Chromebook netbooks produced by Google and partners. There are also efforts to port various open-source OSes, such as Android and “Darwin” (the open-source base of Apple’s macOS), to the x86 platform, with mixed results. The difficulty level increases exponentially with these alternatives.

Assuming you’re building an x86-64 PC, and you’re not wanting to violate licensing and terms & conditions, you’re left with two choices: GNU/Linux and Microsoft Windows. There are many alternatives, with varying results and installation difficulties, but again, since you’re reading this for info, we’ll go with the two simpler options.

Currently, there’s (practically) only one supported Microsoft OS – Windows 10. Windows 8.1 is nearing end of life and will need to be upgraded to Windows 10, so it should not be seen as a viable route. Windows 10 has several variants as well: Home and Pro, plus several editions for Enterprise, Education, and others. The decision here is between Home and Pro. For the average user, there’s little in the way of relevant differences, the most important being the data-security functions included with Pro but not Home. Additional higher-level add-on components may also not be available for the Home edition. There are many pages dedicated to explaining the differences, so I’ll leave that to those pages and sites. From here on, we’ll refer simply to Windows 10, not to specific editions.

As for GNU/Linux – there’s a handful of trusted and easy distros. A distro is similar to one of Microsoft’s editions; however, whereas Microsoft produces every edition of Windows 10, distros are produced by different companies, groups, and organizations. Some GNU/Linux distros are not freely available, and are offered as supported commercial products – though those companies have freely available distros as well. Some of the top GNU/Linux distros are Linux Mint, Arch Linux, openSUSE, Debian, Fedora (from Red Hat), elementary OS, and my personal favorite, Ubuntu – specifically Xubuntu, a variant of Ubuntu which comes with the XFCE desktop environment. There are way too many companies, distros, variants, desktop environments, and software packages to cover them all here. As there are freely available distros with all kinds of combinations of these, only personal experience can decide the best distro for each person.

For a more general use, friendly and easy to use GNU/Linux distro, I highly suggest starting with Ubuntu.  It has a well polished interface, high compatibility for software, and is a stable and proven system.

One very important aspect of the OS to consider is the availability of software. The vast majority of commercial productivity and entertainment software is written for Microsoft Windows. This includes almost every AAA-title game, with many having GNU/Linux ports of less-than-great quality. For the last few decades, the “good” games have been written to use DirectX, a platform available only on Microsoft Windows and Xbox consoles. If you’re intending to build a gaming system, you’re pretty much left with Windows 10. There are some open-source projects to implement DirectX compatibility on GNU/Linux – again, with less-than-great success. If you never need to run high-end games, GNU/Linux is a less expensive choice. Check whether the games and other software you need have GNU/Linux ports.

Displays and outputs:
It used to be that PCs could have a single monitor, a printer, and a speaker – and that was the extent of any output from the computer. Now, it’s common for PCs to have 2+ monitors, network-connected printers, full surround speakers, VR headsets, LED displays (and RGB lights), projectors, connectivity to wireless devices and headsets, and many more human interface devices. There are also many more input devices than there used to be. The mouse, for example, did not always exist. Game controllers, 4K cameras, high-resolution microphones, video capture cards – even TV tuners were a big thing for a while. We even have network-attached storage systems now.

We used to have a multitude of connectors on PCs, too: RS-232 serial, parallel/printer ports, video out (sometimes not even VGA), PS/2 keyboard, PS/2 mouse, 3.5mm stereo out and mic in, and often dedicated game-controller ports. Connectivity was by external modem, then internal modems, token ring cards, and eventually Ethernet ports built into the motherboard. There were expansion cards to provide additional ports, some proprietary. At one time, hard drives and CD drives required expansion cards. We are moving ever closer to everything being connected via USB, with technology steaming ahead toward USB-C ports for USB 3.

It is important, at least for the near future, to ensure your new computer has the ports and functions you’ll need, in the quantities you’ll need. USB 3 ports are becoming more and more important, in both USB-A and USB-C forms (on the PC side). Many devices, however, are still only USB 1.1-capable, and may negatively impact USB 3 devices. Keyboards, mice, and even headphones don’t require the extra bandwidth of USB 3. The other important port to ensure you have is Ethernet. Currently, 1000Mb (gigabit) Ethernet is the most common, though 2.5, 5, 10, and even 100 gigabit Ethernet are available (with increasing costs). The future may determine that 2.5Gb or even 10Gb becomes the mainstay for high-end networks. Please note that just because the PC is connected to the network at 10Gb does not mean you will have 10Gb internet; it just means computers on the same local area network (LAN) can communicate with each other at the faster speed, provided all the devices between the two computers are capable of it.
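To put those link speeds in perspective: network rates are measured in bits while file sizes are in bytes, so a quick estimate multiplies the file size by 8 and divides by the link rate. A sketch, where the 90% efficiency factor is an assumed protocol-overhead fudge:

```python
def transfer_seconds(file_gb, link_gbps, efficiency=0.9):
    """Estimated time to move file_gb gigabytes over a LAN link of
    link_gbps gigabits/s, ignoring disk and protocol bottlenecks."""
    return file_gb * 8 / (link_gbps * efficiency)

# A 50 GB file over gigabit vs. 10-gigabit Ethernet:
print(round(transfer_seconds(50, 1), 1))   # → 444.4 (seconds)
print(round(transfer_seconds(50, 10), 1))  # → 44.4 (seconds)
```

Ten times the link speed means roughly a tenth of the wait – assuming the drives on both ends can keep up.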

I highly suggest finding a motherboard or pre-built PC with 2 USB 1.1/2.0 ports, 4+ USB-A 3.x ports, and at least one USB-C 3.2 port.  A “six-pack” of 3.5mm ports is optional, for multiple speakers and audio inputs.  There is limited common use for serial ports, parallel/printer ports, FireWire, PS/2, or game ports.  Even modern external storage devices are moving to USB 3.x – thumb drives, SSDs and hard drives.  The only exception here is if you have a nice keyboard, or are intending to be a “pro gamer” and want to use a PS/2 keyboard.  PS/2 keyboards are segregated from other devices, and as such have lower latency and less chance of interference from other devices or system issues.

With a motherboard with plenty of USB 2 and 3 ports, there’s limited use for expansion cards in most cases.  The last few expansion cards most people will ever use are graphics cards, high-end audio cards, and video capture cards.  Other cards for enthusiast and business use include RAID cards, additional network cards, and high-end storage cards.  There are few uses for expansion cards outside of niche cases.  Someone building a 2-monitor gaming rig might only need one PCIe x16 slot for a high-end GPU.  Others might need to install 3 or 4 x16 cards; however, some of those cards will need to be installed in x8, x4 or even x1 slots, limiting throughput and performance.  Almost always, the x16 slot nearest the CPU should be used for the (primary) graphics card.  Check the manual for any specific motherboard to see what the physical and logical form of each expansion slot is.  Many times, the x16 slots furthest from the CPU are logically only x8 or x4, and in some cases x1.  This allows these slots to be used with x16 cards which are capable of operating in slower modes.  Generally speaking, most people will never need to worry about this, the biggest concern being whether the motherboard has 1 or 2 logical x16 slots for 2 graphics cards.
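To put the x16/x8/x4/x1 distinction in numbers, here’s a rough shell sketch of usable PCIe 3.0 bandwidth by lane count, assuming 8 GT/s per lane with 128b/130b encoding (PCIe 4.0 roughly doubles these figures):

```shell
# Approximate usable bandwidth of a PCIe 3.0 slot, in GB/s, by lane count.
pcie3_gbps() {
  lanes=$1
  # 8 Gb/s raw per lane, scaled by 128/130 encoding efficiency,
  # divided by 8 to convert bits to bytes
  awk -v n="$lanes" 'BEGIN { printf "%.2f\n", n * 8 * 128 / 130 / 8 }'
}

pcie3_gbps 16  # x16 slot -> about 15.75 GB/s
pcie3_gbps 4   # x4 slot  -> about 3.94 GB/s
pcie3_gbps 1   # x1 slot  -> about 0.98 GB/s
```

A card designed for x16 dropped into a logically-x4 slot gets roughly a quarter of the throughput – which is exactly the concern when checking a motherboard’s physical vs. logical slot layout.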

In the old days, computers had to be programmed every time they were turned on.  Let’s skip a whole bunch of generations of storage technology to the floppy diskette and IDE drive era.  Next were CD and DVD drives, at first using expansion cards, then IDE/ATA.  Then SATA came along; hard drives and disc drives used SATA, with floppy drives falling out of favor.  SATA has been around a long time now, but is being replaced again.  There are several new technologies now, but the dominating form-factors are M.2 and U.2, with SATA-III remaining relevant.  Things get more complex with M.2 being either SATA or PCIe – these two standards are not interchangeable.  M.2 M-key connectors are used for M.2 NVMe SSDs; M.2 B-key connectors are used for M.2 SATA SSDs, as well as some other small expansion cards, such as Wi-Fi adapters.  U.2 drives are physically similar to standard 2.5″ SATA drives, with a higher-bandwidth connector based on PCI Express and SATA.

Until advances are furthered, M.2 NVMe and SATA-III (SSD or HDD) drives should be the target products for purchase.  NVMe drives are a little more costly per GB than SATA-III SSDs or HDDs, but utilize the fastest currently available method to read and write data.  SATA-III HDDs are by far the least expensive per GB, but are also the least performant.  The cost of SATA-III SSDs has come down in recent years.  It’s not uncommon to see modern PCs with a mix of two or all three of these drive technologies.  Often, a smaller-capacity NVMe is used for the OS boot drive, with a SATA-III SSD used for applications and commonly used data, and HDDs used for large quantities of storage.  It is possible to use a single NVMe, SSD or HDD drive for all data.  SATA-III SSDs are still the best balance of cost, capacity and performance, and remain the most compatible, as not all motherboards ship with M.2 or U.2 connectors – and if they do, the M.2 may be a B-key, SATA type (which is not the same as standard SATA-III, and not directly physically compatible).
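The cost trade-off above can be made concrete with a quick cost-per-GB comparison.  The prices in this sketch are hypothetical placeholders for illustration only – check current prices before buying:

```shell
# Cost per gigabyte, in cents, for a drive of a given price and capacity.
cost_per_gb() {
  price_cents=$1 # drive price in cents
  capacity_gb=$2 # drive capacity in GB
  awk -v p="$price_cents" -v c="$capacity_gb" 'BEGIN { printf "%.1f\n", p / c }'
}

cost_per_gb 10000 1000   # hypothetical $100 1TB NVMe     -> 10.0 cents/GB
cost_per_gb  7000 1000   # hypothetical $70  1TB SATA SSD -> 7.0 cents/GB
cost_per_gb 16000 8000   # hypothetical $160 8TB SATA HDD -> 2.0 cents/GB
```

Run against real prices, this kind of comparison is what drives the common NVMe-boot / SSD-apps / HDD-bulk-storage split described above.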

A good mixed-device setup might have a 120GB NVMe, a 1TB SSD and one or more 3TB+ HDDs for storage and on-system backups.  A USB-C/3.2 external hard drive is a great solution for near-line backups.  Off-site backups for important data should be performed as well, either to a NAS at another location, or to an online storage provider, such as Google Drive or OneDrive.

Most people don’t require more than 120GB to 1TB for their computers, and a single NVMe can handle all of their storage needs.  Gamers, on the other hand, may have 4–5TB of game installs – or, if you’re like someone I know, almost 8TB!  These are all things to consider when designing or picking your new PC.  It’s far easier to have extra space than to have to upgrade storage.  It’s also more costly.  That’s a balance each person needs to figure out for their own use and needs.  External USB drives are always an option, but can be cumbersome.

The main aspects of storage are capacity, I/O speed, cost, physical size, and compatibility.  An ultra-fast 120GB NVMe may cost more than a 2TB SATA-III SSD, but per gigabyte, both are still more expensive than a 12TB 3.5″ SATA-III HDD.  Gamers, designers, artists, and similar users may opt for SSDs, so they have a good balance of read/write speed and capacity.
A secretary answering phones may only need the performance of an SSD, but an NVMe would ensure the system doesn’t stutter on I/O, and its much smaller size means the workstation can fit in a smaller case.

This section is more intended to deal with various aspects outside of the PC build itself.  However, some things to consider with the PC case and overall build are lights, noise and heat production.  RGB lights have been “all the rage” for a while now.  They’re pretty, but they can be distracting and annoying, and can even cause sleep issues if the PC is in the bedroom.  The same goes for noise.  Heat, on the other hand, should never be an issue provided environmental conditions are maintained properly.

Other aspects of comfort in regards to a PC are the desk height for the keyboard and mouse, the distance between eyes and monitors, the height of the monitors relative to the user’s eyes, and the number of displays – constantly turning one’s head to view extremely wide screen areas will cause fatigue.  For a while, glossy bezels on monitors and TVs were in style; however, it has been proven time and again that low-shine bezels are much less distracting and much easier on the eyes.

It is important that any computer chair be tested by the user.  If the chair will be used for more than an hour at a time, it should be very comfortable.  There are three types of chairs best suited for computers – and they’re all essentially the same thing: an office chair.  Computer chairs, gaming chairs and office chairs are all very similar, with the differences being mostly in cost.  “Computer chair” is a marketing term used for chairs more suitable for use at a computer – somehow different from an office/desk chair.  Gaming chairs’ fame comes from their more interesting designs and colors.  Standard office/desk chairs are often much more suitable for long sessions at the PC; these chairs are designed for 8-hour shifts.  Whenever possible, a chair rated for 300lbs is better, even if you only weigh 120lbs.  They are often much better built, with better quality padding and surface materials, and will last a very long time for anyone under their weight limit.  For those who are heavier (read: HEAVIER, not FATTER – Jason Momoa, for example, is heavier, but not fat), getting a chair rated above their weight is critical for longevity and personal safety, as well as prolonged comfort.

Every other component which you will directly interact with – keyboard, mouse, headset, VR headset, whatever you will physically be touching or wearing – should be tested for comfort, and any annoyances corrected.

The experience can be completely ruined, even with a $10,000 PC, if the user is not comfortable.

SSH Jump Server

SSH jump servers are nothing new; they’ve been around a long time.  The very first implementations were simply an outmoded server running an SSH service, which allowed the user to create a new SSH session to any number of servers on the LAN.  This limited public-private connection security concerns to a narrow area: just the jump server.  All other servers would be inaccessible from outside networks (including the internet, when it came about).  This is a very primitive method, and though it can still be found in some non-critical networks, it’s far from ideal and nowhere near best practice.

The very basic premise of an SSH jump server is to give users and admins a single point of connection which can then be used to connect to privileged servers and services.  The more basic the method used to achieve this, the less secure the whole system can be.  The one main flaw in the original design is that once an intruder has access to the jump server, they potentially have access to the entire network.  This is a conversation that can delve very deeply into inter-host security, firewalls, application-level security, etc.

Any time a person designs, implements, modifies or (re)creates a network, network security scheme, system, or connection scheme, that person is acting in the role of a systems and network engineer.  Acting as an engineer and being a qualified, certified engineer are two different things.  Not all engineers are proficient, qualified or at all good at the tasks.  With that said, I will assume that any person performing these tasks is an engineer – either qualified or not.

It is up to the engineer to keep in mind many aspects of their task, in this case, designing and implementing an SSH jump server.  Those aspects can be boiled down to: security, performance, availability, efficiency, and overall connectivity.  These aspects of systems and networks apply to every system, network, application, etc.

Security should always be in mind – if not the one thing every engineer is primarily concerned about.  The most basic form of network security is the firewall, and the most basic setup is to “block and poke”, where every protocol, port and service is blocked, and holes are poked in the firewall for specific needs.  This is something anyone with any advanced knowledge of firewalls should know; however, it is only the first step in security, the first of many.  I have a blog post concerning SSH and firewall setup here.  With an SSH jump server, there are more security concerns to deal with – especially with multiple users.  Much of the security for the jump server itself is the same as covered in the above blog post.  The hosts and services which are available from the jump server should also be secured.  Some of these services may be accessible, but access from the jump server to a given service may be undesirable.  The best way to handle that is with a firewall controlling access to that service, be it a physical appliance or a software firewall (I still prefer and recommend UFW for software firewalls).
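As a minimal sketch of the “block and poke” approach with UFW: deny everything by default, then poke only the holes you need.  These commands must be run as root, and the allowed port is an example, not a recommendation:

```shell
# Default-deny everything inbound, allow outbound.
ufw default deny incoming
ufw default allow outgoing

# Poke one hole: SSH. "limit" also rate-limits repeated connection
# attempts from a single source, which helps against brute-forcing.
ufw limit 22/tcp

ufw enable
ufw status verbose   # confirm the resulting ruleset
```

Every additional service gets its own explicit `allow` (or `limit`) rule; anything not poked stays blocked.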

The jump server needs to be readily available, meaning it needs to be running whenever it will be required.  Alternatively, some newer methods are available (and won’t be covered here) which rely on a knock connection to start up the service, or the whole system, such as a Docker container.  Different levels of systems require different levels of availability.  In a high-availability, mission-critical production environment where admins may need to resolve issues as quickly as possible, availability should be “always on”, but there are use cases where waiting a minute or so for the connection (such as a network lab server) is worth the trade-off of saving some money on power and cooling.  This goes hand in hand with performance: high-level systems will require high-level availability and performance from the jump server, or any other service.

Performance, in this case, means bandwidth, latency, CPU and RAM resource allocation, and the ability to have a fast, stable connection with as many concurrent users as may ever be required.  An SSH server uses very minimal resources once the connection is created.  However, there are two considerations: connection creation with a high level of encryption will spike CPU usage, and any data transfer through the SSH connection (tunnel) has the potential to increase CPU and RAM usage – especially when tunneling non-trivial data such as video, game connections, or even HTTP proxying.

Balancing both performance and availability is efficiency.  At the most basic of approaches, one can ask: “Do I really need a $50k Dell server with 20TB of storage, 512GB of RAM and 4 Threadripper CPUs for my 8-person SSH jump server?”  The answer to that specific question is most likely “no” – however, if those 8 people are using the machine for 8K video transfers through an SSH tunnel, with each user having multiple connections through the tunnel, it’s quite possible that such a server may be required (or one not quite so overkill).  It would also depend on whether you’re collecting traffic logs, at what detail and granularity you’re capturing them, how long you want to keep them, etc.  For simplicity’s sake, let’s assume we’re going to have a potential of 4 concurrent user connections (regardless of how many actual /people/ have made these connections) and that these connections are for server administrative use only – so only commands, text editing, and the occasional large-but-trivial data paste through the connection (pasting config files to the host).  If this is the only thing the jump server is handling, then our resources could be down to a Pentium III 800MHz CPU, 128MB of RAM (yes, megabytes), and an 8GB system drive (we’re not logging anything other than connection attempts, and storing those for 30–90 days).  So maybe it would be acceptable to have a containerized system on that $50k Dell server dedicated to an SSH jump box, with the other 99.5% of the server’s resources assigned to other tasks.

As for network connections, this system could function properly with minimal WAN and LAN connections.  The WAN could be as little as 56k dialup – though this would affect data pastes, it is still acceptable for running commands and even text editing.  More realistically, any modern jump box would be connected via at least a 10/1 connection (10mbps down, 1mbps up – from the server’s perspective, giving the user 1mbps of bandwidth to paste config files to the server).  These bandwidths are more than adequate for these tasks with this number of users.  Where the LAN side is concerned, again, modern networks should be operating at 100base-T or faster; but let’s assume you /are/ using an old PIII server – it may be limited to 10base-T, and that’s still going to be adequate for this type of use.  There’s no reason to redesign or limit your 10GbE network for efficiency – in actuality, that would be less efficient.  (If you do find yourself in a situation where your host is capable of only a fraction of your network’s bandwidth, do yourself a favor and add a network switch between the host and network running at the full network speed; it will limit speed reductions imposed on the rest of the network.  A high-quality main switch should also do the same.)  A modern SSH jump server should be built to allow for a reasonable number of users and connections, with reasonable steps taken to increase throughput capacity while maintaining low-latency connections.  Remember, the user will be connecting through at least 2 network segments: from key-stroke to on-screen change, the data has to go from the user’s terminal, through the jump box, to the end host, where the change is registered – and that change is sent from the host back through the jump box, down to the user’s terminal.
The connection between user terminal and jump box is not always something the engineer can control, but the connection between the jump box and host is – and the lower that latency, the better the experience the user (or you, as the admin) will have.

As you can see, there is much to consider when designing a jump server (aka jump box).  I have some ideas on how to minimize time to deployment while keeping in step with all of the above, and doing so without purchasing any software or services (the host and network are not included in this, as these are base requirements for having a network of hosts).

Others may already have designed and implemented systems similar to, or exactly the same as what I have come up with.  I have no intention of disputing the origins of ideas, and have no claim that my ideas are in any way original.  In fact, I read so many documents, white papers, blogs, forum posts, etc, that even if my complete system idea here is actually unique – it is heavily influenced by the works, ideas and issues others have stated in the past.

Let’s define our basic setup here.  We’re going to have two 1GbE networks of hosts and services.  We’re going to have a 100mbps symmetric internet connection (not far-fetched by any means these days).  There will be a gateway segment (router and firewall), switches, a LAN-local terminal and an internet remote terminal.  We’ll be using Ubuntu Linux with UFW on several hosts, and one (our jump server) using HAProxy.  This is where things break down a little bit, however, as there are a couple of possible ways to do things.  One is to use SSH’s built-in forwarding (which is complicated and isn’t friendly to large groups of users – though if you’re setting things up for yourself, have a number of hosts to connect from, and can copy your client configs to all the client hosts you use, it’s much more usable.  This does not require HAProxy).
Another way is to use DNS and subdomains, with a sub for each host, all pointing to the same IP; with HAProxy, they could all use the same port (limiting potential security holes by allowing only one SSH connection port).  This is the method I intend to use, since I already own my own domains and adding subs is free.  For those who wish not to use subdomains but still want a single port for access, there’s another method: it relies heavily on HAProxy to scrape the connection and redirect to the appropriate host.
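For reference, SSH’s built-in forwarding mentioned above is cleanest with a ProxyJump entry in the client config – this is the per-client setup that has to be copied around.  All host names, addresses and users below are placeholders for illustration:

```
# ~/.ssh/config -- native SSH jump configuration (no HAProxy involved).
Host jump
    HostName jump.example.com
    Port 1984
    User george

# Back-end host, reached through the jump server transparently.
Host orwell
    HostName 10.0.0.10
    User george
    ProxyJump jump
```

With this in place, `ssh orwell` tunnels through the jump host automatically – but every user, on every client machine, needs this config, which is exactly the propagation problem described above.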

[It should be noted that at this point, I have not tested this nor confirmed the validity of the concepts and functions described below for this use. HAProxy can, however, forward standard SSH connections.]

Here’s the basic idea:
HAProxy is set up on the jump server.  SSH connections are pointed at that jump server.  HAProxy is configured to watch for those connections and route each connection to the appropriate back-end host based on some data in the connection string.  This could be either a subdomain, as I intend to use; a separate port for each back-end server (but part of the point is to have only a single port open); native SSH forwarding (again, cumbersome and difficult to propagate to multiple users); or, with quite a bit of advanced configuration in HAProxy, some other data in the connection string.  The best direction here is to use some of SSH’s own connection options.  The ability for SSH to run trivial commands on connection may be usable here.
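Plain TCP pass-through of SSH is something HAProxy is known to support; a minimal sketch follows.  Note this only shows the part that is established – forwarding all SSH traffic on one port to a single back-end host.  The keyword-based routing is the untested idea and is not shown.  Addresses and names are placeholders:

```
# haproxy.cfg fragment -- forward SSH arriving on port 1984 to one backend.
frontend ssh_in
    mode tcp
    bind *:1984
    timeout client 1h          # SSH sessions are long-lived
    default_backend ssh_orwell

backend ssh_orwell
    mode tcp
    timeout server 1h
    server orwell 10.0.0.10:22 check
```

Because SSH is encrypted end-to-end, anything beyond this (inspecting the connection to pick a backend) is where the advanced, unverified configuration comes in.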

SSH is designed to listen for commands after the connection string.  When sshd receives a connection string with a command attached, the command is run and the connection is then closed.  You can test this by connecting to your SSH-enabled host, such as:
ssh myusername@mysshhost.com ls
Assuming “ssh myusername@mysshhost.com” would otherwise allow you to connect to a tty, the addition of the “ls” at the end causes ssh to send you a listing of the directory and then close the connection.

Without breaking the ability to pass commands via ssh, we’ll configure HAProxy to accept SSH connections, watch for a keyword after the connection string, and then either forward the connection to the back-end host specified by our keyword, or terminate the connection if no valid keyword is found (alternatively, connections without keywords could be forwarded to localhost to control the jump server itself; to a honeypot, if you’re into that; or to something silly like a quote server).

We’ll want a keyword that will not break HAProxy, but will be easily used in connection scripts, and readily human readable and repeatable.  I’ll start with “host=hostname” as such:
>ssh George@jump.bluntaboutit.com -p 1984 host=Orwell

The thought here is that when the above connection string is issued, HAProxy on jump.bluntaboutit.com (not a real address, btw) will scrape the connection string (as set in the frontend section), see “host=” and attempt to match “Orwell” to a known back-end host – which, if valid, would be in the “backend” section of HAProxy’s config file.  HAProxy would then remove the “host=Orwell” portion of the string, forwarding the connection, and the connection string, to the host named Orwell.  Orwell would then perform authentication for the user/connection and allow the login, if valid.  This would allow us to have another host, Clark, which we could then connect to via the keyword “host=Clark”, with a separate “backend” entry in HAProxy forwarding to a different host.

“BUT! SSH already lets you do that” – Not exactly.  SSH allows for forwarding, however it’s much more convoluted for each connection, and requires remembering port numbers for the back-end hosts.  With the “host=name” keyword abuse we’re doing here, the most a human has to remember is the name of the host.  This does, of course, fall apart if there are multiple hosts named with serial numbers or some other obscure naming convention, such as “dc-cos0032”, in which case you had better have a good memory.

This method would also provide an avenue for increased security precautions, as well as security by obscurity.  It allows an admin to easily connect to a back end server with only the most basic of requirements (ssh key on the device, and connection string info) without having to look up complex connection string options.  It’s potentially usable in scripts as well.

Connecting from within the LAN would work similarly, if direct SSH connections to the servers aren’t acceptable.  The jump server’s LAN IP would need to be used in lieu of the domain name (if applicable, and if the network does not have a local domain by which the hosts can be identified by name).

SARS-CoV2 (covid-19)

if masks work, why weren’t prisoners issued them instead of being released?
if masks are so important, why don’t the members of Congress, Fauci, and every other “ass” on TV wear them at all times?
if masks are your savior, have you forgotten Jesus Christ?

I work from home. I have very limited interactions with people outside of my home. I have no social life which includes anyone outside of my home. The only good thing about this is that everyone else now also wants to stay away from me as much as I have always wanted them to stay away from me in public.

My wife works from home again, 2 weeks home, 1 week in office. She took a week vacation so she wouldn’t have to go into the office, because masks are now mandatory there. Her choice, not mine.

My mother-in-law and Cindy’s son both work out of the house. She doesn’t wear a mask at work, because she’s the only person there 90% of the time. He is forced to wear a “mask” – a spandex sleeve over his face. Neither of them wears a mask when they go out – bingo, dinner with friends, shopping, etc.

So when I say I don’t wear a mask – understand that I’m out in public maybe 2 hours a week. And when I do go out, I stay away from other humans.

If anything, I’m the one at risk of being infected by other people not wearing a mask. Do I care? No. Why? Because first off, this thing has been blown way out of proportion – the tests have been proven to have been tampered with, from false positives to inconclusive positives to untested positives; the medical coding has been tampered with, with hospitals having been proven to put people on ventilators for financial gain, others “playing it safe” and coding patients as having covid-19 when symptoms and tests don’t prove it 100%. Morgues and hospitals assigning COD as covid-19 when the patient died of something other than covid-19, but they had it – and then those who never actually were tested for covid-19. Then there’s de Blasio putting infected elderly in compounds with highly at-risk uninfected elderly. The numbers have been artificially inflated – both infections and CODs. Herman Cain had stage 4 colon cancer… which apparently has absolutely nothing to do with how he died, because he had a positive covid-19 test (are we even sure the sample was stored properly, or tested properly, or that the results were reported properly?) The man was going to die anyway, because there’s very, very little chance of recovering from cancer of that severity at his age. He was literally more likely to win the state lotto than to recover from his cancer.

Is this virus real? Yes, it is. Are there things you can do to help prevent being infected – yes, there are. But forcing ME to be responsible for YOU is not one of the ways. If you’re that scared of catching the virus, then YOU need to take appropriate actions. Buy a proper biological hazard mask. Have it fit tested. Change the filters on it every day. Don’t go out in public without it, don’t take it off for any reason until you get home and can sanitize it, your hands and the rest of your body and clothing.

Telling me I have to do a thing because YOU are scared, because YOUR opinion “matters”, because you think my existing alone is an offense to YOU… there’s a word for that: Fascism.

I don’t have to be near you. I don’t have to let you in my house. You have no right to force me to do anything.

Oh, and if masks and lockdowns worked – why are we still wearing masks and having lockdowns across this country? What happened to “14 days to flatten the curve”?

This virus is the new whip. Masks are the new chains. Fauci is your new savior. You are a slave to a narrative you refuse to understand. I refuse, because unlike you, I have done a lot of research, a lot of reading, and paid attention to how this virus has developed and evolved. There is a conspiracy here, one of control, of the population and the election. The virus is real; how it came to be – probably the lab in Wuhan – that’s what everything I’ve read points to, the timeline, and when my friends in Asia started talking about it in November. But just because it’s real does not mean it’s as dangerous as it has been played out to be by leftists and the media. The thing has been hijacked for political narratives – to control you.

And tell me, if you would, why is it that everything Fauci is saying now directly contradicts everything he said in the years before covid-19 regarding coronaviruses? I’m no virologist, but I know enough to understand that one virus in a family does not show functions that are so adversely different from other viruses in the same family. Not on every single level of every single aspect. Covid-19 is either a coronavirus and has the majority of the same functions as every other coronavirus – or it’s not a coronavirus. Why is that important? Because as far back as 2005, Fauci, the CDC and the WHO have stated masks do not work to prevent the spread of coronavirus. Do masks help prevent people from spitting on others and things while talking? Yes. That’s half of the point of why surgeons wear them – the other half being so that they don’t get blood and fluid splatter in their mouth or nose while operating.

The virus can live in the air for days. This makes it truly airborne. Your mask, which is not sealed, which does not filter your breath, does not prevent the virus from living in the air around you. Yes, your mask can redirect your exhalations downward, upward, etc – but it is still dumping your “raw” breath into the atmosphere. Likewise, when inhaling, you are not breathing only through the fabric of your mask, you are getting air from around it.

Go watch some thermal image videos of airflow around the human body. Walking and breathing spreads your exhalations into the area around you by a large distance. That, coupled with the FACT that the virus lives IN THE AIR for DAYS means that all you’re doing when you’re walking through a room is stirring the air together with more air that is potentially contaminated. Combine that with fans, circulated air, etc. and within minutes to just a couple of hours, the virus introduced by one person in a room can be spread to every breathable area of that room. Want to see this in action? Get a large clear container, pour water into it, and add a drop of food coloring on one side; from the opposite side, put a spoon in and move it slowly towards the coloring. You will see that the coloring moves throughout a larger area. Move the spoon back away. The coloring moves and expands more. The same thing happens, albeit at a slightly decreased rate, when bodies move through air.

Oh, and I forgot to mention that a large portion of the virus particles are smaller in size than what even N95-rated filters can remove from the air. Even N95 biological filters can pass anywhere between 20% and 60% of the airborne virus organisms.

To sum this up:
Being infected coupled with the act of walking through a room, wearing a mask or not, will both introduce the virus to the atmosphere and spread the virus throughout the room, contaminating the entire room very quickly.

Not being infected, and walking through that same room, wearing a mask or not, exposes you to that contamination and can cause you to be infected.

If after everything I’ve said here, you still believe you have a right to tell me I have to wear a mask or die – you can go right here, because you are obviously too stupid or too brainwashed for me to ever wish to discuss anything with you.

Hopefully, however, you will have read everything I have said, and it has opened your eyes enough for you to do some real research and not just take the word of some medical dictator who has been continually contradicting himself, other medical professionals, scientists and common sense. But you had better do your research quickly, because Google, Microsoft, Facebook, and the rest of “big tech” are scrubbing this information off the internet as fast as they can find it, because it goes against their control and narratives. And if you tell me Johns Hopkins University is fake news – you really need to get your head out of your ass and consider that if JHU is fake news, why couldn’t Fauci’s vomitous spewings be?

Something I’m not proud of, but am in a way…

I wrote this for a private group on Facebook.  After posting it, I felt maybe a wider audience could learn from my mistakes here, and use my story to better themselves.  Within the group, we refer to ourselves as Savage Gentlemen.  It’s a group to help guide each other to be the best men we can be, from shaving advice to fatherhood to, well, things like what you’re about to read.

I will start by describing what I believe to be a Savage Gentleman. I’ll break that down to Gentleman, Savage and then what it means to be such together. I’d like some input from everyone on my take on this.

Then, I’m going to get into describing a situation I was in, from this morning’s visit to Walmart. This isn’t a rant about Walmart. If anything, it could be mistaken as a rant on the degradation of society, proving why we need more SGs in this world. No, it won’t be that either though – this will be about me, as a person, as a Savage Gent.

I will also cover some of the more important factors in my life which have helped, guided and been a stable foundation for my growth as a person – emotionally, spiritually, and as a Gentleman. This is relevant to the situation I experienced today, and my actions and attitude towards handling it. Spoiler: I could have done a whole lot better, but hey, no one is in the hospital and no cops were called.

At this point, I haven’t even started and you’re wondering if you should continue reading. It’s going to be a long one. It’ll probably push some people’s buttons too, but that’s not the intention here. So, I do hope you enjoy!

For me, being a Savage Gentleman is a balancing game within our minds, bodies and souls. It’s a state of being, an attitude towards life, love, society & civilization, and the world in general. Most importantly, it’s a description of a man’s sense of being, how he attacks life each and every day, how he overcomes life’s… issues. At least, this is how I view, and live as, a Savage Gentleman.

To be a Gentleman is to be a man with high morals and ethics. To be a man who takes personal as well as societal responsibility. To adhere to the law of the land upon which he stands. To hold himself to a higher standard, accountable to more than himself. A Gentleman shows respect when it isn’t specifically undue; holds his principles and values dear to his heart, and does not let anyone tarnish them. A Gentleman is a pillar of encouragement for the betterment of himself, his loved ones, and every aspect of life, society, and the world around him which he has influence over. (Take a look at your life, and you will surely realize you have much more influence than you might suspect.)

To be Savage, however, is another matter. There is savage for the sake of being savage. Then there is being savage for the sake of protecting oneself, family and way of life, one’s values and all that makes up those values. Put bluntly, “not taking shit from anyone.” To be purely savage would be to unduly offend, to be mean for no reason, to act irrational, uncivilized, anti-social. Even then, there are reasons to dip into these depths. There are times in life when releasing that brutally honest, fierce savage within each of us may be warranted. When a man must take a stand, no matter how small or how big, to defend his morals, ethics, principles, values. When a man sees wrong in society and has no choice but to correct it, or at least make the attempt. This level of savagery is our last-ditch effort to make things right in the world. It is this savagery with meaning which we may use as a tool. We should shy away from that beast within each of us, and draw from it only when needed. It is a tool we should wield with high regard, and use only for those situations which have not been corrected by other efforts.

To be a Savage Gentleman, we take the best of both. We bolster our defense with our savagery; we use our intellect and wisdom for our offense, when absolutely needed. Every situation in life can boil down to something we need to attack, defend against, embrace with our hearts and minds, or that has no bearing on us whatsoever. This is just as true for choosing our physical tools – from wrenches to pedicure files and beard balms – as for our mental tools: knowledge, wisdom, behavior, values and principles. From situations such as choosing what vehicle to procure, to dealing with unjust actions taken against our loved ones. To be a Savage Gentleman is to use both sides, to balance ourselves and dip into each side as necessary, to stand for what we believe in and ensure no one scrapes that away from us. To be a man who can sit in a bar and have some laughs with strangers and friends alike. To be a man who can stand up for what he believes in and do the best he can to protect and correct whenever possible. To live his life and not let others negatively affect it.

We make decisions every moment of every day. Every situation we are presented with, we have the option to attack it, defend against it, leave it alone, or embrace it. This, to me, can be seen as a spectrum from Most Savage to Most Gentlemanly in behavior. A good mark of a man is to know when to attack and when to embrace. Society is getting to the point where we, as Savage Gentlemen, should be standing up to attack those things which are contradictory to our way of life. To attack with personal, social and political responses. To attack by voting out those leaders who would see harm done to us. And, if it were ever to come to it, to attack in a more physical manner. We are well past “letting it be,” and our defenses, though strong, are often not enough to prevent the degradation of our way of life. No, I’m not issuing a call to arms. I’m issuing a call of personal responsibility, if anything. We need to live our lives as shining examples of what we believe, and not let anyone take that away from us.

This, to me, is what it means to be a Savage Gentleman.

Today, I was feeling much more Gentlemanly than Savage. I’ve nothing pressing which has to be done today. I work for myself, and I’m more or less taking a day off. My wife and I are planning to go shopping this weekend, and neither of us likes to shop at Walmart on the weekend. So, being in a good mood, I decided I would go to Walmart and get the few things we can’t get at other stores. That would make this weekend much more enjoyable. Or so the day began, with such thoughts and wishes.

I’m not here to rant against Walmart, but to explain what happened, how it could have happened, and what I could have done better. I’ve had anger issues since I was 11. I’m nearly 40. Every day, I wake up and have to work to keep myself composed. Some days it’s much easier. Other days, though much easier now than previously in my life, are still challenging for me. Today quickly turned into one of /those/ days.

I spent about 45 minutes in the store. Nearly every other product I went to collect was out of stock. By about the third item, I started to get annoyed. I would have to come back tomorrow, or try to find the items at the other stores we shop at. Regardless, my list of 15 or so items was not going to be completed this morning. Twice, I attempted to get an employee’s attention, because often there are products on pallets in the back which haven’t been stocked. Twice I was promptly ignored. Now, I’m starting to get a bit upset. But I say to myself, “It’s Walmart; this isn’t unusual for these people. Chill.”

Soon I realize there’s no one stocking anything except two employees, one of whom is stocking an already fully stocked produce department. OK, I get it, that’s where she works. But no other employee is working to stock the bare shelves. Maybe the truck hasn’t come in. Maybe the employees aren’t on duty yet. But no amount of justifying the situation made me less upset; in fact, it served only to make me more upset. All that kept going through my mind was “$15 an hour and these people can’t keep product on the shelves.” Yes, this was pretty petty of me. The savage, which we should dip into only when needed, had taken it upon himself to poke his head out and grumble.

Then it happened. The woman stocking produce looked right at me. There was no way she could have missed me walking into the department. Instead of waiting for me to pass through, she, from a full stop, pulled her stock cart right in my way, cutting me off. “Really!?” I muttered aloud, but quietly. I was maybe 8 feet away. If she were quick, she could do it and get out of my way. She didn’t. She stopped the cart again in the aisle. I’m on the verge of being pissed off. I’m not thinking clearly. I’m moving along, wanting to get out of this incarnation of Hell upon Earth. I attempt to go around the cart, but there is no room. My cart hits the stock cart. Not hard, though; I wasn’t walking that fast or hard. I stop, exclaim “Fuck!” – but the stock cart is now out of my way. So I walk briskly away, pushing my cart.

She did it. And frankly, I’m proud of myself for not reacting. As I’m walking away, I hear her cussing me. Yes, maybe I was a bit of an ass for hitting her cart. I honestly didn’t mean to do it – at least not consciously. Maybe I could have apologized, but I didn’t. To me, at that moment, I had done nothing wrong. I had not caused the situation. But I had done something wrong. I escalated the situation. I didn’t stop before hitting the cart. That moment nearly broke me. Because of that, I had now afforded her the self-entitlement she so wanted, to cuss a customer. My blood is now boiling. Such a piece of work. To cut me off and then cuss me!? The balls on this woman. Suddenly, I realized I needed to just let it be, to move along. A single feeling started to come over me. I did not want to go to jail. And I knew, had I turned around, had I screamed at that woman like my heart was yearning for, had I made that scene – the cops would be called, and I would be going to jail. Thankfully, I did just move along.

The camel’s back, now fully loaded. Oh, look! One more straw! There are two registers open, and each line has several fully loaded carts. Oh! Look at that – the one cashier is talking to her friend instead of scanning their items. I’d had it. I’m done. I’m not going to jail over these idiots. But they sure as hell aren’t getting my money either. Considerably loudly, as if I were talking to a friend 5 feet away, I exclaimed as much: “Fuck it! I’m done. Going home; this place is fucking retarded!” as I pushed my cart deep into women’s clothing, between racks of clothes. I’m sure at least three employees heard, and saw, me. I felt no remorse, no regret, no shame in these actions and words. These people need a wake-up call. They can put away the $100 or so worth of merchandise which I’ve abandoned. I’m not doing it. As I’m walking towards the door, a manager could do nothing but stand there with her jaw wide open, just staring at me. Not a single attempt to ask me what the issue was, to try to correct anything – nothing. She just stood there like an imbecile.

Now I’m certain this woman is going to be calling for security. I’m 40 feet from the doors. There’s plenty of time for them to catch up to me. I’m certain they’re now going to escalate this into a reported incident. Thankfully, I was wrong. Not a single employee or person attempted to approach me. The only wise thing Walmart employees did today. I continue on to my truck, still wary that someone might be following me. I turn around and look; no one. I get in, turn the truck on and just sit there. I need to calm down. I’m mad. Seriously mad. I don’t drive when I’m mad, any more.

So, no cops, nothing. After all, the very worst I did was accidentally bump the stock cart. I hadn’t berated any employee. I hadn’t made any physically threatening gestures. I simply walked out of the store exclaiming my dissatisfaction. I do some mental exercises to calm down some more. The last thing I need is a speeding or reckless driving ticket. I know, however, in my heart as well as my mind, that those thoughts are a sign of guilt. I am guilty. I allowed myself to be disproportionately savage, and on the verge of out of control, when I should not have allowed it to get as far as it did.

I am not alone in this, however. Those employees are just as guilty in the events which led up to me leaving the store. But they will never understand that. They will never fully realize how much of an emotional and mental drain they are on their customers. These people demand to be paid more, to be treated better, to be entitled to work and pay. Yet they put no effort in where it matters: quality, service, personability, work ethic, civility. They lack these. Maybe not completely, and maybe not every one of them. But this is where society has been led. On a golden leash of promises.

I hope my actions and words have served as a wake-up call to these people today. I hope that some good can and will come out of this situation. But those employees will probably forget, if they haven’t already, that I exist at all.

For the better part of 30 years, I have fought every day to improve myself. I have fought to hold back the savage in me. I have fought to control my temper and anger. Maybe I’m just getting older, but in the last 8 years, I have truly started to get a grip, a real control, on that side of me. About 5 years ago, I met the woman who would become my wife (we’ve been married just over 2 years now). She has seen me at my absolute worst – punching holes in our bedroom door, throwing stuff across the garage. She bore those times with me. She helped me get through them. She has helped me every day to be less anger-filled and to not have to fight to be in control. I have had the cops called on me before because of my irrational responses to employees in Walmart. For much less than what happened today.

I owe a very large part of my self control to her. She truly loves me, and I her. It is that love that has kept us together, that has helped to shape me into a much more respectable man, that has given me the control to walk away from situations that could end up so much worse. She brings out the best of the Gentleman in me. She is my torch. Together, we stand on a pillar of morals and values. I might slip now and again, but even when we’re apart, she helps get me back on top.

I still have to work hard every day. But now, it is not to control myself, not to keep my temper and anger subdued. I work every day to better myself. To make that fight even less of an issue. To get to the point, where one day, I can wake up and not even think about my issues. I fight every day to grow to be the man my wife deserves. Today, I failed in that. She may never know, but I will. I have tarnished my own values, yet again. So, every moment, every day, every situation, I will work to make the best choices, the best decisions I can to lead to being the Savage Gentleman she deserves in her life.

I neither deserve nor want accolades. I have spent the last few hours beating myself up over this. I respect the situation in that I have an opportunity to learn, to grow, and to be better for the next such situation. I share my story in the hope that others can learn, and know they’re not alone; and for the mentors out there, to better understand the struggles they themselves may not personally have.

I’d love to read your thoughts on this. I’m open to any legitimate advice that can help me make situations like this a thing of the past. Don’t feel obligated, but anything you share would be greatly appreciated – not just by me, but potentially by hundreds of others. How can we fix our civilization if we can’t even admit to ourselves that we need to fix us? We learn from each other, but only if we share for others to learn from.

If you’ve made it this far, you’ve read about 3000 words, or roughly 16,000 characters.  (maybe a tad under that, but not much).

Warehouse concerns

Here’s my thoughts on this whole thing. It’s your choice to ignore what I say, or to read it. Completely up to you.

Holding back the ocean with a broom. It’s a silly, old saying, but it’s quite apt as a parable here. Your words are the broom, the ocean is change, and Amazon is just the current wave in front of us. Like a broom against the ocean, nothing that can be said will prevent the project. At this stage, even a lawsuit would only work to hinder progress.

However, before ground breaking, there were words said – when it mattered. Those who spoke up had many changes made to the plans. Honestly, these changes should have been implemented from the beginning by the designers. Some of the notable changes made to the plans are the addition of a high living barrier (a dirt wall with plants), trailer parking and warehouse docks on the interstate side of the warehouse, and traffic restrictions for trucks entering and leaving the warehouse.

I jokingly commented earlier about wanting the area to be a pig farm. (I love bacon, ham and pork chops.) The absolute stench a pig farm would produce would be terrible. However, until Amazon became interested in the area, there was a high potential for the land to be put to agricultural uses. Mind you, I’m certain the city would not have allowed a pig farm there. But that being said, there are worse things which could exist there than a warehouse.

Yes, from my understanding, there was a sign placed on the property indicating the arrival of a Publix. However, I happen to know that Publix never had any solid plans to build at that location. Their interest waned greatly due to the housing “crash” – their interest was based on the developing and growing area, which ceased for quite a while. Publix also has a long-standing habit of pulling out of a project if their name is attached without their consent. This is doubly so if their interest in a parcel or building is not solid. The sign itself may have been enough for them to withdraw their considerations.

As far as traffic is concerned, there will be two types: commercial/delivery and employee. From my understanding, the commercial and truck traffic will use an entrance much closer to the interstate, but still on the main through-way. Employee traffic will be routed to the entrance at the end of the residential feeder. There may be the occasional vehicle at that intersection throughout the day, but most of the employee traffic will come at 3 or 4 times a day. Trucks will probably be leaving and entering at all hours. They’re actually not as loud as people think, and won’t be disturbing anyone’s sleep.

Consider for a moment that Publix had built a store and retail center in that area. This would cause a constant and continual flow of traffic for most of the day: consumers entering and leaving at all hours. Trucks and delivery vehicles would still exist (albeit not in the same quantity), and there would be no traffic restrictions preventing residential roads from becoming a through-way for those shoppers. There’s also the possibility of an alcohol-serving restaurant, or even a bar, existing at that strip mall. Now there’s a high potential for drunk drivers on these roads, which lead into our communities – roads where big trucks won’t be driving. Personally, I say that’s a win for the area. Employees won’t be driving inebriated.

Let’s talk about the economics real quick. The sale of the land means lots of taxes to the county and city. Yearly taxable income. 500 new jobs (say 10% are filled by transfers; that’s still 450). This has already resulted in another parcel being considered for commercial building – so yet more taxes, more employees. $15-per-hour wages. OK, so let’s consider the work those employees will be doing, and the conditions in which they’ll be doing it. Warehouse workers are the new coal miners, the new boilermakers, the new high-rise riveters and steel workers. It’s a job not for everyone: extremely physically demanding, very mentally draining. There are going to be a lot of burn-outs who will look for less stressful positions, even if it means taking a pay cut. But in the meantime, those employees will be living and shopping in the area. They will be spending their money in our home. That’s more taxes, more little stores getting income. That’s the potential for /more/ little stores.

Housing and property valuations. This is a big concern for high-end communities; I get that. It’s also quite true that the values of houses will drop slightly. With the number of employees Amazon will have at this location, new homes are certain to be built in the area, and older homes purchased. The area will grow. If these homes are high-value family residences, that will ultimately bring the value of existing homes and neighborhoods up again. However, if these new constructions are allowed to be dozens upon dozens of low-end or starter homes, I can’t say the values will rise at all. Now, I’m not talking about new neighborhoods of high-end communities, but rather $250–$500k homes. That is where residents in the area should concentrate their efforts: ensuring that any new developments have stipulations of being mid-level communities. Not just for area housing valuations, but so they are actually affordable by those who will be working at Amazon, and who will thus purchase them and live there.

If the area were to have been used for a retail center – Publix or not – it would drive a lower-end push for housing in the area. Sub-$200k homes. Starter homes. In a few years, they would be sold off to others. There’s no attachment, and thus those neighborhoods would quickly fall into less optimal aesthetics. With Amazon, there is a chance of new neighborhoods being valued higher – both financially and emotionally – being better taken care of, and adding to the value of existing neighborhoods.

There will be new homes and new commercial, retail and industrial projects in the area. It’s not stoppable, not unless someone purchases all of the surrounding land. It’s progress (like it or leave it). It is up to us, the stewards of our community, to help shape the direction this growth takes. There will always be things not in our control. There is still plenty which can be done, however. We just need to pick the fights we have a chance to win. Having the new construction and businesses make concessions to ease the burden they will put on the area. Compromising on designs to increase the overall happiness of both parties. Helping direct the City in area restrictions and code. But it must be done appropriately, at the times and places when it matters, and always with high moral, ethical and legal intentions.

For which we all know

Thou haste through life, willfully impressing upon time thy own will amid suffrage of passage. Fleeting as sparrows in the wind, for which once was, now passes. Yet! Upon the horizon, cometh anew for all whom shall witness.

By remembrance and spirit, before us whom hath passed, share in our celebrations. This night shall end a chapter of life. Ferry on courageously with passion and sight. Rejoice amongst friend and neighbor, for upon thee is the birth of a day, a month, a year.

2019 has brought many losses to many of us. Fathers, Mothers, Brothers and Sisters, friends and family. Bring their spirit with you into the new year. Remember them, and their goodness. Live with them in your heart and thoughts every day. Live for them.

Do not loathe their loss, for as long as you live, they live within you. Make them proud of the life you continue on. Remember, too, those whom your loved ones have left here, and embrace them warmly and lovingly.

The new year brings a new day. Make the most of that day, of the coming year. Every new day can be a new start – The day one starts living, dieting, working to improve themselves, their lives, their situation. The day we forgive those who have wronged us. The day we ask for forgiveness from those whom we have wronged.

Bring in 2020 with your goals, aspirations, and a renewed joy for life and those around you. Though every day we can choose to set aside our differences, issues and bigotries, a new year brings acceptance of those changes, and can symbolize not just the passage of time, the birth of a new year and the death of an old one – it can symbolize the birth of a new you, and the death of those things which should be left in the past.

I, for one, have already started working on the goals I wish to achieve for 2020, and have been for some time. New Year’s resolutions are only useful if time, effort and purpose are continually poured into those goals. Be it to lose 5 pounds, or to start what will become a Fortune 500 company: now is the time to start, and every day is Now, with tomorrow being a symbolic start, birth and renewal of a great many things.

Make the most of 2020. Live every day as though it is “THE Day!” Life is full of obstacles, rules, and goals – the biggest obstacle to any of our goals is ourselves. Break through that and become the greatest version of you that’s ever lived!

Be safe, and please have a Happy, productive and exciting 2020!

e-waste – how to handle it

This post is applicable to EVERYONE – not just the original poster. This is my reply to their question on SpiceWorks. When I started, I wasn’t planning to write a full-on instructional manual, but I have, more or less. I have years of experience with e-waste reclamation in south Florida. One thing I don’t mention below: you’ll want to watch your local scrap prices for steel/irony, copper and aluminum. Don’t jump the gun and take a small amount of scrap in when the prices reach a high; they fluctuate, and will be higher again. Now, onto my actual reply:

No one buys e-waste. Recyclers sell their services to businesses, to take the gear. They then strip it all down, and recycle the various components. You’re paying for that time, and the cost to recycle the non-metal portions. Any money made on the back-end with metals scrap is their actual profit.

Your old but still usable gear… don’t pay someone to haul it off. Reset BIOSes and firmware, DoD-wipe drives… and give the stuff to some kid who can use it to learn, donate it to schools and libraries, sell it on eBay, or keep it as spare parts. Here’s how to do that in a business, environmental, social and legal sense (at least in most places, I think), and maybe even for a small profit of your own ("your own" being speculative – your business may want the funds, or allow for them to be added to a departmental slush fund, after-work party, etc.).
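For the drive-wiping step, here is a minimal sketch in Python of what a multi-pass overwrite looks like. This is an illustration of the idea only – a real DoD-style wipe should be done with vetted tools against the raw device, and SSDs generally need their built-in secure-erase commands instead, since wear-leveling can defeat overwriting:

```python
import os
import secrets

def multipass_wipe(path: str, passes: int = 3, chunk: int = 1 << 20) -> None:
    """Overwrite a file in place: `passes` passes of random data, then one
    pass of zeros. Illustration only -- use vetted tools (shred, vendor
    secure-erase utilities) for real media sanitization."""
    size = os.path.getsize(path)
    patterns = [None] * passes + ["zero"]   # random passes, then a zero pass
    with open(path, "r+b") as f:
        for pat in patterns:
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(b"\x00" * n if pat == "zero" else secrets.token_bytes(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # force each pass to the device
```

The function name and pass counts are my own; the point is simply that every byte gets overwritten multiple times and flushed to the device between passes.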

First and foremost – TRACK EVERYTHING ON PAPER! Any gear which leaves the ownership of the business NEEDS to be written down, and preferably authorized by management above you – especially if you are the recipient of said equipment. The recipient also needs to print their name and sign. This is totally a CYA thing, and though it may not be required by law or company policy, make it your own policy and NEVER DO WITHOUT! I’ve seen this bite people in the ass years later.

Your best bet would be to find an entrepreneurial high school student, and guide them into building an e-recycling business. No one in the US will buy your old gear, much less pay shipping for it – not that level of waste, anyways. RAM, CPUs, motherboards and expansion cards have higher value due to their potential gold content; these you might be able to sell to someone willing to take the time to reclaim the gold and copper. Old drives, heatsinks, cases and PC power supplies have the highest value for metal scrappers, and if there’s a metals recycling center near you, it would be worth your time to collect the stuff until you have enough to spend a Saturday breaking it all down to "clean" metals – removing any plastic and boards, and separating the metal types. This is where that high school student would come in.

There are at least three different metals in most hard drives (aluminum, steel/irony, and rare metals from the platters and possibly the magnets). Heat sinks are almost always aluminum and/or copper these days, and are pure profit for scrappers. The PCBs, if they do not have re-use value (you’d be surprised what people buy in the way of old working tech – check eBay!), can be gathered up, with the steel and plastic stripped off, and sold in lots to gold reclamation businesses. That is mighty dirty, toxic and dangerous work, so you’ll often be lucky to get a dollar or two per motherboard, and less for PCI/PCIe cards. This isn’t work for most people, and the start-up costs are considerable if it’s done properly to protect the environment – lots of waste heavy metals and acids need to be properly handled.

So, ultimately, this is what I would do:
For any whole/complete or mostly complete equipment which still works (or can with the addition of some parts): sell on eBay or locally, give to a student aspiring to get into IT, or, for old desktops, donate to a local school. This includes printers, monitors, etc. as well.
For old /working/ components (RAM, CPU, motherboards): sell on eBay. Being small components, the shipping isn’t horrible, especially if you give a local-pickup option, or charge more for off-island shipping.
For non-functional components: start a collection box, strip off the larger pieces of plastic, steel and aluminum, and put the "cleaned" components in another box. Take the steel and aluminum to your local recycling center to get paid for your time.
For computer cases, heat sinks, HDDs, PSUs, etc.: strip off any plastic, and separate the steel, copper and aluminum. Use cleaned PC cases as collection bins for steel items (such as brackets from expansion cards) – once a case is full, take the whole thing in as steel/irony scrap. The same can be done for copper and aluminum too, one metal per case.
For non-functional "cleaned" PCBs and HDDs: collect the HDD platters separately from the rest (above) and sell in lots, by number or weight, to a metal reclamation business – possibly the same people who will buy the PCBs for gold and copper reclamation.
For non-functional systems & gear: deconstruct these to their base materials and components and start at the top of this list again.
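The decision list above can be sketched as a small lookup table. This is a hedged illustration – the category keys and wording are my own, not any formal standard:

```python
# Disposition lookup following the triage list above.
# Category keys and wording are illustrative, not a formal standard.
DISPOSITIONS = {
    "working_system":    "wipe drives, reset firmware, then sell/donate whole",
    "working_component": "sell on ebay (RAM, CPU, motherboards)",
    "dead_component":    "strip plastics, bin the cleaned metals for scrap",
    "case_heatsink_psu": "separate steel, copper and aluminum; scrap by metal",
    "cleaned_pcb":       "sell in lots for gold/copper reclamation",
    "dead_system":       "deconstruct to base parts and re-triage each",
}

def triage(category: str) -> str:
    """Return the suggested disposition for a category of retired gear."""
    if category not in DISPOSITIONS:
        raise ValueError(f"unknown category: {category!r}")
    return DISPOSITIONS[category]
```

The useful property of writing it down this way is that "dead_system" loops back to the top of the table: deconstruct, then re-triage each resulting part.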

Your biggest cost then will be the plastics recycling, which you might be able to pay someone locally to pick up. SOME plastics can be sent through a shredder and melted down into 3D-printer prototyping filament spools, but that’s not something you’re likely to do yourself. Of course, you’ll need storage space for the rest of the stuff, but that can be as little space as a couple of office chairs take up, especially if you dedicate a shelving unit. Anything you sell on eBay will cost posting fees, and anything you sell in working order will need to be shipped in anti-static bags and bubble wrap. To be cheap, you can re-use the bubble wrap and AS bags from equipment you’ve ordered. Boxes too.

Anywhere above where I said "you," that can also mean anyone else. But don’t expect them to purchase the gear from you to do that work. If your company would allow it, you can possibly "hire" a high school kid to be your e-waste recycling "contractor" – someone who would be willing to pick up your gear for free and do all the work. If you build up a nice pile, enough for a week’s worth of work, that would be enough to get them started as a business. Advocate for them to other businesses in your domain, to get them going. But you’ll need someone you can trust to actually do the work properly, as your concerns are environmental and not profit-based. Doing this would ensure the components stay out of the landfills (and the environment) – AND help to start up a new business. Win-win!

Oh, and if you want to get really into it, fans have copper coils which can be removed, collected and sold as scrap. It’ll take a couple hundred fans to have enough copper to really be worth anything, but popping the motor out of the shroud and removing the blades is easy enough, and results in much less space needed for storage until enough is collected to tear down.

CRT and LCD monitors can be deconstructed as well, but they require extra-special care and handling. The backlight bulbs are similar to the tube lights in your ceilings, and LCD panels can’t easily be recycled, so they would have to be shipped off. Thankfully, most "dead" LCD monitors only require new PCBs to become functional again, so repairing them is often more cost-effective than replacing them. For the ones which aren’t to be re-used, the internal components and LCD panels can be sold on eBay – ESPECIALLY if the monitor was working when decommissioned, so you can say it is in working order; the same goes for the PCBs (power, control and I/O boards). On common and more expensive monitors, and ones which use standard VESA mounts, the stands can be sold separately. I would keep any working external power supply bricks, though, especially if your company has the same monitors on a lot of desks.

Which brings me to my last point: spare parts. Keep them, at least for a while. If it’s something that’s been replaced with another – such as someone in the company getting a new desktop – but the old one is still in working condition and newer than a Pentium II, keep it. If you hire a new employee, you’ll have a PC on hand, at least to get them started with until a new PC can be purchased or built. Once a quarter or so, take the oldest half to a school, library, etc. – make a drive of it and go out to areas where they’ll really be appreciated.

Dark mode and more

This could be considered an RFC of sorts.

It’s intended to be a starting point for professional designers to create a color standard for 4 different brightness modes of UI display – from near black to high contrast bright.

There are 4 base optometry categories – normal vision, Deuteranopia, Protanopia and Tritanopia. Each category has 4 basic scheme modes based on brightness. There are 2 modifiers, each modifying 2 modes. Each scheme has 4 hue offset options for its gray colors.

In total, we have 4 vision categories and 8 schemes per category (4 base plus 4 modified), with 4 hue offsets each. That’s 4×8×4, or 128 schemes. There are 8 grays, plus black and white, giving a total of 10 gray colors per scheme, or 1,280 color value entries, with some colors duplicated.

The scheme names are based on daily solar cycles:
Daylight – "normal" light mode.
Sunset – Daylight plus an orange mask.
Twilight – dark and light grays, higher contrast, no black or white.
Dusk – Twilight plus an orange mask.
Midnight – darker than Twilight, lower contrast, no mask.
Sunrise – Twilight plus a blue mask.
Morning – lower-contrast version of Daylight; white and black are offset slightly towards gray.
Noon – Daylight with higher contrast, brighter hues and a blue mask.

The schemes have 4 color hue offsets. These allow for pure B&W grays, or grays offset towards red, green or blue. This makes for a more comfortable experience for users who prefer a slight tint to their grays. It also allows for aesthetic integration with the rest of the color palette used in the UI, color correction for monitor temperature differences, and individual comfort adjustments.

The schemes can be represented as:
normal – daylight – key (white)
normal – dusk – red
Deuteranopia – morning – blue
normal – midnight – key (black)
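Enumerating that full space directly, using the category, scheme and hue names defined above, gives the totals (a quick illustrative sketch, not an implementation):

```shell
# Count the full scheme space: 4 vision categories x 8 schemes x 4 hue offsets.
count=0
for category in normal deuteranopia protanopia tritanopia; do
  for scheme in daylight sunset twilight dusk midnight sunrise morning noon; do
    for hue in key red green blue; do
      count=$((count + 1))
    done
  done
done
echo "$count schemes"                      # 4 x 8 x 4 = 128
echo "$((count * 10)) gray color entries"  # 10 grays per scheme
```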

Ideally, there will be mechanisms in place for both the use and the omission of these color schemes. One such omission example would be the display window for graphic artists and video editors: these professions rely on color accuracy for their work.

There would also be mechanisms for automatic scheme selection based on time of day and environmental light levels, as well as user-created scheme rotation sets which adjust based on time of day, light level and trigger events. Users should also be able to permanently set a scheme, and the schemes should be toggleable for use with full-screen video and games.
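As a sketch of automatic time-of-day selection: the scheme names come from this proposal, but the hour thresholds below are purely illustrative assumptions, and a real implementation would also weigh ambient light readings and trigger events.

```shell
# Map the hour of day (00-23) to a scheme name from the proposal.
pick_scheme() {
  case "$1" in
    2[2-3]|0[0-3]) echo midnight ;;
    0[4-6])        echo sunrise  ;;
    0[7-9])        echo morning  ;;
    1[0-3])        echo noon     ;;
    1[4-6])        echo daylight ;;
    17|18)         echo sunset   ;;
    19|20)         echo dusk     ;;
    21)            echo twilight ;;
  esac
}
pick_scheme "$(date +%H)"   # current hour's scheme
```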

The four masked schemes should be used for circadian manipulation – blue to help the user wake up, stay awake and focus; orange to help the user begin to relax after a long day, which can also help with computer-use-related sleep issues.

All effort should be made to make all 8 schemes available for the 3 categories of colorblind users as well. A set of grays based on RGB+K will be needed – preferably 8 grays, plus black and white – for the basic color schemes. This allows for enough contrast for monochromatic UIs, with true monochromatic settings eliminating all color hue, giving a total of 256 different grays where r, g and b are equal values. A full-hue color scheme representing the whole 8-bit-per-channel RGBA gamut (32-bit color in total) can have its color hue decreased or eliminated, resulting in a high-definition monochromatic display; this is easiest done in hardware. However, there is limited advantage to this compared to 256 grays with 256 alpha levels (resulting in 65,536 potential values). The focus here, however, is to create a set of color palettes for UI designers, to be used as the basis for basic display colors across all "color modes".
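To make the "8 grays plus black and white" concrete, here is one way such a neutral ramp could be generated. The evenly spaced values are an illustrative assumption, not part of the proposal – real schemes would tune each level per brightness mode:

```shell
# Generate 10 neutral colors (black, 8 grays, white) as hex triplets,
# evenly spaced across 0-255 with r = g = b.
ramp=""
for i in 0 1 2 3 4 5 6 7 8 9; do
  level=$((i * 255 / 9))
  ramp="$ramp$(printf ' #%02x%02x%02x' "$level" "$level" "$level")"
done
echo "$ramp"   # runs from #000000 up to #ffffff
```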

These schemes contain the basic grays used, and should be more than sufficient to provide a basis for any non-gray color theme for any OS, app or web design. There is also the potential for these schemes to be useful in print and other visual displays.

I have created an incomplete table of all optometry categories, masks, schemes and color hues. The normal vision category is the most complete, missing only the color values. Example values provided are just that – examples – and may need to be modified for any real-world application.

The purpose of this table is to allow designers to quickly and easily create schemes for their projects which will be 100% compatible in grayscale with other projects using the same standard schemes. This still leaves artistic space for non-gray accent and base colors, allowing full themes to use any color atop the base scheme. The result should be an overall experience with mixed themes that feels natural and integrated. A mixed-theme, same-scheme environment allows different programs and elements to have different colors atop the same scheme. An example would be using VLC and Facebook Messenger on Windows with the Dusk scheme applied: Windows could have a white theme, VLC an orange theme, and Messenger a blue theme – yet the windows, backgrounds and text are all based on the Dusk scheme, so all UI elements use the same grays. This would carry between OSes, desktop environments and browsers, resulting in the same grays being displayed for the same scheme on all devices.

This, unfortunately, can still result in different hues being presented to the viewer when using multiple monitors and displays, as there are differences between manufacturers, pixel colors, backlights and color temperatures – as well as age-related color distortion. These issues can either be manually adjusted in per-monitor settings, or ignored.

The focus of this, again, is to provide a standard means for designers to have access to a set of color values to present their work in a unified way. Ideally, though, APIs and libraries would be written for each OS, browser and desktop environment (KDE, Gnome, Explorer, etc.) where the program, app or web page defers to the user’s setting, eliminating any need for manual adjustment or bloated code bases for each element, window, app, etc. This would also be a user-optional system, with custom full themes able to override the system theme/scheme, as is the case with Linux desktop environment theme packs.

Below is the incomplete ASCII-table representation of this, with "normal vision" being the most complete, yet still missing most of the color value data:

Hurricane Charley and the life it changed.

Let me start by saying I get nostalgic about the old times. I miss them, and my friends. But I would never change a thing that happened. I am grateful for all the times we spent together, and will cherish those memories for as long as I can. We made a lot of memories – some good, some not so good – but those and the great memories will forever be in my heart!

Today marks 15 years since Hurricane Charley formed. In 4 days, it will be 15 years since it "wobbled" up the Peace River in Punta Gorda. It destroyed my home, my life, separated my group of friends. It destroyed my home town, and the quaintness it once had. It took so much from so many.

Out of destruction comes anew. The plot where our home was has long since been cleared, and I think there’s even a new house there now. My friends all started new lives, doing new things, and excelling as wonderful people. My home town has been rebuilt, losing its once beautiful quaintness, but gaining a renewed aesthetic beauty and even a better economy. All of those other people have moved on, gotten stronger, done better, and are for the most part living their lives as normal. It’s been an interesting journey for me, however.

It was about this time, 15 years ago (and maybe a month), that I met Jeff Bushey, his little company SurityNet, and The PC Hospital. I had just applied for a job at a new restaurant opening soon in the strip mall, and from the sounds of it, I had gotten the job. I walked into the office looking to buy a PCI SCSI card (80-pin, at that) for my PC, so I could retrieve some stuff off of my old Macintosh drive. (When did Macs stop using SCSI?) After talking to Jeff for a bit – telling him what I needed, that I was shopping around and couldn’t promise to buy anything – I explained the situation with work. By the time I left the office, I had gone through an interview and been offered a position. Flabbergasted! I was ecstatic! I’d been working for him for a very short time when the hurricane hit.

Enter August 13th, 2004. Charley is aiming for Tampa Bay. Seemingly everyone in Punta Gorda is having a hurricane party, my friends and I included. We’re baking a pizza, so I stay home when everyone else goes down the road to our friends’ house to get them to come down – the usual thing, most of the group goes. They never came back. 20 minutes later, I get a call from my mom, telling me the hurricane has changed direction and is now heading for us. She told me she sent her husband to get me (and my friends, if needed). By the time he drove the 10 minutes to where I was living, the winds had picked up, palm trees were bent over, leaves and debris flying through the air. Oh boy, was I glad to be getting away from the water! I found out later my friends had been told to stay at our other friend’s house, which was considerably better built. They all made it out OK!

It wasn’t until about 5 or 6 that Bear was able to take me back home. The roads were blocked with downed trees and power lines; I had to walk the last mile. Only, when I got home, there was no home. The roof was in the road, the windows were blown out, there was debris everywhere. My pizza! It was still in the oven. It had finished cooking! WOO! I have some food. Awesome! I’m now in the middle of a completely destroyed area, with no one else, and almost no supplies. I realized there was pretty much nothing I could do at the house, but there was plenty I could do to clear the roads. I made my way around, moving trees and (DO NOT TRY THIS AT HOME!) power lines out of the way for emergency vehicles and residents. I made it back to the corner store. People were looting it. Kind of sad. One person wrote a note, put some money with it in a safe-drop tube, and put it in the safe for the items he had taken. I needed water, and there was no running water – and if there was, I wouldn’t have trusted it. So, yes, I took a gallon of water, a Snickers and a can of Mountain Dew. I can’t justify my actions; they are what they are. I did go back about 3 months later, when the store opened, and offered to pay for what I had taken, and was told not to worry about it. The manager told me how many people had done the same, and honestly, I was taken aback by that. Very cool. Insurance covered the losses, and it would have technically been illegal to then sell those items – or something.

I walked the 5 or so miles from there to town, moving branches and lines as I went. (Again, DO NOT MESS WITH DOWNED POWER LINES!!! I was dumb to do so, but did so as safely as I could.) In all, I cleared probably 7 or 8 miles of roadway for traffic. When I got down to the old neighborhoods downtown, I spent a couple hours helping some friends of my friends clear some very large trees from the roads. From there, I walked down to the high school, where there are some (at the time) newer apartment buildings. I knew Barb’s daughter lived in one, so I walked around looking for Barb’s car. Come to find out later, I had just missed her. It’s getting dark, and dark in that situation is dangerous. I had no choice but to find shelter or, preferably, a way back to Mom and Bear’s house. I went back to the previous neighborhood, and someone had a working cell phone. I tried calling; on the third try, I finally got through. Yay! I don’t have to sleep in a random place!

The next day, Bear and I drove up to SurityNet. I didn’t have my own transportation, and Bear wouldn’t be able to take me back and forth. I had to let Jeff know I couldn’t work any more, and would have to work with Bear instead, in exchange for now having to live there. Not a big deal, it just sucked that I couldn’t work at SurityNet any more. We go out and do some errands – checking on customers’ houses, getting some water and MREs – and head home. Jeff calls. He’s got an offer for me: if I would be willing to pay rent, he had a spare room for me. He said he could take rent from my pay, take me in to work with him, and bring me home too. Jackpot! I get to live in a nice, new home with air conditioning, get transportation to and from work, and keep my awesome PC repair job! This turned out to be one of the best things for me. Jeff and SurityNet introduced me to the wide world of IT at large.

Note: I’m going to skip a pretty regretful situation involving me moving to Kansas, being cheated on and having to move back to Florida. It wasn’t a pleasant time at all.

It’s later 2005, and I’m ‘renting’ a room from Barb, helping her to take care of her dogs (about 20 or so) which were used for breeding. Every dog she had was loved and cared for greatly by her, myself and many of our friends. They were pets… that just happened to help pay for themselves. I stop by SurityNet one day, just to say hi to all the people, and Jeff offers me my position back. I took it. In hindsight, I probably could have made some better decisions – but I didn’t. Life was good, and I worked for Jeff for a few more years. We’re swamped with work. We need someone new.

Jeff gets some resumes in and asks me to look them over. We settle on one, from Shawn, and Jeff calls him in for an interview. Shawn has amazing experience – from PC and printer repair to networks and firewalls. A top-notch person to be working with us! Needless to say, Shawn gets hired. At this time, I’m back at Jeff’s. After a week or so, Shawn asks if we know of any RV parks nearby, as he and his wife live in an RV. Jeff had sold his RV to help build up SurityNet, but still had his RV shed at the house. Jeff, being a kind and wonderful man, offered to let Shawn park there. Wow, it’s like some kind of tech beta house now! Shawn, his wife and I became good friends. Some time later, they left, as he had an amazing opportunity for work. Neither Jeff nor I could blame him, but we both missed them.

The whole time this is going on, I’m chatting with friends on IRC. Good friends. I love me some IRC – so many interesting people, such good friendships can be built! We’re playing Neverwinter Nights, Guild Wars and some other games. In 2009, one of my friends suggests this new game from a Swedish indie game dev, called Minecraft. I’m broke, living in a travel trailer behind Mom’s, and no longer working for SurityNet. I tell him I can’t afford it, and my computer (a little laptop) probably wouldn’t run it well – it could barely run a 5-year-old Neverwinter Nights. Turns out it was free, web based, and played much smoother than any other game (oh my, how I miss those times for Minecraft!). I hop on some servers, and am annoyed by players griefing my stuff. Every server I log into, just chaos and idiocy.

Then I found BuildSomethingFool – an amazing community run by a couple of potheads (at least I think they were). They had an amazing staff, and the players were very well behaved (or banned if they weren’t). I spent like 3 days building an Eiffel Tower, with an underground area, gardens, etc. All without using hacks. No flying, nothing. The owners were amazed and offered to make me staff. I could now ban the little troublemakers! WOO! I had also learned the game and could explain it very well to new players. One in particular was so confused, and so hapless, I couldn’t help but take pity on her. I spent probably 20 hours with her, teaching her Minecraft and imbuing her with the knowledge I had gained. She was a quick learner – and soon became staff as well. By this time, Minecraft Beta was being released, and premium accounts were being sold. VueJohnson, her player name, wanted to thank me for everything I had taught her, and purchased an account for me. I was so grateful. The only thing we could do with them at the time, though, was change our skins.

I took a break from Minecraft for a while. Everyone was focusing on this new beta of Minecraft, and I wasn’t able to run it nearly as smoothly as the old classic version. When I came back, I had a better computer, thanks to a wonderful friend who I haven’t spoken to in years, but who always wished to remain anonymous. I could play again! But it was kind of boring. I tried starting my own servers, and quickly ran into problems. Lo and behold, where is the support for these things? On IRC! I was right at home. I came for help, and stayed to provide what assistance I could to new people. I did this on and off for a few years. It was a great hobby for me. I ended up helping someone get their servers up and running, and really, for the first time since beta came out, I was doing something I truly enjoyed with Minecraft. This is not where my life would have led if I had not moved in with Jeff the first time.

Shawn and I get back in communication. I end up going to his place for a weekend to visit. Some time later, he’s going to move again, and I help him move.

I’m sitting doing some work on this other fella’s server, and keeping an eye on the IRC support channel I was in. This person, presumably a girl, asked some questions, and I answered the best I could. After a few weeks, we had become pretty good friends. But I couldn’t tell how old she was – not that it mattered, our friendship was open, public and innocent, but I was just a bit confused. Some days it seemed like she was this mature, adulting person who had life together, but other days she seemed like a 12-year-old – playful, creative and curious. You know, the good parts of 12-year-olds. So I told her. She never told me her age, but confirmed she was much older than 12. Then one day, she shared a picture of her brand new swimming pool and spa. Oh! She’s either much older than 12, or REALLY has her life together for being 20-something. It’s a beautiful picture. I told her, "One day, you wait and see, I’ll be swimming in that pool!" – more to tease her, intentionally coming across as a bit creepy. We were at that level of friendship. Or so I thought. She didn’t reply for what seemed an eternity! I was crushed. I had just ruined a good friendship over something silly.

Well, later that day, she did reply. She insinuated that me swimming in her new pool was not out of the question. Woah! I’m thinking we’d meet up for a lunch or something, I don’t know. We both live in the same state, within an hours drive. So, completely possible.

About this time, Shawn is moving out of state, to Pennsylvania. He asks me to help him move, again. So I do. I figure it’ll be a week or so. We get up there, and there’s a lot of work that needs to be done, so I offer to stay and help. I end up doing this for a good few months. Great times. We learn a lot about construction, remodeling, and even gardening! We’ve got one more trip down to Florida, for more stuff, and Shawn has some business to tend to. I let this woman I’ve been friends with know, and say that I’d like to meet up with her one day while I’m down, to have lunch. She agrees! Amazing! She drove out to meet us, and we had a fantastic time! But time’s up, and Shawn and I have to head back up north. She sends me along with a cell phone so we can keep in contact. She’s going to London for Minecon. Yeah, things were a bit more serious than friends, I’m quite happy to say. We talked every evening. I had been fighting it for months, but after that weekend, I knew I was in love. She was too, apparently!

It’s August again, 2015. I’m done helping Shawn with what we can do. I’m planning to head home when she tells me I should come visit her first. She’ll pick me up from the airport, and I can stay at her home for a while, and swim in that beautiful swimming pool of hers! I never left. In fact, I married her in 2017. Something I never thought I would do in life is get married. It’s been an amazing 4 years – an amazing 4 years that I never would have had if Hurricane Charley hadn’t so wonderfully destroyed everything I knew those 15 years ago. It was a long road, but one so very worth it. It was a journey I had to take to be ready to be the person I am, for myself and for her. I can’t help but look back today and say, "This was God’s plan all along, and I know He waited until I was ready to let me meet her!" To this day, I love my wife so very much, and would give the world for her.

There’s not a day where we’re separated due to work where I do not miss you with every ounce of my being. For so long, I felt a void in my life, in my being, in my soul, one which only you have ever been able to satisfy. I love you Cindy!

New Linux Install!

Today, I found a wonderful deal on a small VPS over at Ionos: 1 vCore (Xeon Gold 5120 @ 2.2GHz), 512 MB of RAM and 10 GB of SSD storage – all for $2 per month.
This might not sound like a whole heck of a lot of resources to you, and you’d be right. But for specific use cases, this is perfect.
(disclaimer: The above link is a referral which may provide financial gain for us, with referral rewards)

If you’re using another hosting company’s VPS, Dedicated server or VM, you might find a good bit of useful information here, especially the stuff past the Ionos setup and configuration stages. For initial hardening of an Ubuntu server, you might want to read this article here.

So, the first thing you’d need to do is create an Ionos account (presuming you’ll be ordering a VPS from Ionos), and then order your VPS. Like most hosting companies, you can create an account with your first order. I actually really like Ionos’ account pages and their provisioning and management interface. The one thing I do not like is having to use a customer ID to log in, but to each their own.

This is my first VPS with Ionos. Ionos was previously named 1&1, but has changed considerably since the merger and name change. We (my wife and business partner, and I) have a dedicated server from Ionos, which we’ve had for about 8-10 months now. It’s a solid server with no issues. I’m expecting the same from this VPS.

Well, that’s partially true. I ran into a snafu with provisioning ipv6 on this VPS, and resorted to a fresh install. Both times, I had Ubuntu 18.04 installed, because it’s what I’m comfortable with. I really like apt/apt-get, and some tools made by the Ubuntu team, and feel they’re better suited running on Ubuntu itself.

The issue with the IPv6 provisioning was actually not an issue with provisioning at all, but a misunderstanding about Ionos’ hardware firewall, which sits outside of the VPS. I failed to realize that their firewall was what was blocking my attempts at IPv6 connectivity. When re-imaging the VPS, I read the little pop-up, which stated something about firewalls – at that point, 2 hours of work were gone and I was face-desking pretty hard, because I knew that had been my issue all along.

So, step 3 (1 and 2 are above) is to create a new firewall configuration and, for the time being, allow all connections so the firewall isn’t an issue during setup. I will personally be taking advantage of the hardware firewall once I’ve got all my services provisioned and working; that way, if I run into any issues during setup or afterward, I can narrow down the cause. Some IS professionals would argue against skipping the firewall initially – they may well be right. After creating the configuration, you’ll need to assign it as the active firewall ruleset for the VPS. The Linux server does not have to be restarted for this. (Note: I use a software firewall within the Linux environment to restrict access to services, ports, etc. I’ve found UFW to be more than adequate for the job in lieu of raw IPTables, another software firewall for Linux.) The only other thing to mention here: if you plan to use IPv6 at all, you must manually set up an IPv6 address through the VPS management interface, and set up IPv6 firewall configs the same as for IPv4.

At this point, you should have an account and interface access for your hosting company, a VPS, and hardware firewall config(s).

Now, let’s get to it! Use your favorite SSH client to log into your fresh VPS. You’ll need the root password given to you by your hosting company, usually sent via email. Ionos, however, makes the freshly generated image password available on the management page. Pretty nifty! You should be able to connect with any SSH client over port 22/TCP: PuTTY, KiTTY (a fork of PuTTY), WinSSHTerm, the OpenSSH client built into Linux, or any other standards-compliant SSH client will work.

At this point, we’re through with Ionos, and everything here will be Ubuntu, if not GNU/Linux generally relevant.


Once logged in as root, type {passwd} and then enter your new password.
(Again, for a better start to hardening your server for security, read the "Linux SSH login – a good starting point", linked above)
At this point, it is advised to create a new user account, with sudoers access, with a new password, and then log out of the VPS as root and log in with the new account. We’re going to ignore this for the time being as everything we’re going to do first requires root/sudo access, and in the event that someone manages to get into your system before you’re done, it’s not too troublesome to reimage the VPS.


Before doing much else, you should run {apt update && apt upgrade}
This may (more than likely /will/) cause a kernel update, and will require a restart (shutdown -r now)
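A sketch of that update step, run as root: Ubuntu drops a flag file when an upgrade (such as a new kernel) needs a reboot, so you can check for it rather than always restarting.

```shell
# Refresh package lists and apply all pending upgrades.
apt update && apt upgrade
# Ubuntu creates this file when an upgrade requires a reboot.
if [ -f /var/run/reboot-required ]; then
    shutdown -r now
fi
```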


Now, let’s get some administrative things out of the way. Namely, hostname and fqdn (fully qualified domain name), additional utilities, and some software & services.
UFW – Uncomplicated Firewall, an easier-to-use firewall than IPTables. (IPTables has its place, but most don’t need that power.) {apt install ufw}
fail2ban – Intrusion mitigation software to ban access after N unsuccessful authentication attempts. {apt install fail2ban}
Linux PAM – Pluggable Authentication Module, part of most modern distros. Ensure it’s installed.

  • additional reading and consideration for libpam_shield and pam_tally2 for additional levels.

htop – a better hardware resource monitoring tool, with CPU, RAM and cache graphs, process list, etc. {apt install htop}

GNU Screen – a terminal multiplexer allowing easier management of long-running processes (tmux and fg/bg work too!) {apt install screen}

HAProxy – an HTTP(S) and TCP proxy, for routing connections (layers 4 & 7) to different ports and hosts. Not required, but useful {apt install haproxy}

HATop – a monitoring tool for HAProxy, requires reading documentation to use. {apt install hatop}

MariaDB – an enhanced fork of the MySQL database server. You’ll know if you require an SQL server. {apt install mariadb-server}

  • MariaDB setup will require you to have certain information available, and written down for later access. This can be done later in the overall setup process though.

Java – If you require a Java Virtual Machine (JVM), I highly suggest using Oracle’s JRE. This, however, requires adding an apt repository. Read more here to install Java 12 in Ubuntu!

  • If you choose not to use Oracle’s JRE, you can use OpenJDK, with a simple {apt install openjdk-11-jre-headless}

Hiawatha – a security-focused, lightweight (compared to Apache, anyway) web server. Requires the source tarball to install the latest version.

This should about do it for the additional software and utilities. At least as far as installation goes. Now, onto configuring hostname and fqdn!
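If you want everything available from the default repos in one shot, the apt installs above can be batched. (Package names assume Ubuntu 18.04; trim the list to what you actually need – Oracle Java and Hiawatha are installed separately, as noted.)

```shell
# Everything above that installs straight from the default Ubuntu repos.
PKGS="ufw fail2ban htop screen haproxy hatop mariadb-server"
echo "apt install -y $PKGS"   # drop the leading 'echo' to actually run it
```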

With time, change comes. Change is good, needed and wanted. Sometimes it isn’t. Sometimes older technology works just as well, or even better in some cases. There’s various ways to set your new server’s hostname. We’re going to use the tried and true method.

There’s a couple places to set hostname and fqdn.
/etc/hosts and /etc/hostname are two files, where changes will be made.

In hosts, you’ll add your public IP (the same IP you used to connect to the server via SSH) and the fqdn you wish to associate with that IP.
{12.345.67.89 blog.bluntaboutit.com}
This assigns blog.bluntaboutit.com to the IP 12.345.67.89 (fake IP, do not use!)
{ff02:816:f00d:3475::1 blog.bluntaboutit.com}
This assigns the fqdn to the provided IPv6 address (also fake)
With this, blog.bluntaboutit.com will connect to either the v4 or v6 address.

In the hostname file, you’ll add your hostname, which will appear after the @ in bash, as well as identify the machine on the network and other spaces.
This is a single simple string.

Make sure to use domains you actually own and can assign the IP addresses to in your DNS server. Otherwise you might find yourself in a heap of trouble, possibly even with your hosting provider.
Once you’ve edited your files, confirmed the data is correct, saved the files, confirmed the data is correct again, you can restart the VPS. This will solidify the settings and cause your server to use the new hostname and fqdn on start up. Another option, for temporarily setting the hostname is to use {hostname blog.bluntaboutit.com}

Now, I’m running this server as a POP server – a point of presence. It’s a server dedicated to running a reverse proxy (HAProxy), which users connect to and are forwarded from to the real server. This is due to the real-time nature of the connections. Having this server gives users in the region a more stable client-proxy connection than a direct client-server connection would. It adds a tiny bit of latency, but overall it’s more stable. The proxy-server connection runs through private infrastructure, and so is unencumbered by public traffic, with fewer hops. Ultimately, there is less latency in the client-proxy-server connection than in a direct client-server connection for most users in this server’s region of the world.

With that, I won’t be using MariaDB, Screen, Java, or Hiawatha. However, I will still be using UFW, fail2ban, PAM, SSH keys (for login), htop, HAProxy and HATop. The aforementioned software is noted mostly because these are things I would normally use on a server, for various reasons and to varying degrees. They may also be things which others forget to install at a more appropriate time, and so they’re listed as a reminder – just in case. Others may have different software they consider to be basic, and may want to add it to their list of initial-setup installables.


Now, there’s two firewalls you can use. I highly suggest using both your hosting provider’s hardware firewall, as well as UFW (Or IPTables, if you need the power it provides). UFW is super simple. But, there is a bit of a learning curve.

Setting up UFW:
Before you do ANYTHING with UFW (once you have it installed, that is) PLEASE do yourself a favor and add your ssh port.
{ufw allow 22/tcp}
This adds a rule to UFW to allow any connection (from inside or outside the private network) to reach the server on port 22 via TCP, on both the IPv4 and IPv6 addresses (if IPv6 is enabled on your server).
UFW is still very powerful, but for admins looking only to open/block ports/IPs/IP ranges to/from their server, UFW is the easier, and honestly safer choice. IPTables configs can become very complex and can easily be mis-configured to a point of failure. UFW has sanity checks on the commands run against it, and will hint at why the command wasn’t accepted.
If you have, say, a service listening on TCP port 25565, and want everyone in the world to connect to it, but only to your IPv4 address, you would run
{ufw allow from any proto tcp to 12.345.67.89 port 25565}
This will allow any IP address capable of routing to the server’s IP of 12.345.67.89 to connect to TCP port 25565. Likewise, to allow any IP to connect to v4 or v6 addresses, from anywhere, the command can be simplified to the level of SSH’s rule:
{ufw allow 25565/tcp}
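Put together, a minimal UFW bootstrap looks like the sketch below. One important detail the rules alone don’t show: UFW ships inactive, so none of the allow rules do anything until {ufw enable} is run – which is also why adding the SSH rule first matters.

```shell
# Minimal UFW bootstrap (run as root). Order matters: allow SSH before enabling!
ufw allow 22/tcp        # never lock yourself out
ufw allow 25565/tcp     # example service port from above
ufw enable              # activates the firewall (UFW is inactive by default)
ufw status verbose      # confirm the active rules
```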

UFW also provides firewall access to allow/deny/route in-bound and out-bound traffic on several protocols.

Setting up hardware firewalls:
You’ll need to find the docs for your hosting provider or your own hardware firewall in order to configure and use it. Because IONOS is still growing, anything I write here about their hardware firewall setup may become outmoded and useless as time goes on. Their documentation is pretty clear, however.

We’ll be using fail2ban as one of several layers to our unauthorized access mitigation solution.
Being that fail2ban has a lot of really good write-ups already, I’m going to have you read A2 Hosting’s instructions. I could copy and paste their instructions, or just the commands they use, but since I use their docs often, I might as well toss them some love!

I will make some notes, however:
"enabled = false" – This setting, on or near line 117 of the default config as of fail2ban 0.10.2, should NOT be changed globally, contrary to what A2 Hosting’s page indicates. Doing so will enable EVERY jail, causing fail2ban to fail to start… and ban. Instead, in the individual section for each jail you want (such as "[sshd]"), add the line "enabled = true" to enable that jail.

"ignoreip" – If you have either a jump-box or a static IP, then you would add that IP to this list, and uncomment it. Otherwise, relying on this to save you from failed logins can bite you in the behind if your IP does change – especially since someone else is now potentially white-listed on your server to attempt to brute-force it over time.

A jump-box is another server or VPS whose only, or primary, use is to SSH into, and then connect to other servers from. This can be achieved either by logging into the jump-box and then starting a new SSH session from there to the target server, or by means of automatic redirection (i.e. a reverse proxy).

If you do not specifically pay for a static IP, or have not specifically been told you have one (usually with business accounts), then you more than likely do not have a static IP, even if your IP hasn’t changed in 6+ months. You can test this by removing all power from your cable/DSL modem for an hour, and comparing the IP address(es) from before it was powered off and after it’s powered back on. Disconnecting the ISP’s wire (telephone wire, coax cable or fibre cable) for at least an hour will usually also work. Exchanging the modem for a new one will too – if you have a static IP, the new modem WILL have the same IP (unless your ISP sucks really bad).

"bantime" – The default and suggested value is 10 minutes. If you’re not afraid of locking yourself out (either because you’ve never failed to log in more than N times, or are OK with accessing the remote console), OR you’re OK with waiting that long before logging in again, you can set this MUCH higher. Otherwise, leaving it at 10 minutes is probably OK. I set this much higher.

"findtime" – This also defaults to 10 minutes. If this were set to 3 days, all login failures accumulated over 3 days would count towards the N tries. If you fail a login once a day on average, you can easily become banned if this is set too high. Usually script-kiddies will give up on a host if they’re banned quickly. This and the next setting together determine the formula of N tries per X time. At default: 5 tries per 10 minutes, which results in a 10 minute ban. This is plenty for those script kiddies, but a dedicated hacker will just take a snack break and try again. I prefer 3 tries per 5 minutes, with a much longer ban. But I’m comfortable using the remote console.

"maxretry" – Again, 5 is the default, and I set mine to 3. This is simply the number of failed authentication attempts allowed before the IP is banned.

Additionally, if you’re using UFW and want it to handle IP bans, change these keys to use ufw:
(ensure you have /etc/fail2ban/action.d/ufw.conf before relying on this!)
banaction = ufw
banaction_allports = ufw

As pointed out in the A2 Hosting write-up, there is a large selection of services fail2ban can monitor. Most of these settings are probably best left alone, unless you have a specific reason for changing them. Don’t forget to change "enabled" to "true" for each jail you want active!

Restart fail2ban service, and enjoy!
{service fail2ban restart} (and view status with {service fail2ban status} – ensure there’s no issues!)

Linux PAM has several config files, all of which are sensibly set by default. However, if you wish to take a look and make changes – at the risk of bricking your server – they’re in {/etc/pam.d/}. These can be used to fine-tune failure attempts. Be careful though, as you can easily negate fail2ban’s timings.

At this time, the server is minimally secured and ready to use. But it can still be brute-forced over time – just a much longer time. I highly suggest changing from password SSH authentication to using SSH key pairs. Having a password locked private key is also valuable to this, and should not be overlooked for convenience.
If, however, you REALLY wish to continue to use passwords, will never be using SSH, or for whatever reason are not able to use SSH keys, you can bypass this section – but I strongly urge you to reconsider.

Passwords are great for keeping the kids off your desktop, out of your game, and away from specific files. But they can be cracked. And with newer CPUs, cracking times are getting much shorter. This applies to file locks as well as account passwords. SSH keys too can eventually be cracked, as can the password on an SSH private key – but we’re adding layers of security, which helps greatly to mitigate intrusions! Hardware firewall -> UFW -> fail2ban -> PAM -> account name -> SSH keys -> private-key password. Lots of levels to get through.

Some additional methods to help mitigate intrusions include, but aren’t limited to: disabling root login over SSH, restricting accounts from services and sudo, changing the SSH port from 22 to something else, and requiring SSH access from (a) specific IP address(es) – do all that and you have yourself a pretty secure server. Denying all outbound connections, with exception to needed IPs/ports, can mitigate certain attacks, as well as keep malicious software from "phoning home". Disabling and/or uninstalling any unused services and software will limit attack vectors for exploits as well. There is also other software which can be installed, such as anti-malware software, spam filtering, and additional levels of authentication enforcement. It can get pretty crazy! Some systems need the protection.

Reminder: We’re still using the root account. At this point, it may be beneficial to create a new user account with sudo permissions to use as your administrative account (using an uncommon word for the account name), and a user account without sudo power for running anything that will never need root privileges to operate. Most software does not require sudo/root privileges to run. Remote console can be used for true root access, if ever needed – such as if your administrative account becomes locked or corrupted. As the only software I am running requires root access, I will be creating just a new user account with sudo power, and locking root out of SSH login.

Creating your first non-root administrative user account:
Let’s say your administrative account will be named stormbringer. (Let’s not use this name for accounts, ok?)
{adduser stormbringer}
This will prompt for additional information, starting with the new account’s password.
Next, the account must be added to the sudo group, which (should) give it sudo access:
{usermod -aG sudo stormbringer}
To test that the new user account functions, and that it has access to sudo:
{su stormbringer}
The bash prompt should now have replaced "root@" with "stormbringer@" – provided it does, do:
{sudo apt update}
This will prompt for stormbringer’s password. Enter it. This should update your apt cache. If it does, success! If not, go back up a few lines and try again.
To exit out of the stormbringer account back to root, simply do:
{exit}
The bash prompt will now read "root@"

SSH Keys are the key:
NOTE: Be sure to use your administrative user account (NOT root) when performing the below, unless you specifically need to allow the root user to have ssh key login authority.

We’re going to add SSH keys, and disable password login. For this, we need to generate a key pair on our VPS. This does a couple of things – one, it gives you a "master key" which can be put on your other servers for convenience, and be easily revoked by generating a replacement key pair in the event of a security breach or loss of key control. It also populates the file system with the needed directories. We’re also going to use a separate key pair which belongs to the admin. This allows changing the admin’s keys without affecting the other servers. It is good practice to replace distributed key pairs often. Having a separate key for the admin account also allows the admin to retain access to all servers when the server-specific keys are replaced.

SSH keys will not prevent the need to use a password for privilege escalation once logged in with an account with sudo power. Keys can, however be used to disallow password authentication on SSH login.

Since we’re going to be denying SSH login to the root user account, go ahead and log in to the server with your administrative account. It’s best to use a new SSH session for this account, leaving the root session open for the time being. This is so that if there are any issues with connecting via the administrative account, the root account can quickly be accessed to assess the issue, and fix it. All instructions from now on will be done using the administrative account, and NOT the root user account.

To begin, we’ll generate a server-specific key pair. Because this key pair will only ever be used for server-to-server communication, and getting to these keys is difficult, it can be seen as "mildly safe" to generate this pair without a password. In some instances, this can be safer, as scripts written to rsync data across an SSH connection must store the private key’s password in plain text (unless you want to get really into it and encrypt the password, which is beyond most people).
Let the keys be generated to the default provided path. This will make life easier for you. However, security by obscurity is still a thing, and changing this could be seen as obscurity. Unless you have reason to password protect this key pair, simply leave the password request empty.
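The generation itself is one command. A sketch – I use {-t rsa -b 4096} here because it matches the id_rsa filenames referenced later in this guide, though ed25519 is a fine modern alternative:

```shell
# Generate a 4096-bit RSA key pair. Press Enter to accept the default
# path (~/.ssh/id_rsa), and leave the passphrase empty for this
# server-only key (as discussed above).
ssh-keygen -t rsa -b 4096
```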
When the generation is complete, you will be given a nifty ascii art, followed by the bash prompt. Success!

To create an admin-specific key pair, the same instructions above can be followed, either on the same machine (which will overwrite the current key pair), on another Linux host, or with some other tool, such as PuTTY’s key pair generator.

Success! You’ve got an admin specific key! (I’m going to assume you figured out how to do this, because I literally already told you)

To grant access to the administrative account via the admin key, the admin key pair PUBLIC key needs to be added to the server/account.
Create, if it does not exist: /home/stormbringer/.ssh/authorized_keys (using vi, nano, etc)
or with {cp /home/stormbringer/.ssh/id_rsa.pub /home/stormbringer/.ssh/authorized_keys}
Add your public keys to authorized_keys file, including the server and all admin public keys.
Add a comment (using "#") to identify each public key’s owner. This will allow the admin to quickly select and remove expired/compromised/orphaned keys and those of ex-admins/users. Do NOT delete or alter the id_rsa.pub file, or you will lose half of your server-specific key pair.
Add new keys one per line, each taking up only a single line. If word-wrap is enabled, the strings will appear to span multiple lines; ensure they are in fact each on a single line.
Repeat this for each admin public key which needs to be added for access to the account.
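Permissions matter here – sshd will refuse to use the keys if ~/.ssh or authorized_keys is too open. A sketch of creating the file safely and appending a key (the key string and comment shown are placeholders):

```shell
mkdir -p ~/.ssh && chmod 700 ~/.ssh     # sshd requires the directory be private
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys        # and the file readable only by its owner
# Append a comment line identifying the owner, then the public key itself:
echo '# stormbringer laptop key' >> ~/.ssh/authorized_keys
echo 'ssh-rsa AAAAB3PLACEHOLDER stormbringer@laptop' >> ~/.ssh/authorized_keys
```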

In your SSH client, you will need to associate your private key with the server/user profile. This is done differently depending on OS and client. On Windows, PuTTY’s Pageant program will run in the background, and require a password to unlock the private key, but will provide the key to many Windows based SSH clients, including PuTTY and WinSSHTerm v2.

Create a new, (if counting, third) SSH session. This time, using the administrative account (stormbringer, in my example here) – and if everything went right, the connection should quickly complete without requiring a password to be entered. (No cheating here, don’t add your password to an auto-fill script!) Once the administrative account is logged in using SSH keys, SUCCESS! Now we can move to disabling password authentication.

(At this point, you should be able to log in with your administrative account, using only SSH keys, and be able to use sudo to run programs, which will still require the administrative account’s password. If you cannot do these three things in this manner, you should review the instructions, or seek real-time/live assistance)

Now that you can log in using SSH keys, let’s get rid of that gaping security hole known as "password authentication"! While doing this, there are some additional changes to the SSHD config that can be made. I’ll go over some of them here:
Edit this file with {sudo nano /etc/ssh/sshd_config} (replace nano with vi if you’re hardcore, or old school)
Being that the administrative account is being used, sudo is now required to alter system level config files.

Note: Towards the top of this file are some config options which can be changed, such as the port SSH listens on, and IPv4/6 addresses. Be sure to make appropriate changes in the hardware firewall and UFW/IPTables if these are changed!

These options are commented out, but their default values are still in effect; changing one requires uncommenting its line. I also uncomment lines which I use but am leaving at their default values, just as a visual aid when editing the file at a later time.
  • LoginGraceTime – default is 2 minutes before the session will time out with no input. This can be changed to 30s when using SSH keys, unless very latent network connections are expected. Leaving this at default is also fine.
  • PermitRootLogin – default is "prohibit-password" or "yes" – change this to "no" to completely disable root SSH login. Leaving this as "prohibit-password" will allow the use of SSH keys to log in to the server as root. I will be setting this to "no".
  • MaxAuthTries – default is 6, I prefer 4. fail2ban should kick in at 3, but just in case.
  • MaxSessions – default is 10. That’s a lot of sessions for a server with 99% no SSH usage at all. File servers accepting rsync over SSH may require more, however.
  • * PubkeyAuthentication – default is yes, and commented out. This can be left alone, or uncommented for visual aid, or paranoia reasons.
  • * PasswordAuthentication – default is yes. We’re changing this to "no" to prevent password attempts.
  • * PermitEmptyPasswords – default is no. I uncomment anyways, even though the setting is nullified by the setting above.
  • * ChallengeResponseAuthentication – default is yes. This can still allow brute-force password attacks. We’ll uncomment and set to "no".
  • * UsePAM – default is yes. We’ll keep this uncommented and set to "yes" – this allows for less complex client setups.
  • X11Forwarding – default is yes. This is a server, what’s a GUI? Set this to "no".
  • PrintMotd – default is no (at least on my IONOS Ubuntu 18.04 image) – this can be changed to provide various info/data on login.
  • Banner – default is commented out and set to "none" – I want a nice banner I can grin at on login. I’m setting it to "/home/stormbringer/ssh-banner".

    • The ssh-banner file won’t exist yet; it needs to be created. This is where some nice ASCII art, or a big "NO TRESPASSING" sign, can be stored for display.

The settings marked with * are ones we’re concerned with, regarding security and hardening the server. The rest are fluff and ancillary.
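For reference, the resulting changes might look like this in /etc/ssh/sshd_config. A sketch only – the banner path is my example, and your distribution’s defaults may differ, so verify each line against your own file:

```
# /etc/ssh/sshd_config – hardened excerpt (sketch)
PermitRootLogin no
MaxAuthTries 4
PubkeyAuthentication yes
PasswordAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication no
UsePAM yes
X11Forwarding no
Banner /home/stormbringer/ssh-banner
```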
Now here’s something tricky. My config, at the very bottom, has "PermitRootLogin yes" and "PasswordAuthentication yes" – both uncommented. This would negate our previous settings. Ensure your file does not have duplicate entries. Review the file after restart as well, just in case something is messing with things.

Save your file. If you used sudo to run the editor, you should have no problem saving. If you cannot save, you didn’t sudo. In that case, copy the contents, or save to an alternative location in the account’s home directory, then close the editor – and either sudo cp the saved file to the proper location, or re-open the proper file WITH sudo.

(If you want a banner, create the file you specified, with at least a word, so it will exist and not potentially cause issues with the config, or login)

Now, we need to test our config the best way we can – by putting it into use with the SSH service.
{sudo service ssh restart} and enter the account’s password. This may cause your connection to reset.
Create a new SSH session (counting still? Number 4) to ensure login is still possible. If not, you’ll need to fix your config. If your connection was reset, you’ll need to fix your config /using remote console/ – which can be a pain. If you were able to log in (and see the text from the ssh-banner file, if you created one), Success!

Now, go ahead and start your 5th SSH session, this time, using the root user. You may receive the text from ssh-banner file, but then be disconnected with a "no supported authentication methods available" message. If you, like I, do not want root to be able to log in with SSH – SUCCESS! Go ahead and close all but the original root user session and one of the administrative sessions.
At this time, it may be prudent to test {sudo apt update} and {su -} (from the administrative account)
Sudo will require the administrative account’s password. "su -" on the other hand should require the root user account password to access. Sudo should be enough for most things, however in rare cases, the actual root user account may need to be utilized to gain access to portions of the system or services.

Congratulations! You’ve made it to the end. Your reward? A wonderfully secure server. At least to what I consider to be a basic level of security!

Did we forget about the hardware firewall? Nope! (ok, well, maybe a little.)
By now you should know if you’re running IPv4 and/or IPv6, and what address(es) will be utilized for what purposes. You should also know at least some of the ports your services and software will be listening on. The hardware firewall configuration should mirror (at least mostly) the rules for allowed ports in UFW. There may be instances where UFW may have more open ports than the hardware firewall. This would be due to allowing monitoring services from your hosting provider, connections to/from other servers on the LAN/private network, or maybe other reasons, such as future use. The hardware firewall should never have any ports opened which are not explicitly in use on the server. Open ports are open attack vectors for exploits, Denial-of-service attacks, and other nefarious things. UFW can block a lot, but it uses server resources to do so. A flood of connections (DDoS) not mitigated by the hardware firewall can potentially overwhelm the Linux server, causing a crash, exploit, or full intrusion.

Here’s a starter kit of useful commands you can perform to inspect your server:
{df -h} (disk filesystem, human readable) will display the amount of space allocated, used and free on your drives, and where each portion is mounted. Generally "/" will be the most used, and largest, partition. It’s also the partition that can be eaten up by extraneous software installs and file storage in users’ /home/ directories.
{du -h} (or {du -sh}) (disk usage, human readable (summarized)) You can specify a directory to see how large that directory and its contents are.
{free -h} (available memory resources, again human readable) This shows some quality stats about your RAM and cache.
{lscpu} (list CPU information) This shows information about the CPU as reported to Linux via the hypervisor from the hardware. Modern VPSs generally have accurate info.
{lspci} (list PCI information) Not too horribly useful for VPSs, but can provide critical data on dedicated servers.
{jobs}/{fg}/{bg} – If you’ve ever ghosted a program, where it’s still open but can’t be accessed, try these commands.
{htop} – nice system monitoring tool, with colors! It can also be used to sort, filter, and kill processes interactively.

Some of these commands give good info without sudo. Some will give more info when run as root or via sudo.
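As a quick worked example with {du}, this one-liner finds the largest items under a directory – handy when "/" is mysteriously filling up. The path is an example; point it wherever you suspect the bloat is:

```shell
# Summarize each top-level item under /var, sort by human-readable size,
# and show the five largest offenders.
du -sh /var/* 2>/dev/null | sort -h | tail -n 5
```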

This is a baseline of a good start to a fresh Linux install. Obviously, there’s many many many more things that can be done with a Linux server, in terms of use, and security both. But this should provide more than a firm footing for any new Linux server.

I, the author, and BluntAboutIT.com take NO responsibility for any loss of data, access, sanity or finances resulting from the failed (or successful) following of this guide. It is a GUIDE, not a set of axioms. Every admin should fully know, understand and carefully choose the routes they take with their servers, as well as with any and all configurations, software, etc. This guide is here for two reasons only: To help the education process for those who need a bit of help getting started, and for myself, so I have a "check list" of sorts when provisioning new servers. What works for me may not work for you, either technically or functionally. You’ve been warned.