Thursday, June 18, 2020

Creating my OWASIS - Part 1 (Setting the stage)


During the second half of 2007, I did a brief stint at a bank - which was exactly the wrong time to be starting a career at a bank. As you may recall, this was right when the mortgage crisis was beginning to unfold, and it lasted into 2009. While I was only there for 6 months to the day, I got to watch the stock value plummet to ~10% of what it was when I started. Within a couple of years after I left, the bank was purchased by another bank and MANY people lost tons of money on the deal.

How is any of this relevant to my next story?

I was originally brought in (along with one other person) to be part of a new group within the Middleware Management and Support team, specifically to do research and development into new uses for WebSphere Application Servers. My role was to assist with the development of systems that would improve the throughput of EFTs (Electronic Funds Transfers) between the mainframe and the downstream ATMs and Web endpoints. However, just as I was about to join the team, this project was put on hold because the IT Architecture team decided to begin development using WebLogic instead. With that, my role immediately changed from building and testing WebSphere in new applications to simply maintaining the existing WebSphere systems like the rest of the team. The problem was that there was already a team of people supporting the existing WebSphere environment, and between the shift in technology focus away from that team and the sudden downturn of the stock market, they were reluctant to show the new guys the ropes.

Humans have an innate sense of impending doom which fires up long before the rational part of the brain realizes what is happening. This then engages the fight-or-flight response in order to preserve oneself. The way this manifested within my team was by relegating me and the other new hire to the most basic of tasks, and barely lifting a finger to get us pointed in the right direction. They were afraid that training us would train themselves out of their jobs; in hindsight, they were probably correct.

I was literally given one real, personally-assigned project to work on independently, and this was to create an inventory of the existing WebSphere Application Servers. Mind you, the WebSphere servers numbered in the hundreds, and people had long since lost track of what they all were, which ones were still in use, and basically what was even still powered on. Being the resourceful type, and also BORED OUT OF MY MIND, I decided to think of ways that I might be able to automate the process and - most importantly - save myself time if I ever had to do this again.

I've always said that the best programmers I've known are the laziest. This may sound counterintuitive on the surface, but it is precisely because they are lazy that they seek out ways to avoid performing repetitive tasks. Logging into hundreds of servers, running a command to see if WebSphere is installed, and then documenting the version number is the pinnacle of repetitive tasks.
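
Just to give a sense of the kind of automation I had in mind (the actual approach is the subject of Part 2), a rough sketch might look something like the loop below. The host list, the WebSphere install path, and the output parsing are all placeholders for illustration, not what the bank actually had in place:

    #!/bin/bash
    # Rough sketch only: poll each host over SSH and record whether WebSphere
    # appears to be installed and, if so, which version it reports.
    # WAS_BIN and hosts.txt are illustrative placeholders.
    WAS_BIN="/opt/IBM/WebSphere/AppServer/bin"
    while read -r host; do
        version=$(ssh -o ConnectTimeout=5 "$host" \
            "$WAS_BIN/versionInfo.sh 2>/dev/null | grep '^Version'" 2>/dev/null | head -1)
        echo "${host},${version:-not installed or unreachable}"
    done < hosts.txt > websphere_inventory.csv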

Fortunately, this assignment (which I assume was just intended as busy-work to keep me occupied anyway) came with no instructions for how they wanted it done, nor a deadline for when it had to be finished. And so, I took this as an opportunity to learn some new skills and create the best damn inventory process possible.

Continued in Part 2...

Monday, June 8, 2020

Kickstarting Knoppix


Those of you who were interested in running Linux in the early to mid-2000s ("20-aughts") may remember a clever distribution called Knoppix that allowed you to run Linux on any computer using a bootable CD or DVD. This distribution formed the basis of several others: Helix - used for computer forensics; Kanotix - which added a feature to perform hard drive installs of Knoppix; and the still-popular Kali (originally BackTrack) - which is widely adopted by penetration testers.

The beauty of Knoppix, as mentioned earlier, was that it could run fairly reliably on almost any equipment of that era. This made it an attractive option in cases where it was difficult to predict what equipment you might need to run it on. For this reason, I wondered if it would be a beneficial option for Disaster Recovery processes. The specific problem I was looking to solve was: "how do we start rebuilding our many RedHat servers from bare metal in as efficient a manner as possible?"

RedHat developed a utility called KickStart that was essentially a local, network-based RedHat distribution mirror. In it, you could store a customized set of RPMs along with other packages and programs that you wanted to deploy onto your new servers. For general administration purposes, we set up a KickStart server so we could build new servers from scratch fairly easily. And because it resided on the local network, we never had to worry about download speeds; the bandwidth limitation was just the NICs on the servers and the switches within the data center. This worked great on the equipment we had in place; however, it did not work as well in a disaster recovery setting, as RedHat had a tendency to be fussy when you tried to restore an image onto a different model of hardware. Based on the contract we had with our vendor, we were guaranteed the same or better grade of equipment, but not identical equipment (which would have been much more expensive). This created complications with restoring servers from backups in general, let alone restoring our KickStart server. We needed a way to bring up the KickStart server at the time of recovery without having to fix a bunch of driver issues first, so that we could expediently build out the other servers.
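
For anyone who hasn't worked with KickStart before, the heart of it is a single config file that answers all of the installer's questions up front. A stripped-down example - with made-up addresses, passwords, and partitioning choices, not our actual config - looks roughly like this:

    # Illustrative only - the URL, password hash, and partitioning are placeholders.
    cat > ks.cfg <<'EOF'
    install
    url --url http://192.168.100.10/kickstart/rhel/
    lang en_US.UTF-8
    keyboard us
    network --bootproto dhcp
    rootpw --iscrypted $1$changeme$xxxxxxxxxxxxxxxxxxxxxx
    firewall --disabled
    timezone America/New_York
    bootloader --location=mbr
    clearpart --all --initlabel
    autopart
    reboot
    %packages
    @ Base
    EOF

New machines point at a file like this at install time (for example, by passing ks=http://<server>/ks.cfg at the installer boot prompt), and the rest of the build runs hands-off.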

Enter Knoppix.

Even though Knoppix was based on Debian, which is significantly different from RedHat, the underlying technology (the kernel, runtime environments, etc.) was very much the same. Since KickStart runs off of a web server, all I needed to do was install and enable Apache on Knoppix and configure a site that referenced the location where I would store the RPMs and the KickStart config file. I then made sure Apache came up automatically at boot by updating the init.d scripts.
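
The Apache setup itself was nothing fancy. Reconstructing it from memory - the package names and paths here are approximate and depend on the Knoppix/Debian release - it amounted to something like:

    # Approximate reconstruction - package names and paths may differ by release.
    apt-get update && apt-get install apache2

    # Drop the KickStart tree (RPMs + ks.cfg) somewhere Apache can serve it.
    mkdir -p /var/www/kickstart
    cp -r /path/to/rhel-rpms /var/www/kickstart/rhel
    cp ks.cfg /var/www/kickstart/

    # Wire up the init.d/rc symlinks so Apache starts automatically at boot.
    update-rc.d apache2 defaults
    /etc/init.d/apache2 start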

Knoppix provides a process by which you can persist changes made to the Knoppix environment and then save the result as a new ISO image. After doing this and confirming with a new test disk that it would boot and run correctly, I ran into my next hurdle - storage.
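
The details shifted a bit between Knoppix releases, but the gist of the process, per the remastering guides of the day, was to recompress the modified filesystem and then wrap it back into a bootable ISO - roughly along these lines, with the directory names being placeholders for your working copies:

    # Rough outline of a Knoppix remaster (source/ and master/ are placeholders).
    # 1. Recompress the modified filesystem into a new KNOPPIX cloop image.
    mkisofs -R -U -V "KNOPPIX filesystem" -hide-rr-moved -cache-inodes -no-bak \
        -pad source/KNOPPIX \
        | create_compressed_fs - 65536 > master/KNOPPIX/KNOPPIX

    # 2. Build the bootable ISO around it.
    mkisofs -pad -l -r -J -v -V "KNOPPIX" -no-emul-boot -boot-load-size 4 \
        -boot-info-table -b boot/isolinux/isolinux.bin -c boot/isolinux/boot.cat \
        -hide-rr-moved -o knoppix-kickstart.iso master/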

A standard high-density CD has roughly 650MB of usable storage capacity, and the Knoppix base image took up nearly all of it, leaving barely any room for dropping in the set of RPMs we needed. Fortunately, some of the newer releases of Knoppix at the time (version 4.0.2, specifically) could also run from a DVD. Unfortunately, Knoppix was taking advantage of this added space with a bunch of additional programs that we certainly did not need. So I got to work, removing and uninstalling as many programs as I could while keeping the minimum necessary to bring up the KickStart server. I had to make a few concessions - for example, KDE was WAY too bloated for our needs, but we still needed an X Window System environment, so I replaced the KDE desktop with Blackbox, keeping just the KDM display manager on the backend. Doing so freed up just enough space to fit everything.
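
The trimming itself was just a long session of removing packages and checking how much space came back after each round. Something in the spirit of the following - the package names are examples rather than the exact list I removed:

    # Inside the remaster environment - package names are examples only.
    apt-get remove --purge openoffice.org kdegames kdemultimedia kdeedu kdegraphics
    apt-get install blackbox    # lightweight window manager in place of the KDE desktop
    # kdm (the login/display manager) stays installed so X still comes up cleanly.
    apt-get clean               # clear the package cache to reclaim space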

One last configuration change I made before packing it up and burning it to disk was to set the network interface to use a static IP. That way, once the KickStart server boots up, we can stand up the new servers on the same network, initiate the KickStart install process using a network or PXE install, and off it goes!
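
On the Debian/Knoppix side, that boiled down to hard-coding the interface in /etc/network/interfaces instead of relying on DHCP - the addresses below are placeholders for whatever the recovery network uses:

    # Placeholders - substitute the addresses used on the recovery network.
    cat > /etc/network/interfaces <<'EOF'
    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet static
        address 192.168.100.10
        netmask 255.255.255.0
        gateway 192.168.100.1
    EOF

With the address baked into the image, the new servers always know exactly where to find the KickStart tree.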

Having this option available in the event of a disaster gave us assurances that we could quickly and easily bring up a KickStart server which could then be used to perform bare metal installs of all of the servers within our environment.