Friday, January 8, 2021

Cover Letter Advice for InfoSec Job Seekers



Let’s throw it out there: cover letters are not fun. They are so *not* fun that many places make them optional, and most people try to avoid writing them. I get it. The entire exercise harkens back to those silly and revolting 5-paragraph essays you had to write in school.

In addition, you have to talk about yourself (an act that many people struggle with), you don’t really know what to say, and you feel like the stakes couldn’t be higher - that you’ll say the wrong thing and your application will end up in the trash.

What if I told you that not only are cover letters an important part of the job application, but writing a good one can actually do far more good than even a solid resume? And yes, for certain jobs, writing a bad one could be a severe mistake, but if you follow the advice laid out here, you needn’t worry about that.

In this post, we'll cover the following:
  • Why cover letters are important (and for what jobs they are most important)
  • What hiring managers look for in a good cover letter
  • Strategies for writing compelling ones

Why is a cover letter important?

  • All cover letters are important, and some cover letters are more important than others

  • The cover letter is important to employers because it shows them:

    • Whether you are capable of formulating sentences into coherent thoughts

    • Your attention to detail, based on the number of typos and mistakes it contains

    • How much interest you have in a job based on how much effort you put into writing a decent one

  • The cover letter should be important to you because it is:

    • Your first good opportunity to show genuine, personal interest in a job

    • A way to put your communication skills on display

      • And for those of you concerned about masking any communication weaknesses, keep reading

    • A chance to provide additional color and descriptions to best highlight your previous achievements

  • For certain jobs, a well-written cover letter is critical! Examples include:

    • CISOs, Directors, Managers, Team Leads

      • Anything related to management is a definite; as a manager, you have to be able to communicate in writing, often to non-technical audiences, be it senior leadership or folks in business units

    • Penetration Testers

      • What? It’s not all about popping shells and breaking shit? Absolutely not!

      • Writing skills are vital for respected pen testers, because as much as you’d like to think you are paid just to break into stuff, you are actually paid to deliver a report; and the more people pay for your skills, the more they are expecting a damn good report they can present to leadership

    • Auditors

      • Just about everything they do involves writing

    • Incident Response and Forensic Analysts

      • They have to document every single activity they perform in case the investigation leads to evidence that needs to be admissible in court


What are hiring managers looking for?

  • Trust me when I say most hiring managers dread reading cover letters as much as, if not more than, you dread writing them

    • They read an abundance of terribly written ones

    • They read ones that were clearly written for a different job

      • Or worse: the person copied it directly from an internet example, slapped in the company name, the job title, and maybe the name of the person to whom they are directing it, and called it a day

    • They can tell right away how much interest you have in the job by how much effort you put into your cover letter; and this is why bad ones are REALLY bad

  • Here are the things hiring managers want to see

    • Your ability to write - bottom line, that’s what these are all about

    • Something unique about you that sets you apart from the other candidates

    • Whether you paid any attention to the job description and identified points that align to that particular job

    • A sense that you have genuine interest in the job and/or company

  • Let’s dig a bit further into each of these

    • Your ability to write, aka basic grammar and writing skills, which means:

      • No Typos - proofreading and eliminating misspellings is essential

        • Particularly, make sure you correctly spell the company’s name, the hiring manager’s name, and the title of the position

        • Carefully examine words that spell check may not catch, such as its/it’s, they’re/their/there, and two/too/to

      • Missing or gratuitous punctuation - the most common mistake is overusing commas

      • Overuse of a thesaurus

        • If you occasionally have to search for another word for something that you’ve already used a couple times in your letter, that’s fine, but don’t pick a terribly obscure word, and don’t omit common words in favor of “bigger” words just to sound more impressive - it’s not

      • Avoid cultural slang and cliches

        • Americans tend to incorporate informal sayings and cliches into their writing, and you cannot guarantee that everyone who will read your cover letter will be familiar with slang and cliche phrases

    • Something unique about you that sets you apart

      • There are various aspects you might include

        • Your approach to solving problems

        • How you engage with other company stakeholders

        • Ways that you have helped instill a culture of security awareness

        • Programs you’ve written (original code) or applied in innovative ways

      • You might also consider hobbies or personal interests - particularly if you can show their relevance to this job opportunity

    • Showing interest in the job and familiarity with the job description

      • Your cover letter should point to one or more tangible examples of how you align with the job posting

        • Specific technology experiences

        • A project you led

        • Incidents you helped resolve (omit protected info of course)

    • Your cover letter should convey tangible interest in the job

      • Outside of the pay, is there something you can point to as a genuine area of interest?

      • Will this role permit you opportunities to learn or gain skills of particular interest?

      • Is there anything attractive about the company as a whole?


What strategies work best in cover letters?

  • So now that you’ve given thought to the general content, the last step is putting it together into a cohesive unit. In this final section we’ll touch on the following:

    • Guide to basic formatting

    • Cover letter pitfalls to avoid

    • Tips on closing out the cover letter (many people struggle with this)

  • Guide to basic formatting

    • I will link to a couple of examples - as mentioned in the previous section though, do not cut and paste these. Use these as guides not templates

    • The important components are:

      • Your full name and contact info first

      • The date you’re submitting the cover letter

        • Remember to update this if you are starting with a previous one

      • Address the cover letter to the intended recipient

        • If you know the recipient’s name, use it with their proper prefixes (absolutely research this)

          • Mr., Mrs., Ms., Dr., etc.

        • If you aren’t 100% certain on this, simply “Dear Hiring Manager” works fine

        • One of the worst mistakes you can make here is using the wrong prefix - it’s potentially as bad as using the wrong pronoun

      • The body of the cover letter, which should include:

        • The title of the position to which you’re applying

        • What interests you in the position

        • Call out a couple specifics around how your experiences would make you a good fit for the position

          • Perhaps dedicate two short paragraphs to this

        • Include something that is uniquely you

          • A success, experience, interest, hobby - something that another applicant is unlikely to have in common

        • Close it out with confidence (not arrogance)

          • More on this below

        • Sign off

          • A simple “Sincerely,” and full name is fine

          • An actual signature is nice but not necessary

  • Pitfalls to avoid in cover letters

    • Restatements of your resume

      • They already have your resume; this is your opportunity to tell them something that either doesn’t fit on your resume, or formats better in writing than as a bullet on a resume

    • Errors, typos, misspellings, etc.

      • Yes - I mentioned this above

      • Yes - it is important enough to be repeated

    • Sharing your life history

      • If there is a particular aspect of your history that is relevant and important to this role, by all means mention it, but they don’t want to read a biography

    • Speaking in negative terms

      • This isn’t the place to discuss bad experiences in previous employment

      • The only exception would be maybe touching on why you are changing jobs after a short time (anything less than a year tends to raise questions)

        • Tread carefully with this

    • Making assumptions about the position or inflating the role

      • Speak to the elements of the job that are included in the description for sure, but don’t try to impress them with how you’ll turn an entry level job into an executive leader of the company

    • Embellishing the importance of mundane experiences

      • So you don’t have a “Wow” experience to include - that’s fine; focus instead on your approach to the job and your interests

      • No one wants to hear how you revolutionized technology at your previous company by enabling a feature in an application

  • Closing out the cover letter

    • You don’t need to summarize all the previous points in the closing piece

    • It’s good to briefly restate your interest in the position/company

    • The last bit (one sentence, two max) should show confidence - things like:

      • Look forward to next steps, sharing vision, contributing value, etc

    • Focus on the future and what you’ll do for them rather than what you’ve done in the past

Best of luck to all of you on your job searches!


Sunday, September 20, 2020

Resume Advice for InfoSec Job Seekers

 

  • Keep it short and simple
    • Even for people who have worked in the industry for a decade or more, a 2-page resume (one piece of paper, front and back) should be attainable
  • Review and update your resume for every job you apply for
    • People who have the most success with job applications make sure all of the documents they provide (resume, cover letter, references, and job application responses) are crafted and targeted to the position for which they are applying
  • Job experiences should be relevant
    • This does not mean that only InfoSec experience counts; this means each experience should be angled towards showing how you gained or applied InfoSec-related skills within that experience
    • As you prep your resume for each job you’re applying for, think about previous experiences in terms of how they relate to the prospective role
  • Each experience should answer the question: What was your individual contribution?
    • Saying you participated in a project or were on a team is fine, but do not forget to highlight what your specific contributions were to those projects or teams
    • If you collaborated with a couple other people on a single task, focus on the elements you provided
  • Ditch the generic “career goals” section
    • The operative word here is generic. If you are passionate about something and can make this sound like a personal mission that is important to you and is uniquely you, then leave it in
    • If all you have to say is you “want to get a job in InfoSec, hack all the things, and protect stuff,” then at best it’s not doing anything to help you stand out, and at worst, it shows you’re just like everyone else who wants to work in InfoSec
    • Instead - use that real estate on your resume to say something that helps you stand out. Talk about something uniquely you - a group you founded, a tool/script/program you created, a policy/strategy/marketing campaign you came up with, or a personal philosophy that explains your approach to InfoSec
  • Streamline your technical skills, and focus on what’s important
    • It’s 2020, people, and it’s fair to assume everyone has at least passing knowledge of how to use Microsoft Office products. Unless the job explicitly mentions you need to be proficient in Word and Excel, there is no reason to list them
    • Unless the job says Windows or Mac experience is required, take them off
    • Caveat: If a job expects that you are proficient in a specific operating system and can perform command line scripting (as an example), something in your resume should highlight your experience in that area
  • Avoid inflating promotions or title changes into multiple positions
    • Sure, if these were distinctly different roles within the same organization, list them and touch on those unique experiences
    • If you were basically doing the same job the entire time, and had some title changes along the way, either pick the most current one and attach all of your experience to that one, or list all of the titles but consolidate them to a single collective experience
  • Avoid doxxing yourself through your resume
    • If you are posting your resume on sites like LinkedIn or Glassdoor so it can be viewed publicly, you probably don’t want to include your home address and personal cell phone number
    • Keep multiple versions of your resume if you have to - one that you use for public display that says “Contact info available on request” or that just displays an email address; and one that has the rest of the details that you would include with job applications or provide to recruiters
  • Do not put your date of birth or Social Security Number on your resume
    • <Sigh> Just. Don’t.
  • Scale back the details of your education based on your work experience
    • If you are applying for your first job or (especially) an internship, the company may specifically want to know your GPA, otherwise it’s not necessary
    • If you’ve been working in the industry for a number of years, then GPA and graduation year are probably both unnecessary
    • In all cases though, do include the school you attended - large or small. This can become a conversation piece in unexpected ways, and that’s a good thing
  • Keep references separate from your resume
    • This helps to conserve space on your resume and lets you decide when to provide them (and gives you more control over which references you provide when they may be contacted)
  • Highlight volunteer work, regardless of whether it is related to InfoSec
    • This shows involvement outside of work and your desire to give back to the community
  • Link to any InfoSec work you do on your own time
    • A great way to do this is to start a blog, which can serve as a supplement to your resume
    • If you maintain an active GitHub of personal work, include a link to that as well
  • Check out my tips for creating cover letters (I still need to pare it down a bit)
  • Other suggestions - less critical than the aforementioned ones
    • Create a designer, one-page resume that focuses more on keywords and an eye-catching layout, in contrast to a more traditional resume
      • This is a good one to carry with you and hand out at career fairs or conferences
    • Include a section for groups you participate in outside of work and/or hobbies
      • This could also contain memberships to professional organizations
    • Job applications should not just be a copy and paste of your resume
      • While it’s certainly more work, you don’t want to miss the opportunity to share additional information about your work experiences
      • One strategy for this could be to emphasize keywords in the job application, and emphasize work experience in the resume
    • Include your social media accounts if they are suitable for professional purposes
      • LinkedIn and Twitter are the typical ones used for this purpose
    • Mention some of the learning opportunities or other activities you have pursued on your own time
      • Local or virtual conferences attended, online classes or other self-taught efforts are all good to mention


Thursday, July 30, 2020

Clearing the Queue


After my brief stint with the bank and watching the financial and housing markets crumble, I returned to the university. While the bank had the bad fortune of continuing to tank after I left (I should point out, I had nothing to do with this), I had the good fortune of being offered a lead position on the university's web presence team. One benefit of the position was that I had some latitude as to what my specific role would be.

After meeting the other folks on the team and listening to their challenges, three specific problems emerged as priority items:
    1. They wanted to get a handle on the intake of new requests and improve the management of the work in general
    2. They were looking for enhancements to their business continuity and disaster recovery processes
    3. They needed to improve the stability of the website's backend services running ColdFusion (yes, in 2007, people still ran ColdFusion)
All of these were clearly important issues to tackle, and I'm pleased to say we did address all of them, but for the purpose of this discussion I'm going to focus on the 3rd issue, as it was the one that altered the way I approached future problems.

ColdFusion provides a number of services to websites, including scripting, database functionality, server clustering and task queues. It handled much of this functionality very well; however, as the web applications grew in size and complexity, ColdFusion would not always scale properly. For us, this presented itself as the services freezing and webpages no longer displaying updates. For the most part, the pages would still render, but new content would get hung up between the submission process and the back-end update process. As a result, we would receive calls that content was not displaying properly, and then we would "fix" the problem by restarting the ColdFusion services.

One attempt at proactively "solving" this problem prior to my arrival was to create scheduled tasks in the OS to restart the services automatically every hour, with the two servers in the cluster set to restart a half hour apart. This quelled the problem well enough for a while, but not long after I arrived, some additional problems started to arise from this. A residual effect of these restarts was that the task queue would collect events that may or may not release properly when the services came back up. So over time, this queue would fill up with events that would overrun the memory pool, which in turn caused everything to hang. To resolve this issue, an administrator had to go in and manually clear the queue log - essentially deleting the hung events.

Initially, this was happening once a week or so, but as time went on, it would happen more and more frequently. By the point it was happening about once a day, we knew we needed a better solution than waiting for a phone call to tell us the queue needed to be cleared out.

The initial solution we arrived at was to see if there was a way to programmatically monitor the queue and watch for the number creeping up. When everything was functioning properly, there should be anywhere from a few events to maybe 100 events if a bunch of people were submitting changes at the same time. Everything would function just fine, though, until there were 1,000 or more events. So we built an ASP.Net app to render a simple graphic that displayed green, yellow, red, or purple based on the number of events. Any time we saw it go red, we knew we needed to go in and clear the queue. So the first step was monitoring the queue on screen.

After running this for a bit, and confirming that it was working correctly, we added a function that would send an email alert as soon as the queue hit red. This way we could be alerted after hours without having to manually keep an eye on things. This at least gave us some freedom from having to check the screen several times a day to see how it was doing. Since it was an ASP.Net app, we could at least check it from a cell phone easily. The second step to this process was proactively sending alerts.

Once we got to this point, I asked the question - is there a way to clear the queue without having to log into the console to do it manually? After some research, we discovered that we could indeed call a function from ASP.Net to clear the queue. We added this function to the app we created and put the logic behind a button on screen, such that when we got an alert we could just pull up the app on whatever computer we were near, including our cell phones, and click the button to clear the queue. This was fantastic on multiple levels, as it was far less work for us and could be done easily wherever we were. This way, instead of one of the administrators always having to hop on their computer to resolve the issue, we were able to delegate the fix to anyone. We wrote very simple instructions that amounted to "If the screen is red, click the button." The third step to this process was to simplify the process programmatically.

The final step in our process came rather naturally. We had a button we could push whenever we needed to fix the problem, and we were getting alerts whenever the problem occurred. All we had to do at this point was join the two processes together - whenever it went to send an alert, it would also call the function to clear the queue. In theory then, by the time we got the alert and checked the app, the problem should have already gone away. Once we implemented this step, this specific problem was fully mitigated and virtually eliminated. This last step to the process was automation.
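
To make this concrete, here is a rough sketch of the combined check-alert-clear logic. The real tool was an ASP.Net app talking to ColdFusion, so treat this Perl version purely as an illustration - the threshold values and the queue_depth, send_alert, and clear_queue helpers below are placeholders, not the code we actually ran.

    #!/usr/bin/perl
    # Illustrative sketch only - the original was an ASP.Net app, and these
    # helper routines and thresholds are stand-ins, not the real implementation.
    use strict;
    use warnings;

    sub queue_depth { return 1200 }                          # e.g. query the admin interface or parse a log
    sub send_alert  { my ($msg) = @_; print "ALERT: $msg\n" }
    sub clear_queue { print "Clearing hung events from the task queue\n" }

    my $depth = queue_depth();

    # Map the depth to a status color; roughly 100 events was normal load
    # and 1,000+ meant a hang was coming, so the cutoffs here are approximate.
    my $status = $depth < 100  ? 'green'
               : $depth < 1000 ? 'yellow'
               :                 'red';

    # The final step described above: join alerting and resolution together.
    if ($status eq 'red') {
        send_alert("Task queue at $depth events - auto-clearing");
        clear_queue();
    }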

Seeing the benefits derived from this approach to problem solving reinforced that it could be applied to many future problems (some of which I will cover in later posts). To summarize this approach to troubleshooting and problem solving:
  1. Set up monitoring - figure out a way to detect the problem before it occurs by identifying leading metrics that are indicators of the coming problem
  2. Set up alerting - once you've determined how to monitor the leading indicators, further enhance the process (and response times) by alerting folks that actions need to be taken
  3. Simplify the process - break down the steps to take in such a way that all of the logic can happen behind the scenes, and document the process so others can follow it without having to be experts
  4. Automate the process - once you're confident that the process is working consistently and you've defined it in a way that doesn't require expert intervention, hook the alerting and resolution logic together so that it automatically resolves itself
This process has proven successful time and again in the years since. As I've worked with other teams along the way, we have built systems that applied these same principles and gained tremendous efficiency in the process.

Wednesday, July 22, 2020

Creating My OWASIS - Part 3 (Putting the pieces together and wrap-up)


In this third and final post, I will walk through the various components that went into making OWASIS work. In case you missed them, here are the links to Part One and Part Two.

This part of the process was the actual fun part - writing and assembling the scripts into a semi-cohesive package that could be run on a repeated basis to refresh the inventory information. I figured out in Part Two that I would rely on a combination of Bash and Perl scripting to make this all work. There were still a few minor obstacles to overcome.

For one, I wanted all of the data output in a consistent manner, and some of the commands to do this would not render properly if they were just called through a remote, interactive session. So I wrote a script, which I called Remote.sh, that could be uploaded and then run on any of the servers. This script really formed the core of the inventory system: it could be run on any server version and would return the data formatted in a consistent manner. The challenge was how to get this script onto all of the servers.

I decided to tackle the Telnet-only systems first. Since Telnet does not support file transfers, I would FTP (ugh, yep, not sFTP, since that wasn't available) the Remote.sh script to the server first, then call the script from the Telnet session. This worked nicely and returned the information to the terminal.

The next step was to write a script that would automatically log in over Telnet and then execute the Remote.sh script that had been previously sent to the user's home directory via FTP - I called this script AutoTelnet.pl. This script incorporated the previously mentioned Expect.pm module to handle sending over the username and password (see the security disclaimer in Part Two).
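
AutoTelnet.pl itself isn't reproduced here, but a stripped-down sketch of the Expect.pm pattern might look something like the following. The prompt regexes, argument handling, and the way credentials get passed are simplified stand-ins - and, per the Part Two disclaimer, handling passwords this way is exactly the kind of thing I would not recommend today.

    #!/usr/bin/perl
    # Stripped-down AutoTelnet.pl-style sketch, not the original script.
    # Prompts, arguments, and credential handling are simplified placeholders.
    use strict;
    use warnings;
    use Expect;

    my ($host, $user, $pass) = @ARGV;

    my $exp = Expect->spawn("telnet", $host)
        or die "Cannot spawn telnet to $host: $!\n";

    # Respond to whatever the remote side prints: username prompt, password
    # prompt, and finally a shell prompt once we are logged in.
    $exp->expect(30,
        [ qr/login:/i    => sub { my $e = shift; $e->send("$user\n"); exp_continue; } ],
        [ qr/password:/i => sub { my $e = shift; $e->send("$pass\n"); exp_continue; } ],
        [ qr/[\$#>]\s*$/ => sub { } ],
    );

    # Run the previously FTP'd inventory script, wait for it to finish, then exit.
    $exp->send("./Remote.sh\n");
    $exp->expect(120, [ qr/[\$#>]\s*$/ => sub { } ]);
    $exp->send("exit\n");
    $exp->soft_close();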

The last piece was to build a loader script that would call these other two. All this script did for the Telnet systems was upload Remote.sh via FTP and then execute it by running the AutoTelnet.pl script - I named this script FTP_Remote.sh (for obvious reasons).

For the SSH servers, I still used Remote.sh to run the commands remotely on all of the servers so that I could capture the data in a consistent manner, but since SSH supports file transfers as well, the process of moving the file and then executing it was very streamlined - and it too leveraged the Expect.pm module for automating the login process. I called this script AutoSSH.pl.

These scripts collectively represented the real bones of the OWASIS system. I had to write some additional supporting scripts, though, to make this as fully automated as possible. These included nslook.sh, which I used to perform an nslookup on all valid hostname ranges (the bank named their servers sequentially, fyi). I used listing.pl to parse the output of nslook.sh and determine which systems supported SSH and which only supported Telnet. Another script, Parse2csv.pl, was used to scrape the output files from the Remote.sh runs into a comma-separated value (CSV) file.
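
To give a flavor of the parsing side, here is a simplified sketch of Parse2csv.pl-style logic. The "key: value" input format and the field names below are stand-ins for illustration - the actual Remote.sh output was laid out differently (hence the three partitions described in the instructions further down).

    #!/usr/bin/perl
    # Simplified Parse2csv.pl-style sketch; the input format and field names
    # are illustrative assumptions, not the real Remote.sh output.
    use strict;
    use warnings;

    my @fields = qw(Hostname OS WASVersion IPAddress);
    my %host;                                   # fields collected for the current host

    print join(',', @fields), "\n";             # CSV header row

    while (my $line = <>) {
        chomp $line;
        if ($line =~ /^Hostname:\s*(\S+)/) {
            print_row() if %host;               # new host record - flush the previous one
            %host = (Hostname => $1);
        }
        elsif ($line =~ /^(OS|WASVersion|IPAddress):\s*(.+)$/) {
            $host{$1} = $2;
        }
    }
    print_row() if %host;

    sub print_row {
        print join(',', map { $host{$_} // '' } @fields), "\n";
        %host = ();
    }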

As I mentioned in Part Two - and looking back in hindsight - there were many security issues present with the way all of this worked. For one, while I played around with making the collection of the username and password interactive for the person running the scripts to avoid hardcoding these values into the files, I still had to use a configuration file (called ftpers.txt) to store these values for running the Ftp_Remote.sh script. If you mis-typed the password in either the config file or in the interactive prompts, it would lock the account. This required a call to the security team (plus a mea culpa) to get the account unlocked. And this worked fine for the most part - except for the systems that were Telnet only - because I would not be able to access FTP until a successful Telnet authentication took place. So I wrote another script, AutoTelPasswd.pl, that was my get-out-of-jail/unlock-account script. Let that run on all of the Telnet servers and I was back in business!

For anyone that has not lost total interest in all of this at this point (anyone? Bueller? Bueller?), here are the original instructions I wrote up on how to run OWASIS:

Open-source WAS Inventory Package Instructions

Note: When doing the following, be SURE to use your correct password - failure to do so WILL lock your account on all of the machines it attempts to log into
  1. Replace "<username>" and "<password>" in ftpers.txt
  2. Run "./Ftp_Remote.sh"
    1. After it has automatically ftp'd the Remote.sh script to all of the servers in tn_ips.txt, it will prompt you for a username and password to use to telnet into all of the machines and run the Remote.sh script
  3. Run "perl AutoSSH.pl ssh_ips.txt"
    1. This can be run concurrently with ./Ftp_Remote.sh, as all of the processing is done remotely, so it will not slow down your local machine.
  4. When Ftp_Remote.sh completes, view the log file in an editor that allows you to do block select mode (textpad or ultraedit32), and block select only the first character in every line of the file, and then delete that block. (This way both log files have the same format)
  5. Run "cat SSH_connections-<datestamp>.log TN_connections-<datestamp>.log > Master_Inventory.txt"
    1. This will populate a single file with all of the output from Telnet and SSH servers
  6. Run "perl Parse2csv.pl Master_Inventory.txt > <output file>.csv"
    1. I usually make an output file with a datestamp similar to the tn and ssh_connections files
  7. Open your <output file>.csv file in Excel
    1. There will be three distinct partitions/ranges to the file
    2. Add text labels above the first row in each partition as follows:
      1. Partition 1: Hostname, Brand, Model#, OS Version, Processor Cores, IP Address
      2. Partition 2: Hostname, WAS Versions
      3. Partition 3: Hostname, WAS Home, Server Name, Server Status
    3. Select all of the cells in the first partition/range, go to Data, then Filter - Advanced Filter; check "Unique records only", and click OK
      1. Repeat for each of the three partitions
    4. Copy and paste each partition (text labels included) into its own sheet of a new Excel Workbook
    5. Rename the three sheets in the new workbook as follows:
      1. Sheet 1: Machine Info
      2. Sheet 2: WAS Versions
      3. Sheet 3: Server Info
    6. Proceed with any formatting, sorting, etc. of your choice
  8. If you so choose, now that you have a well-formatted Excel "Database" file, you can import this into Access to run queries against - each sheet is equivalent to a table in a database, and hostname is the primary key.




Friday, July 17, 2020

Creating My OWASIS - Part 2 (Solving the problem)


In this second part of "Creating My OWASIS", we will get into the approach I took to solve the problem of how to create an inventory of systems for the bank where I worked. If you missed Part One, which provided background and an overview of my role with the bank, you can find it here.

The assignment, you may recall, was to create an inventory of the existing WebSphere Application Servers deployed at the bank. This included identifying all of the development, test, and production systems and their associated versions of WebSphere Application Server, Linux, and certificate information. At a high level, one approach could have been just manually logging into each individual server, running commands to find the requested information, and noting it in a spreadsheet. Taking this approach, I probably could have completed the assignment in roughly a week or two. And for those two weeks, my days would amount to arriving at work, logging into my workstation, opening up PuTTY, and then walking through the list of hundreds of systems one at a time, picking up where I left off the day prior.

I don't know about you, but I do not have the energy, attention span, or desire to waste this many hours of my life in tedium. Fortunately, all of the servers running WebSphere Application Server were running a variety of Linux flavors - so perhaps I could write a script to make this process more efficient (and interesting)?

I spent some time brainstorming what was possible and how it would ideally work. My goal was to make it fully automated (or as close to it as possible) - whereby I could feed in a list of servers and it would automatically login, run some commands, and return back the desired information. I knew I could easily accomplish some of this using Bash scripts, particularly for systems that were running ssh, but I found out early on that there were a shameful number of servers still only running <gasp> telnet </gasp> of all things. Well, I wasn't going to let this lunacy slow me down - there had to be a way around this.

I shared my ideas with a friend of mine, and they suggested I take a look at Perl, and specifically at the "Expect" module. This proved to be exactly the secret sauce I was looking for.

MAJOR CAVEAT - what you are about to read absolutely pre-dates my time in a security role, and while judgement is certainly allowed (encouraged, in fact), this no longer reflects recommendations that I would give today.

There were several ways that Perl was an attractive option for what I was trying to accomplish. Its major strength comes from the sheer number of modules (what other languages call libraries) available, which provide a vast array of functionality from which to draw. Another major strength of Perl is its ability to parse either fixed-format or completely unstructured data. This strength comes from how tightly regular expressions (RegEx) are integrated into the language, which makes it tremendously easier to take output, format it into something usable, and then import it into another application (Excel, for example). The last strength is of course the one I mentioned earlier - specifically, the Expect.pm module - which can be used for automating processes.

The Expect.pm module performs the unique function of building what are essentially cases that fire off depending on what is output to the screen. While my plan was to use this specifically to interact with login and password prompts (again - not secure), it could really automate anything that involves "if X is returned, then do Y". Functionally, if you are familiar with IFTTT, then you already have a fundamental grasp of how this works.
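
A bare-bones example of that pattern/action structure looks something like this - the spawned command and the prompts are generic placeholders, not anything lifted from the actual OWASIS scripts:

    #!/usr/bin/perl
    # Generic "if X is returned, then do Y" sketch with Expect.pm.
    # The command and patterns below are placeholders for illustration.
    use strict;
    use warnings;
    use Expect;

    my $exp = Expect->spawn("some_interactive_command")
        or die "spawn failed: $!\n";

    $exp->expect(10,
        [ qr/continue\? \[y\/n\]/i => sub { shift->send("y\n"); exp_continue; } ],
        [ qr/error/i               => sub { die "command reported an error\n"; } ],
    );
    $exp->soft_close();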

By combining the power of Bash, Perl, and Expect.pm, I had all the tools needed to create a package of scripts that could automate from start to finish the process of building out an Open-source WebSphere Application Server Inventory System (aka "OWASIS").

Coming soon will be part 3 of this unnecessarily lengthy topic, where I will walk through each of the components that went into the package of scripts.

Apologies to Peter Jackson for stealing his creative process.


Thursday, June 18, 2020

Creating my OWASIS - Part 1 (Setting the stage)


During the second half of 2007, I did a brief stint at a bank - which was exactly the wrong time to be starting a career at a bank. As you may recall, this was right when the mortgage crisis was beginning to come about, and it lasted into 2009. While I was only there for 6 months to the day, I got to watch the stock value plummet to ~10% of what it was when I started. Within the next couple of years after I left, the bank was purchased by another bank and MANY people lost tons of money on the deal.

How is any of this relevant to my next story?

I was originally brought in (along with one other person) to be part of a new group within the Middleware Management and Support team, specifically to assist with research and development into new uses for WebSphere Application Servers. My role was to assist with the development of systems that would improve the throughput of EFTs (Electronic Funds Transfers) between the mainframe and the downstream ATMs and Web endpoints. However, just as I was about to join the team, this project went on hold due to the IT Architecture team deciding to begin development using WebLogic instead. With this, my role immediately changed from building and testing WebSphere in new applications to just maintaining the existing WebSphere systems like the rest of the team. The problem was, there was already a team of people that supported the existing WebSphere environment, and between the shift in technology focus away from that team and the sudden downturn of the stock market, they were reluctant to show the new guys the ropes.

Humans have an innate sense of impending doom which fires up long before the rational part of the brain realizes what is happening. This then engages the fight-or-flight response in order to preserve oneself. The way this was demonstrated within my team was relegating me and my other new coworker to the most basic of tasks, and barely lifting a finger to get us pointed in the right direction. They were afraid training us would train themselves out of their jobs; in hindsight, they were probably correct.

I was literally given one real, personally-assigned project to work on independently, and this was to create an inventory of the existing WebSphere Application Servers. Mind you, the WebSphere servers numbered in the hundreds, and people had long since lost track of what they all were, which ones were still in use, and basically what was even still powered on. Being the resourceful type, and also BORED OUT OF MY MIND, I decided to think of ways that I might be able to automate the process and - most importantly - save myself time if I ever had to do this again.

I've always said that the best programmers I've known are the laziest. This may sound counterintuitive on the surface, but in reality, it is precisely because they are lazy that they seek ways to avoid having to perform repetitive tasks. Logging into hundreds of servers, running a command to see if WebSphere is installed, and then documenting the version number is the pinnacle of repetitive tasks.

Fortunately, this assignment (which I assume was just intended to be busy-work keeping me occupied anyway) came with no instructions for how they wanted it completed, nor a deadline for when it was to be completed. And so, I took this as an opportunity to learn some new skills and create the best damn inventory process possible.

Continued in Part 2...

Monday, June 8, 2020

Kickstarting Knoppix







Those of you who were interested in running Linux in the early to mid-2000s ("20-aughts") may remember a clever distribution called Knoppix that allowed you to run Linux on any computer using a bootable CD or DVD. This distribution was used to form the basis of several others: Helix - used for computer forensics; Kanotix - which added a feature to perform hard drive installs of Knoppix; and the still-popular Kali (originally BackTrack) - which is widely adopted by penetration testers.

The beauty of Knoppix, as mentioned earlier, was that it could run fairly reliably on almost any equipment of that era. This made it an attractive option in cases where it was difficult to predict what equipment you may need to run it on. For this reason, I wondered if this would be a beneficial option for Disaster Recovery processes. The specific problem that I was looking to solve was: "how do we start rebuilding our many RedHat servers from bare metal in as efficient a manner as possible?"

RedHat developed a utility called KickStart that was essentially a local, network-based RedHat distribution mirror. In it, you could store a customized set of RPMs along with other packages and programs that you wanted to deploy onto your new servers. For general administration purposes, we set up a KickStart server for building new servers from scratch fairly easily. And because it resided on the local network, we never had to worry about download speeds; the bandwidth limitation was just the NICs on the servers and the switches within the data center. This worked great on the equipment we had in place; however, it did not work as well in a disaster recovery setting, as RedHat had a tendency to be fussy when you tried to restore an image onto a different model of hardware. Based on the contract we had with our vendor, we were guaranteed the same or better grade of equipment, but not identical equipment (which would have been much more expensive). This created complications with restoring servers from backups in general, let alone restoring our KickStart server. We needed to figure out a way to bring up the KickStart server without having to fix a bunch of driver issues at the time of recovery, in order to expediently build the other servers.

Enter Knoppix.

Even though Knoppix was based on Debian, which is significantly different from RedHat, the underlying technology (the kernel, runtime environments, etc.) was very much the same. Since KickStart runs off of a web server, all I needed to do was install and enable Apache on Knoppix and configure a site that referenced the location where I would store the RPMs and the KickStart config file. I then made sure Apache came up automatically upon boot by updating init.d.

Knoppix provides a process by which you can set persistence on changes made to the Knoppix environment and then save it as a new ISO image. After doing this and confirming with a new test disk that it would boot and run correctly, I ran into my next hurdle - storage.

A standard high-density CD has roughly 650MB of useful storage capacity, and the Knoppix base image took up nearly all of it, leaving barely any room for dropping in the set of RPMs we needed. Fortunately, some of the newer releases of Knoppix at the time (version 4.0.2, specifically) were capable of running from a DVD as well. Unfortunately, Knoppix was taking advantage of this added space with a bunch of additional programs that we certainly did not need. So I got to work, removing and uninstalling as many of the programs as I could while maintaining the minimum necessary to bring up the KickStart server. I had to make a few concessions - for example, KDE was WAY too bloated for our needs, but we still needed an X Window System environment, so I replaced KDE with Blackbox, keeping just the KDM display manager for the backend. Doing so freed up just enough space to fit everything.

One last configuration I made before packing it up and burning it to disk was to set the network to use a static IP. This way, once the KickStart server boots up, we can stand up the new servers on the same network as the KickStart server, initiate the KickStart install process using network install or PXE install, and off it goes!

Having this option available in the event of a disaster gave us assurances that we could quickly and easily bring up a KickStart server which could then be used to perform bare metal installs of all of the servers within our environment.