The AWS pass mark is 60%

The pass mark for the AWS certifications is 70%, right? No – it’s 60%!

A multiple-choice test gives you some free marks because there’s a chance of getting a question correct by pure chance – but how do we calculate that?

Let’s say that the chance of getting a question correct through knowledge is P, a probability between 0 and 1, with 0.7 representing 70%. There is also the chance of getting a question correct by guessing alone – let’s call that R.

The question then becomes: how do we combine the probabilities P and R to produce a new probability, C, that represents the combined chance?

The relationship between these variables can be written as C' = P'R', where X' (X prime) means (1-X).
We could represent this with the logic expression ¬C = ¬P ∧ ¬R.
In words: the chance of failing a given question is the combined chance of failing it on knowledge and failing it on guessing.

Now that we have the general equation (1-C) = (1-P)(1-R), we can rearrange it and fill in the values. We know that the pass mark for the exam is 0.7 (this is the combined chance C), and we know that the chance of getting a question correct by guessing is 0.25 (because there are typically 4 answers).

So:

(1-0.7) = (1-P)(1-0.25)
0.3 = (1-P) x 0.75
(1-P) = 0.3/0.75 = 0.4
P = 1-0.4 = 0.6 (60%)
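
We can sanity-check the arithmetic with a couple of lines of Python, using C = 0.7 and R = 0.25 as above:

C, R = 0.7, 0.25               # nominal pass mark and pure-guess probability
P = 1 - (1 - C) / (1 - R)      # rearranged from (1 - C) = (1 - P)(1 - R)
print(round(P, 2))             # 0.6 -> you only need to know 60% of the material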

Occasionally, among the 72 questions, some require more than one answer, sometimes from a selection of 5 options. This is a lot more complicated to model, but the number of these questions is so small that my intuition is it is unlikely to shift the final answer by much more than 1%.

AWS Roles and Tokens.

A lot of new users to AWS will install the CLI. They are advised: 1. Don’t log in using the Admin account. 2. Use roles where possible. 3. Use multi-factor authentication where possible.

The difficulty here is that implementing those features in the Web Console is not trivial for a user completely new to AWS, and it is perhaps harder still to implement them within the CLI.

For that reason I wanted to document how to do this for a user new to AWS. I’m assuming the user can install the CLI on their own.

  1. Create a new user and a new group.
  2. Create a new role.
  3. Set role permission policies: select the policies that you think you will need. In this example I have added AdministratorAccess because I was troubleshooting a new account, but I would not advise selecting this policy unless it is for a specific purpose, and then only for a short period of time.
  4. Within IAM>Roles, now create a Trust entity within the Trust relationships tab. I have redacted my AWS account number; you will need to replace it with your own account number (see the sketch after this list for the general shape of the policy).
  5. Assume the new role on the CLI: changing role involves running a command and then creating new environment variables – which I think is unnecessarily convoluted – so I have created a tool “awsurole” that is available here. The usage can be seen in the snippet below. This ends the tasks needed to create and use roles on the CLI. The next steps allow us to implement the use of MFA tokens.
  6. Create an MFA token: within IAM>Users you need to assign an MFA token. I use an app named “Authenticator” on my mobile phone to manage MFA tokens. Once you’ve created the security credentials you can proceed to the next step.
  7. Create a policy: I have created a policy named peter-mfa-policy. Use the policy below.
  8. Use the token on the CLI: again, I’ve created a tool to simplify this step that is available here. There is an example of its usage below.

  9. Finally – let’s take a look at an example where we authenticate with a token and then immediately assume a new role. We’re also using a useful tool I’ve developed named “awswhoami” that is available here. For anyone who wants to see the moving parts these tools wrap, a minimal sketch follows.
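
To make steps 4-9 concrete, here is a hedged boto3 sketch of the same flow: a minimal trust policy of the kind step 4 needs, followed by an MFA-backed session that assumes the role. Every account number, ARN and name below is a hypothetical placeholder, not a value from my account.

import json
import boto3

# --- Step 4: a minimal trust policy letting one IAM user assume the role. ---
# The account id (111122223333) and user name are placeholders.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/new-cli-user"},
        "Action": "sts:AssumeRole",
    }],
}
print(json.dumps(trust_policy, indent=2))  # paste into the Trust relationships tab

# --- Steps 6-9: swap long-lived keys plus an MFA code for a session token, ---
# --- then use that MFA-backed session to assume the role.                  ---
MFA_SERIAL = "arn:aws:iam::111122223333:mfa/new-cli-user"    # placeholder
ROLE_ARN   = "arn:aws:iam::111122223333:role/my-admin-role"  # placeholder

sts = boto3.client("sts")
mfa_creds = sts.get_session_token(
    SerialNumber=MFA_SERIAL,
    TokenCode=input("MFA code: "),
)["Credentials"]

mfa_session = boto3.client(
    "sts",
    aws_access_key_id=mfa_creds["AccessKeyId"],
    aws_secret_access_key=mfa_creds["SecretAccessKey"],
    aws_session_token=mfa_creds["SessionToken"],
)
role_creds = mfa_session.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="cli-session",
)["Credentials"]

print(role_creds["AccessKeyId"])

Exporting the three returned values as AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN is exactly the tedium that awsurole and the token tool automate.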

I hope this helps you use multi-factor authentication and roles. Please send feedback if you can think of any way to improve this documentation.

Where have the UNIX SysAdmins Gone?

They are alive and well!

It’s been clear for years that the number of plain UNIX SA jobs has been gradually diminishing. The reasons behind this are new techniques: virtualised infrastructure and Infrastructure as Code, container workloads, automation methods, utility (cloud) computing, and an intolerance of configuration drift leading to immutable-infrastructure approaches.

Under the bonnet, cloud compute offerings are mostly just-a-bunch-of-Linux-servers, so I want to highlight that it’s not cloud in itself that reduces the need for SysAdmins, but the technological approaches that become economically viable at scale.

The advantage of simplifying the delivery of infrastructure is that it releases resources to tackle other challenges, and that’s good for everyone. The need for UNIX (and Linux) skills is never going to disappear, but the trend we see is that UNIX SysAdmins are diversifying into areas where they can add value whilst still bringing their UNIX skills to bear.

The emergence of manufacturing displaced artisans (typically textile workers) in the late 18th through to the early 19th century. There are some similarities here – except that in our case there are so many exciting paths in which to diversify. Unfortunately, the textile workers became Luddites and smashed looms in protest. It’s sad that you can’t diversify into all of the new technologies (Cyber, Cloud and Devops). Maybe you can if you count DevSecOps!

I want to invent the term UNIX Artisan. I checked Google and no-one else has used this term. A UNIX Artisan might be a dedicated UNIX SysAdmin who makes lots of individual changes to UNIX servers by hand, with love and dedication.

I thought it might be interesting to see what the UNIX SysAdmin landscape looks like, so I carried out some research. I looked through my LinkedIn network, selected contacts that are or were UNIX SysAdmins at one time, and noted what they are doing now. I had a general idea of how I would categorise the data; some results were a surprise. I settled on 6 categories for those that had moved outside of plain UNIX System Administration: 1. Cloud, 2. Devops, 3. Cyber, 4. Application, 5. Low Latency, High Frequency Trading and Market Data, and 6. Senior Leadership Roles.

I was unsure whether to count “Low Latency” as a non-standard UNIX role, the dilemma being that in some cases it could involve plain system patching, yet on the other hand it could involve network testing and measurement. I decided to keep it separate; the reader can make their own judgement on how to interpret the data.

What I found is that of the 77 SysAdmins, 35 are still in SysAdmin roles, 9 are in Cyber, 6 in Cloud, 7 in Devops, 10 in Application (various roles), 4 in Low-Latency/Market Data, and 6 in Senior Leadership Roles.

Sample: 77 SysAdmins       Artisans: 35
Cyber: 9                   Application Roles: 10
Cloud: 6                   Low Latency & Market Data: 4
Devops: 7                  Senior Leadership Roles: 6

New roles based on 77 SysAdmins

Some observations:

  1. I’m not claiming that people who have moved to new technology silos are no longer undertaking SysAdmin work – just that their roles are now hybrid or have a different focus.
  2. Some role drift would occur in any event due to career progression.
  3. Can you tell from a job title exactly what people are doing now? Usually not – but I have maintained knowledge of my network directly and sometimes indirectly, and I am confident of capturing the detail I need for the purpose here. Besides, this is not hard science; we are just watching trends.
  4. I expected the Cyber/Cloud/Devops category to be greater and was surprised it represented only a little over a quarter of the sample (22 of 77).
  5. I was surprised that as much as 13% of the sample (roughly 1 in 8) had moved over to various roles within the “Application World”: support, development, architecture and engineering.
  6. It’s encouraging that 6 UNIX SysAdmins had moved into Senior Leadership Roles.
  7. There are still lots of UNIX Artisans, which means that the requirement for people with hardcore UNIX skills is not going to go away any time soon.

Maps, maps and more maps.

Ever since I was a kid I have loved to look at maps. Somehow I loved the simple way that the world could be scaled down and the relationships between places revealed. I would draw maps – in fact at around the age of 4 I could draw the entire rail network of south London from memory. Looking back, I think the intense concentration was a form of relief from a complicated home and emotional life – let’s leave it there!

Even now I’m fascinated by maps and have collected many antique maps. What is really quite odd is that when my grandfather passed on, my grandmother offered me his collection of maps, including some beautiful national geographical maps (I don’t think anyone else wanted them!). He was a private individual and we never once discussed maps – how odd that he was fascinated by these topographical records too.

The internet is rich in maps and I thought it might be interesting to share some of the bookmarks I’ve made.

1. Google Maps – everybody knows about Google Maps. What I want to recommend is to turn on “Terrain” from the “Layers” menu.

2. Open Infrastructure Map – provides information about infrastructure such as electricity and gas. It’s really interesting to see how much power is generated in different places, and to see where all the wind turbines and solar panels are!

3. OpenRail Map – gives so much detail and includes legacy information. Who would have thought there used to be a rail link allowing trains to go from the Circle line into Liverpool Street station mainline platform 1 and beyond to the rail network? Hudson Yards in New York is really worth checking out too.

4. Topography Map – after trying to find a way to get an accurate elevation for where I live, I came across this website. It’s very detailed and reveals peaks and valleys beautifully.

5. Disputed Territories of the world.

6. The History of Europe every year (video).

7. Historical map of Europe (atlas style).

8. Interactive flood map. Global Warming edition !

9. San Andreas fault lines.

10. Access to Pubs… it’s actually a lot more than that – but this title sounds more fun!

11. History of the world – year by year.

12. London tube infrastructure map.

13. Interactive Satellite map.

14. Lightning map.

15. Sea vessel realtime map.

16. North Sea Oilfield map. Fascinating to see how much work it takes to get oil and gas from the North Sea.

17. Traksy. Where are the trains? Probably the best realtime map for a London commuter.

18. Nukemap. A macabre and frightening simulation.

19. House Price Map.

I hope you enjoy these maps and find something useful. I’m always excited to get more recommendations.

Wiki – disappointment.

I’ve been registered with OpenSpringer for a few years and I get 10 to 20 article alerts per week. These are typically research articles about Cloud and CyberSecurity. Sometimes the articles are not of any direct interest to me and, to be honest, more often than not they go straight over my head! But every now and again I come across something interesting that I can also understand, and I like to try to keep abreast of new developments.

So when I received an email with this title I was intrigued, because it was something I had never heard of: Verifiable image revision from chameleon hashes.

What is a chameleon hash? It was something I had never heard of and something I felt I should at least be aware of. There was nothing on Wikipedia at all, and the only meaningful results from Google were links to research papers, of which I could only get the abstracts. Generally, research papers are published by journals that are only available to non-academics at a cost. There was no single place that described this concept.

Eventually I did find something interesting here: Chameleon Hashing and Signatures (1997). The link is to a PostScript file, so if you download it you will have to convert it to a PDF (with Adobe Distiller, for example).
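
To make the idea concrete: a chameleon hash is a hash function with a public key and a secret trapdoor – without the trapdoor it behaves like an ordinary collision-resistant hash, but whoever holds the trapdoor can compute collisions at will. Below is a toy Python sketch of the discrete-log construction in the spirit of that 1997 paper; the tiny parameters are purely illustrative and utterly insecure.

# Toy chameleon hash: CH(m, r) = g^m * y^r mod p, where y = g^x and x is the trapdoor.
p, q, g = 23, 11, 4     # q divides p-1; g generates a subgroup of order q
x = 7                   # the trapdoor (secret key)
y = pow(g, x, p)        # the public key

def cham_hash(m, r):
    return (pow(g, m, p) * pow(y, r, p)) % p

m, r = 3, 5
h = cham_hash(m, r)     # an ordinary-looking digest to anyone without x

# The trapdoor holder can open h to any other message m2 by solving
# m + x*r = m2 + x*r2 (mod q) for r2:
m2 = 9
r2 = (r + (m - m2) * pow(x, -1, q)) % q
assert cham_hash(m2, r2) == h   # same digest, different message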

So I thought it might be “helpful” to add the information to Wikipedia. I considered creating a new article, but chose instead to update the article on Digital signatures.

Here’s what I wrote: “In Chameleon Hashing and Digital Signatures (1997), Krawczyk and Rabin describe Chameleon signatures that provide an undeniable commitment of the signer to the contents of the signed document as regular digital signatures do but at the same time do not allow the recipient of the signature to disclose the contents of the signed information to any third party without the signer’s consent. These signatures are closely related to undeniable signatures but chameleon signatures allow for simpler and more efficient realizations than the latter.”

You can see that all I have done is copy/paste an excerpt from the abstract of the 1997 paper. I made a few minor edits because of the context change and then preceded it with the source.

I checked in the next day to see that my change had been removed with this text: “Undid revision 1049099506 by 82.38.100.100 (talk) unsourced undo Tag: Undo.”

So if I am to understand this correctly, it’s claimed I didn’t add a source for the information, when in actual fact the paragraph starts with the source and the authors of the very research paper from which I quote. I was really disappointed. I’m not exactly an expert on cryptography – I would say I’m at a basic undergraduate level, with a little real-world practical experience of tools like SSH. If I had been overruled by someone with many years of contributions to the Wikipedia pages on cryptography I might have felt a little better. With a little digging I found that my edits had been removed by an editor whose interests stretch as wide as the List of Banks of the Republic of Ireland, Bandon Grammar High School and Chris Collins the lacrosse player. More than that, I could not find a single wiki contribution from them, but rather lots of wiki edits removing new content.

I’m willing to learn, and I still accept that there might be something I don’t understand about the change I made that made it necessary to remove it – but there was nothing in the comments to say what. Out of interest, the comment I left when I made my edit was this: “I couldn’t find anything on wiki about Chameleon Signatures so obliged. I did consider a seperate wiki page just for this, but thought it might be better just to place it here. Please feel free to improve if you want to – I am not an expert editor. Thankyou. PR (UK)” – so there is space to leave even brief feedback.

Is Wikipedia really about making the world a little bit better, one small step at a time? I still hope so.

Devops for UNIX.

Spending a lot of time with “UNIX” people, I hear many varied views and opinions about devops. Mostly these views complement each other, although not always.

The reasons for these diverse opinions are worth discussing. By understanding this phenomenon we can learn why it exists and glean new perspectives.

I like to think of DEVOPS as two distinct components: 1. automation of the RELEASE process, and 2. a change in the delivery of the supporting infrastructure (primarily Infrastructure as Code).

RELEASE MANAGEMENT is a term taken from ITIL. It defines a standardized process for planning the release, building and testing the release, scheduling the release, pushing the release, deploying the release, providing early life support, and closing the release.

Most Sysadmins get a lot more excited by automating infrastructure than by automating release pipelines. The ideal DEVOPS candidate has expertise that branches into both silos, and a lot of DEVOPS literature spends a lot of time emphasising this.

I’ve spoken to more than a few recruitment agents frustrated by the same problem – finding UNIX people with skills in Python – OR – finding APP/RELEASE people with skills in UNIX (by which I do not mean tailing logfiles and restarting application daemons). I’ve lost count of the number of times people claim to be “UNIX Admins” because they’ve typed “ls -l” a few times.

In large organisations, a RELEASE starts in the Application Development Team and passes to the Application Support Team, who probably have an Application Operations Team or perhaps (if it’s a big team) an Application Release Team.

They will work with the UNIX team to schedule the “roll-out”, which is a fancy way of saying that a UNIX admin will copy a bunch of files from one place to another, maybe stop and start some daemons, and then hang around in case things go wrong. In some cases this whole process might take just 15 minutes and have one or two steps; at the other end of the spectrum, there might be 30 steps or more.

Therefore DEVOPS = AUTOMATED RELEASE MANAGEMENT + AUTOMATED INFRASTRUCTURE. When DEVOPS first emerged, many UNIX admins had a far greater understanding of Configuration Management tools (e.g. Puppet, Ansible, Chef) than of RELEASE tools such as Jenkins or Selenium, and I think that’s understandable. For that reason many System Administrators saw DEVOPS as synonymous with Configuration Management and deployment tools (plus some magic sauce that I’ll learn when I have to!).

From discussions with agents I think it’s also fair to say that DEVOPS candidates fall into one of two camps based on their previous role and their “education journey”: learning the technologies from their own tower, or – I think more importantly – learning the core competencies from the “other” Technology Tower.

In my next article I intend to map the ITIL roles against the traditional “Technology Tower” teams – and then map those to the DEVOPS tools that you need to learn to position yourself to pick up a DEVOPS role.

Interesting Articles ? Question Mark !!

I wanted to write a post with a summary of some of the more interesting or useful articles I have written over the past few years.

AWS EC2 auto-terminate instance.

The AWS free tier allows us 750 hours of VM time per month – just enough to keep a single instance running around the clock. If you are like me, you might forget to terminate your instance from time to time.

Here I present a method to terminate your instance automatically if it has no users logged in to it. There are some scenarios where this might not be useful – for example, if you are using the VM for some purpose other than simple interactive logins – but for most of us it is quite handy.

Step 1. Set up a policy that will shut down an instance. I’ve named my policy EC2stop-${instanceID}.
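
As a rough sketch of step 1 (the account id and instance id below are placeholders, and your policy may differ), a minimal policy that allows only ec2:StopInstances on a single instance could be created like this with boto3:

import json
import boto3

ACCOUNT_ID  = "111122223333"          # placeholder -- your account id
INSTANCE_ID = "i-0123456789abcdef0"   # placeholder -- your instance id

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:StopInstances",
        "Resource": f"arn:aws:ec2:*:{ACCOUNT_ID}:instance/{INSTANCE_ID}",
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName=f"EC2stop-{INSTANCE_ID}",
    PolicyDocument=json.dumps(policy),
)
# Attach the policy to the user from step 2 with iam.attach_user_policy().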

Step 2. Create a user through which we will be able to terminate the instance using ‘Programmatic Access’. This user does not require console access.

Step 3. Copy the script located here : https://github.com/peterhodes/linuxtools/blob/main/root/bin/AWSautoterminate. In case the link breaks in the future I’ve also added the script at the bottom of this page.

Step 4. Add an entry to crontab to run the script.

Step 5. Customise the AWSautoterminate script and crontab. Select how often crontab should run the script, and the “no-login grace period” – that is, the period the system should wait whilst no users are logged on, before the VM is automatically stopped.
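
For example, a crontab entry along these lines would run the check every 10 minutes – the path assumes you saved the script as /root/bin/AWSautoterminate, so adjust it to wherever yours lives:

*/10 * * * * /root/bin/AWSautoterminate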

Step 6. The AWSautoterminate script will place useful messages into /var/log/messages or syslog, depending on the operating system you are using.

Step 7. You might consider setting TMOUT in your profile so that if you are logged in and leave the terminal running without any activity, you will be logged out after a certain time. Placing the entry ‘export TMOUT=900’ in your profile, for example, will instruct the shell to log you out after 900 seconds (15 minutes) of inactivity.

Finally (below), the AWSautoterminate script.

#!/usr/bin/python3


import os
import re
import sys


# PRE REQUISITES
# 1. Installation of AWS CLI (this should create the /root/.aws directory).
# 2. AWS Credentials that correspond to a user that has permission to shutdown instance.
# 3. ec2metadata command is available (sometimes needs cloud-utils package).


noLoginsTimeStampFile = '/root/.aws/noLoginsTimeStampFile'
TerminateCommand      = '/usr/bin/aws ec2 stop-instances --instance-ids'
noLoginsGracePeriod   = 1800


def ReadCommand(CommandString):
    Result = os.popen(CommandString).read()
    ListResult = Result.split('\n')
    return(ListResult)

def ReadFile(filename):
    lines = []     # return an empty list if the file does not exist
    if os.path.isfile(filename):
        with open(filename,"r") as f:
            lines = f.readlines()
    return(lines)

def WriteFile(filename,content):
    if (type(content)) is str:
        content = [content]
    File = open(filename,'w')
    for item in content:
        File.write(item)
        File.write('\n')
    File.close()

def GetOsName():
    osReleaseFile = "/etc/os-release"
    attrList = ReadFile(osReleaseFile)
    r = re.compile("NAME=.*")
    newList = list(filter(r.match,attrList))
    OsName = newList[0].split('=')[1]
    OsName = OsName.replace('\n','')
    OsName = OsName.replace('"','')
    return OsName

def getWhoCount():
    whoCount = ReadCommand('/usr/bin/who | /usr/bin/wc -l')[0]
    return(int(whoCount))

def GetInstanceId():
    if osName == 'Amazon Linux':
        Result = ReadCommand('/usr/bin/ec2-metadata -i')
        Result = Result[0].split(':')[1]
        Result = Result.replace(' ','')

    elif osName == 'Ubuntu':
        Result = ReadCommand('/usr/bin/ec2metadata --instance-id')
        Result = Result[0]

    else:
        ##   We don't know the OS type so let's quit.
        sys.exit()

    return(Result)

def LogThis(String):
    LogString = "/usr/bin/logger NoLoginsAutoShutdown : " + "'" + String + "'"
    ReadCommand(LogString)




osName             = GetOsName()
whoCount           = int(getWhoCount())
epochTime          = ReadCommand('/usr/bin/date +%s')[0]
instanceID         = GetInstanceId()
TerminateCommand   = TerminateCommand + ' ' + instanceID

LogThis("whoCount : " + str(whoCount) + "  GracePeriod : " + str(noLoginsGracePeriod))


if whoCount == 0:
    if os.path.isfile(noLoginsTimeStampFile):
        ##   If no users and timestamp file already exists.
        initialTimeStamp = int(ReadFile(noLoginsTimeStampFile)[0])
        currentTimeStamp = int(ReadCommand('/usr/bin/date +%s')[0])
        timeStampDiff    = currentTimeStamp - initialTimeStamp
        if timeStampDiff >= noLoginsGracePeriod:
            ##   If we've allowed grace period then go ahead and terminate.
            # remove this file before proceeding : noLoginsTimeStampFile
            os.remove(noLoginsTimeStampFile)
            LogThis("Running Command " + TerminateCommand)
            ReadCommand(TerminateCommand)
        else:
            ##   Otherwise don't stop the instance until more time has elapsed.
            sys.exit()
    else:
        ##   No TimeStampFile ?  then create one and quit.
        WriteFile(noLoginsTimeStampFile,epochTime)
else:
    if os.path.isfile(noLoginsTimeStampFile):
        ##   If users are logged in make sure timestampfile is removed.
        os.remove(noLoginsTimeStampFile)

Cannot remove AWS VPC – dependency errors.

It’s clear from the number of posts on troubleshooting sites like Stack Overflow that a lot of people get stuck removing a VPC (or other network components). Unfortunately the solutions on Stack Overflow are sometimes unclear, so here is a list of things to check if you get stuck with this type of problem (a boto3 sketch of the same sequence follows the list).

  1. Delete NAT Gateways.
  2. Detach Internet Gateways.
  3. Release Elastic IP Addresses.
  4. Delete Internet Gateways.
  5. Now try deleting VPCs (ideally you will now be at the point it will remove further dependencies).
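
Here is a minimal boto3 sketch of that sequence – every resource id below is a hypothetical placeholder for your own:

import boto3

ec2 = boto3.client("ec2")

# 1. Delete NAT gateways (deletion is asynchronous, so allow it time to finish).
ec2.delete_nat_gateway(NatGatewayId="nat-0123456789abcdef0")

# 2. Detach the internet gateway from the VPC.
ec2.detach_internet_gateway(
    InternetGatewayId="igw-0123456789abcdef0",
    VpcId="vpc-0123456789abcdef0",
)

# 3. Release Elastic IP addresses.
ec2.release_address(AllocationId="eipalloc-0123456789abcdef0")

# 4. Delete the internet gateway.
ec2.delete_internet_gateway(InternetGatewayId="igw-0123456789abcdef0")

# 5. Finally, try deleting the VPC itself.
ec2.delete_vpc(VpcId="vpc-0123456789abcdef0")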

Jenkins memory on GCP e2-micro.

The GCP first-generation e2-micro server is not particularly powerful, but it should be powerful enough to run a simple Jenkins server.

At first it appears that this is not the case. The reason is that after we install Jenkins on Ubuntu, the installation triggers the ureadahead service to regenerate its data files. ureadahead plus the running Java processes simply require more memory than the server has available, and there is no swap file to fall back on.

Once the server reaches 100% memory utilisation it will become unresponsive and is unlikely to recover without direct intervention. A swapfile (virtual memory) is a way of “tricking” the system into thinking it has more memory by allocating disk to act as memory. We don’t need to get into the details, as it is a commonplace method discussed in many places.

1. Inspect the available memory to familiarise yourself with the memory configuration.
# free -h
2. Create the swapfile and set file permissions.
# fallocate -l 2G /swapfile (OR # dd if=/dev/zero of=/swapfile bs=1M count=2048)
# chmod 600 /swapfile
3. Make and add the swap.
# mkswap /swapfile
# swapon /swapfile
4. Inspect the changes.
# swapon --show
NAME      TYPE SIZE  USED PRIO
/swapfile file   2G 26.6M   -2
# free -h
              total        used        free      shared  buff/cache   available
Mem:           576M        447M         39M        692K         89M         42M
Swap:          2.0G         26M        2.0G
5. Make the change permanent so that it is persistent across reboots.
# echo '/swapfile none swap sw 0 0' >> /etc/fstab

Once you’ve added the swap, you should proceed with the Jenkins installation.