Missing Semester Lecture 4 - Data Wrangling
```shell
ssh myserver 'journalctl | grep sshd | grep "Disconnected from"' | less
```
The additional quoting means we do the filtering on the remote server and then massage the data locally. `less` gives us a “pager” that allows us to scroll up and down through the long output. To save some additional traffic while we debug our command-line, we can even stick the current filtered logs into a file so that we don’t have to access the network while developing:
```shell
$ ssh myserver 'journalctl | grep sshd | grep "Disconnected from"' > ssh.log
```
There’s still a lot of noise here. There are a lot of ways to get rid of that, but let’s look at one of the most powerful tools in your toolkit: `sed`.

`sed` is a “stream editor” that builds on top of the old `ed` editor. In it, you basically give short commands for how to modify the file, rather than manipulating its contents directly (although you can do that too). There are tons of commands, but one of the most common ones is `s`: substitution. For example, we can write:
```shell
ssh myserver journalctl
 | grep sshd
 | grep "Disconnected from"
 | sed 's/.*Disconnected from //'
```
What we just wrote was a simple regular expression: a powerful construct that lets you match text against patterns. The `s` command is written in the form `s/REGEX/SUBSTITUTION/`, where `REGEX` is the regular expression you want to search for, and `SUBSTITUTION` is the text you want to substitute matching text with.
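A quick standalone sketch of the `s` command on a toy string (not the server logs):

```shell
# Substitute the first match on each line.
echo 'hello world' | sed 's/world/shell/'   # hello shell

# Add the g flag to substitute every match, not just the first.
echo 'aaa' | sed 's/a/b/g'                  # bbb
```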
Regular expressions
Regular expressions are common and useful enough that it’s worthwhile to take some time to understand how they work. Let’s start by looking at the one we used above: `/.*Disconnected from /`. Regular expressions are usually (though not always) surrounded by `/`. Most ASCII characters just carry their normal meaning, but some characters have “special” matching behavior. Exactly which characters do what varies somewhat between different implementations of regular expressions, which is a source of great frustration. Very common patterns are:
- `.` means “any single character” except newline
- `*` zero or more of the preceding match
- `+` one or more of the preceding match
- `[abc]` any one character of `a`, `b`, and `c`
- `(RX1|RX2)` either something that matches `RX1` or `RX2`
- `^` the start of the line
- `$` the end of the line
`sed`’s regular expressions are somewhat weird, and will require you to put a `\` before most of these to give them their special meaning. Or you can pass `-E`.
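You can experiment with these patterns using `grep -E`, which uses the same extended syntax as `sed -E`; a small sketch on toy input:

```shell
# ^ and $ anchor the pattern to the whole line: only "cat" matches.
printf 'cat\nconcat\n' | grep -E '^cat$'

# + means one or more of the preceding match: "ba" and "bbba" match, "ca" does not.
printf 'ba\nbbba\nca\n' | grep -E '^b+a$'
```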
So, looking back at `/.*Disconnected from /`, we see that it matches any text that starts with any number of characters, followed by the literal string “Disconnected from ”. Which is what we wanted. But beware, regular expressions are tricky. What if someone tried to log in with the username “Disconnected from”? We’d have:
```
Jan 17 03:13:00 thesquareplanet.com sshd[2631]: Disconnected from invalid user Disconnected from 46.97.239.16 port 55920 [preauth]
```
What would we end up with? Well, `*` and `+` are, by default, “greedy”. They will match as much text as they can. So, in the above, we’d end up with just
```
46.97.239.16 port 55920 [preauth]
```
Which may not be what we wanted. In some regular expression implementations, you can just suffix `*` or `+` with a `?` to make them non-greedy, but sadly `sed` doesn’t support that.
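To see greediness in action on a toy line, note how `.* ` consumes everything up to the *last* space:

```shell
# .* matches as much as it can, so everything before "three" is deleted.
echo 'one two three' | sed 's/.* //'   # three
```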
It’s a little tricky to match just the text that follows the username, especially if the username can have spaces and such! What we need to do is match the whole line:
```shell
 | sed -E 's/.*Disconnected from (invalid |authenticating )?user .* [^ ]+ port [0-9]+( \[preauth\])?$//'
```
There is one problem with this though, and that is that the entire log becomes empty. We want to keep the username after all. For this, we can use “capture groups”. Any text matched by a regex surrounded by parentheses is stored in a numbered capture group. These are available in the substitution (and in some engines, even in the pattern itself!) as `\1`, `\2`, `\3`, etc. So:
```shell
 | sed -E 's/.*Disconnected from (invalid |authenticating )?user (.*) [^ ]+ port [0-9]+( \[preauth\])?$/\2/'
```
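As a smaller, self-contained illustration of capture groups (hypothetical input, not the log data), here is the classic name swap:

```shell
# \1 and \2 refer back to the first and second parenthesized groups.
echo 'John Smith' | sed -E 's/([A-Za-z]+) ([A-Za-z]+)/\2 \1/'   # Smith John
```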
Anyway. What we have now gives us a list of all the usernames that have attempted to log in. But this is pretty unhelpful. Let’s look for common ones:
```shell
ssh myserver journalctl
 | grep sshd
 | grep "Disconnected from"
 | sed -E 's/.*Disconnected from (invalid |authenticating )?user (.*) [^ ]+ port [0-9]+( \[preauth\])?$/\2/'
 | sort | uniq -c
```
`sort` will, well, sort its input. `uniq -c` will collapse consecutive lines that are the same into a single line, prefixed with a count of the number of occurrences. We probably want to sort that too and only keep the most common usernames:
```shell
ssh myserver journalctl
 | grep sshd
 | grep "Disconnected from"
 | sed -E 's/.*Disconnected from (invalid |authenticating )?user (.*) [^ ]+ port [0-9]+( \[preauth\])?$/\2/'
 | sort | uniq -c
 | sort -nk1,1 | tail -n10
```
`sort -n` will sort in numeric (instead of lexicographic) order. `-k1,1` means “sort by only the first whitespace-separated column”. The `,n` part says “sort until the `n`th field”, where the default is the end of the line. In this particular example, sorting by the whole line wouldn’t matter, but we’re here to learn!
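The `sort | uniq -c | sort -nk1,1` idiom can be seen on a toy input. Note that `uniq` only collapses *adjacent* duplicates, which is why the first `sort` matters:

```shell
# Count occurrences, then rank them by count (least common first).
printf 'bob\nalice\nbob\n' | sort | uniq -c | sort -nk1,1
```

Here `alice` comes out with a count of 1 and `bob` with a count of 2.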
If we wanted the least common ones, we could use `head` instead of `tail`. There’s also `sort -r`, which sorts in reverse order.
Okay, so that’s pretty cool, but what if we’d like to extract only the usernames as a comma-separated list instead of one per line, perhaps for a config file?
```shell
ssh myserver journalctl
 | grep sshd
 | grep "Disconnected from"
 | sed -E 's/.*Disconnected from (invalid |authenticating )?user (.*) [^ ]+ port [0-9]+( \[preauth\])?$/\2/'
 | sort | uniq -c
 | sort -nk1,1 | tail -n10
 | awk '{print $2}' | paste -sd,
```
Let’s start with `paste`: it lets you combine lines (`-s`) by a given single-character delimiter (`-d`; `,` in this case). But what’s this `awk` business?
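A minimal sketch of `paste` on its own, with made-up usernames:

```shell
# -s joins all input lines into one line; -d, sets the delimiter to a comma.
printf 'root\nadmin\nguest\n' | paste -sd, -   # root,admin,guest
```

The trailing `-` names stdin explicitly, which also works with BSD `paste`.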
awk – another editor
`awk` is a programming language that just happens to be really good at processing text streams. There is a lot to say about `awk` if you were to learn it properly, but as with many other things here, we’ll just go through the basics.
First, what does `{print $2}` do? Well, `awk` programs take the form of an optional pattern plus a block saying what to do if the pattern matches a given line. The default pattern (which we used above) matches all lines. Inside the block, `$0` is set to the entire line’s contents, and `$1` through `$n` are set to the `n`th field of that line, when separated by the `awk` field separator (whitespace by default, change with `-F`). In this case, we’re saying that, for every line, print the contents of the second field, which happens to be the username!
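A tiny sketch of field selection, with made-up input that mimics the `uniq -c` output:

```shell
# $1 is the count, $2 the username; print only the username.
printf '12 root\n5 admin\n' | awk '{print $2}'   # root, then admin
```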
Let’s see if we can do something fancier. Let’s compute the number of single-use usernames that start with `c` and end with `e`:
```shell
 | awk '$1 == 1 && $2 ~ /^c[^ ]*e$/ { print $2 }' | wc -l
```
There’s a lot to unpack here. First, notice that we now have a pattern (the stuff that goes before `{...}`). The pattern says that the first field of the line should be equal to 1 (that’s the count from `uniq -c`), and that the second field should match the given regular expression. And the block just says to print the username. We then count the number of lines in the output with `wc -l`.
However, `awk` is a programming language, remember?
```shell
 | awk 'BEGIN { rows = 0 }
        $1 == 1 && $2 ~ /^c[^ ]*e$/ { rows += $1 }
        END { print rows }'
```
`BEGIN` is a pattern that matches the start of the input (and `END` matches the end). Now, the per-line block just adds the count from the first field (although it’ll always be 1 in this case), and then we print it out at the end.
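The same BEGIN/accumulate/END shape works for any running total; a toy sum:

```shell
# Sum the first field of every line, then print the total once at the end.
printf '3\n4\n5\n' | awk 'BEGIN { total = 0 } { total += $1 } END { print total }'   # 12
```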
Exercises
- Find the number of words (in `/usr/share/dict/words`) that contain at least three `a`s and don’t have a `'s` ending. What are the three most common last two letters of those words? `sed`’s `y` command, or the `tr` program, may help you with case insensitivity. How many of those two-letter combinations are there? And for a challenge: which combinations do not occur?
```shell
cat /usr/share/dict/words | tr '[:upper:]' '[:lower:]' | grep -E "^([^a]*a){3}.*$" | grep -v "'s$" | wc -l
```
What are the three most common last two letters of those words?
```shell
cat /usr/share/dict/words | tr '[:upper:]' '[:lower:]' | grep -E "^([^a]*a){3}.*$" | grep -v "'s$" | sed -E "s/^.*(.{2})$/\1/" | sort | uniq -c | sort -n | tail -n3
```
How many of those two-letter combinations are there?
```shell
cat /usr/share/dict/words | tr '[:upper:]' '[:lower:]' | grep -E "^([^a]*a){3}.*$" | grep -v "'s$" | sed -E "s/^.*(.{2})$/\1/" | sort | uniq | wc -l
```
And for a challenge: which combinations do not occur?
A train of thought: write a script to iterate over all possible combinations and store them in one file, store the combinations that do occur in another file, then use the `diff` command to find the combinations that do not occur.
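One way to sketch that idea, using a toy `present.txt` in place of the real dictionary output (and `comm`, which compares sorted files more directly than `diff`):

```shell
# Generate all 676 possible two-letter combinations, in sorted order.
letters='a b c d e f g h i j k l m n o p q r s t u v w x y z'
for x in $letters; do
  for y in $letters; do echo "$x$y"; done
done > all.txt

# Stand-in for the combinations that actually occur (normally the output
# of the dictionary pipeline above); comm requires sorted input.
printf 'aa\nab\nzz\n' | sort > present.txt

# -23 suppresses lines unique to present.txt and lines common to both,
# leaving only the combinations that never occur.
comm -23 all.txt present.txt | wc -l   # 673
```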
- To do in-place substitution it is quite tempting to do something like `sed s/REGEX/SUBSTITUTION/ input.txt > input.txt`. However, this is a bad idea, why? Is this particular to `sed`? Use `man sed` to find out how to accomplish this.
```
-i extension
```
It is generally a bad idea to edit files in place: mistakes can be made when using regular expressions, and without a backup of the original file it can be very difficult to recover. This is not particular to `sed` but applies to all forms of in-place file editing. To accomplish in-place substitution, add the `-i` flag.
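A safe sketch of in-place editing with a backup suffix (the suffix form works with both GNU and BSD `sed`):

```shell
printf 'hello\n' > demo.txt
# -i.bak edits demo.txt in place and keeps the original as demo.txt.bak.
sed -i.bak 's/hello/world/' demo.txt
cat demo.txt       # world
cat demo.txt.bak   # hello
```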
- Find your average, median, and max system boot time over the last ten boots. Use `journalctl` on Linux and `log show` on macOS, and look for log timestamps near the beginning and end of each boot.
```shell
journalctl | grep "Startup finished in" | awk '{print $18}' | sed -E "s/(.*)s\./\1/" | bc | st --mean --median --max
```
`st` is a simple statistics tool for the CLI.
- Look for boot messages that are not shared between your past three reboots (see `journalctl`’s `-b` flag). Break this task down into multiple steps. First, find a way to get just the logs from the past three boots. There may be an applicable flag on the tool you use to extract the boot logs, or you can use `sed '0,/STRING/d'` to remove all lines previous to one that matches `STRING`. Next, remove any parts of the line that always vary (like the timestamp). Then, de-duplicate the input lines and keep a count of each one (`uniq` is your friend). And finally, eliminate any line whose count is 3 (since it was shared among all the boots).
```shell
journalctl --list-boots | head -n3 | awk '{ print $4, $5, $6, $7, $8 }' | sort | uniq -c | awk '$1 != 3'
```