
While this is far from the first Linux script I have ever written, it is the first I am putting up here on my site for analysis and download. It's a quick connectivity check script that logs the details to a CSV log file for easy opening in spreadsheet apps like LibreOffice Calc.

So I have been having a weird variety of inconsistent connectivity issues with one specific interface on a device. The device has some other interfaces, and none of them seem to experience this issue. Stranger yet, when I just unplug and replug the network cable, it works fine again, until it breaks again at random. Every time I try to do any troubleshooting, I simply see nothing getting to the interface; I unplug and replug the cable, and everything is fine. I am starting to suspect that the issue might have something to do with traffic, or perhaps a lack thereof... Maybe the interface "shuts down" or something after 'x' amount of time has passed without any traffic?

Since this interface hosts a publicly accessible website, I decided to use that as a connectivity test vector... Run the curl command, and note the error code. If the error code is 0, the curl command worked, and the site/interface is up and available. If we get any error, the site/interface is down. Of course, this information is rather useless without some kind of date/time stamp, so we'll need that too.
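Just to illustrate the idea before diving into the script, you can see curl's error code right on the command line (the second address below is a reserved test address, so it should time out and fail):

$ curl -s -o /dev/null https://jonmoore.duckdns.org ; echo $?
0
$ curl -s -o /dev/null --max-time 5 https://192.0.2.1 ; echo $?
28

0 means the request succeeded; 28 is curl's "operation timed out" code (see man curl for the full list).

In the end, I wound up writing the following script: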

#!/bin/bash

# Where the log is kept
LOG_DIR=/var/log/conn
LOG_FILE="$LOG_DIR/https.log"

# The site to test against
WEBSITE="https://jonmoore.duckdns.org"

# Create the log folder and file if they do not exist yet
if [ ! -e "$LOG_DIR" ]
then
    mkdir -p "$LOG_DIR"
fi

if [ ! -e "$LOG_FILE" ]
then
    touch "$LOG_FILE"
    chmod 757 "$LOG_FILE"
fi

# Start the CSV line with a date/time stamp
FINAL_LOG="$(date),"

# System uptime in secs (first field of /proc/uptime)
UPTIME=$(awk '{ print $1 }' /proc/uptime)
FINAL_LOG+="$UPTIME,"

# Time the curl run with nanosecond timestamps
T="$(date +%s%N)"
CMD_OUT=$(time curl "$WEBSITE")
CMD_ERR_CODE=$?
wait
FINAL_LOG+="$CMD_ERR_CODE,"

CMD_TIME="$(($(date +%s%N)-T))"
CMD_TIME_MS="$((CMD_TIME/1000000))"
FINAL_LOG+="$CMD_TIME_MS,"

echo "$FINAL_LOG" >> "$LOG_FILE"

Of course, the actual script file is commented with further explanations. Let's quickly break down the script and see what it does, and how it does it.

LOG FILE

LOG_DIR=/var/log/conn
LOG_FILE="$LOG_DIR/https.log"

if [ ! -e "$LOG_DIR" ]
then
    mkdir -p "$LOG_DIR"
fi

if [ ! -e "$LOG_FILE" ]
then
    touch "$LOG_FILE"
    chmod 757 "$LOG_FILE"
fi

This section defines the log file and where it is saved, checks whether the folder and file exist, and creates them if necessary.

DATE and TIME STAMP

FINAL_LOG="$(date),"

Log the date/time from the test device that will be running the script.

UPTIME

UPTIME=$(awk '{ print $1 }' /proc/uptime)

FINAL_LOG+="$UPTIME,"

Capture the test device's uptime, because we can do so easily.
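For reference, /proc/uptime holds two numbers, the seconds since boot and the cumulative idle time, and the awk command keeps only the first. Sample output (your numbers will differ):

$ cat /proc/uptime
351735.84 1393364.78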

MAIN

T="$(date +%s%N)"

This takes a timestamp with nanosecond resolution (GNU date's %N format specifier) to mark the start time of the commands.

CMD_OUT=$(time curl "$WEBSITE")

This is the curl command itself.

CMD_ERR_CODE=$?

Capture the error code.

wait

wait pauses until any background jobs have finished. Nothing in this script actually runs in the background, so here it is effectively a no-op, but it is a harmless safety net.
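For context, wait only does real work when something has been sent to the background with &, as in this sketch:

curl "$WEBSITE" &   # kick off curl in the background
wait                # block here until the background curl finishes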

FINAL_LOG+="$CMD_ERR_CODE,"

Log the error code.

CMD_TIME="$(($(date +%s%N)-T))"

This sets a new timestamp, again to the nanosecond, and subtracts the start time, leaving us with the length of time the curl command took to complete.

CMD_TIME_MS="$((CMD_TIME/1000000))"

Do some math to "convert" the nanoseconds to milliseconds (1 ms = 1,000,000 ns). For example, a curl run of 1,234,567,890 ns logs as 1234 ms; the integer division simply drops the remainder.

FINAL_LOG+="$CMD_TIME_MS,"

Log the millisecond execution time.

echo "$FINAL_LOG" >> "$LOG_FILE"

Actually log the data to the log file.
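Once a run completes, each line in https.log looks something like this (values are illustrative):

Mon Sep  7 13:11:01 UTC 2015,351735.84,0,142,

That is: the date/time stamp, the uptime in seconds, curl's error code, and the request duration in milliseconds, each followed by a comma so the file opens cleanly as CSV.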

 

So, every time we run this script, it does the check and logs the results to a file. Now we need this to run all day, every day. I had initially thought to run it continuously in the background, with the meat of the script inside some kind of loop. However, if it ever stopped for any reason, it would require manual intervention to restart it. Realistically, I only need this script to run once a minute. Running it more frequently is totally feasible, but it would start to generate perhaps TOO MUCH info, and the relevant entries would just get lost in the noise. Since I only need it to run every minute, this is perfectly suited to a cron job; the most frequently a standard cron job can run is once per minute, so this is perfect. You can certainly go in and manually edit the crontab file to accomplish this, but it is best to just use the crontab -e command: while it opens a text editor like a manual edit would, it also has some built-in verification, so if you screw something up, crontab -e will catch it, while a manual edit would not. So run that command and add a line like this:

*/1 * * * * /path/to/script/httpscheck.sh &
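As a side note, */1 in the minute field is just an explicit way of writing *, and the trailing & is optional since cron already runs each job in its own background process, so this line is equivalent:

* * * * * /path/to/script/httpscheck.sh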

And voilà! Complete! The script will now automatically run every minute and log the results to the log file. Afterwards, we can load the log file and really start looking at when the connectivity fails, and start doing something about it. Without this, I don't have anything with which to quantify this connectivity issue I am working on. Assuming my theory above is correct, this constant source of traffic should also prevent the interface from going into that inactive state. Otherwise, if I am wrong, I will at least have something from which to start further investigation.
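As a quick first pass before pulling the log into a spreadsheet, an awk one-liner like this (assuming the column layout above, and that your date format contains no commas of its own) pulls out only the failed checks:

awk -F, '$3 != 0' /var/log/conn/https.log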

Downloads:

HTTP/S Check 1: HTTP/S Connectivity Check script
Date: Monday, 07 September 2015 13:11 | File size: 818 B | Downloads: 312

