Scripts for Server Monitoring using InfluxDB and Grafana without the Telegraf agent



 

Thread for discussing user scripts to push data into InfluxDB.

 

Any system command that lists useful data can be parsed to send data into influxdb via a script.

 

I use the atribe dockers for Telegraf, InfluxDB and Grafana from the Community Applications plugin. They worked pretty much out of the box. Set these up first before adding any scripts, just to test that InfluxDB and Grafana are actually working properly and that the telegraf database within InfluxDB is already set up and accepting data. I'm not going to go into any detail on setting up these dockers; they seemed quite easy to set up. The only bit that needed specific setup was the datasource in Grafana, but it was fairly self-explanatory.

 

Data can be sent to InfluxDB directly, without requiring the Telegraf agent. See the InfluxDB documentation - Writing Data with the HTTP API:

 

https://docs.influxdata.com/influxdb/v1.0/guides/writing_data/

 

The basic command is:


curl -i -XPOST 'http://influxServerIP:8086/write?db=telegraf' --data-binary 'measurementName,host=Tower,region=us-west tagName=value'


Specific bits in this to change are:

 

Usually static for your setup and same for all your scripts:

 

influxServerIP:  change this to your InfluxDB server IP

 

telegraf:  this can be changed to a different database if required. I just added the data to the existing telegraf database which saves adding another datasource in grafana

 

host:  the host name of the server e.g. Tower or Unraid or whatever - best use the same as telegraf

 

region:  region where you are

 

 

 

Usually change per script depending on what it is designed to measure:

 

measurementName:  the type of measurement as a group, e.g. HDTemp

 

tagName:  the specific thing/device/tag the measurement is measuring e.g. /dev/sdx

 

value:  the numerical value to store

 

Note that spaces in the names can be a problem, so you might need a line in the script to remove spaces from the thing that you are measuring.

 

I'm sure there are better, more sophisticated ways of sending data direct to influxdb including specific php libraries, but I am not a developer, so I had to keep it simple.

 

So you just need a script to parse the data for measurementName, tagName and value and send it with the curl command. Simples!
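As a sketch, the whole pattern boils down to something like the following in bash. The measurement, tag and value here are made-up placeholders, and influxServerIP needs changing as described above:

```shell
#!/bin/bash
# Minimal sketch: build an InfluxDB line-protocol string and POST it.
# "MyMeasure", "CPU Temp" and 42 are placeholder examples.
measurement="MyMeasure"
tagName="CPU Temp"          # spaces are not allowed in the line protocol
value=42

tagName="${tagName// /}"    # strip spaces -> "CPUTemp"
line="${measurement},host=Tower,region=us-west ${tagName}=${value}"
echo "$line"                # MyMeasure,host=Tower,region=us-west CPUTemp=42

# Then send it (commented out here; needs a running InfluxDB):
# curl -i -XPOST "http://influxServerIP:8086/write?db=telegraf" --data-binary "$line"
```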

 

The graphs can be generated in Grafana in just the same way as the graphs from telegraf (if you stick to sending the data to the telegraf database in influxdb). Otherwise you will need to set up another datasource in grafana which complicates this a little, but can be done.

 

I store my scripts in a folder called myscripts on appdata. I'm not entirely sure this is the safest place to keep them since anyone with smb access to this folder could change the scripts and then there could be trouble, so any advice on this is welcome.

 

Don't forget to make the script executable:

chmod +x /mnt/cache/appdata/myscripts/script.sh

 

The scripts are run using cron. The list of active scripts can be edited by the

crontab -e

command.

 

This defaults to the vi editor, which I find painful to use. There are plenty of useful guides on the net covering the basics, but essentially you need to go to the bottom of the file (type a capital G), then insert a line below the current one ready for editing (type o). Insert your crontab line e.g.

 

* * * * * /mnt/cache/appdata/myscripts/hdtemp.sh &>/dev/null 2>&1

 

 

Then <esc> (to stop editing mode)

 

Then type :wq <enter> (with the colon) to save the file again.

 

You can check the line is correct with

 

crontab -l

 

which lists the crontab file.

 

All stars in the crontab will run the script every minute. If you want it to run every 5 minutes use

 

*/5 * * * *

 

instead of 5 stars.

 

 

 

Some scripts to get started with:

HDTemp.sh - measures HD temperature with smartctl

https://lime-technology.com/forum/index.php?topic=52220.msg501122#msg501122

 

spin.sh - gets the spin state of the HDs with hdparm

https://lime-technology.com/forum/index.php?topic=52220.msg501124#msg501124

 

 

IPMI.sh - gets values from IPMI with ipmi-sensors (needs ipmi-sensors installed)

https://lime-technology.com/forum/index.php?topic=52220.msg501125#msg501125

 

apcupsd.sh - gets info from UPS from apcaccess

https://lime-technology.com/forum/index.php?topic=52220.msg501126#msg501126


hdtemp.sh

 

Script for monitoring HD temperatures via smartctl.

 

** A working influxdb setup is required - see opening post **

 

**Disclaimer: I am not a programmer, so please use these scripts at your own risk. Please read the script to understand what it is trying to achieve. There is no error trapping and no check to see if it is already running, so in theory, if the script's running time is longer than the crontab interval, you could end up with an ever-increasing system load and potentially crash the server.**

 

This uses smartctl to get the SMART information output for the device. Needless to say, SMART needs to be turned on for the device otherwise it won't work. But I can't think of a reason why you would want SMART turned off in any case.
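For anyone curious what the parsing amounts to, here is the same idea sketched in bash against a canned smartctl attribute line. The sample values are invented; on a live system you would pipe `smartctl -A /dev/sdX` itself:

```shell
# Parse the temperature out of smartctl -A style output with awk.
# This sample line is made up for illustration.
sample='194 Temperature_Celsius     0x0022   118   103   000    Old_age   Always       -       32'
temp=$(echo "$sample" | awk '/Temperature_Celsius/ {print $10}')
echo "$temp"   # 32
```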

 

You need to edit the script to add the devices you want measured in the $tagsArray statement. I'm not clever enough to write a script that does it automatically. And you will need to change the parameters mentioned in the first post in the curl statement. Use a Unix-style editor, e.g. Notepad++, so that the correct line endings end up in the file. Save the scripts somewhere crontab can access them (mine are in /mnt/cache/appdata/myscripts/).

 

#!/usr/bin/php
<?php

$tagsArray = array(
"/dev/sdc", 
"/dev/sdd", 
"/dev/sde", 
"/dev/sdg", 
"/dev/sdh", 
"/dev/sdi", 
"/dev/sdj", 
"/dev/sdb"
);

//do system call and parse output for tag and value

foreach ($tagsArray as $tag) {

$call = "smartctl -A ".$tag;
$output = shell_exec($call);
preg_match("/Temperature.+(\d\d)$/im", $output, $match);

//send measurement, tag and value to influx

sendDB($match[1], $tag);

}

//end system call


//send to influxdb - you will need to change the parameters (influxserverIP, Tower, us-west) in the $curl to your setup, optionally change the telegraf to another database, but you must create the database in influxdb first. telegraf will already exist if you have set up the telegraf agent docker.

function sendDB($val, $tagname) {

$curl = "curl -i -XPOST 'http://influxServerIP:8086/write?db=telegraf' --data-binary 'HDTemp,host=Tower,region=us-west "
.$tagname."=".$val."'";
$execsr = exec($curl);

}


?>

 

Since HD temp doesn't change that quickly I have made the script run every 5 minutes in crontab:

 

*/5 * * * * /mnt/cache/appdata/myscripts/hdtemp.sh &>/dev/null 2>&1

 

Grafana does not need any specific setup to show this metric since it is a simple value. The measurement HDTemp will appear in the list of measurements to select.


spin.sh

 

Script for monitoring the spin state of the HDs with hdparm

 

** A working influxdb setup is required - see opening post **

 


 

The script uses hdparm -C to check whether each drive's state is standby.
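As a rough bash illustration of that check, using a canned hdparm output line rather than a live drive (a real check would run `hdparm -C /dev/sdX`):

```shell
# hdparm -C prints something like " drive state is:  standby"
# (or "active/idle"). The sample line here is canned for illustration.
sample=' drive state is:  standby'
if echo "$sample" | grep -q 'standby'; then
    idle=0    # spun down
else
    idle=1    # spinning
fi
echo "$idle"   # 0
```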

 

You need to edit the script to add the devices you want measured in the $tagsArray statement. I'm not clever enough to write a script that does it automatically. And you will need to change the parameters mentioned in the first post in the curl statement. Use a Unix-style editor, e.g. Notepad++, so that the correct line endings end up in the file. Save the scripts somewhere crontab can access them (mine are in /mnt/cache/appdata/myscripts/).

 

#!/usr/bin/php
<?php

$tagsArray = array(
"/dev/sdc", 
"/dev/sdd", 
"/dev/sde", 
"/dev/sdg", 
"/dev/sdh", 
"/dev/sdi", 
"/dev/sdj", 
"/dev/sdb",
"/dev/sdf"
);


//do system call and parse output for tag and value

foreach ($tagsArray as $tag) {

$call = "hdparm -C ".$tag;
$output = shell_exec($call);
if (strpos($output, 'standby') !== false) {
	$idle = 0; 
}
	else {
		$idle = 1;
	}

//send measurement, tag and value to influx

sendDB($idle, $tag);

}
//end system call

//send to influxdb

function sendDB($val, $tagname) {

$curl = "curl -i -XPOST 'http://influxDBServerIP:8086/write?db=telegraf' --data-binary 'HDSpin,host=Tower,region=us-west "
.$tagname."=".$val."'";
$execsr = exec($curl);
}
?>

 

This script runs pretty quickly, so I have mine setup to run every minute, but that might be overkill.

 

* * * * * /mnt/cache/appdata/myscripts/spin.sh &>/dev/null 2>&1

 

Grafana setup

 

I have this set up as a stacked bar chart in grafana.

 

The query for each device reads as follows in the Metrics tab:

FROM default HDSpin WHERE
SELECT field(/dev/sde) sum()
GROUP BY time(1m)
ALIAS BY disc name
Format as Time Series

 

At the bottom I have :

 

Group by time interval >60s

 

The display tab has

Draw Mode

Bars checked

 

Stacking and Null value

Stack checked

 

Hover info

Stacked value individual

 

 


ipmi.sh

 

Script for monitoring IPMI parameters.

 

** A working influxdb setup is required - see opening post **

 


 

This requires ipmi-sensors to be installed. I use the excellent dmacias IPMI plugin which installs this.

 

You need to edit the script to add the sensors you want measured in the $tagsArray statement. Different motherboards will have different values; run ipmi-sensors from the command line to see what yours are. I'm not clever enough to write a script that does it automatically. And you will need to change the parameters mentioned in the first post in the curl statement. Use a Unix-style editor, e.g. Notepad++, so that the correct line endings end up in the file. Save the scripts somewhere crontab can access them (mine are in /mnt/cache/appdata/myscripts/).

 

Note that the tagName cannot contain spaces, so I have had to add an extra line to remove the spaces from the string.

 

#!/usr/bin/php
<?php

$tagsArray = array(
"CPU_FAN1", 
"REAR_FAN1", 
"REAR_FAN2",
"MB Temp",
"CPU Temp"
);



//do system call

$call = "ipmi-sensors";
$output = shell_exec($call);

//parse output for tag and value

foreach ($tagsArray as $tag) {

preg_match("/".$tag.".*(\b\d+\b)\..*$/mi", $output, $match);

//send measurement, tag and value to influx

sendDB($match[1], $tag);

}
//end system call


//send to influxdb

function sendDB($val, $tagname) {
$tagname2 = str_replace(' ', '', $tagname);
$curl = "curl -i -XPOST 'http://influxDBIP:8086/write?db=telegraf' --data-binary 'IPMI,host=Tower,region=us-west "
.$tagname2."=".$val."'";
$execsr = exec($curl);
}

?>

 

I run this script every 5 min in crontab:

 

*/5 * * * * /mnt/cache/appdata/myscripts/ipmi.sh &>/dev/null 2>&1

 

Grafana does not need any specific setup to show this metric since it is a simple value. The measurement IPMI will appear in the list of measurements to select.

To show fan speed and temp on the same graph, move the temps to the right axis and the fans to the left axis.


apcupsd.sh

 

Script to monitor values from the UPS.

 

** A working influxdb setup is required - see opening post **

 


 

This uses apcaccess to get values from the UPS such as 'time on battery', 'battery %charge', 'load in %', 'time left on battery', 'unit temperature'.
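A bash sketch of pulling one of those values out of apcaccess output. The sample lines are invented (real output comes from `apcaccess status`), but the field names LOADPCT and BCHARGE match the ones the script below uses:

```shell
# apcaccess status prints lines like "LOADPCT  :  11.0 Percent".
# Two sample lines are canned here for illustration.
sample='LOADPCT  :  11.0 Percent
BCHARGE  : 100.0 Percent'
load=$(echo "$sample" | awk '/^LOADPCT/ {print $3}')
echo "$load"   # 11.0
```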

 

#!/usr/bin/php
<?php

$command = "apcaccess";
$args = "status";
$tagsArray = array(
"LOADPCT", 
"ITEMP", 
"TIMELEFT", 
"TONBATT", 
"BCHARGE"
);

//do system call

$call = $command." ".$args;
$output = shell_exec($call);

//parse output for tag and value

foreach ($tagsArray as $tag) {

preg_match("/".$tag."\s*:\s([\d|\.]+)/si", $output, $match);

//send measurement, tag and value to influx

sendDB($match[1], $tag);

}
//end system call


//send to influxdb

function sendDB($val, $tagname) {

$curl = "curl -i -XPOST 'http://influxDBIP:8086/write?db=telegraf' --data-binary 'APC,host=Tower,region=us-west "
.$tagname."=".$val."'";
$execsr = exec($curl);

}

?>

 

Nothing special in Grafana for this since they are simple values returned. I modified the LOADPCT by multiplying it by the rated capacity of my unit to get the power in watts.
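That LOADPCT-to-watts conversion can also be done in the script itself with awk; the 865 W rated capacity below is just an example figure, so substitute your own unit's rating:

```shell
# Convert UPS load percentage to approximate watts.
# 865 is an example rated capacity, not a real unit's spec.
LOADPCT=11.0
RATED=865
WATTS=$(awk -v p="$LOADPCT" -v r="$RATED" 'BEGIN {printf "%.0f", p/100*r}')
echo "$WATTS"   # 95
```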

 

I run this every minute in crontab:

 

* * * * * /mnt/cache/appdata/myscripts/apcupsd.sh &>/dev/null 2>&1



I just googled it and you can use nano if you call crontab using the following:

export VISUAL=nano; crontab -e

I tried it and it works.


I got it working, but I did run into a hiccup. I copied and pasted your code into Notepad++ (in Windows) and then copied it to the appdata/scripts folder, but Windows injected some line endings that didn't agree with unRAID.

 

I was getting the error message:

Sep 24 22:40:01 tower crond[1788]: exit status 126 from user root /mnt/user/appdata/scripts/hdtemp.sh &>/dev/null 2>&1

Sep 24 22:40:01 tower crond[1788]: exit status 126 from user root /mnt/user/appdata/scripts/spin.sh &>/dev/null 2>&1

 

But I searched the forums and found the fix here.

https://lime-technology.com/forum/index.php?topic=11310.msg145909#msg145909

This fix removed the ^M at the end of the lines.

 

Thanks again!


I really like the scripts and they definitely fill in a gap that my dashboard had.

 

I made a slight change in the scripts I'm using in the way the data is sent to influxdb.

 

$curl = "curl -i -XPOST 'http://10.13.14.200:8086/write?db=telegraf' --data-binary 'diskspin,host=unraid,region=us-west,disk=".$tagname." "
."value=".$val."'";

 

Basically I moved the $tagname into a tag called disk. That way I can just use a single query: in 'group by' add tag(disk), and in the 'alias by' field write $tag_disk. It shows all the disks in a single query. And, as a bonus, value is the default field name in Grafana.
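A sketch of the two line-protocol shapes side by side (the host and values are placeholders); the second form puts the disk in a tag and the reading in a generic value field:

```shell
tagname="/dev/sdc"; val=1
# Original shape: one field per device, field key is the device name
old="HDSpin,host=unraid,region=us-west ${tagname}=${val}"
# Revised shape: device as a tag, single 'value' field
new="diskspin,host=unraid,region=us-west,disk=${tagname} value=${val}"
echo "$old"
echo "$new"
```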


One last thing: to make the additions to crontab persistent after a reboot, I had to add the following to my /boot/config/go file.

 

(crontab -l; echo "* * * * * /mnt/user/appdata/scripts/hdtemp.sh &>/dev/null 2>&1") | crontab -
(crontab -l; echo "* * * * * /mnt/user/appdata/scripts/spin.sh &>/dev/null 2>&1") | crontab -


I got some motivation from the scripts posted here to add some monitoring to my UNRAID installation as well.  I figured it was a bit less resource intensive to do it directly from bash but I'm totally guessing on that. 

 

I also wrote scripts for my DD-WRT router and windows PCs (powershell) but I figured for now I'd share the unraid scripts I wrote in case they are useful to anyone.  I'm not that experienced with bash scripting so if there is anything I could do better I'd appreciate the corrections.  All I ask is if you make improvements please share it back to me and the community.

 

I actually created 3 scripts for different intervals.  1, 5 and 30 mins.

 

Cron Jobs

#
# InfluxDB Stats 1 Minute (Delay from reading CPU when all the other PCs in my network report in)
# * * * * * sleep 10; /boot/custom/influxdb/influxStats_1m.sh > /dev/null 2>&1
#
# InfluxDB Stats 5 Minute
# */5 * * * * /boot/custom/influxdb/influxStats_5m.sh > /dev/null 2>&1
#
# InfluxDB Stats 30 Minute
# 0,30 * * * * /boot/custom/influxdb/influxStats_30m.sh > /dev/null 2>&1

 

 

Basic variables I use in all 3 scripts.

#
# Set Vars
#
DBURL=http://192.168.254.3:8086
DBNAME=statistics
DEVICE="UNRAID"
CURDATE=`date +%s`

 

 

CPU

Records CPU metrics - Load averages and CPU time

# Had to increase to 10 samples because I was getting a spike each time I read it.  This seems to smooth it out more
top -b -n 10 -d.2 | grep "Cpu" |  tail -n 1 | awk '{print $2,$4,$6,$8,$10,$12,$14,$16}' | while read CPUusr CPUsys CPUnic CPUidle CPUio CPUirq CPUsirq CPUst
do
top -bn1 | head -3 | awk '/load average/ {print $12,$13,$14}' | sed 's/,//g' | while read LAVG1 LAVG5 LAVG15
do
	curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "cpuStats,Device=${DEVICE} CPUusr=${CPUusr},CPUsys=${CPUsys},CPUnic=${CPUnic},CPUidle=${CPUidle},CPUio=${CPUio},CPUirq=${CPUirq},CPUsirq=${CPUsirq},CPUst=${CPUst},CPULoadAvg1m=${LAVG1},CPULoadAvg5m=${LAVG5},CPULoadAvg15m=${LAVG15} ${CURDATE}000000000" >/dev/null 2>&1
done
done
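An alternative to calling top a second time for the load averages is reading /proc/loadavg directly. A canned sample line is parsed here; on the server you would read the file itself with `read LAVG1 LAVG5 LAVG15 _ _ < /proc/loadavg`:

```shell
# /proc/loadavg looks like: "0.42 0.33 0.25 1/123 4567"
# (1m, 5m, 15m averages, running/total tasks, last pid).
# The sample string below is canned for illustration.
sample="0.42 0.33 0.25 1/123 4567"
read LAVG1 LAVG5 LAVG15 _ _ <<< "$sample"
echo "$LAVG1 $LAVG5 $LAVG15"   # 0.42 0.33 0.25
```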

 

Memory Usage

top -bn1 | head -4 | awk '/Mem/ {print $6,$8,$10}' | while read USED FREE CACHE
do
curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "memoryStats,Device=${DEVICE} memUsed=${USED},memFree=${FREE},memCache=${CACHE} ${CURDATE}000000000" >/dev/null 2>&1	
done

 

Network

if [[ -f byteCount.tmp ]] ; then

# Read the last values from the tmpfile - Line "eth0"
grep "eth0" byteCount.tmp | while read dev lastBytesIn lastBytesOut
do
	cat /proc/net/dev | grep "eth0" | grep -v "veth" | awk '{print $2, $10}' | while read currentBytesIn currentBytesOut 
	do			
		# Write out the current stats to the temp file for the next read
		echo "eth0" ${currentBytesIn} ${currentBytesOut} > byteCount.tmp

		totalBytesIn=`expr ${currentBytesIn} - ${lastBytesIn}`
		totalBytesOut=`expr ${currentBytesOut} - ${lastBytesOut}`

		# Prevent negative numbers when the counters reset.  Could miss data but it should be a marginal amount.
		if [ ${totalBytesIn} -le 0 ] ; then
			totalBytesIn=0
		fi

		if [ ${totalBytesOut} -le 0 ] ; then
			totalBytesOut=0
		fi
  				
		curl -is -XPOST "$DBURL/write?db=$DBNAME&u=$USER&p=$PASSWORD" --data-binary "interfaceStats,Interface=eth0,Device=${DEVICE} bytesIn=${totalBytesIn},bytesOut=${totalBytesOut} ${CURDATE}000000000" >/dev/null 2>&1

	done
done 

else
    # Write out blank file
echo "eth0 0 0" > byteCount.tmp
fi
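The delta-with-reset-guard logic in that script amounts to the following, shown with sample numbers (a counter that went backwards, i.e. reset, gets clamped to zero):

```shell
# Counter delta with a reset guard, using sample values.
lastBytesIn=1000
currentBytesIn=900           # counter reset between samples
totalBytesIn=$(( currentBytesIn - lastBytesIn ))
if [ "$totalBytesIn" -le 0 ]; then
    totalBytesIn=0           # clamp instead of reporting a huge negative
fi
echo "$totalBytesIn"   # 0
```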

 

Hard Disk IO

# Gets the stats for disk#
#
# The /proc/diskstats file displays the I/O statistics
# of block devices. Each line contains the following 14
# fields:
#  1 - major number
#  2 - minor number
#  3 - device name
#  4 - reads completed successfully
#  5 - reads merged
#  6 - sectors read <---
#  7 - time spent reading (ms)
#  8 - writes completed
#  9 - writes merged
# 10 - sectors written <---
# 11 - time spent writing (ms)
# 12 - I/Os currently in progress
# 13 - time spent doing I/Os (ms)
# 14 - weighted time spent doing I/Os (ms)
#

# Special Cases
# sda = Flash/boot
# sdf = Cache
# sdd = Parity

if [[ -f diskByteCountTest.tmp ]] ; then
cat /proc/diskstats | grep -E 'md|sdd|sda|sdf|loop0' | grep -E -v 'sd[a-z]1' |sed 's/md//g' | awk '{print "disk" $3, $6, $10}' | while read DISK currentSectorsRead currentSectorsWrite
do
	# Check if the disk is in the temp file.
	if grep ${DISK} diskByteCountTest.tmp 
	then
		grep ${DISK} diskByteCountTest.tmp | while read lDISK lastSectorsRead lastSectorsWrite
		do
			# Replace current disk stats with new stats for the next read
			sed -i "s/^${DISK}.*/${DISK} ${currentSectorsRead} ${currentSectorsWrite}/" diskByteCountTest.tmp
	  
			# Need to multiply by 512 to convert from sectors to bytes
			(( totalBytesRead = 512 * (${currentSectorsRead} - ${lastSectorsRead}) ))
			(( totalBytesWrite = 512 * (${currentSectorsWrite} - ${lastSectorsWrite}) ))
			(( totalBytes = totalBytesRead + totalBytesWrite))

			# Cases
			case ${DISK} in
			"disksda" )
				curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=boot,Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1 ;;
			"disksdd" )
				curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=parity,Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1 ;;
			"disksdf" )
				curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=cache,Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1 ;;
			"diskloop0" )
				curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=docker,Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1 ;;
			*)
				curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=${DISK},Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1
				;;
			esac

		done
	else
		# If the disk wasn't in the temp file then add it to the end
		echo ${DISK} ${currentSectorsRead} ${currentSectorsWrite} >> diskByteCountTest.tmp
	fi
done
else
    # Write out a new file
cat /proc/diskstats | grep -E 'md|sdd|sda|sdf|loop0' | grep -E -v 'sd[a-z]1' |sed 's/md//g' | awk '{print "disk" $3, $6, $10}' | while read DISK currentSectorsRead currentSectorsWrite
do
	echo ${DISK} ${currentSectorsRead} ${currentSectorsWrite} >> diskByteCountTest.tmp
done
fi

 

Number of Dockers Running

docker info | grep "Running" | awk '{print $2}' | while read NUM
do
curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "dockersRunning,Device=${DEVICE} Dockers=${NUM} ${CURDATE}000000000" >/dev/null 2>&1
done

 

 

Hard Disk Temperatures

# Current array assignment.
# I could pull this automatically from /var/local/emhttp/disks.ini,
# but parsing it wouldn't be that easy.
DISK_ARRAY=( sdd sdg sde sdi sdc sdb sdh sdf )
DESCRIPTION=( parity disk1 disk2 disk3 disk4 disk5 disk6 cache )
#
# Added -n standby to the check so smartctl is not spinning up my drives
#
i=0
for DISK in "${DISK_ARRAY[@]}"
do
smartctl -n standby -A /dev/$DISK | grep "Temperature_Celsius" | awk '{print $10}' | while read TEMP 
do
	curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "DiskTempStats,DEVICE=${DEVICE},DISK=${DESCRIPTION[$i]} Temperature=${TEMP} ${CURDATE}000000000" >/dev/null 2>&1
done
((i++))
done

 

 

Hard Disk Spinup Status

# Current array assignment.
# I could pull this automatically from /var/local/emhttp/disks.ini,
# but parsing it wouldn't be that easy.
DISK_ARRAY=( sdd sdg sde sdi sdc sdb sdh sdf )
DESCRIPTION=( parity disk1 disk2 disk3 disk4 disk5 disk6 cache )
i=0
for DISK in "${DISK_ARRAY[@]}"
do
hdparm -C /dev/$DISK | grep 'state' | awk '{print $4}' | while read STATUS
do
	#echo ${DISK} : ${STATUS} : ${DESCRIPTION[$i]}
	if [ ${STATUS} = "standby" ]
	then
		curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStatus,DEVICE=${DEVICE},DISK=${DESCRIPTION[$i]} Active=0 ${CURDATE}000000000" >/dev/null 2>&1
	else
		curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStatus,DEVICE=${DEVICE},DISK=${DESCRIPTION[$i]} Active=1 ${CURDATE}000000000" >/dev/null 2>&1
	fi
done
((i++))
done

 

Hard Disk Space

# Gets the stats for boot, disk#, cache, user
#
df | grep "mnt/\|/boot\|docker" | grep -v "user0\|containers" | sed 's/\/mnt\///g' | sed 's/%//g' | sed 's/\/var\/lib\///g'| sed 's/\///g' | while read MOUNT TOTAL USED FREE UTILIZATION DISK
do
if [ "${DISK}" = "user" ]; then
	DISK="array_total"
fi
curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "drive_spaceStats,Device=${DEVICE},Drive=${DISK} Free=${FREE},Used=${USED},Utilization=${UTILIZATION} ${CURDATE}000000000" >/dev/null 2>&1	
done

 

 

Uptime

UPTIME=`cat /proc/uptime | awk '{print $1}'`
curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "uptime,Device=${DEVICE} Uptime=${UPTIME} ${CURDATE}000000000" >/dev/null 2>&1

 

 

Disks 4 and 5 just finished their file integrity check. I also have a gap in the data every day at 11:36-12:36 that I haven't yet figured out. I'll need to investigate that, but I don't think it's related to these scripts.

 

Unraid-1.png

 

Unraid-2.png

 

Unraid-3.png

 

If anyone wants the grafana json just let me know I can post it as well.

 

 

Please post any suggestions of other metrics to capture.


This looks awesome, any chance of having it all bundled up into a plugin or docker container?

I'm sure it would be doable as a plugin. Docker container might be difficult since some of the scripts might need native access to the OS. It's not within my capability though, and it would be a weird plugin since it would be dependent on the dockers too.

 

 



Rainman, excellent work!  I've managed to get everything working except for CPU.  For some reason it's not loading the cpuStats field into InfluxDB.  Any ideas where to look?

 

It may be related to the CPU. I have a single core sempron.  The stats output may be different.

 

if you SSH into the server what do you get when you run

 

top -b -n 3 -d.2 | grep "Cpu" |  tail -n 1

 

and

 

cat /proc/loadavg

 

Feel free to PM me if you want to try and work through it. It should be a relatively easy fix, but it may be user specific. These scripts may not be that generic; especially the hard drive one, which I've modified a lot since I posted it and is now really specific. I can't find a good way to make it generic.


If anyone has an edimax smart plug:

 

The cost calculation is complicated (thanks to the power company), but you get the point from the code.

 

#
# SMART PLUG
# EDIMAX SMART PLUG SWITCH
# Model: SP-1101W
#
# Off-Peak: Weekends all day and weekdays 19:00 - 07:00 -- 0.000145 cents/watt-min
# Mid-Peak: Weekdays 17:00 - 19:00 & 07:00-11:00 -- 0.00022 cents/watt-min
# Peak: Weekdays 11:00 - 17:00 -- 0.0003 cents/watt-min
# Extra fees: 4.14c/kwh
OffPeak=0.000145
MidPeak=0.00022
Peak=0.0003
ExtraFees=0.000069
AdjustmentFactor=1.076

day=$(date +"%u")
hour=$(date +"%H")

DEVICE="UNRAID-PLUG"

curl -d @/boot/custom/influxdb/edi.xml http://admin:[email protected]:10000/smartplug.cgi -o /boot/custom/influxdb/output.txt

CURRENT=`cat /boot/custom/influxdb/output.txt | sed -n 's:.*<Device.System.Power.NowCurrent>\(.*\)</Device.System.Power.NowCurrent>.*:\1:p'`
POWER=`cat /boot/custom/influxdb/output.txt | sed -n 's:.*<Device.System.Power.NowPower>\(.*\)</Device.System.Power.NowPower>.*:\1:p'`


echo ${POWER}
echo ${CURRENT}

# Calculates the cost of each minute of electricity use based on the time of day calculation
if [ "$day" -eq 6 ] || [ "$day" -eq 7 ]; then
COST=$(echo $POWER $OffPeak $ExtraFees $AdjustmentFactor | awk '{printf "%.6f\n",$1*$4*($2+$3)}')
elif [ "$hour" -ge 19 ] || [ "$hour" -lt 7 ]; then
COST=$(echo $POWER $OffPeak $ExtraFees $AdjustmentFactor | awk '{printf "%.6f\n",$1*$4*($2+$3)}')
elif [ "$hour" -ge 11 ] && [ "$hour" -lt 17 ]; then
COST=$(echo $POWER $Peak $ExtraFees $AdjustmentFactor | awk '{printf "%.6f\n",$1*$4*($2+$3)}')
else
COST=$(echo $POWER $MidPeak $ExtraFees $AdjustmentFactor | awk '{printf "%.6f\n",$1*$4*($2+$3)}')
fi

curl -is -XPOST "$DBURL/write?db=$DBNAME&u=$USER&p=$PASSWORD" --data-binary "powerCost,Device=${DEVICE} Cost=${COST} ${CURDATE}000000000" >/dev/null

curl -is -XPOST "$DBURL/write?db=$DBNAME&u=$USER&p=$PASSWORD" --data-binary "smartplugStats,Device=${DEVICE} Current=${CURRENT},Power=${POWER} ${CURDATE}000000000" >/dev/null

 

edi.xml

<?xml version="1.0" encoding="UTF-8"?>
<SMARTPLUG id="edimax">
<CMD id="get">
	<NOW_POWER></NOW_POWER>
</CMD>
</SMARTPLUG>


Here is another one for hard disk power on time:

 

#
# Hard Disk Power On Hours
#
# Added -n standby to the check so smartctl is not spinning up my drives
#
i=0
for DISK in "${DISK_ARRAY[@]}"
do
smartctl -n standby -A /dev/${DISK} | grep -E "Power_On_Hours" | awk '{ print $10 }' | while read POWERONHOURS
do
	curl -is -XPOST "$DBURL/write?db=$DBNAME&u=$USER&p=$PASSWORD" --data-binary "diskPowerOnHours,DEVICE=${DEVICE},DISK=${DESCRIPTION[$i]} PowerOnHours=${POWERONHOURS} ${CURDATE}000000000" >/dev/null 2>&1
done
((i++))
done


Rainman, excellent work!  I've managed to get everything working except for CPU.  For some reason it's not loading the cpuStats field into InfluxDB.  Any ideas where to look?

 


 

Turns out it was working fine.  I was just looking in the wrong place.  Thanks again for putting this together!


Thanks to the absolutely brilliant RAINMAN for his scripts, ideas, and advice. Here is my first script. It monitors an AKCP Sensorprobe device for temperature/humidity...

 

### DB Details
DBURL=http://localhost:8086
DBNAME=environmon
DEVICE="akcp"
CURDATE=`date +%s`

### SNMP Details
VERSION=1 # 1, v2c
COMMUNITY=public
SNMPURL=akcp #Hostname or IP
PORTS=4 # How many ports


for (( i=0; i<=($PORTS - 1); i++ ))
  do
    snmpget -v ${VERSION} -c ${COMMUNITY} ${SNMPURL} -Ov 1.3.6.1.4.1.3854.1.2.2.1.16.1.3.$i | awk '{ print $2 }' | while read VALUE
  do
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "environmentStats,Type=Temperature,Interface=$(($i + 1)),Device=${DEVICE} Value=${VALUE}" >/dev/null 2>&1
  done
done

for (( i=0; i<=($PORTS - 1); i++ ))
  do
    snmpget -v ${VERSION} -c ${COMMUNITY} ${SNMPURL} -Ov 1.3.6.1.4.1.3854.1.2.2.1.17.1.3.$i | awk '{ print $2 }' | while read VALUE
  do
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "environmentStats,Type=Humidity,Interface=$(($i + 1)),Device=${DEVICE} Value=${VALUE}" >/dev/null 2>&1
  done
done

 

 

Here's a pretty picture of it after 8 or so hours;

 

D38G4BX.png

 

 

Hopefully this will make SNMP stuff a little easier for anyone that wants to hack my script :)


Hey all, here's another one, and this is version 2 of it, I'll explain further down..

 

This one polls Apple network devices (Airport, Timecapsule etc.) and monitors traffic, errors, clients, and uptime.

 

A couple of things to note:

If you use this, you will want to ensure you have the correct number of interfaces. You can do this with an snmpwalk on .1.3.6.1.2.1.2.2.1.2; it will list all the interfaces, and you can then adjust the interface array accordingly.
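The interface array could also be built from the walk instead of typed in by hand. A hedged sketch: the WALK variable below is a canned sample in the shape `snmpwalk -Ov` returns (names taken from my device; yours will differ), and on a live device you would capture the real command output instead:

```shell
#!/bin/bash
# Canned sample of: snmpwalk -v 2c -c public airport -Ov .1.3.6.1.2.1.2.2.1.2
# Replace with the real command output on a live device.
WALK='STRING: gec0
STRING: ath0
STRING: lo0'

# Keep only the interface names (second column) and load them into the array.
INTERFACE_ARRAY=( $(printf '%s\n' "$WALK" | awk '{ print $2 }') )
echo "${#INTERFACE_ARRAY[@]} interfaces: ${INTERFACE_ARRAY[*]}"
```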

 

There are two uptime variables: time in seconds, and time as a string. I don't like the way Grafana displays time ("1.3 months") and prefer more detail. Uptime as a string used to break if uptime was less than 24 hours or more than a year, but I've now fixed the script to count down from years, so it *shouldn't* break.

 

Here's the pic of how it looks once up and running in Grafana...

 

[image: BgCmT0J.png]

 

And here's the script...

 

####### Interesting OIDs
## .1.3.6.1.2.1.2.2.1.2.x - interface
## .1.3.6.1.2.1.2.2.1.10.x - Ingress data
## .1.3.6.1.2.1.2.2.1.16.x - egress data
## .1.3.6.1.2.1.2.2.1.14.x - Ingress errors
## .1.3.6.1.2.1.2.2.1.20.x - Egress Errors
## .1.3.6.1.2.1.1.3.0 = uptime
## .1.3.6.1.4.1.63.501.3.2.1.0 = Number of Wireless Clients

# echo $i "iface: " $INTERFACE "data in: " $INDATA "data out: " $OUTDATA "errors in: " $INERROR "errors out: " $OUTERROR

####### Polls Apple Airport type devices for traffic, clients, and uptime.
####### You may need to modify the number of items in the array to suit your device. The names are irrelevant.
####### This is based off an Airport Extreme, an older Timecapsule has fewer interfaces.
####### Need to use the "Difference" transformation in Grafana.

#!/bin/bash

### DB Details
DBURL=http://localhost:8086
DBNAME=test
DEVICE="airport"

### SNMP Details
VERSION=2c # 1, v2c
COMMUNITY=public
SNMPURL=airport #Hostname or IP

### Time periods in seconds
SECYEAR=31536000
SECMONTH=2592000
SECWEEK=604800
SECDAY=86400
SECHOUR=3600
SECMIN=60

# Instantiate arrays
INTERFACE_ARRAY=( gec0 ath0 ath1 lo0 wlan0 wlan1 wlan2 wlan3 vlan0 bridge0 bridge1 vlan1 vlan2 )
arrayLength=${#INTERFACE_ARRAY[@]}

### Data Transfer stats
for (( i = 1 ; i <= ${arrayLength} ; i++ ))
do
  snmpget -v ${VERSION} -c ${COMMUNITY} ${SNMPURL} -Ov .1.3.6.1.2.1.2.2.1.2.$i | awk '{ print $2 }' | while read INTERFACE #interface name
  do
    snmpget -v ${VERSION} -c ${COMMUNITY} ${SNMPURL} -Ov .1.3.6.1.2.1.2.2.1.10.$i | awk '{ print $2 }' | while read INDATA #inbound data
    do
      snmpget -v ${VERSION} -c ${COMMUNITY} ${SNMPURL} -Ov .1.3.6.1.2.1.2.2.1.16.$i | awk '{ print $2 }' | while read OUTDATA #outbound data
      do
        snmpget -v ${VERSION} -c ${COMMUNITY} ${SNMPURL} -Ov .1.3.6.1.2.1.2.2.1.14.$i | awk '{ print $2 }' | while read INERROR #inbound errors
        do
          snmpget -v ${VERSION} -c ${COMMUNITY} ${SNMPURL} -Ov .1.3.6.1.2.1.2.2.1.20.$i | awk '{ print $2 }' | while read OUTERROR #outbound error
          do
            curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "${DEVICE}_Stats,Type=Traffic,Interface=${INTERFACE} Data_In=${INDATA},Data_Out=${OUTDATA},Errors_in=${INERROR},Errors_Out=${OUTERROR}" >/dev/null 2>&1
          done
        done
      done
    done
  done
done


### Get Device Uptime in seconds
snmpget -v ${VERSION} -c ${COMMUNITY} ${SNMPURL} -Ov .1.3.6.1.2.1.1.3.0 | awk '{ print $2 }' | sed 's/[()]//g' | while read UPTIMENUM 
  do
    UPTIME=$(($UPTIMENUM / 100)) # convert timeticks to seconds
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "${DEVICE}_Stats,Type=UptimeSeconds Uptime_Seconds=${UPTIME}" >/dev/null 2>&1
  done

### Get Device Uptime as a string
snmpget -v ${VERSION} -c ${COMMUNITY} ${SNMPURL} -Ov .1.3.6.1.2.1.1.3.0 | awk '{ print $2 }' | sed 's/[()]//g' | while read UPTIMENUM 
  do
    UPTIME=$(($UPTIMENUM / 100)) # convert timeticks to seconds
    #Convert seconds in to days/months/years etc..
    STRINGTIME=$(printf "\"%d Years, %d Months, %d Weeks, %d Days, %d Hours, %d Minutes, and %d Seconds\"" \
      $(( ${UPTIME} / ${SECYEAR} )) \
      $(( ${UPTIME} % ${SECYEAR} / ${SECMONTH} )) \
      $(( ${UPTIME} % ${SECYEAR} % ${SECMONTH} / ${SECWEEK} )) \
      $(( ${UPTIME} % ${SECYEAR} % ${SECMONTH} % ${SECWEEK} / ${SECDAY} )) \
      $(( ${UPTIME} % ${SECYEAR} % ${SECMONTH} % ${SECWEEK} % ${SECDAY} / ${SECHOUR} )) \
      $(( ${UPTIME} % ${SECYEAR} % ${SECMONTH} % ${SECWEEK} % ${SECDAY} % ${SECHOUR} / ${SECMIN} )) \
      $(( ${UPTIME} % ${SECYEAR} % ${SECMONTH} % ${SECWEEK} % ${SECDAY} % ${SECHOUR} % ${SECMIN} % 60 )))
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "${DEVICE}_Stats,Type=UptimeString Uptime_String=${STRINGTIME}" >/dev/null 2>&1
  done

## Get Wireless Clients
snmpget -v ${VERSION} -c ${COMMUNITY} ${SNMPURL} -Ov .1.3.6.1.4.1.63.501.3.2.1.0 | awk '{ print $2 }' | while read CLIENTS
  do
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "${DEVICE}_Stats,Type=Clients Value=${CLIENTS}" >/dev/null 2>&1
  done
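A quick worked example of the timeticks maths in the uptime section, in case anyone wonders about the divide-by-100 (numbers are mine, not from a device):

```shell
#!/bin/bash
# sysUpTime is reported in timeticks (hundredths of a second),
# so exactly one day of uptime comes back as 8640000 ticks.
UPTIMENUM=8640000
UPTIME=$(( UPTIMENUM / 100 ))   # 86400 seconds
DAYS=$(( UPTIME / 86400 ))      # 1 day
echo "${UPTIME}s = ${DAYS} day(s)"
```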

 

 

Have fun!


Hey all,

 

I was having some trouble with RAINMAN's CPU script: for an hour each day I was getting no stats. A bit of digging found that it was the first hour each day, counted from when I booted the server.

 

I don't know if it's because I have a weird processor or not, but if anyone else is having similar issues, my (very slightly) edited version of RAINMAN's script may help you out.

 

 

#!/bin/bash

DBURL=http://localhost:8086
DBNAME=test
DEVICE="test"
CURDATE=`date +%s`

# Had to increase to 10 samples because I was getting a spike each time I read it.  This seems to smooth it out more
top -b -n 10 -d.2 | grep "Cpu" |  tail -n 1 | awk '{print $2,$4,$6,$8,$10,$12,$14,$16}' | while read CPUusr CPUsys CPUnic CPUidle CPUio CPUirq CPUsirq CPUst
do
  top -bn1 | head -3 | sed -e 's/.*load//g' | awk '/average/ {print $2" " $3" " $4}' | sed -e 's/,//g'| while read LAVG1 LAVG5 LAVG15
  do
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "cpuStats,Device=${DEVICE} CPUusr=${CPUusr},CPUsys=${CPUsys},CPUnic=${CPUnic},CPUidle=${CPUidle},CPUio=${CPUio},CPUirq=${CPUirq},CPUsirq=${CPUsirq},CPUst=${CPUst},CPULoadAvg1m=${LAVG1},CPULoadAvg5m=${LAVG5},CPULoadAvg15m=${LAVG15} ${CURDATE}000000000"  >/dev/null 2>&1
  done
done
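If you want this running on a schedule like the other scripts, a crontab entry along these lines should do it (the filename and path are just examples; point it at wherever you saved the script):

```
* * * * * /mnt/cache/appdata/myscripts/cpuload.sh >/dev/null 2>&1
```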


hdtemp.sh

 

Script for monitoring HD temperatures via smartctl.

 

** A working influxdb setup is required - see opening post **

 

**Disclaimer: I am not a programmer, so please use these scripts at your own risk. Read the script to understand what it is trying to achieve. There is no error trapping and no check to see whether it is already running, so in theory, if the script's running time is longer than the crontab interval, you could end up with an ever-increasing system load and potentially crash the server.**

 

This uses smartctl to get the SMART information output for the device. Needless to say, SMART needs to be turned on for the device, otherwise this won't work. But I can't think of a reason why you would want SMART turned off in any case.

 

You need to edit the script to add the devices you want measured in the $tagsArray statement. I'm not clever enough to write a script that does it automatically. You will also need to change the parameters mentioned in the first post in the curl statement. Use a Unix-style editor (e.g. Notepad++) so the correct line endings end up in the file, and save the script somewhere crontab can access it (mine are in /mnt/cache/appdata/myscripts/).

 

#!/usr/bin/php
<?php

$tagsArray = array(
"/dev/sdc", 
"/dev/sdd", 
"/dev/sde", 
"/dev/sdg", 
"/dev/sdh", 
"/dev/sdi", 
"/dev/sdj", 
"/dev/sdb"
);

// do system call and parse output for tag and value
foreach ($tagsArray as $tag) {
    $call = "smartctl -A ".$tag;
    $output = shell_exec($call);
    preg_match("/Temperature.+(\d\d)$/im", $output, $match);

    // send measurement, tag and value to influx
    sendDB($match[1], $tag);
}


// send to influxdb - you will need to change the parameters (influxServerIP,
// Tower, us-west) in $curl to your setup. Optionally change telegraf to another
// database, but you must create that database in influxdb first; telegraf will
// already exist if you have set up the telegraf agent docker.
function sendDB($val, $tagname) {
    $curl = "curl -i -XPOST 'http://influxServerIP:8086/write?db=telegraf' --data-binary 'HDTemp,host=Tower,region=us-west "
        .$tagname."=".$val."'";
    $execsr = exec($curl);
}


?>

 

Since HD temp doesn't change that quickly I have made the script run every 5 minutes in crontab:

 

*/5 * * * * /mnt/cache/appdata/myscripts/hdtemp.sh >/dev/null 2>&1

 

Grafana does not need any specific setup to show this metric since it is a simple value. The measurement HDTemp will appear in the list of measurements to select.

 

 

Awesome script! Thank you.

 

Curious if anyone else is having the same issue that I am: I'm only getting data for two drives. No matter what order I put them in the array, I only get temp data for sda and sdb.

 

edit: Want to add that SMART is enabled on all drives and when I run the command manually, I get data back.

 

 

root@Hollywood:~# smartctl -A /dev/sde
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.18-unRAID] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   108   099   006    Pre-fail  Always       -       20659920
  3 Spin_Up_Time            0x0003   097   097   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       710
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   078   060   030    Pre-fail  Always       -       61758002
  9 Power_On_Hours          0x0032   071   071   000    Old_age   Always       -       25798
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       298
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0 0 0
189 High_Fly_Writes         0x003a   099   099   000    Old_age   Always       -       1
190 Airflow_Temperature_Cel 0x0022   080   061   045    Old_age   Always       -       20 (Min/Max 17/22)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       281
193 Load_Cycle_Count        0x0032   001   001   000    Old_age   Always       -       768366
194 Temperature_Celsius     0x0022   020   040   000    Old_age   Always       -       20 (0 14 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       17417h+18m+05.179s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       8300941312
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       48794130453

root@Hollywood:~# smartctl -A /dev/sda
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.18-unRAID] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   177   176   021    Pre-fail  Always       -       6133
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       311
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   100   253   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   096   096   000    Old_age   Always       -       3014
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       42
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       16
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       1004
194 Temperature_Celsius     0x0022   128   119   000    Old_age   Always       -       22
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

