
script it…walk away.
I’m sure by now most of you have discovered dbatools, a wonderful library of PowerShell functions that makes life as a DBA so much better. It’s a fantastic community full of people willing to help with minimal flaming, and the library itself is frequently updated.
Initially I wanted to jump on the contribution train, but IRL I have jobs to consider, so instead I took more of an end-user role, working on familiarizing myself with PowerShell in order to tap into the power of dbatools. After writing some DBA-specific utilities, I started looking at the bigger picture of what I could do across the infrastructure with PowerShell.
Now, any of you loyal readers (other than you, mom) will recall that some RedGate products weren’t working out too well in our environment. Mostly the products were not offering the level of customization that we wanted. I had already migrated backup tasks away from SQL Backup to Ola Hallengren’s scripts, and I had started creating monitoring jobs that ran from within SQL. This was cool and was working for the most part; however, it still wasn’t where we needed it. We really wanted our alerts to be able to interact with the infrastructure as a whole, and to be flexible enough that almost anyone in our tech department who knew PowerShell could contribute or troubleshoot. PowerShell proved itself to be a great choice for all of this, and it was much more flexible when paired with dbatools.
For the sake of this post, let’s say that a basic alerting system has a few components: checking a condition, alerting people, logging, and optional reactions. The examples here are meant to spark your creativity, and I’m hoping that from here people will either contribute to the repository to make it better or go off and write their own things.
Starting with the config file, my idea here was to add options that can help in filtering out machines and/or setting some thresholds. Below is the current state of my config, which gives a good example. I have the server name, port, and IP address for identifying and locating servers on the network. Then I added a threshold for our process alert (to be detailed in a future blog post), and the last two columns flag whether the server is a subscriber in our architecture, which, again, is utilized in some scripts that check for conditions specific to replication subscribers, and whether it is the primary replica of an availability group.
ServerName,Port,IPAddress,ProcessAlertThreshold,isSubscriber,isAgPrimary
serverA,11111,1.1.1.1,40,0,1
serverB,22222,1.1.1.1,30,1,0
serverC,33333,1.1.1.1,30,1,0
serverD,44444,1.1.1.1,30,0,0
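Since the config is just a CSV, filtering on any of these columns is a one-liner. Here’s a minimal sketch (the file name matches the one used in the alert script later in this post) of pulling out only the replication subscribers:

#Load the config and keep only the servers flagged as replication subscribers
$Servers = Import-Csv 'C:\Powershell\servers_with_ports_test.txt'
$Subscribers = $Servers | Where-Object { $_.isSubscriber -eq 1 }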
Moving on to logging: at this point I really only needed two functions, one for writing logs and another for reading them. For writing logs I wanted the date and time, the server name, and then the flexibility to add more information for further analysis. Currently the function takes in the server name, a string array, and finally a log name for organization. Right now I’m manually entering the header row names, mostly because I’m not sure what I want them to end up being, but a future addition will be setting this in the write-log function. In my head it would be another string array that is set per alert and fed into the Write-Log function.
As far as reading the log goes, currently I only need to read a log in order to get the most recent alert date/time, therefore the read log function sorts everything by date descending. In reviewing it for this post, I think a quick improvement would be to return only the top row, excluding headers of course.
There is also a test function for use during development.
function Write-Log {
    param ($ServerName, [string[]]$LogItems, $LogName)

    #Build a comma-separated entry: timestamp, server name, then any extra items
    $LogString = (Get-Date).ToString()
    $LogString += ',' + $ServerName
    foreach ($Item in $LogItems) {
        $LogString += ',' + $Item
    }

    #Create the log file on first write, otherwise append
    if (-not (Test-Path C:\Powershell\Logs\$ServerName.$LogName.log)) {
        $LogString | Out-File -FilePath C:\Powershell\Logs\$ServerName.$LogName.log
    }
    else {
        $LogString | Out-File -FilePath C:\Powershell\Logs\$ServerName.$LogName.log -Append
    }
}

function Write-Log-Test {
    param ($ServerName, [string[]]$LogItems, $LogName)

    $LogString = (Get-Date).ToString()
    $LogString += ',' + $ServerName
    foreach ($Item in $LogItems) {
        $LogString += ',' + $Item
    }

    if (-not (Test-Path C:\Powershell\Logs\$ServerName.$LogName.log.test)) {
        $LogString | Out-File -FilePath C:\Powershell\Logs\$ServerName.$LogName.log.test
    }
    else {
        $LogString | Out-File -FilePath C:\Powershell\Logs\$ServerName.$LogName.log.test -Append
    }
}

function Read-Log {
    param ($ServerName, $LogName)

    #Return log entries sorted newest-first
    Import-Csv C:\Powershell\Logs\$ServerName.$LogName.log | Sort-Object -Property date -Descending
}
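As a sketch of the two tweaks mentioned above (neither is in the repository yet, so treat these as assumptions about where I’ll take it): Read-Log could hand back just the newest entry, and Write-Log could take a per-alert header array and write it when the file is first created.

function Read-LastLogEntry {
    #Hypothetical helper: return only the most recent entry instead of the whole sorted log
    param ($ServerName, $LogName)
    Import-Csv C:\Powershell\Logs\$ServerName.$LogName.log |
        Sort-Object -Property date -Descending |
        Select-Object -First 1
}

function Write-LogWithHeader {
    #Hypothetical variant: the $Headers array is set per alert and written once, when the file is created
    param ($ServerName, [string[]]$LogItems, $LogName, [string[]]$Headers)
    $LogFile = "C:\Powershell\Logs\$ServerName.$LogName.log"
    if (-not (Test-Path $LogFile)) {
        ('date,servername,' + ($Headers -join ',')) | Out-File -FilePath $LogFile
    }
    ((Get-Date).ToString() + ',' + $ServerName + ',' + ($LogItems -join ',')) | Out-File -FilePath $LogFile -Append
}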
The next set of functions is for alerting. We only needed emails to be sent out, as we have systems in place for escalation, so this was pretty simple. The functions take in a subject line and a body, and are statically set to email different people or lists. For example, the Send-Email-Test function is what I use with my own email during development so that I don’t flood any of the production alert email lists with unnecessary messages. I haven’t yet started formatting the messages with HTML, but the message object is already set to use HTML in the body.
$email = "" $smtpServer = "" $msg = new-object Net.Mail.MailMessage $smtp = new-object Net.Mail.SmtpClient($smtpServer) $msg.From = "$email" $msg.BodyEncoding = [system.Text.Encoding]::Unicode $msg.SubjectEncoding = [system.Text.Encoding]::Unicode $msg.IsBodyHTML = $true function Send-Email { param ($Subject, $Body) $recipient = "<recipient@email.domain>" $msg.To.Add("$recipient") $msg.Subject = $Subject $msg.Body = $Body $smtp.Send($msg) } function Send-Email-Test{ param ($Subject, $Body) $recipient = "<recipient@email.domain>" $msg.To.Add("$recipient") $msg.Subject = $Subject $msg.Body = $Body $smtp.Send($msg) } function Send-Email-DevOps{ param ($Subject, $Body) $recipient = "<recipient@email.domain>,<recipient@email.domain>,<recipient@email.domain>" $msg.To.Add("$recipient") $msg.Subject = $Subject $msg.Body = $Body $smtp.Send($msg) }
With the helper functions out of the way, we can dig into the first of many alerts.
This first alert example is for monitoring the synchronization status of databases in an Always On availability group. This was one of the first alerts I wrote because it was pretty simple and didn’t require much logic: a database is either synchronizing, synchronized, or in a state that someone had better take a look at. Most of the code has comments and is simple enough that it doesn’t need to be explained in depth. I did want to point out that I’m initializing variables that may or may not be used in a given script but that are used across most scripts. Not doing so has caused me some serious debugging headaches, so I felt it simple enough to just cut and paste them in and not worry about it. Here is a quick example of putting this all together; the script below uses the same pattern of helpers, logging, and throttled emails to make sure the SQL Agent service is running. This is the first post in a series on alerting, so I’m not going to go into detail about this alert; we can examine it and others in more detail later.
Set-StrictMode -Version 1

#Import Functions
. C:\Powershell\Functions\Messaging.ps1
. C:\Powershell\Functions\Logging.ps1

#Initialize All Variables
$LogName = $null
$LogPath = $null
$Servers = $null
$StartTime = $null
$Endtime = $null
$CurrentTime = $null
$Log = $null
$LogItems = $null
$LastAlertTime = $null
$ElapsedTime = $null
$Results = $null
$MissingIndexes = $null
$ExistingIndexes = $null

#Globals for the name of the log and the log path
$LogName = 'SQLAgent'
$LogPath = 'C:\Powershell\Logs\'

#Import the server names from a text file
$Servers = Import-Csv "C:\Powershell\servers_with_ports_test.txt"

#Set the current time for later use
$CurrentTime = Get-Date

#Put alerting into a loop to go through the servers
foreach ($Server in $Servers) {
    $ServerName = $Server.ServerName
    $ServiceStatus = Get-Service -ComputerName $Server.ServerName -Name SQLSERVERAGENT
    #Write-Host is for debugging
    #Write-Host $ServiceStatus.Status

    if ($ServiceStatus.Status -ne "Running") {
        #React: try to start the agent back up
        Get-Service -ComputerName $Server.ServerName -Name SQLSERVERAGENT | Set-Service -Status Running

        $Subject = $Server.ServerName + ' SQL Agent Service Status'
        $Body = 'The SQL Agent Service was not running on: ' + $Server.ServerName

        #Pull the date/time of the last alert from the first field of the newest log entry
        $LogFile = "$LogPath$ServerName.$LogName.log"
        if (Test-Path $LogFile) {
            $LastAlertLine = Get-Content -Path $LogFile -Tail 1
            $LastAlertTime = [datetime](($LastAlertLine -split ',')[0])
        }
        else {
            $LastAlertTime = [datetime]::MinValue
        }

        $CurrentTime = Get-Date
        $ElapsedTime = New-TimeSpan -Start $LastAlertTime -End $CurrentTime
        #For debugging
        #Write-Host $ElapsedTime.TotalMinutes

        #Only log and email if we have not alerted in the last 15 minutes
        if ($ElapsedTime.TotalMinutes -gt 15) {
            $LogItems = $Server.ServerName
            Write-Log -ServerName $Server.ServerName -LogItems $LogItems -LogName $LogName
            Send-Email-Test -Subject $Subject -Body $Body
        }
        #else{
        #    Write-Host "The alert has been sent too recently. Do NOT send another."
        #}
        #Write-Host $CurrentTime.ToString()
    }
}
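For the availability group synchronization check described at the top of this section, the condition test itself is the only new piece; the logging, throttling, and email helpers wrap around it the same way. Here’s a minimal sketch of that check, assuming dbatools is installed (Invoke-DbaQuery) and querying sys.dm_hadr_database_replica_states directly; the exact query and states are illustrative rather than the production version:

#Hypothetical condition check for AG synchronization state, reusing the $Servers config from above
$Query = "SELECT DB_NAME(database_id) AS DatabaseName, synchronization_state_desc
          FROM sys.dm_hadr_database_replica_states
          WHERE is_local = 1"

foreach ($Server in $Servers) {
    #The config includes the port, so pass server,port to dbatools
    $Instance = $Server.ServerName + ',' + $Server.Port
    $States = Invoke-DbaQuery -SqlInstance $Instance -Query $Query
    $Bad = $States | Where-Object { $_.synchronization_state_desc -notin @('SYNCHRONIZED', 'SYNCHRONIZING') }
    if ($Bad) {
        $Subject = $Server.ServerName + ' AG Synchronization Status'
        $Body = ($Bad | ForEach-Object { $_.DatabaseName + ' is ' + $_.synchronization_state_desc }) -join '<br/>'
        Send-Email-Test -Subject $Subject -Body $Body
    }
}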
From here you just set up a scheduled task in Windows Task Scheduler with the frequency you want to check at, which I would suggest you clear with someone over your head, preferably the business. The task should execute powershell.exe with the parameters -ExecutionPolicy Bypass C:\Powershell\<scriptname>.ps1
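If you’d rather script the task creation too, here’s a sketch using the ScheduledTasks module; the 15-minute interval, task name, and SYSTEM account are my assumptions, so adjust them to whatever frequency you cleared:

#Register a task that runs the alert script every 15 minutes (hypothetical name and interval)
$Action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-ExecutionPolicy Bypass -File C:\Powershell\<scriptname>.ps1'
$Trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Minutes 15) -RepetitionDuration (New-TimeSpan -Days 365)
Register-ScheduledTask -TaskName 'SQLAgentServiceAlert' -Action $Action -Trigger $Trigger -User 'SYSTEM'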
And there you have it! Minimal alerting/logging/monitoring utilizing dbatools and PowerShell.
Next up, more alerts!