Why Become a PowerShell DBA

If you find this article valuable and decide you want to get your learn on with PowerShell and SQL Server, there is a SQLskills class I am teaching this year. Please join me and get a great start on growing your toolset by adding PowerShell to your toolbelt. The Early Bird discount is still available until March 10, 2017.

I have heard many times, from many people, that it is not worth learning PowerShell as a DBA. I have been using PowerShell since v1.0 and have found it amazingly rewarding. In this entry I will illustrate why, and hopefully some of it will resonate with you and you will see the worth of learning at least a little PowerShell for your DBA job.

First, PowerShell was built to manage Windows servers, and it will soon manage Linux servers in some measure as .NET Core comes into play. Back in the day, if you were a DBA and wanted to manage SQL Servers with it, you used a lot of SMO (SQL Server Management Objects), and that meant programming in C#, or pseudo C# in PowerShell: creating objects, managing properties, and so on. That was not for everyone, I know. I have a developer background and an IT background, so managing servers and Active Directory and building programs to do things was part of my world a long time ago. But what I found was that there were not a lot of tools in the marketplace for doing things against SQL Server with PowerShell back then, so it was not widely adopted by DBAs.

In the SQL Server 2008 days, Microsoft wrote some PowerShell snap-ins that gave us access to a few cmdlets (commands) to manage things, and introduced a PowerShell provider for SQL Server that turned your SQL Server into a series of paths (SQLSERVER:\sql\servername\instancename). That was pretty powerful, at least to a guy like me, looking to automate many repetitive tasks. With those cmdlets and the provider there was finally a way to use PowerShell to get at SQL Server without writing .NET code. It certainly did not cover the breadth of the product the way Exchange had with its cmdlets, but it was the start of some love. This is where the dawn of becoming a PowerShell DBA began.

Now Why Investigate

As a DBA there are many times you want information about your server, and there is plenty of TSQL to get it. You need to know the DMVs or the system tables to get at that information, and for a fairly new DBA that can be a real challenge. Here is where the paradigm begins. SMO encapsulates the objects in SQL Server into classes (an object-oriented term), and those classes represent the objects by exposing the appropriate methods and properties to act on them. Let me illustrate.

We all know that there is a Database object in SQL Server and there are many TSQL commands that we can use to find out information such as…

SELECT * FROM sys.databases

SELECT * FROM sys.database_files

and many more….

It is common practice to create a set of TSQL scripts and take them with you as you go. One of the values of longevity in the industry is what you have created and built up, and you will get no argument from me about that mentality. It parallels the PowerShell mentality: you can reuse the scripts later on and not have to remember or rebuild them on the fly. But is there a better way? Maybe, but at least there is “another” way.

Enter PowerShell

PowerShell in and of itself is a mechanism to do many things. I advocate that DBAs learn it because of the way automation, scripting and information gathering can change your job, in ways that are much different from running TSQL scripts and copy-pasting the results into documents or tables (or even Excel). I will take you through a couple of scenarios, with examples, to give you a flavor of an additional tool. I am not advocating replacing SSMS or any other tool with PowerShell, only making you more efficient.

Say I want to have a list of databases with their name and recovery model.

TSQL

SELECT name, recovery_model FROM sys.databases;

PowerShell

Get-SqlDatabase -ServerInstance servername | Select Name, RecoveryModel | Format-Table * -AutoSize

We won’t really go into how you get set up to run these commands; the point is just to compare them. They are not much different from each other: one is run in SSMS or sqlcmd, and the other is run in the PowerShell console, the ISE, or another tool that will execute PowerShell. The results are pretty similar as well.

[Screenshot: sys.databases query results in SSMS]

[Screenshot: Get-SqlDatabase output in PowerShell]

Now that doesn’t look too scary. You get the same results.  Here is how you would get it with the provider.

[Screenshot: listing databases through the SQL Server provider]
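In case the screenshot is hard to read, the command it shows boils down to something like this. This is a sketch rather than the exact command in the screenshot, and it assumes the SQLPS or SqlServer module is loaded so the SQLSERVER: drive exists (servername\instancename is a placeholder):

Get-ChildItem SQLSERVER:\SQL\servername\instancename\Databases -Force | Select Name, RecoveryModel | Format-Table -AutoSize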

Notice that I used a path-like structure to get to the databases in the provider, and I used the -Force parameter to ensure that I got the system databases. Now that seems the same, other than the tool. So you ask the question: why do I do it? Why learn PowerShell just to get the same information? Here is the answer.

Objects are the answer

In TSQL you get results as a grid or text; you copy and paste, or you just look at it and read it. It seems like you get some text back whether you use PowerShell or TSQL, but in PowerShell everything is an object, and objects (potentially) have methods and properties. Let me illustrate an interesting part of PowerShell and objects.

Say I want to change the recovery model to Full on my DBA database. In TSQL I could do it in a couple of ways, and it would look something like this.

ALTER DATABASE databasename SET RECOVERY FULL

Let’s take a look at how you do it with PowerShell and why it would be valuable to get this tool in your toolbelt.

[Screenshot: changing the recovery model with SMO in PowerShell]
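What the screenshot shows is roughly the following. This is a sketch, assuming the SQL provider is loaded and using placeholder server and instance names with my DBA database:

$db = Get-Item SQLSERVER:\SQL\servername\instancename\Databases\DBA
$db.RecoveryModel            # shows the current recovery model
$db.RecoveryModel = "Full"   # SMO accepts Full, Simple or BulkLogged
$db.Alter()                  # sends the change to the server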

What you see is the retrieval of the database as an SMO object (the provider always returns SMO objects), which has a property called RecoveryModel that tells me which recovery model it is in. The property is read/write, so to set it I can change it to “Full”, “Simple” or “BulkLogged” and then call the Alter() method as above. Methods have parentheses () after the method name, and I could have passed a termination clause inside them to do a WITH ROLLBACK IMMEDIATE, like I would if I were changing a property that required exclusive access. Now that may not look very advantageous, but imagine the next scenario.

[Screenshot: a PowerShell function that changes the recovery model]

I create a function to change my recovery model, with parameters for my SQL Server instance, the database and the RecoveryModel. Now I have a tool that does not require SSMS to be installed and lets me call a function to change my recovery model. Once it is written, saved (and even put in a module for easy loading) and loaded into my environment, I can use it easily for any database on any server.
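A hypothetical version of such a function might look like the one below. The name and parameter names are mine, not the exact function in the screenshot, and it assumes SMO is available (for example via the SqlServer module or SQLPS):

function Set-DbRecoveryModel {
    param (
        [string]$SqlInstance,
        [string]$Database,
        [ValidateSet("Full", "Simple", "BulkLogged")]
        [string]$RecoveryModel
    )
    # Connect with SMO, find the database and alter its recovery model
    $server = New-Object Microsoft.SqlServer.Management.Smo.Server $SqlInstance
    $db = $server.Databases[$Database]
    if ($db -eq $null) { Write-Host "Database $Database not found on $SqlInstance"; return }
    $db.RecoveryModel = $RecoveryModel
    $db.Alter()
}

# Usage: Set-DbRecoveryModel -SqlInstance "servername" -Database "DBA" -RecoveryModel Full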

Like I said before, this is contrived and works because I have SMO loaded, which is not hard to do, but this entry is all about why, not how to get things loaded. I have a SQL PowerShell Stairway at SQLServerCentral.com that you can read to help you get that part worked out. Not hard, but not the reason for this post.

Join me in the quest to become a PowerShell DBA. Stay tuned for more on this site and join me in a class soon.


PowerShell month in May with DBAduck

I am excited for this month.

SQLSkills Training

Event 1 for this month revolving around PowerShell is May 8-10 at the SQLskills.com training in Chicago. I am teaching the first class of the Immersion Event on PowerShell for the DBA. You can see the details at the link for the class.

I am super humbled and excited to teach this class for SQLskills and at the time of this writing there are 3 seats left after we opened more seats for the class. Please join me for a great event.

We will go over how to get started with PowerShell, then dive right in and learn why a DBA should learn PowerShell and why you would ever want to become a PowerShell DBA. There are so many reasons and so much value that I cannot contain it all in this type of post. But we will have a grand time drinking from a firehose and learning while we do. 🙂

I hope you will join me for this iteration, or plan on another iteration another time. I love PowerShell and look forward to meeting all of you who are going to be there.

PASS Virtual PowerShell Chapter Meeting

Event 2 for this month will be on Wednesday, May 17th at 12:00 PM Eastern Time, when I will be presenting to the group on Gathering Data for Trending using PowerShell. You can RSVP here. I will be covering a great topic that I spend a lot of time thinking about. There is a lot of data in SQL Server, and a lot of metadata, that can tell some great stories about your servers and what their behavior looks like. Ever since working at Microsoft with the Community Research team on newsgroup behaviors, I have been hooked on looking at SQL Server in terms of behaviors, not just features. So I have created some data gathering scripts that collect data into tables as snapshots in time, so I can look at the information in a different light and see if I can derive how things are working in SQL Server. Join me for a great time and register for the webinar here.

SQL Intersection

Event 3 will be a precon at SQLintersection on May 20th in Orlando at the Disney Swan Hotel. If you are going to be there, I hope you will join me, and if you are thinking about going, please register and join me there. It is my well-liked precon that takes you from 0 to 60 in using PowerShell as a DBA and as a professional who wants to manage SQL Server from a command line instead of clicking your way through everything. I will teach how to leverage the shell, as well as how to balance the use of SSMS and PowerShell, to gain another tool in your toolbelt.

This will be a great month focused on PowerShell.

 


T-SQL Tuesday #39: More Efficient SMO with Server.SetDefaultInitFields

This T-SQL Tuesday is about PowerShell and how I use it with SQL Server. There are many more posts to come about PowerShell and SQL Server, but I thought I would cut my teeth on this T-SQL Tuesday since it is on PowerShell.

If you have read Rob Farley’s (b | t) post on the SMO topic, I am going to extend it a little and illustrate how you can get better TSQL out of SMO by using something built into the SMO Server object.

The Server object contains a lot of properties and methods, but this post will focus on one called SetDefaultInitFields. This method takes an object type and a list of properties that should be retrieved up front when objects of that type are loaded. This makes things much more efficient when dealing with collections of items, such as Databases, Tables and Columns.

The code below retrieves a list of databases with their PageVerify setting, and the screenshots show what the Profiler output looks like with and without the Server.SetDefaultInitFields method. Nothing rocket-science, but it illustrates that you can help SMO out in the efficiency department.

# This will get the Server object from the SQL Provider
# either by using Import-Module SQLPS with SQL 2012 installed
# or by using Add-PSSnapin SqlServerProviderSnapin100 with 2008/R2 installed
# or by using SQLPS.exe with 2008/R2 installed
# If you load the Module SQLPS with 2012, you don't have to load the 
# SMO Assembly for Microsoft.SqlServer.Smo
# Execute each set in a new PowerShell window to see the same information

# This set will get the list of databases, but will execute the query
# to get the PageVerify for each database (plus much more information)

$server = Get-Item SQLSERVER:\SQL\localhost\default
$server.Databases | Select Name, PageVerify

# This set will get the list of databases and when it does, it will get them with 1 query
# along with the PageVerify at the same time

$server = Get-Item SQLSERVER:\SQL\localhost\default
$server.SetDefaultInitFields([Microsoft.SqlServer.Management.Smo.Database], "PageVerify")
$server.Databases | Select Name, PageVerify

Here are the screenshots that show the Profiler output for each approach. Notice that the first set of PowerShell actually gets the data for each database one at a time. With the second set, you see one query. The reason is SetDefaultInitFields on the Server object.

The first screenshot shows the retrieval of the Name of one database, and you get the CompatibilityLevel for free, as SMO interrogates that automatically for its own internal purposes. The second screenshot shows a lot more information being retrieved, just because you asked for one of the properties in a group that SMO has defined; it retrieves the whole group. Notice on each of these that it is specifically for the database named ‘Bob1’ and no other. Each database will have these 2 queries run for it, and without any help it could be a long process to get the attributes for 1000 databases.

[Screenshot: Profiler trace, per-database Name query without SetDefaultInitFields]

[Screenshot: Profiler trace, property-group query for database ‘Bob1’ without SetDefaultInitFields]

In the third screenshot you will see that the second piece of PowerShell issues 1 query and gets all the information in one round trip to the server. Look Mom, no WHERE clause. Pretty efficient.

[Screenshot: Profiler trace, single query with SetDefaultInitFields]

Hopefully this gave you a taste of what you can do to help SMO be a little more efficient and make fewer round trips to the server. Do not underestimate the power of SetDefaultInitFields.
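If you want to see which fields SMO initializes by default for a type, or set more than one field at a time, the Server object has related calls. A quick sketch, assuming the same $server object as above:

$server.GetDefaultInitFields([Microsoft.SqlServer.Management.Smo.Database])                                  # current default fields for Database
$server.SetDefaultInitFields([Microsoft.SqlServer.Management.Smo.Database], "PageVerify", "RecoveryModel")   # multiple fields at once
$server.SetDefaultInitFields($true)                                                                          # initialize everything for every type (heavy-handed)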

Have a great T-SQL Tuesday.

Ben Miller


Get SQL Server Service Accounts with WMI

Today I was tasked with retrieving the SQL Server service accounts to validate them in Active Directory. I wanted a simple way to get them and show the account name for each SQL Server service. I knew that you could get the services in PowerShell with Get-Service, and I also knew that PowerShell could get them with Get-WmiObject -Class Win32_Service. But neither one had a ServiceAccount property.

So I turned to WMI via SMO with PowerShell. The script below, which I named GetSqlServices_WMI.ps1, takes a single parameter, $sqlserver.

param (
	[string]$sqlserver
)

# Call this with .\GetSqlServices_WMI.ps1 "servername"

[void][reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlWmiManagement")
$wmi = New-Object ("Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer") $sqlserver
$wmi.Services | Select DisplayName, Type, StartMode, ServiceState, ServiceAccount | ft -auto

The call is in the code.  The results will have DisplayName, Type, StartMode, ServiceState and ServiceAccount listed in a table with column headers.

Great way to interrogate the Services from SQL Server.
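Since the original task was validating the accounts in Active Directory, a quick way to see which accounts are in use is to group on that property. A small sketch using the same $wmi object from the script above:

$wmi.Services | Group-Object ServiceAccount | Select Name, Count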

The properties available to you are as follows:

  • Parent
  • AcceptsPause
  • AcceptsStop
  • Description
  • DisplayName
  • ErrorControl
  • ExitCode
  • PathName
  • ProcessId
  • ServiceAccount
  • StartMode
  • Type
  • IsHadrEnabled (because I am using 11.0 SMO)
  • StartupParameters
  • Dependencies
  • AdvancedProperties
  • Urn
  • Name
  • Properties
  • UserData
  • State

And the methods:

  • Alter
  • ChangeHadrServiceSetting (because I am using 11.0 SMO)
  • ChangePassword
  • Equals
  • GetHashCode
  • GetType
  • Initialize
  • Pause
  • Refresh
  • Resume
  • SetServiceAccount (handy little guy; see the sketch after this list)
  • Start
  • Stop
  • ToString
  • Validate
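As an example of SetServiceAccount, changing the account a service runs under would look something like this. It is a sketch only, with placeholder service name, account and password, using the $wmi ManagedComputer object from the script above:

$svc = $wmi.Services["MSSQLSERVER"]
$svc.SetServiceAccount("DOMAIN\sqlsvc", "StrongPasswordHere")
$svc.Refresh()   # re-read the service information after the change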

The documentation for this object is under Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer.

Enjoy and happy scripting.

@DBAduck


SMO and Index Fragmentation

When working with SMO and indexes, there are times when you want to get the fragmentation of a particular index. In SMO there is a method on the Index object called EnumFragmentation, and there is a bug in it filed on Connect (SMO Index Enum Fragmentation…) that you should vote for. It is a very impactful bug, because this method is the only way in SMO to get the fragmentation information for an index.

The bug is this: when you call EnumFragmentation(), you get fragmentation for every index in the entire database instead of just the index you are calling it from. What you would expect to see is

select *
from sys.dm_db_index_physical_stats(9, 39399393, NULL, NULL, 'SAMPLED')

Instead you see

select *
from sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, 'LIMITED')

This is crazy, as you also only get a limited scan. I know you can change that by calling the method with a scan mode (‘DETAILED’, ‘SAMPLED’, ‘LIMITED’), but it defaults to LIMITED.
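For reference, calling the overload that takes a scan option looks roughly like this. It is a sketch, assuming SMO is loaded, with placeholder database, table and index names (FragmentationOption Fast maps to LIMITED, Sampled to SAMPLED, Detailed to DETAILED):

$server = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"
$index = $server.Databases["MyDb"].Tables["MyTable", "dbo"].Indexes["IX_MyIndex"]
$index.EnumFragmentation([Microsoft.SqlServer.Management.Smo.FragmentationOption]::Sampled) | Format-Table -AutoSize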

Vote up the bug if this is something that affects you and your work.


PowerShell, SMO and Database Files

In response to some questions about renaming a physical file for a database, I have 2 versions of PowerShell scripts that do this for you, including taking the database offline and back online so the physical change matches the metadata.

First, there is an article about this at http://msdn.microsoft.com/en-us/library/ms345483.aspx. It explains that you start by setting the database offline, then alter the database to modify the filename, then set it back online. That article does not show the physical renaming of the file, but it is necessary, since the ALTER DATABASE MODIFY FILE statement only changes the database metadata in master.

There is a procedure to do this with SQLCLR and TSQL, but I chose to illustrate it in PowerShell and SMO (using the SQL provider as well as straight SMO). The SQLCLR version is by SQL Server MVP Ted Krueger (Blog | Twitter) and can be found at this link.

My version is based on PowerShell and SMO, with a smattering of the SQL provider that ships with SQL 2008+.

Code Snippet
Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100

$servername = "localhost"
$instance = "default"
$dbname = "N2CMS"
$logicalName = "N2CMS"
$NewPath = "c:\program files\microsoft sql server\mssql10_50.mssqlserver\mssql\data"
$NewFilename = "N2CMS_4.mdf"

if(Test-Path "sqlserver:\sql\$servername\$instance\databases\$dbname")
{
    $database = Get-Item "sqlserver:\sql\$servername\$instance\databases\$dbname"

    $fileToRename = $database.FileGroups["PRIMARY"].Files[$logicalName]
    $InitialFilePath = $fileToRename.FileName

    $fileToRename.FileName = "$NewPath\$NewFilename"

    $database.Alter()
    $database.SetOffline()
    Rename-Item -Path $InitialFilePath -NewName "$NewFilename"
    $database.SetOnline()
    Write-Host "File Renamed"
}
else
{
    Write-Host "Database does not exist"
}

Notice that I first add the snap-ins for the SQL cmdlets and provider. If you already load them in your PowerShell profile, you can omit those lines. If you do not have the SQL 2008 objects installed on the machine, the snap-ins will not load at all; in that case you would just use the version below.

In the SQL provider version you will see a PowerShellism: Test-Path with a path, it just happens to be a SQL provider path that points to the database and makes sure it exists. Compare that to the other version, where you look at Server.Databases[$dbname] and then get the filegroups and then the files.

The next version of the script is pure SMO and PowerShell. First you load the SQL Server 2005/2008 SMO assemblies.

Code Snippet
# Always
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null

# Only if you don't have SQL 2008 SMO Objects installed
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null

# Only if you have SQL 2008 SMO objects installed
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlEnum") | Out-Null

$servername = "localhost"
$instance = "default"
$dbname = "N2CMS"
$logicalName = "N2CMS"
$NewPath = "c:\program files\microsoft sql server\mssql10_50.mssqlserver\mssql\data"
$NewFilename = "N2CMS_4.mdf"

$server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -ArgumentList $servername
if($server.Databases[$dbname] -ne $null)
{
    $database = $server.Databases[$dbname]
    $fileToRename = $database.FileGroups["PRIMARY"].Files[$logicalName]
    $InitialFilePath = $fileToRename.FileName
    $fileToRename.FileName = "$NewPath\$NewFilename"
    $database.Alter()
    $database.SetOffline()
    Rename-Item -Path $InitialFilePath -NewName "$NewFilename"
    $database.SetOnline()
    Write-Host "File Renamed"
}
else
{
    Write-Host "Database does not exist"
}

This version uses the Server object to get the database you are looking for so that you can get at the files. The file you want is the one with the logical name in $logicalName, and from it you get the physical file name. Then, using the built-in cmdlets for file operations, Rename-Item is issued to rename the file from the initial path to the new one.

Operation order, as the scripts above issue it:

  • ALTER DATABASE dbname MODIFY FILE (metadata only)
  • ALTER DATABASE dbname SET OFFLINE
  • Rename-Item from the old physical name to the new one
  • ALTER DATABASE dbname SET ONLINE

I did not put these in a function, but they could easily (or without too much trouble) be put together with parameters to make them versatile for other situations. That part is for you to do so that it works for your scenario, though a rough sketch of what that could look like follows.
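As a starting point, a parameterized version might look something like this. The function name is mine and this is only a sketch of the SMO version above, assuming SMO is already loaded:

function Rename-DatabaseFile {
    param (
        [string]$ServerName,
        [string]$DatabaseName,
        [string]$LogicalName,
        [string]$NewPath,
        [string]$NewFilename
    )

    $server = New-Object Microsoft.SqlServer.Management.Smo.Server $ServerName
    $database = $server.Databases[$DatabaseName]
    if ($database -eq $null)
    {
        Write-Host "Database does not exist"
        return
    }

    $fileToRename = $database.FileGroups["PRIMARY"].Files[$LogicalName]
    $initialFilePath = $fileToRename.FileName

    # Change the metadata, take the database offline, rename the physical file, bring it back online
    $fileToRename.FileName = "$NewPath\$NewFilename"
    $database.Alter()
    $database.SetOffline()
    Rename-Item -Path $initialFilePath -NewName $NewFilename
    $database.SetOnline()
    Write-Host "File Renamed"
}

# Usage: Rename-DatabaseFile -ServerName "localhost" -DatabaseName "N2CMS" -LogicalName "N2CMS" -NewPath "c:\data" -NewFilename "N2CMS_4.mdf"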

Love PowerShell!
