
The Performance Impact of Prefixing Stored Procedures with sp_

Last week I ran across a blog post by Axel Achten (B|T) that outlined a few reasons why you should not use SELECT * in queries.   In the post, Axel used the SQLQueryStress tool by Adam Machanic (B|T) to stress-test a simple query using SELECT * and SELECT col1, col2,...  This gave me an idea to use the same SQLQueryStress tool to benchmark a stored procedure that's prefixed with sp_.

All DBAs know, or should know, that you should not prefix stored procedures with sp_.  Even Microsoft notes in Books Online that the sp_ prefix is reserved for system stored procedures; SQL Server always looks for an sp_-prefixed procedure in the master database first, so every call carries a little extra name-resolution overhead.

I'm not going to discuss the do's and don'ts of naming conventions.  What I want to know is whether there is still a performance impact from using the sp_ prefix.

For our test, we'll use the AdventureWorks2012 database.  First we need to create two new stored procedures that select from the Person.Person table.

USE AdventureWorks2012;
GO
CREATE PROCEDURE dbo.sp_SelectPerson AS SELECT * FROM Person.Person;
GO
CREATE PROCEDURE dbo.SelectPerson AS SELECT * FROM Person.Person;
GO

Next, we'll clear the procedure cache, and then execute each procedure once to compile it and to ensure all the data pages are in the buffer.

DBCC FREEPROCCACHE;
GO
EXEC dbo.sp_SelectPerson;
GO
EXEC dbo.SelectPerson;
GO

Next, we'll execute each stored proc 100 times using SQLQueryStress and compare the results.
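If you don't have SQLQueryStress handy, a rough single-session approximation is a simple WHILE loop that calls each procedure 100 times; the timings won't match a multi-threaded stress test, but the procedure cache statistics shown later will still accumulate.

SET NOCOUNT ON;
DECLARE @i INT = 1;
WHILE @i <= 100
BEGIN
    EXEC dbo.sp_SelectPerson;
    EXEC dbo.SelectPerson;
    SET @i += 1;
END;
GO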



Total time to execute sp_SelectPerson was 3 minutes 43 seconds, compared to only 3 minutes 35 seconds for SelectPerson.  Given this test run was only 100 iterations, 8 seconds is a huge amount of savings.

We can even query sys.dm_exec_procedure_stats to get the average worker time in seconds and average elapsed time in seconds for each procedure.

SELECT
     o.name AS 'object_name'
    ,p.execution_count
    ,p.total_worker_time AS 'total_worker_time(μs)'
    ,(p.total_worker_time/p.execution_count)*0.000001 AS 'avg_worker_time(s)'
    ,p.total_elapsed_time AS 'total_elapsed_time(μs)'
    ,(p.total_elapsed_time/p.execution_count)*0.000001 AS 'avg_elapsed_time(s)'
FROM sys.dm_exec_procedure_stats p
JOIN sys.objects o ON p.object_id = o.object_id;
GO


As you can see, the average time per execution is very minimal, but it does add up over time.  This could easily scale into a much larger difference if all stored procedures begin with sp_.


Dealing with a Fragmented Heap

Just for the record, this happens to be one of my favorite interview questions to ask candidates.

At some point in time, there will be a database containing tables without clustered indexes (heaps) that you will be responsible for maintaining.  I personally believe that every table should have a clustered index, but my advice is not always followed.  Additionally, there can be databases from a 3rd party vendor that have this same design.  Depending on what those heap tables are used for, over time they can become highly fragmented and degrade query performance.  A fragmented heap is just as bad as a fragmented index.  To resolve this issue, I'd like to cover four ways we can defragment a heap.

To start with, we will need a sample database with a highly fragmented heap table.  You can download the FRAG database (SQL2012) from here.  Let's use the sys.dm_db_index_physical_stats DMV to check the fragmentation level.

USE FRAG;
GO
SELECT
     index_id
    ,index_type_desc
    ,index_depth
    ,index_level
    ,avg_fragmentation_in_percent
    ,fragment_count
    ,page_count
    ,record_count
FROM sys.dm_db_index_physical_stats(
     DB_ID('FRAG')
    ,OBJECT_ID('MyTable')
    ,NULL
    ,NULL
    ,'DETAILED');
GO


As you can see, the heap is 93% fragmented, and both non-clustered indexes are 99% fragmented.  So now we know what we're dealing with. 

Repair options in order of my preference:
  1. ALTER TABLE...REBUILD (SQL 2008+).
  2. CREATE CLUSTERED INDEX, then DROP INDEX.
  3. CREATE TABLE temp, INSERT INTO temp, DROP original table, sp_rename temp to original, recreate the non-clustered indexes.
  4. BCP out all data, drop the table, recreate the table, bcp data in, recreate the non-clustered indexes.

Option 1 is the easiest and most optimal way to remove heap fragmentation; however, it was only introduced in SQL Server 2008, so it's not available for all versions.  It's a single command that rebuilds the table and any associated indexes, even clustered ones.  Keep in mind this means the heap itself as well as all of its non-clustered indexes will be rebuilt.

ALTER TABLE dbo.MyTable REBUILD;
GO

Option 2 is almost as quick, but involves a little bit of planning.  You will need to select a column to create the clustered index on, keeping in mind this will reorder the entire table by that key.  Once the clustered index has been created, immediately drop it.

CREATE CLUSTERED INDEX cluIdx1 ON dbo.MyTable(col1);
GO
DROP INDEX cluIdx1 ON dbo.MyTable;
GO

Option 3 requires manually moving all data to a new temporary table.  This option is an offline operation and should be done during off-hours.  First you will need to create a new temporary table with the same structure as the heap, and then copy all rows to the new temporary table.

CREATE TABLE dbo.MyTable_Temp (col1 INT, col2 INT);
GO
INSERT dbo.MyTable_Temp
SELECT * FROM dbo.MyTable;
GO

Next, drop the old table, rename the temporary table to the original name, and then create the original non-clustered indexes.

DROP TABLE dbo.MyTable;
GO
EXEC sp_rename 'MyTable_Temp', 'MyTable';
GO
CREATE NONCLUSTERED INDEX idx1 ON dbo.MyTable(col1);
GO
CREATE NONCLUSTERED INDEX idx2 ON dbo.MyTable(col2);
GO

Option 4 is by far the least efficient way to complete this task.  Just like option 3, it is an offline operation and should be done during off-hours.  First we need to use the BCP utility to bulk copy all of the data out to a data file.  Using BCP will require a format file to define the structure of what we're bulk copying.  In this example, I am using an XML format file.  More information on format files can be found here.

BCP FRAG.dbo.MyTable OUT D:\MyTable.dat -T -S TRON\TEST1 -f D:\MyTableFormat.xml

Once that is complete, we need to drop and recreate the table.

DROP TABLE dbo.MyTable;
GO
CREATE TABLE dbo.MyTable (col1 INT, col2 INT);
GO

Next, we need to use the BCP utility to bulk copy all of the data back into the table.

BCP FRAG.dbo.MyTable IN D:\MyTable.dat -T -S TRON\TEST1 -f D:\MyTableFormat.xml

Finally, we can create the original non-clustered indexes.

CREATE NONCLUSTERED INDEX idx1 ON dbo.MyTable(col1);
GO
CREATE NONCLUSTERED INDEX idx2 ON dbo.MyTable(col2);
GO

Options 1 and 2 do not require any downtime for the table; however, they will cause blocking during the rebuild stage.  You can use the WITH (ONLINE = ON) option, but that will require enough free space in tempdb for the entire table.  Options 3 and 4 will both require downtime and will potentially impact any foreign key constraints or other dependent objects.  If you're running SQL Server 2008 or higher, I highly recommend using option 1.
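For reference, on editions that support online operations, option 1 could be combined with the ONLINE option.  This is only a sketch; test it in your own environment, since the online rebuild has its own edition, space, and locking considerations.

ALTER TABLE dbo.MyTable REBUILD WITH (ONLINE = ON);
GO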

As you've seen, there are multiple ways of dealing with heap fragmentation.  However, the best way is to avoid heaps altogether in your database design.

T-SQL Tuesday #40 - Proportional Fill within a Filegroup


T-SQL Tuesday #40 is underway, and this month's host is Jennifer McCown (blog|twitter).  The topic is about File and Filegroup Wisdom.  Jennifer says she's a big fan of the basics, so I thought I would talk about the basics of proportional fill within a filegroup.  This should be pretty common knowledge, but I still talk to a lot of DBAs who don't know anything about it, or if they have heard of it, don't know how it works.

The proportional fill algorithm keeps the amount of free space evenly distributed across all of the files in a filegroup.  It falls in line with the strategy of placing your files and filegroups across multiple disks, which allows for improved I/O performance.

Let's say we need to add more storage space for our AdventureWorks2012 database that has outgrown the current drive D.  Because of storage limitations, we can't add any more space to D, so our only choice is to add a completely new drive E.  

Once we add the new E drive to the server, we add a new data file to the PRIMARY filegroup of the AdventureWorks2012 database using the following query.

USE master;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE (
     NAME = N'AdventureWorks2012_Data2'
    ,FILENAME = N'E:\MSSQL11.TEST1\MSSQL\DATA\AdventureWorks2012_Data2.ndf'
    ,SIZE = 200MB
    ,FILEGROWTH = 1024KB
) TO FILEGROUP [PRIMARY];
GO

One might think we're safe at this point; however, because of the proportional fill feature we're not.  Once new data is written to the data files, SQL Server will create the new page allocations on the newly created AdventureWorks2012_Data2.ndf file because it has a higher percentage of free space compared to AdventureWorks2012_Data.mdf.  Drive E now suddenly becomes a new I/O hotspot on the server.

You can check the space used with the following query.

USE AdventureWorks2012;
GO
SELECT
     name AS 'LogicalName'
    ,physical_name AS 'PhysicalName'
    ,CONVERT(INT, ROUND(size/128, 0)) AS 'Size (MB)'
    ,CONVERT(INT, ROUND(FILEPROPERTY(name, 'SpaceUsed')/128, 0)) AS 'SpaceUsed (MB)'
FROM sys.database_files
WHERE type = 0;
GO




To avoid this disk hotspot issue, we need to have the data more evenly balanced across both files in the filegroup in terms of data page allocations.  The quickest way to do this is to rebuild all of the clustered indexes within the database.

ALTER INDEX [PK_AWBuildVersion_SystemInformationID] ON [dbo].[AWBuildVersion] REBUILD;
ALTER INDEX [PK_ErrorLog_ErrorLogID] ON [dbo].[ErrorLog] REBUILD;
ALTER INDEX [PK_Department_DepartmentID] ON [HumanResources].[Department] REBUILD;
:
:
ALTER INDEX [PK_Store_BusinessEntityID] ON [Sales].[Store] REBUILD;
GO

SQL Server will do its best to automatically rebalance all of the page allocations across all files within the same filegroup.  In our case, both data files are still part of the PRIMARY filegroup. 
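If you don't want to type out an ALTER INDEX statement for every table, you could generate them from the catalog views instead.  Here's a minimal sketch that builds the rebuild commands for every clustered index on a user table; review the generated statements before running them.

SELECT 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON '
    + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name) + ' REBUILD;'
FROM sys.indexes i
JOIN sys.objects o ON i.object_id = o.object_id
WHERE i.type = 1      -- clustered indexes only
    AND o.type = 'U'; -- user tables only
GO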

Check the space used again with the following query.

USE AdventureWorks2012;
GO
SELECT
     name AS 'LogicalName'
    ,physical_name AS 'PhysicalName'
    ,CONVERT(INT, ROUND(size/128, 0)) AS 'Size (MB)'
    ,CONVERT(INT, ROUND(FILEPROPERTY(name, 'SpaceUsed')/128, 0)) AS 'SpaceUsed (MB)'
FROM sys.database_files
WHERE type = 0;
GO



Now what we have is a much more evenly balanced allocation across both data files.  This will allow SQL Server to evenly distribute the write I/O across both disk drives.

By doing this one index maintenance step after adding a new file, you'll help prevent a write hotspot on one of your disks and help SQL Server improve its I/O performance.  But keep in mind that proportional fill only applies to files within the same filegroup.  If we had added the second file to a new filegroup, then we would have had to manually move tables to the new filegroup.
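If you do find yourself needing to move a table to a different filegroup, one common approach is to rebuild its clustered index onto the new filegroup using DROP_EXISTING.  This is only a sketch; the index name, key column, and [SECONDARY] filegroup below are placeholders for your own objects.

CREATE UNIQUE CLUSTERED INDEX PK_MyTable
    ON dbo.MyTable (Id)
    WITH (DROP_EXISTING = ON)
    ON [SECONDARY];
GO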

For more info on files and filegroups, check out Books Online.

Use PowerShell to Pick Up What Database Mirroring Leaves Behind

Database mirroring has been around since SQL Server 2005, and it's turned out to be an excellent step up from log shipping.  However, like log shipping, it is still only a database-level disaster recovery solution, meaning that any logins, server role memberships, or server-level permissions will not be mirrored over to the mirror server.  This is where the DBA needs to plan ahead and create custom jobs to script and/or document these gaps.

My solution is to use PowerShell.  In this example, I have set up database mirroring for the AdventureWorks2012 database.  For this demo, both instances, TEST1 and TEST2, are on the same physical server.


There are two logins on the principal server that currently do not exist on the mirror server.  One is a SQL login, AWLogin1, and the other is a Windows Authenticated login, TRON2\AWLogin2.

The first step of our PowerShell script is to connect to the principal server and generate a CREATE LOGIN script for those two logins.  To generate the script, we need to grab the login name, the SID, and the hashed password if it's a SQL login.  This is accomplished by running the following code.

SELECT 'USE master; CREATE LOGIN ' + QUOTENAME(p.name) + ' ' +
CASE WHEN p.type IN ('U','G')
    THEN 'FROM WINDOWS '
    ELSE ''
    END
+ 'WITH ' +
CASE WHEN p.type = 'S'
    THEN 'PASSWORD = ' + master.sys.fn_varbintohexstr(l.password_hash) + ' HASHED, ' + 'SID = ' + master.sys.fn_varbintohexstr(l.sid) + ', CHECK_EXPIRATION = ' +
    CASE WHEN l.is_expiration_checked > 0
        THEN 'ON, '
        ELSE 'OFF, '
        END
    + 'CHECK_POLICY = ' +
    CASE WHEN l.is_policy_checked > 0
        THEN 'ON, '
        ELSE 'OFF, '
        END +
    CASE WHEN l.credential_id > 0
        THEN 'CREDENTIAL = ' + c.name + ', '
        ELSE ''
        END
ELSE ''
END
+ 'DEFAULT_DATABASE = ' + p.default_database_name +
CASE WHEN LEN(p.default_language_name) > 0
    THEN ', DEFAULT_LANGUAGE = ' + p.default_language_name
    ELSE ''
    END
+ ';' AS 'LoginScript'
FROM master.sys.server_principals p LEFT JOIN master.sys.sql_logins l
    ON p.principal_id = l.principal_id LEFT JOIN master.sys.credentials c
    ON l.credential_id = c.credential_id
WHERE p.type IN ('S','U','G')
    AND p.name NOT IN ('sa','NT AUTHORITY\SYSTEM')
    AND p.name NOT LIKE '##%##'
    AND p.name NOT LIKE 'BUILTIN\%'
    AND p.name NOT LIKE 'NT SERVICE\%'
ORDER BY p.name;

In this example, you can see we have one row for each of the two logins.


The next step of the PowerShell script is to write those two rows of data to a file on the mirror server.  This is done using the System.IO.StreamWriter class.

foreach ($row in $commandList.Tables[0].Rows)
{
    try
    {
        $output = $row["LoginScript"].ToString()
        $stream.WriteLine($output)
    }
    catch
    {
        $stream.Close()
        CheckForErrors
    }
}

When there is a need to fail over to the mirror server, the DBA can then open this script and run it.  All logins will be created with their original SID value and password.

The second half of the PowerShell script uses the same approach to script out any server role memberships or server-level permissions these two logins may have on the principal server.  This is done using the following block of code.

-- BUILD SERVER ROLE MEMBERSHIPS
SELECT 'USE master; EXEC sp_addsrvrolemember @loginame = ' + QUOTENAME(s.name) + ', @rolename = ' + QUOTENAME(s2.name) + ';' AS 'ServerPermission'
FROM master.sys.server_role_members r INNER JOIN master.sys.server_principals s
    ON s.principal_id = r.member_principal_id INNER JOIN master.sys.server_principals s2
    ON s2.principal_id = r.role_principal_id
WHERE s2.type = 'R'
    AND s.is_disabled = 0
    AND s.name NOT IN ('sa','NT AUTHORITY\SYSTEM')
    AND s.name NOT LIKE '##%##'
    AND s.name NOT LIKE 'NT SERVICE\%'
UNION ALL
-- BUILD SERVER-LEVEL PERMISSIONS
SELECT 'USE master; ' + sp.state_desc + ' ' + sp.permission_name + ' TO ' + QUOTENAME(s.name) COLLATE Latin1_General_CI_AS + ';' AS 'ServerPermission'
FROM sys.server_permissions sp JOIN sys.server_principals s
    ON sp.grantee_principal_id = s.principal_id
WHERE s.type IN ('S','G','U')
    AND sp.type NOT IN ('CO','COSQ')
    AND s.is_disabled = 0
    AND s.name NOT IN ('sa','NT AUTHORITY\SYSTEM')
    AND s.name NOT LIKE '##%##'
    AND s.name NOT LIKE 'NT SERVICE\%';

From the output, you can see that TRON2\AWLogin2 is a member of the BULKADMIN server role and has the VIEW SERVER STATE permission.  These two rows will be written to a file in the same file share as the previous file.


As before, once the database is failed over to the mirror server, the DBA can run this script to apply any missing permissions.

Finally, this PowerShell script can be scheduled to run from any server; however, I chose to set up this job on the principal server.  I schedule it to run once a day through SQL Agent.  Each run of the script will overwrite the existing files, so if any logins or permissions have been added or removed, the change will show up in the latest version of the files.


Using this PowerShell script can make it very easy to script out logins and permissions.  While this example was used with database mirroring, the same strategy will work for log shipping.  The entire PowerShell script is below.

Merging SQL Server and Softball Just for Fun

With opening day of Major League Baseball season finally here, I thought I’d take the time to cover two of my favorite topics…SQL Server and softball.  Have you ever thought about how you can use SQL Server in conjunction with softball? Ok, so maybe you haven’t, but I have.  I have been managing a slow-pitch softball team, the Sons of Pitches, for the past 5 years.  Yes, I did say “slow-pitch”.  My friends and I have already passed the peak of our physical ability, so we use it as an excuse to get together and have a little fun.

As a DBA, I’m big on keeping track of metrics, so naturally this spills over into my extracurricular activities.  For each softball game, the league requires us to keep score, but in addition to that I like to keep individual player stats.  Why would I want to do this?  For benchmarking, of course!  How will I ever tell if my players are getting better or worse without it?  So to save myself a lot of time each week, I created a SQL Server database to keep track of all the stats.  Each week, all I have to do is enter the stats and then generate a report by running a few TSQL scripts.  Below is a sample of the score book that I keep for each game.  This is the only manual piece.  Everything else is automated by SQL Server.


Let’s start by creating our database and then a Games table to hold all of the data.

CREATE DATABASE Softball;
GO
USE Softball;
GO
CREATE TABLE dbo.Games (
    [Id] INT IDENTITY(1,1) NOT NULL,
    [GameDate] DATE DEFAULT (GETDATE()) NOT NULL,
    [SeasonId] INT NOT NULL,
    [SeasonName] VARCHAR(50) NOT NULL,
    [GameNumber] SMALLINT NOT NULL,
    [BattingOrder] SMALLINT NOT NULL,
    [Roster] SMALLINT NOT NULL,
    [Name] VARCHAR(50) NOT NULL,
    [PA] FLOAT NOT NULL,
    [AB] AS ([PA]-([SACf]+[BB])),
    [H] AS ((([1B]+[2B])+[3B])+[HR]),
    [1B] FLOAT NOT NULL,
    [2B] FLOAT NOT NULL,
    [3B] FLOAT NOT NULL,
    [HR] FLOAT NOT NULL,
    [ROE] FLOAT NOT NULL,
    [SACf] FLOAT NOT NULL,
    [BB] FLOAT NOT NULL,
    [K] FLOAT NOT NULL,
    [RBI] FLOAT NOT NULL,
    [RUNS] FLOAT NOT NULL
);
GO
ALTER TABLE dbo.Games
    ADD CONSTRAINT PK_Games PRIMARY KEY CLUSTERED (Id, GameDate);
GO

Let's back up for one minute.  For those of you who are not familiar with baseball (or softball) scoring, here's a quick breakdown of what each abbreviation means.  These are the definitions for my softball scoring.  Some baseball statistics are omitted because they are not relevant in our league.

PA = Plate Appearance: number of times that player appeared in the batter’s box.
AB = At Bat: plate appearances, not including walks or sacrifices.
H = Hit: number of times a batter safely reached a base w/o an error by the defense.
1B = Single: number of times a batter safely reached first base w/o an error by the defense.
2B = Double: number of times a batter safely reached second base w/o an error by the defense.
3B = Triple: number of times a batter safely reached third base w/o an error by the defense.
HR = Home Run: number of times a batter safely reached all four bases w/o an error by the defense.
ROE = Reached on Error: number of times a batter safely reached a base with an error by the defense.
SACf = Sacrifice Fly: Fly ball hit to the outfield that was caught for an out, but allowed a base runner to advance.
BB = Base on Balls (aka Walk): number of times a batter did not swing at four pitches outside the strike zone, and was awarded first base by the umpire.
K = Strike Out: number of times a third strike is called or swung at and missed, or hit foul when the batter already had two strikes.
RBI = Run Batted In: number of runners who scored as a result of the batters’ action.
RUNS = Runs Scored: number of times a runner crossed home plate.
BA = Batting Average: Hits (H) divided by At Bats (AB).
OB = On Base Percentage: number of times a batter reached base (H + BB) divided by (AB + BB + SACf).
SLUG = Slugging Average: number of bases achieved (1B+2B*2+3B*3+HR*4) divided by At Bats (AB). One base is for each 1B, two bases for each 2B, three bases for each 3B, and four bases for each HR.
OPS = On Base Percentage Plus Slugging: sum of batter’s OB + SLUG.

The Games table will hold one row for each player and his stats for that game.  All columns will need to be manually entered, except for H and AB; the hits and at bats are computed columns.  What we have now is the raw data stored in the database.
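For illustration, a weekly data-entry statement looks roughly like the one below.  The player and the numbers are made up, and H and AB are left out because SQL Server computes them.

INSERT INTO dbo.Games
    ([SeasonId], [SeasonName], [GameNumber], [BattingOrder], [Roster], [Name],
     [PA], [1B], [2B], [3B], [HR], [ROE], [SACf], [BB], [K], [RBI], [RUNS])
VALUES
    (12, 'Spring 2012', 1, 3, 4, 'John Doe', 4, 2, 1, 0, 0, 0, 0, 1, 0, 2, 1);
GO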

SELECT * FROM Games;
GO


Next, we need to build some views to add the other calculated fields (BA, OB, SLUG, and OPS) and to present the data in a more readable format.  Since Hits (H) and At Bats (AB) are already computed columns, we need to use a view in order to use them in another calculation.

CREATE VIEW dbo.PlayerStats
AS
SELECT
     [SeasonId]
    ,[SeasonName]
    ,[GameNumber]
    ,[BattingOrder]
    ,[Roster]
    ,[Name]
    ,[PA]
    ,[AB]
    ,[H]
    ,[1B]
    ,[2B]
    ,[3B]
    ,[HR]
    ,[ROE]
    ,[SACf]
    ,[BB]
    ,[K]
    ,[RBI]
    ,[RUNS]
    ,CONVERT(DECIMAL(5,3), (ISNULL(([H]/NULLIF([AB],0)),0))) AS [BA]
    ,CONVERT(DECIMAL(5,3), (ISNULL((([H]+[BB])/(NULLIF([AB]+[BB]+[SACf],0))),0))) AS [OB]
    ,CONVERT(DECIMAL(5,3), (ISNULL((([1B]+([2B]*2)
        +([3B]*3)+([HR]*4))/NULLIF([AB],0)),0))) AS [SLUG]
    ,CONVERT(DECIMAL(5,3), (ISNULL((([H]+[BB])/NULLIF([AB]+[BB]+[SACf],0)),0)
        +ISNULL((([1B]+([2B]*2)+([3B]*3)+([HR]*4))/NULLIF([AB],0)),0))) AS [OPS]
FROM dbo.Games;
GO

SELECT * FROM PlayerStats;
GO


Next, we can create a view for the season stats.

CREATE VIEW dbo.SeasonStats
AS
SELECT
     [SeasonId]
    ,[SeasonName]
    ,[Roster]
    ,[Name]
    ,COUNT(GameNumber) AS [Games]
    ,SUM([PA]) AS [PA]
    ,SUM([AB]) AS [AB]
    ,SUM([H]) AS [H]
    ,SUM([1B]) AS [1B]
    ,SUM([2B]) AS [2B]
    ,SUM([3B]) AS [3B]
    ,SUM([HR]) AS [HR]
    ,SUM([ROE]) AS [ROE]
    ,SUM([SACf]) AS [SACf]
    ,SUM([BB]) AS [BB]
    ,SUM([K]) AS [K]
    ,SUM([RBI]) AS [RBI]
    ,SUM([RUNS]) AS [RUNS]
    ,ISNULL(CONVERT(DECIMAL(5,3), (SUM([H])/NULLIF(SUM([AB]),0))),0) AS [BA]
    ,ISNULL(CONVERT(DECIMAL(5,3), ((SUM([H])+SUM([BB]))
        /(NULLIF(SUM([AB])+SUM([BB])+SUM([SACf]),0)))),0) AS [OB]
    ,ISNULL(CONVERT(DECIMAL(5,3), ((SUM([1B])+(SUM([2B])*2)
        +(SUM([3B])*3)+(SUM([HR])*4))/NULLIF(SUM([AB]),0))),0) AS [SLUG]
    ,CONVERT(DECIMAL(5,3), ISNULL((SUM([H])+SUM([BB]))
        /(NULLIF(SUM([AB])+SUM([BB])+SUM([SACf]),0)),0)+ISNULL((SUM([1B])
        +(SUM([2B])*2)+(SUM([3B])*3)+(SUM([HR])*4))/NULLIF(SUM([AB]),0),0)) AS [OPS]
FROM dbo.PlayerStats
GROUP BY [SeasonId],[SeasonName],[Roster],[Name];
GO

This will allow us to view each player’s stats for a given season.

SELECT * FROM SeasonStats
WHERE SeasonId = 12
ORDER BY [OB] DESC, [BA] DESC;
GO


In addition to season stats, we also need a view for career stats.

CREATE VIEW dbo.CareerStats
AS
SELECT
     [Roster]
    ,[Name]
    ,COUNT(GameNumber) AS [Games]
    ,SUM([PA]) AS [PA]
    ,SUM([AB]) AS [AB]
    ,SUM([H]) AS [H]
    ,SUM([1B]) AS [1B]
    ,SUM([2B]) AS [2B]
    ,SUM([3B]) AS [3B]
    ,SUM([HR]) AS [HR]
    ,SUM([ROE]) AS [ROE]
    ,SUM([SACf]) AS [SACf]
    ,SUM([BB]) AS [BB]
    ,SUM([K]) AS [K]
    ,SUM([RBI]) AS [RBI]
    ,SUM([RUNS]) AS [RUNS]
    ,ISNULL(CONVERT(DECIMAL(5,3), (SUM([H])/NULLIF(SUM([AB]),0))),0) AS [BA]
    ,ISNULL(CONVERT(DECIMAL(5,3), ((SUM([H])+SUM([BB]))
        /(NULLIF(SUM([AB])+SUM([BB])+SUM([SACf]),0)))),0) AS [OB]
    ,ISNULL(CONVERT(DECIMAL(5,3), ((SUM([1B])+(SUM([2B])*2)
        +(SUM([3B])*3)+(SUM([HR])*4))/NULLIF(SUM([AB]),0))),0) AS [SLUG]
    ,CONVERT(DECIMAL(5,3), ISNULL((SUM([H])+SUM([BB]))
        /(NULLIF(SUM([AB])+SUM([BB])+SUM([SACf]),0)),0)+ISNULL((SUM([1B])
        +(SUM([2B])*2)+(SUM([3B])*3)+(SUM([HR])*4))/NULLIF(SUM([AB]),0),0)) AS [OPS]
FROM dbo.PlayerStats
GROUP BY [Roster],[Name];
GO

SELECT * FROM CareerStats
ORDER BY [OB] DESC, [BA] DESC;
GO


Next, we can create a view for the individual player stats.

CREATE VIEW dbo.IndividualStats
AS
SELECT
     CONVERT(VARCHAR, [SeasonId]) AS [SeasonId]
    ,[SeasonName]
    ,[Roster]
    ,[Name]
    ,COUNT(GameNumber) AS [Games]
    ,SUM([PA]) AS [PA]
    ,SUM([AB]) AS [AB]
    ,SUM([H]) AS [H]
    ,SUM([1B]) AS [1B]
    ,SUM([2B]) AS [2B]
    ,SUM([3B]) AS [3B]
    ,SUM([HR]) AS [HR]
    ,SUM([ROE]) AS [ROE]
    ,SUM([SACf]) AS [SACf]
    ,SUM([BB]) AS [BB]
    ,SUM([K]) AS [K]
    ,SUM([RBI]) AS [RBI]
    ,SUM([RUNS]) AS [RUNS]
    ,ISNULL(CONVERT(DECIMAL(5,3), (SUM([H])/NULLIF(SUM([AB]),0))),0) AS [BA]
    ,ISNULL(CONVERT(DECIMAL(5,3), ((SUM([H])+SUM([BB]))
        /(NULLIF(SUM([AB])+SUM([BB])+SUM([SACf]),0)))),0) AS [OB]
    ,ISNULL(CONVERT(DECIMAL(5,3), ((SUM([1B])+(SUM([2B])*2)
        +(SUM([3B])*3)+(SUM([HR])*4))/NULLIF(SUM([AB]),0))),0) AS [SLUG]
    ,CONVERT(DECIMAL(5,3), ISNULL((SUM([H])+SUM([BB]))
        /(NULLIF(SUM([AB])+SUM([BB])+SUM([SACf]),0)),0)+ISNULL((SUM([1B])
        +(SUM([2B])*2)+(SUM([3B])*3)+(SUM([HR])*4))/NULLIF(SUM([AB]),0),0)) AS [OPS]
FROM dbo.PlayerStats
GROUP BY [SeasonId],[SeasonName],[Roster],[Name];
GO

This will allow us to view the stats for an individual player as well as union in his career stats.

SELECT * FROM IndividualStats
WHERE Roster = 4
UNION ALL
SELECT 'CAREER STATS', '', * FROM CareerStats
WHERE Roster = 4;
GO


Finally, we can create one last view for the individual player stats by season.

CREATE VIEW dbo.IndividualTeamStats
AS
SELECT
     CONVERT(VARCHAR, [SeasonId]) AS [SeasonId]
    ,[SeasonName]
    ,[Roster]
    ,[Name]
    ,CONVERT(VARCHAR, [GameNumber]) AS [Games]
    ,SUM([PA]) AS [PA]
    ,SUM([AB]) AS [AB]
    ,SUM([H]) AS [H]
    ,SUM([1B]) AS [1B]
    ,SUM([2B]) AS [2B]
    ,SUM([3B]) AS [3B]
    ,SUM([HR]) AS [HR]
    ,SUM([ROE]) AS [ROE]
    ,SUM([SACf]) AS [SACf]
    ,SUM([BB]) AS [BB]
    ,SUM([K]) AS [K]
    ,SUM([RBI]) AS [RBI]
    ,SUM([RUNS]) AS [RUNS]
    ,ISNULL(CONVERT(DECIMAL(5,3), (SUM([H])/NULLIF(SUM([AB]),0))),0) AS [BA]
    ,ISNULL(CONVERT(DECIMAL(5,3), ((SUM([H])+SUM([BB]))
        /(NULLIF(SUM([AB])+SUM([BB])+SUM([SACf]),0)))),0) AS [OB]
    ,ISNULL(CONVERT(DECIMAL(5,3), ((SUM([1B])+(SUM([2B])*2)
        +(SUM([3B])*3)+(SUM([HR])*4))/NULLIF(SUM([AB]),0))),0) AS [SLUG]
    ,CONVERT(DECIMAL(5,3), ISNULL((SUM([H])+SUM([BB]))
        /(NULLIF(SUM([AB])+SUM([BB])+SUM([SACf]),0)),0)+ISNULL((SUM([1B])
        +(SUM([2B])*2)+(SUM([3B])*3)+(SUM([HR])*4))/NULLIF(SUM([AB]),0),0)) AS [OPS]
FROM dbo.Games
GROUP BY [SeasonId],[SeasonName],[Name],[Roster],[GameNumber]
UNION
SELECT
     'SEASON STATS'
    ,[SeasonName]
    ,[Roster]
    ,[Name]
    ,''
    ,[PA]
    ,[AB]
    ,[H]
    ,[1B]
    ,[2B]
    ,[3B]
    ,[HR]
    ,[ROE]
    ,[SACf]
    ,[BB]
    ,[K]
    ,[RBI]
    ,[RUNS]
    ,[BA]
    ,[OB]
    ,[SLUG]
    ,[OPS]
FROM dbo.SeasonStats;
GO

This allows us to get a line-by-line view of what each player did in each game throughout the season, as well as his aggregated season stats.

SELECT * FROM IndividualTeamStats
WHERE SeasonName = 'Spring 2012'
ORDER BY Roster;
GO


As you can see, two of the things I'm passionate about, SQL Server and softball, work very well together.  Although this was done as a hobby, I use it every single week.  What kind of hobbies can you use SQL Server for?

And yes, these are the actual stats of my team.  We play just about every Tuesday night during the spring, summer, and fall for the SportsLink league in Charlotte, NC.  We’re not the best team, but we’re also nowhere near the worst, and we’ve managed to win the league championship twice.  If you happen to be in Charlotte for the PASS Summit in October, maybe you can find some time to come watch my team play that Tuesday night.  I can't guarantee whether it will be good or bad, but it’s definitely entertaining.

An Alternative to SELECT COUNT(*) for Better Performance

Sometimes rapid code development doesn't always produce the most efficient code.  Take the age-old line of code SELECT COUNT(*) FROM MyTable.  Obviously this will give you the row count for a table, but at what cost?  Running SELECT COUNT(*) against a table will ultimately result in a full table or clustered index scan.

USE AdventureWorksDW2012;
SELECT COUNT(*) FROM dbo.FactProductInventory;
GO


Turning STATISTICS IO on reveals 5753 logical reads just to return the row count of 776286.

Table 'FactProductInventory'. Scan count 1, logical reads 5753, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Starting with SQL Server 2005, Microsoft introduced a DMV, sys.dm_db_partition_stats, that provides you with the same information at a fraction of the cost.  It requires a little more coding, but once you turn on STATISTICS IO, you will see the performance benefit.

USE AdventureWorksDW2012;
SELECT
        s.name AS 'SchemaName'
       ,o.name AS 'TableName'
       ,SUM(p.row_count) AS 'RowCount'
FROM sys.dm_db_partition_stats p
       JOIN sys.objects o ON o.object_id = p.object_id
       JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE p.index_id < 2 AND o.type = 'U'
       AND s.name = 'dbo'
       AND o.name = 'FactProductInventory'
GROUP BY s.name, o.name
ORDER BY s.name, o.name;
GO


Since we're querying a DMV, we never touch the base table.  We can see here we only need 16 logical reads to return the same row count of 776286, and the FactProductInventory table is nowhere in our execution plan.

Table 'sysidxstats'. Scan count 1, logical reads 10, physical reads 0, read-ahead reads 8, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysschobjs'. Scan count 0, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysclsobjs'. Scan count 0, logical reads 2, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

By using the DMV, we have improved the query performance and reduced the total I/O count by nearly 100%.  Another added benefit of using the DMV is that we don't need any locks on the base table, and therefore we avoid the possibility of blocking other queries hitting that table.
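If you only need the count for a single table and want something shorter, sys.partitions can return the same number.  Like the DMV above, the count is maintained by the storage engine, so treat it as an approximation while data modifications are in flight.

SELECT SUM(p.rows) AS 'RowCount'
FROM sys.partitions p
WHERE p.object_id = OBJECT_ID('dbo.FactProductInventory')
    AND p.index_id IN (0, 1); -- 0 = heap, 1 = clustered index
GO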

This is just one simple example of how you can easily improve the performance of an application.

Investigating Plan Cache Bloat

SQL Server includes a DMV, sys.dm_exec_query_stats, that returns performance statistics for each query plan cached in memory.  However, it can also help give you insight into how consistent your developers are with writing code.

For this topic, we'll concentrate on just a few columns returned by the DMV: sql_handle and plan_handle.  Per Books Online, sql_handle is a value that refers to the batch or stored procedure the query is part of, and plan_handle is a value that refers to the compiled plan of that query.  For each query that is processed, SQL Server can generate one or more compiled plans.  This one-to-many relationship can be caused by a number of factors, but one simple reason is coding inconsistency.

One simple coding difference that I often see is within the SET statements preceding a query.  If a developer executes the exact same query using different SET statements, then SQL Server will compile a separate plan for each one.

First, we need to clear the cache.

DBCC FREEPROCCACHE;
GO

Next run these two queries.

SET QUOTED_IDENTIFIER OFF;
GO
SELECT p.FirstName, p.LastName FROM Person.Person p
JOIN HumanResources.Employee e ON p.BusinessEntityID = e.BusinessEntityID
WHERE e.Gender = 'M';
GO

SET QUOTED_IDENTIFIER ON;
GO
SELECT p.FirstName, p.LastName FROM Person.Person p
JOIN HumanResources.Employee e ON p.BusinessEntityID = e.BusinessEntityID
WHERE e.Gender = 'M';
GO

As you can see, the only difference between the two queries is the value for SET QUOTED_IDENTIFIER. Now let's query the DMV.

SELECT s.text, q.sql_handle, q.plan_handle
FROM sys.dm_exec_query_stats q CROSS APPLY sys.dm_exec_sql_text(sql_handle) s;
GO



We can see that we have 2 rows returned, one for each query.  As you'll notice, the sql_handle is the same for each, but the plan_handle is different.  Next let's look at the graphical query plan of each.

SELECT * FROM sys.dm_exec_query_plan(0x0600050049DA7633D08998220000000001000000000000000000000000000000000000000000000000000000);
GO


SELECT * FROM sys.dm_exec_query_plan(0x0600050049DA7633908298220000000001000000000000000000000000000000000000000000000000000000);
GO


You will see the query plan is the same; however, SQL Server treats each one as if it were a completely distinct query.  If this were just a typo by the developer, then SQL Server just doubled the amount of plan cache needed for this query and wasted valuable resources.

Let's look at the same queries from another angle.  This time we'll remove the SET statements, but change the formatting of the queries.

First clear the plan cache.

DBCC FREEPROCCACHE;
GO

Next run these two queries.

SELECT p.FirstName, p.LastName FROM Person.Person p
JOIN HumanResources.Employee e ON p.BusinessEntityID = e.BusinessEntityID
WHERE e.Gender = 'M';
GO

SELECT
        p.FirstName
       ,p.LastName
FROM Person.Person p
JOIN HumanResources.Employee e ON p.BusinessEntityID = e.BusinessEntityID
WHERE e.Gender = 'M';
GO

Finally, look at the DMV.

SELECT s.text, q.sql_handle, q.plan_handle
FROM sys.dm_exec_query_stats q CROSS APPLY sys.dm_exec_sql_text(sql_handle) s;
GO


What you'll notice is that SQL Server still treats each query as a completely different statement, even though the only difference between them is the formatting.
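To hunt for the first kind of bloat on a real server, you can group on sql_handle and look for statements that have compiled more than one plan; catching the formatting variations requires comparing the query text itself.  A simple starting point:

SELECT
     q.sql_handle
    ,COUNT(DISTINCT q.plan_handle) AS 'plan_count'
FROM sys.dm_exec_query_stats q
GROUP BY q.sql_handle
HAVING COUNT(DISTINCT q.plan_handle) > 1;
GO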

In these examples, we've covered how important it is for the developers to be consistent with all the code passed to SQL Server.  Just minor changes in the code will cause SQL Server to generate different query plans and lead to plan cache bloat and wasted resources.  As a DBA, these are some simple examples of feedback you should be providing to your development teams.  Be proactive and don't let them down!

VMware vSphere Storage Performance - Thick vs Thin Provisioning

Industry experts will tell you that virtualizing your environment is not done to improve performance; it's done to make the environment cheaper and easier to manage.  The task of most VM administrators is to cram as many VMs into a farm as possible.  One of the ways to accomplish that is to allocate "thin provisioned" storage to each VM.

For each VM that is created, the VM admin has to specify the number of virtual CPUs, the amount of virtual RAM, the number and size of each virtual disk, as well as a few other items.  The virtual disks can be allocated in two different ways: thin provision or thick provision.  The difference between thick and thin is very simple and outlined in this diagram from VMware.


Thick provisioned storage allocates all storage when the disk is created.  This means if a VM admin allocates 25GB for a virtual disk, then the VMDK file on the host is actually 25GB.

Thin provisioned storage allows the VM admin to essentially over-allocate storage, much in the same way they can over-allocate memory.  For example, if a VM admin allocates 25GB for a virtual disk, then the VMDK file will start out at a few MB and grow as the space is used by the VM.  However, within the VM, the Windows operating system will see the disk as having a total capacity of 25GB.

Below, you can see Windows shows both Drive E and F as 25GB in size.


However, vSphere shows the thick provisioned disk (Drive E) as 25GB, but the thin provisioned disk (Drive F) is 0GB.


VMSTORAGETEST_4-flat.vmdk is the thick provisioned disk (Drive E).
VMSTORAGETEST_5-flat.vmdk is the thin provisioned disk (Drive F).

Thin provisioning is a great concept for using only what you need and not wasting valuable storage.  However, it can have a detrimental effect on database performance.  A thin provisioned disk will auto-grow the VMDK file as the VM needs more space on that disk, and while VMware is growing the VMDK file, the VM's disk access is delayed.  Let's take a look at a few examples.

Example 1 - File copy from within Windows

In this test, we'll use ROBOCOPY to copy a 20GB folder from the C drive to the thick provisioned disk (Drive E).

ROBOCOPY C:\SQL E:\SQL *.* /E /NFL /NDL /NJH

Copy time of 4 min 24 sec at a rate of 82MB/sec.


Now let's do the same copy to the thin provisioned disk (Drive F) and compare the results.

ROBOCOPY C:\SQL F:\SQL *.* /E /NFL /NDL /NJH

Copy time of 5 min 01 sec at a rate of 73MB/sec.


Windows is getting nearly 10MB/sec faster copy times to the thick provisioned disk (Drive E).

Example 2 - Database backup to disk from SQL Server

In this test, we'll backup a database to each of the disks and compare the runtimes.

First, we'll backup to the thick provisioned disk (Drive E).

BACKUP DATABASE AdventureWorks2012
TO DISK = 'E:\AdventureWorks2012.BAK' WITH INIT;
GO

Processed 449472 pages for database 'AdventureWorks2012', file 'AdventureWorks2012_Data' on file 1.
Processed 2 pages for database 'AdventureWorks2012', file 'AdventureWorks2012_Log' on file 1.
BACKUP DATABASE successfully processed 449474 pages in 74.125 seconds (47.372 MB/sec).

Now backup to the thin provisioned disk (Drive F).

BACKUP DATABASE AdventureWorks2012
TO DISK = 'F:\AdventureWorks2012.BAK' WITH INIT;
GO

Processed 449472 pages for database 'AdventureWorks2012', file 'AdventureWorks2012_Data' on file 1.
Processed 2 pages for database 'AdventureWorks2012', file 'AdventureWorks2012_Log' on file 1.
BACKUP DATABASE successfully processed 449474 pages in 83.285 seconds (42.162 MB/sec).

As you can see, we're seeing results similar to the earlier test.  Within SQL Server we're getting about 5MB/sec faster backup throughput to the thick provisioned disk.

After running these tests, we can look back in vSphere to see the new size of the VMDK file for our thin provisioned disk (Drive F).  You'll see the VMDK is now showing over 24GB of used space for that file.


These simple tests reveal that thin provisioning storage within VMware can indeed impact performance.  This doesn't mean that you should thick provision storage on every VM, but it does show you how this configuration can affect Windows and SQL Server.  You can equate this to the data/log file auto grow feature within SQL Server; you should right-size the virtual disk from day one the same way you should right-size your database files from day one.  As I stated earlier, virtualizing your SQL Servers is done to make things cheaper and easier to manage, not to make them perform better.
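On the SQL Server side, right-sizing might look something like the statement below, which pre-grows a data file and sets a fixed growth increment rather than relying on many small automatic growths.  The sizes here are only examples; pick values appropriate for your own database.

ALTER DATABASE AdventureWorks2012
MODIFY FILE (
     NAME = N'AdventureWorks2012_Data'
    ,SIZE = 10240MB
    ,FILEGROWTH = 1024MB
);
GO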

Setup an Availability Group with Multiple Subnets in VMware Workstation


Before we get started, I want to make it clear this is NOT how you would normally configure all these items in a production environment.  This is meant for a lab or demo area to play with Availability Groups over multiple subnets.

I use VMware a lot for demos at work as well as tooling around with various Windows and SQL Server related stuff.  In working with Availability Groups, one of the things I would like to do for my demos is have multiple subnets in VMware Workstation, so I can simulate a site failover.

Just testing Availability Groups requires at least three VMs: one for the Active Directory domain controller, one for the primary replica, and one for the secondary replica.  For this demo, we'll need just those three VMs.

I'm not going to cover all the steps to setup an Active Directory domain controller or install SQL Server.  I'll assume you have already completed those steps on each of the VMs.  All three of my VMs are running Windows Server 2008 R2 Enterprise Edition.  If you are running a different version, then some of these screenshots could be different.

Here is the setup for each VM.

PDC
  1. Windows Active Directory Domain Controller (MCP domain)
  2. DNS server (mcp.com)
  3. Network Policy and Remote Access (used for routing)
  4. Connected to both 192.168.1.x and 192.168.2.x subnets

SQLCLU1
  1. SQL Server 2012 Enterprise Edition
  2. SPIRIT1 is the named instance listening on port 1433
  3. Connected to 192.168.1.x subnet

SQLCLU2
  1. SQL Server 2012 Enterprise Edition
  2. SPIRIT2 is the named instance listening on port 1433
  3. Connected to 192.168.2.x subnet

AdventureWorks2012AG
  1. Availability Group for the AdventureWorks2012 database
  2. Listening on port 1433
  3. Mapped to 192.168.1.55 and 192.168.2.55

Now that you see how the finished environment is setup, let's see how to get there.

The first thing we need to do is setup each of the custom networks.   From the VMware Workstation menu, open the Virtual Network Editor.  Click on "Add Network" and select VMnet2.  Select Host-only and uncheck both the "Connect to Host Virtual Adapter" and "Use local DHCP" options.  Set the subnet IP to 192.168.1.0 and the subnet mask to 255.255.255.0.

Click "Add Network" and select VMnet3.  Make all the same setting changes, but this time set the subnet IP to 192.168.2.0. 


On the VM that is your Active Directory Domain Controller (PDC):

Edit the settings of the VM.  Make sure you have only ONE network card, and assign it to VMnet2.


Power on your VM domain controller.  Once it's up, edit the IPv4 settings of your network card.  Since we're not using DHCP, we'll need to hard code the IP address.  I have my domain controller IP set to 192.168.1.50.  You can set yours to any IP as long as it's on the same subnet.  Set the subnet mask to 255.255.255.0 and then leave the gateway blank.  Set the preferred DNS server to 192.168.1.50, because this is also your DNS server.  Save the changes and then shutdown the VM.


Edit the settings of the VM, add a 2nd network card, and assign it to VMnet3.


Power on the VM.  Once it's up, edit the IPv4 settings of the new network card.  This time set the IP to 192.168.2.50, the subnet mask to 255.255.255.0, and the Preferred DNS server to 192.168.1.50. Save the changes.


Your PDC will act as a router between the two subnets, but it will need software to make it happen.  Open Server Manager, select roles, and then "Add Role".  Select "Network Policy and Access Services".


For the service role, select "Routing".  It will automatically select the other required services.


Click next and then install. Once the installation is complete, go to Administrative Tools and open "Routing and Remote Access".  Right click on the domain controller and select "Configure and Enable Routing and Remote Access".   From the wizard, choose "Custom Configuration" then click Next.  Select "LAN Routing" then click next to complete the configuration. 


When a pop-up asks to start the service, click "Start Service".  Once the configuration is complete, you now have software routing enabled on your domain controller.  The routing should be automatically configured between the two subnets.


You would normally use a hardware router for this job, but the Routing and Remote Access service functions just fine for a lab running on VMware.  The next step is to configure the network and IP settings for each of our SQL Servers. 

On the first SQL Server VM (SQLCLU1):

Open the VM properties and make sure your network card is assigned to VMnet2. 


Save the settings and then power on the VM.  Once it's up, edit the IPv4 settings of the network card.  Set the IP address to 192.168.1.51.  Set the subnet mask to 255.255.255.0 and the default gateway to 192.168.1.50.  The default gateway needs to be the IP address of the server that is running the Routing and Remote Access service.  In this case, it's the IP of the domain controller.  Set the Preferred DNS server to 192.168.1.50.  Click OK to save the settings.


Additionally, you will need to open firewall ports TCP 5022 for the HADR service, TCP 1433 for the SQL Server service, and UDP 1434 for the SQL Server Browser service.

On the second SQL Server VM (SQLCLU2):

Open the VM properties and make sure your network card is assigned to VMnet3. 


Save the settings and then power on the VM.  Once it's up, edit the IPv4 settings of the network card.  Set the IP address to 192.168.2.52.  Set the subnet mask to 255.255.255.0 and the default gateway to 192.168.2.50.  The default gateway needs to be the IP of the 2nd network card we set up earlier on the domain controller.  Set the Preferred DNS server to 192.168.1.50.  Click OK to save the settings.


Additionally, you will need to open firewall ports TCP 5022 for the HADR service, TCP 1433 for the SQL Server service, and UDP 1434 for the SQL Server Browser service.

Your two subnets should be working now.  If you want to test it, just open a command prompt from SQLCLU1 and issue a "PING SQLCLU2".  You can do the same test from SQLCLU2.


Setting up the Windows Cluster

Open Failover Cluster Manager and click "Create a Cluster".  Step through the setup wizard by selecting the two cluster nodes: SQLCLU1 and SQLCLU2. 


Key in the name of the cluster, SQLCLUV1.  Select the 192.168.1.0/24 subnet and enter the IP address of 192.168.1.54.  Make sure to uncheck the other subnet.  Click next to finish the setup.


At this point we would normally configure the quorum; however, since this is just a lab setup, we'll leave the quorum set to Node Majority.  When setting this up in a production environment, you'll want to configure the quorum based on the number of voting nodes.  This link will guide you through what changes are needed.

Look at the settings of each of the cluster networks.  Cluster Network 1 is the 192.168.1.x subnet and is connected to SQLCLU1. 


Cluster Network 2 is the 192.168.2.x subnet and is connected to SQLCLU2.


Setting up the Availability Group

Now comes the easy part.  First we'll need to enable the Availability Group feature on each SQL Server instance.  On SQLCLU1, open the SQL Server Configuration Manager.  Right click on the SQL Server service and select Properties.  Select the "AlwaysOn High Availability" tab, and check the box to enable it.  Click OK to save the changes, and then stop and restart the SQL Server service.


Make the change on the second SQL Server, SQLCLU2.

Now make sure the AdventureWorks2012 database is in FULL recovery mode.  Within SQL Server Management Studio we'll setup the Availability Group for the AdventureWorks2012 database.  Open Object Explorer to SQLCLU1\SPIRIT1. Right click on the "AlwaysOn High Availability" node and select "New Availability Group Wizard".  Enter a name for the group, AdventureWorks2012AG and click next.
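As a side note, if the database isn't already in FULL recovery, you can switch it from T-SQL with something like this before launching the wizard:

ALTER DATABASE AdventureWorks2012 SET RECOVERY FULL;
GO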


Check the box next to the AdventureWorks2012 database and click next.


Click Add Replica and add SQLCLU2\SPIRIT2 to the list of replicas.  Check all the boxes for Automatic Failover and Synchronous Commit.


Click the Listener tab.  Select the "Create an Availability Group Listener" radio button, then enter a listener name and port number, and make sure Static IP is selected for Network Mode.


Click the Add button.  Select the 192.168.1.0/24 subnet and enter the IP of the listener, 192.168.1.55, then click OK.


Click the Add button again.  Select the 192.168.2.0/24 subnet and enter the second IP of the listener, 192.168.2.55, then click OK.  You should now see 2 separate IP addresses for the listener.  Click Next to continue.


Select FULL data synchronization and specify a backup file share, \\SQLCLU1\BACKUP, and click next.  The file share is only needed to complete the initial data synchronization.


Verify all the validation checks are successful and then click next.  Click finish to complete the Availability Group setup.

Once the setup is complete, go back into Failover Cluster Manager to check out the Availability Group resource that was added.  What you'll notice is two IP addresses associated to the Availability Group listener.  One is currently online and the other is offline.  The IP that's online is associated to the subnet that SQLCLU1 is on, because it's currently the primary replica.


Now let's failover the Availability Group to SQLCLU2\SPIRIT2 to see what happens to the listener.  Open a query window to SQLCLU2\SPIRIT2 and run the following code.

ALTER AVAILABILITY GROUP AdventureWorks2012AG FAILOVER;

Once the failover is complete, go back into Failover Cluster Manager to check out the properties of the Availability Group.  You'll notice the IP resources have switched.  The IP 192.168.1.55 is offline and 192.168.2.55 is online.


SQLCLU2\SPIRIT2 is now the primary replica for the Availability Group, and it's on the 192.168.2.x subnet.  You can also go back to the domain controller and open up the DNS Manager.  There you will see the two DNS entries for the Availability Group listener, one for each IP address.


What we've covered here is a quick and easy way to set up an Availability Group on multiple subnets within VMware Workstation.  Remember, this is not how you would normally set everything up in a production environment.  In production we'd use a hardware router instead of the routing service, the separate subnets would likely be in different data centers, and the quorum would be configured according to the number of voting nodes.  However, this provides you with a platform for doing multi-subnet failovers with Availability Groups.

Are You the Primary Replica?

In my recent adventures with AlwaysOn Availability Groups, I noticed a gap in identifying whether or not a database on the current server is the primary or secondary replica; Microsoft did not provide a DMO to return this information.  The good news is the documentation for the upcoming release of SQL Server 2014 looks to include one, but that doesn't help those of us who are running SQL Server 2012.

I've developed a function, dbo.fn_hadr_is_primary_replica, to provide you with this functionality.  This is a simple scalar function that takes a database name as the input parameter and outputs one of the following values.

0 = Resolving
1 = Primary Replica
2 = Secondary Replica

The return values correspond to the role status listed in sys.dm_hadr_availability_replica_states.

In this example, I have setup 2 SQL Servers (SQLCLU1\SPIRIT1 and SQLCLU2\SPIRIT2) to participate in some Availability Groups.  I have setup 2 Availability Groups; one for AdventureWorks2012 and a second for the Northwind database.  SQLCLU1\SPIRIT1 is the primary for AdventureWorks2012 and secondary for Northwind.  SQLCLU2\SPIRIT2 is the primary for Northwind and secondary for AdventureWorks2012.

First let's run the function for both databases on SQLCLU1\SPIRIT1.
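The call itself is just a scalar function invocation.  Run from the master database (where the script below creates the function), it looks something like this:

SELECT
     dbo.fn_hadr_is_primary_replica('AdventureWorks2012') AS 'AdventureWorks2012'
    ,dbo.fn_hadr_is_primary_replica('Northwind') AS 'Northwind';
GO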


On this server, the function returns 1 because it's the primary for AdventureWorks2012, and returns 2 because it's the secondary for Northwind.

Now let's run it again on SQLCLU2\SPIRIT2.


As expected we get the opposite result.

This function does not take into account the preferred backup replica; it only returns information based on whether it is the primary or secondary replica.  It was created to use within other scripts to help determine a database's role if it's part of an Availability Group.  I hope this script can help you as well.


USE master;
GO

IF OBJECT_ID(N'dbo.fn_hadr_is_primary_replica', N'FN') IS NOT NULL
    DROP FUNCTION dbo.fn_hadr_is_primary_replica;
GO
CREATE FUNCTION dbo.fn_hadr_is_primary_replica (@DatabaseName SYSNAME)
RETURNS TINYINT
WITH EXECUTE AS CALLER
AS

/********************************************************************

  File Name:    fn_hadr_is_primary_replica.sql

  Applies to:   SQL Server 2012

  Purpose:      To return either 0, 1, or 2 based on whether this
                @DatabaseName is a primary or secondary replica.

  Parameters:   @DatabaseName - The name of the database to check.

  Returns:      0 = Resolving
                1 = Primary
                2 = Secondary

  Author:       Patrick Keisler

  Version:      1.0.0 - 06/30/2013

  Help:         http://www.patrickkeisler.com/

  License:      Freeware

********************************************************************/

BEGIN
    DECLARE @HadrRole TINYINT;

    -- Return role status from sys.dm_hadr_availability_replica_states
    SELECT @HadrRole = ars.role
    FROM sys.dm_hadr_availability_replica_states ars
    INNER JOIN sys.databases dbs
        ON ars.replica_id = dbs.replica_id
    WHERE dbs.name = @DatabaseName;

    -- @DatabaseName exists but does not belong to an AG so return 1
    IF @HadrRole IS NULL RETURN 1;

    RETURN @HadrRole;
END;
GO

PASS Summit 2013 - You Ain't From Around Here Are Ya?

I know what y'all are thinkin', what's Charlotte got to do with SQL Server?  Just hear me out.  There's a lot more to Charlotte than NASCAR, fried chicken, and rednecks. I assume most of the 5000 attendees have never been to Charlotte, and probably don't know much about the area.  To help everyone out, I have made a list of useful tips.






My history and why you should listen to me. I begged my management for nearly a decade to send me to the PASS Summit, and this year they finally granted my request.  And to top it off even more, I just happen to live in Charlotte and work in the building right across the street from the Charlotte Convention Center.  I'm native to North Carolina and I have lived in Charlotte for about 17 years.  I even graduated from The University of North Carolina at Charlotte.

Queen City History. Charlotte is named in honor of Charlotte of Mecklenburg-Strelitz, who was married to King George III of Great Britain.  This is why the city is nicknamed the "Queen City".  It is currently the 17th largest city in the US and the 2nd largest financial city, trailing only New York City.  The city center is called uptown instead of downtown; the term downtown gives off a negative vibe, hence Uptown Charlotte.

Hotels. Just pick one, they're all about the same. However, if you are staying in a hotel on the south side of town near Pineville or Ballantyne, be prepared for I-77 and I-485 to be a parking lot during rush hour. Trust me on this one.

Transportation. The good news for anyone staying on the south side of town is the Lynx light rail. There is only one rail line but it runs from the center of town all the way south to Pineville. My suggestion is to take the light rail if it's near your hotel. Just get off at the 3rd St/Convention Center station, and the convention center is right across the street.


The CATS bus system is also not a bad option. The main transit center in uptown is only 3 blocks from the convention center. Any of the bus lines that end in an X are express routes (e.g. 54X) that pick you up from the commuter lots and head directly uptown. In uptown, there is a free bus line called the Goldrush. It uses different buses and only runs east/west along Trade Street.  It's helpful if you are staying in one of the hotels along that street.  And the best part is it's free.  Check out RideTransit.org for a complete system map.


If you like riding bicycles, then you'll want to check out Charlotte B-cycle. There are about a dozen bicycle rental stations around uptown. You just pay a small fee at the automated kiosk to share a bike, even if it's for a one-way trip.


For those of you driving uptown, you'll need a place to park. There are over 40,000 parking spaces uptown, but you will have to compete with the daily workforce, like me. Most parking decks will run you about $15-20 per day. Once you get uptown, look for the giant "P" signs outside each of the parking decks. The signs will tell you the number of spaces available.


The parking lots are usually cheaper than the decks, $3-10 per day, and most of those you can pay by credit card at the kiosk. Some lots even allow you to pay using the Park Mobile app (Apple | Android | Windows). Just look for the Park Mobile sign near the kiosk for the lot number.

 

You might wonder what these over-street walkways are used for.  This is part of the Overstreet Mall.  It's a maze of walkways that interconnect some of the buildings and it's full of restaurants and shops.  Even if you're not interested in the shops, it's a nice way to get from building to building when it's raining.




While walking around uptown, you'll see these "You Are Here" street signs. The maps divide uptown into four color-coded regions: North, South, East, and West. Each map provides you with information about attractions, hotels, and parking.

















Dining. You shouldn't have any issue finding a place to eat uptown; however, there are a few places of interest you should try out.  

For breakfast:
For lunch:
For dinner:
Also, if you're thinking of going to Ruth's Chris Steakhouse, then choose Sullivan's Steakhouse or Morton's Steakhouse instead.  I've never had a good experience at the uptown location, but that's just my opinion.

On a side note, when eating out, just keep in mind that you're in the south.  If you order iced tea, it WILL be sweet tea.  If you want unsweet tea, then ask for it.

Entertainment. There's plenty to do uptown as well as around town after the conference is over.  Next door to the convention center is the NASCAR Hall of Fame.  There are several other museums: Mint Museum, Bechtler Museum of Modern Art, etc.  The EpiCentre is a multi-use entertainment complex only 2 blocks from the convention center. There are restaurants, bars, and other entertainment there.  For beer lovers, there are plenty of bars uptown.  There are way too many to list, but a few are:
For wine lovers, check out Threes and The Wooden Vine.  Both have a wide range of selections.

The NC Music Factory is about a 2-mile walk north from the convention center, or only a 4 or 5 minute drive, and they do have free parking.  It's an entertainment complex with live music, restaurants, bars, and even stand-up comedy at The ComedyZone.  If you head over that way, be sure to visit the VBGB Beer Hall and Garden; definitely the best bar at the music factory.

Don't forget about the Carolina Panthers.  They'll have a home game on Sunday, October 20th at 1PM.  It might be your only chance to see the future Super Bowl champions in action!  

I know some of you might be health nuts and would like to find a place to work out besides your hotel gym.  The YMCA has a location uptown in my building.  $10 will get you a day pass, and $20 will get you a 7-day pass.

If you prefer jogging outdoors, any of the streets uptown will work nicely.  However, if you'd like a little more scenery for your jog, then head over to the Little Sugar Creek greenway.  Charlotte Parks and Recreation has built 35 miles of greenways around town.


This one is a beautiful winding route nearly 6 miles long, located just outside the south side of the I-277 belt loop uptown.


Finally, for the super adventurous attendees, the US National Whitewater Center is about 15 miles west of uptown, or head north to take a ride at 150mph at the Richard Petty Driving Experience.  It's only about 20 miles north of uptown at the Charlotte Motor Speedway.

As a bonus item, the very popular Showtime original Homeland is filmed right here in Charlotte.  If you have the time, why not try out as an extra for the show.

Other links with information about Charlotte:

I think I covered a lot, but if anyone has questions about Charlotte, please don't hesitate to contact me.

How to Tell If Your Users are Connecting to the Availability Group Listener

You've spent a lot of time planning and building out a new SQL Server 2012 environment complete with Availability Group Listeners, but how can you be sure the end users are connecting to the listener and not directly to the SQL Server instance?

So why would we care about this?  To begin with, if the users are not connecting to the listener, then upon a failover to another replica, those users would have to connect to a different SQL Server instance name.  Having a single point of connection is crucial for the high availability process to work correctly.

In a previous blog post, we set up an Availability Group Listener, AdventureWorks.mcp.com, with two IP addresses:   192.168.1.55 & 192.168.2.55.  We'll use this one for our example.


The DMV, sys.dm_exec_connections, contains information about each connection to a SQL Server instance, and can be used to answer our question.

Open a TSQL connection to the Availability Group Listener, and execute the following command.

SELECT
     session_id
    ,local_net_address
    ,local_tcp_port
FROM sys.dm_exec_connections;
GO


The local_net_address and local_tcp_port columns will display the IP address and port number of the client's connection target.  This corresponds to the connection string the users entered to connect to the SQL Server instance.

If the IP address and port number match the Availability Group IP, then you're in good shape.  If they do not match, then some users are likely connecting directly to the SQL Server instance, and that will need to be changed.
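To zero in on just the problem connections, here's a minimal sketch that filters out anything already pointed at the listener. It assumes the two listener IPs from the example above (substitute your own), and it ignores rows where local_net_address is NULL, such as shared memory connections.

SELECT
     session_id
    ,local_net_address
    ,local_tcp_port
FROM sys.dm_exec_connections
WHERE local_net_address IS NOT NULL
    AND local_net_address NOT IN ('192.168.1.55', '192.168.2.55');
GO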

By joining the sys.dm_exec_sessions DMV, you'll also be able to get the hostname and program name of each connection.

SELECT
     ec.session_id
    ,es.host_name
    ,es.program_name
    ,local_net_address
    ,local_tcp_port
FROM sys.dm_exec_connections ec
    JOIN sys.dm_exec_sessions es ON ec.session_id = es.session_id;
GO


As you can see in this picture, we have one connection on session_id 62 that is connecting directly to the SQL Server instance and not to the Availability Group Listener.  At this point, I would track down that user and have them use the correct connection string.

Using this DMV will allow you to verify the users are connecting to SQL Server using the correct connection strings, and help prevent unneeded outages during a failover between replicas.

The Case of the NULL Query_Plan

As DBAs, we're often asked to troubleshoot performance issues for stored procedures.  One of the most common tools at our disposal is the query execution plan cached in memory by SQL Server. Once we have the query plan, we can dissect what SQL Server is doing and hopefully find some places for improvement.

Grabbing the actual XML query plan for a stored procedure from the cache is fairly easy using the following query.

USE AdventureWorks2012;
GO
SELECT qp.query_plan FROM sys.dm_exec_procedure_stats ps
    JOIN sys.objects o ON ps.object_id = o.object_id
    JOIN sys.schemas s ON o.schema_id = s.schema_id
    CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) qp
WHERE ps.database_id = DB_ID()
    AND s.name = 'dbo'
    AND o.name = 'usp_MonsterStoredProcedure';
GO


From this point, we can open the XML query plan in Management Studio or Plan Explorer to start our investigation. But what happens if SQL Server returns NULL for the query plan?


Let's back up a little bit.  We were pretty sure the query plan is still in cache, right?  Let's verify it.

USE AdventureWorks2012;
GO
SELECT * FROM sys.dm_exec_procedure_stats ps
    JOIN sys.objects o ON ps.object_id = o.object_id
WHERE o.name = 'usp_MonsterStoredProcedure';
GO

Sure enough.  The query plan is still cached in memory, and we can even see the plan_handle.


So why did our first query not return the XML plan?  Let's copy the plan_handle and manually run it through the sys.dm_exec_query_plan function.

SELECT * FROM sys.dm_exec_query_plan(0x05000500DD93100430BFF0750100000001000000000000000000000000000000000000000000000000000000);
GO


Why are we getting NULL returned for the XML query plan when we know it is in the cache?  In this case, because the query plan is so large and complex, we're hitting an XML limitation within SQL Server: "XML datatype instance has too many levels of nested nodes. Maximum allowed depth is 128 levels".  
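For anyone curious, you can reproduce the same limitation outside of the plan cache. This is just a hypothetical demonstration, not part of the original troubleshooting, that builds a string with 200 nested nodes and tries to cast it to XML:

-- The CAST fails with the same "Maximum allowed depth is 128 levels" error
DECLARE @x NVARCHAR(MAX) =
      REPLICATE(CAST('<n>' AS NVARCHAR(MAX)), 200)
    + REPLICATE(CAST('</n>' AS NVARCHAR(MAX)), 200);
SELECT CAST(@x AS XML);
GO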

Let's try to pull the text version of query plan.

SELECT * FROM sys.dm_exec_text_query_plan(0x05000500DD93100430BFF0750100000001000000000000000000000000000000000000000000000000000000, DEFAULT, DEFAULT);
GO


It looks as though we have solved the issue; however, we haven't.  Management Studio has a 65,535 character limit for grid results and an 8,192 character limit for text results, so our query plan has been truncated well before the end.  Now it seems we are back to square one.  

We still know the query plan is in cache, but we just need a tool other than Management Studio to retrieve it.  This is where Powershell enters the picture.

With Powershell, we can create a simple script to execute the sys.dm_exec_text_query_plan function and then output the data to a file.  All we need is to pass two variables.  The first is the SQL Server name where the plan is cached, and the second is the plan_handle. 

param (    
    [Parameter(Mandatory=$true)]
    [ValidateNotNullOrEmpty()]
    [string]
    $SqlInstance

   ,[Parameter(Mandatory=$true)]
    [ValidateNotNullOrEmpty()]
    [string]
    $PlanHandle
)
The script will simply execute a TSQL script and capture the output into a string variable.

$SqlCommand="SELECT query_plan FROM sys.dm_exec_text_query_plan("
    + $PlanHandle+",DEFAULT,DEFAULT);"
$QueryPlanText=$cmd.ExecuteScalar()

The final step will use System.IO.StreamWriter() to output the data to a file.

$stream = New-Object System.IO.StreamWriter($FileName)
$stream.WriteLine($QueryPlanText)


The Powershell script will save the entire XML query plan to a file named output.sqlplan.  As you can see below, the actual plan was over 5MB.


Finally we're able to view the entire query plan in our favorite tool and see the complexity of the stored procedure.


This is just another example of why DBAs need to set aside some time to learn Powershell.  The entire script is posted below.  Feel free to modify it as needed to fit your environment.


One Year Later

Wow!  It’s been one year since I launched my blog, and my how things have changed.

Accomplishments Over the Past Year
I’ve had a chance to interact with a lot of people relating to many of the posts on my blog, and even run into a few people that said “Hey I know you through your blog”. I’ve gotten much more involved in the #sqlfamily through Twitter, Stackexchange, as well as through my local SQL Server user group. Although I’ve attended meetings at my local group off and on over the past several years, I am now making a specific point to attend every meeting for both the Charlotte SQL Server User Group and the Charlotte BI Group. I’ve attended SQL Saturdays. I’ve moved into a new job at my company, where I am now responsible for platform engineering of SQL Server for Wells Fargo Securities. I’ve gone from outdated MCP certifications to the more current MCITP: Database Administrator 2008. And most importantly, the Atlanta Braves won their first division title since 2005.

The Roadmap for the Upcoming Year
I plan to keep writing about SQL Server through my blog, as well as continue learning about SQL Server through reading other blogs. That’s one thing I learned quickly about blogging. The more I wrote about SQL Server, the more I have read. My wife keeps telling me “For someone who hates to read, you sure do read a lot."

I had hoped to eventually get to an MCM certification, but Microsoft derailed that recently by terminating the program. So for now, I’ll continue on with the development exams for SQL Server 2008 and then move to upgrade them to the SQL Server 2012 MCSE: Data Platform. For my new job, I’m not required to have certifications, but I do need to have a more holistic view of SQL Server, rather than have a more narrow view on just the database engine. Studying for the certifications has helped in those areas that I’m less familiar with, such as Analysis Services.

In just a few more weeks I’ll be attending my first SQL PASS Summit. I have been so excited about this ever since I found out it will be hosted in my town, Charlotte, NC. The Charlotte Convention Center is right next door to where I work, and I’m obviously familiar with the surrounding area. I’ve been to the DEV Connections conference in Las Vegas before, but this will be my first PASS Summit.

I also hope to start speaking at local events. I already do this within my company, so now I want to venture out and do it in a more public arena. I might start with my local user group and move up to SQL Saturdays and beyond.

I also want to make sure I set aside plenty of time for my own family. My wife has been incredibly supportive in my blogging, attending user group meetings, and studying for certifications. I want her to know how much I’m indebted to her.

Thanks to all who have read my blog, and I hope I can continue to provide quality information.


Go Braves!
</ <| <\ <| </ <| <\ <| </ <| <\ <|
(That’s the tomahawk chop)

TSQL Tuesday #47 - Your Best SQL Server SWAG

The host for T-SQL Tuesday #47 is Kendal Van Dyke (blog|twitter), and his topic of choice is about the best SQL Server SWAG we ever received at a conference; specifically, the “good stuff”.
I’ve been doing a lot of work with SQL Server over the years, but I’ve only had the opportunity to attend the big conferences a few times. As a matter of fact, next week will be my first time attending the SQL PASS Summit. We're supposed to talk about the “good stuff” and not any of the “cheap tchotchkes” that are given away by the vendors, but I feel that I really have to include both.


First, I’d like to talk about a piece of swag that I received while at the SQL Server Connection conference in Las Vegas in November 2007. This wasn't my first trip to Las Vegas, but it was my first conference there. And to make it better, one of my best friends from college was attending the same conference. So you could only imagine the fun I had while in Las Vegas with “college buddy”. In the vendor area, one of the representatives from Microsoft was handing out koozies with SQL Server 2008 printed on the side. These were not your normal koozies. They were slap koozies!


I actually own two other slap koozies, but this one was definitely going to be my new favorite. Like I said, it's cheap, but I love it, and it's a great conversation starter.


Now let’s talk about the good stuff. 
The date was November 7, 2005.  The location was the Moscone Center in San Francisco, CA. The event was the launch party for SQL Server 2005, Visual Studio 2005, .NET Framework 2.0, and BizTalk Server 2006. Microsoft had just spent years rewriting SQL Server, and now they were throwing this elaborate party to celebrate the release. Unlike a week-long conference, this one-day event was completely free. I was living in San Francisco at the time, so it made it really easy to get to this event. All I had to do was hop on my scooter and head downtown. Microsoft didn’t disappoint for their launch party. The event boasted some big headliners. Microsoft CEO Steve Ballmer gave the keynote speech.


The live entertainment was also exciting.  The headliner band was Cheap Trick.  Although not at the height of their popularity, they are a talented rock band in any era.


There was also another all girl cover band that played nothing but AC/DC music.  They were called AC/DShe. Quite a catchy name.


The other highlight of the night was the presence of Paul Teutul, Sr. from the Orange County Choppers show on the Discovery Channel. Not only was Paul Sr. there hanging out in the crowd taking pictures with the attendees, but his team built a chopper for Microsoft with the SQL Server logo on it.



So finally to the swag. Each attendee was given a free copy of SQL Server 2005 Standard Edition and Visual Studio 2005 Professional Edition. Most software given away at conferences is an evaluation or time-bombed copy, but these were fully licensed copies. In 2005, this was probably $1000 worth of software that was now mine.


It may sound anticlimactic, but for a guy on a shoe-string budget, living in one of the most expensive cities in the country, this was definitely the best swag I’ve ever received.

In-Memory OLTP and the Identity Column

Over the past month I've been playing around with the new In-Memory OLTP (code name: "Hekaton") features within SQL Server 2014 CTP2. My organization is all about low latency applications, and this is one feature of SQL Server that I need to get familiar with ASAP.

To do this, I started my own little project that takes an existing database and converts parts of it into in-memory tables.  Once that step is complete, I could work on rewriting the TSQL code.

It might seem fairly simple, but with every new feature of SQL Server there are usually limitations. And one of the first ones I noticed was the use of an IDENTITY column. They are prohibited in Hekaton tables, which means I had to find an alternative. This is where the new SEQUENCE object comes into play.

The CREATE SEQUENCE command allows you to create a user-defined numerical value that can be ascending or descending. This gives it much more flexibility than an IDENTITY column, and it's fully supported for use within an in-memory table.

Looking at the example below, we have a table with an IDENTITY value used for the OrderID column.

CREATE TABLE dbo.Orders (
OrderID INT IDENTITY(1,1) NOT NULL
,OrderDate DATETIME NOT NULL
,CustomerID INT NOT NULL
,NetAmount MONEY NOT NULL
,Tax MONEY NOT NULL
,TotalAmount MONEY NOT NULL
);
GO

And we have a typical insert statement to insert a new order. Notice the IDENTITY column is not specified because its value is automatically generated at runtime.

INSERT INTO dbo.Orders (OrderDate,CustomerID,NetAmount,Tax,TotalAmount)
VALUES (GETDATE(),16,9.99,0.80,10.79);
GO

So how would this need to be rewritten to be turned into an in-memory table?  First we just need to create the table without the IDENTITY value.

CREATE TABLE dbo.Orders (
OrderID INT NOT NULL
,OrderDate DATETIME NOT NULL
,CustomerID INT NOT NULL
,NetAmount MONEY NOT NULL
,Tax MONEY NOT NULL
,TotalAmount MONEY NOT NULL
);
GO
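As a point of reference, the example above leaves out the in-memory specifics so we can focus on the IDENTITY column. A true memory-optimized version of the table also needs an index and the MEMORY_OPTIMIZED option; here's a minimal sketch, assuming the database already has a MEMORY_OPTIMIZED_DATA filegroup and using an arbitrary bucket count:

CREATE TABLE dbo.Orders (
OrderID INT NOT NULL
    PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576)
,OrderDate DATETIME NOT NULL
,CustomerID INT NOT NULL
,NetAmount MONEY NOT NULL
,Tax MONEY NOT NULL
,TotalAmount MONEY NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO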

Then we'll need to create a SEQUENCE that produces the same order of values as the IDENTITY. In our example, it starts at 1 and increments by 1.

CREATE SEQUENCE dbo.CountBy1 AS INT
START WITH 1
INCREMENT BY 1;
GO

The insert statement will look a little different, because we'll need to call the NEXT VALUE FOR function for the SEQUENCE we just created.

INSERT INTO dbo.Orders (OrderID,OrderDate,CustomerID,NetAmount,Tax,TotalAmount)
VALUES (NEXT VALUE FOR dbo.CountBy1,GETDATE(),16,9.99,0.80,10.79);
GO

You could also generate the next sequence number ahead of time and then insert the value in a later statement.

DECLARE @NextValue INT = NEXT VALUE FOR dbo.CountBy1;

-- Do some other stuff here then insert --

INSERT INTO dbo.Orders (OrderID,OrderDate,CustomerID,NetAmount,Tax,TotalAmount)
VALUES (@NextValue,GETDATE(),16,9.99,0.80,10.79);
GO

So far, I think Microsoft has done a great job with the new Hekaton feature. They are definitely marketing it as a feature to implement with little to no changes in code, but I think that really depends on the existing code.  This is a very basic rewrite, but one that only took a few minutes to implement.

Check out Books Online for more detailed information about both Hekaton and Sequence Numbers.

Collecting Historical Wait Statistics

As a DBA, I'm sure you've heard many times to always check the sys.dm_os_wait_stats DMV to help diagnose performance issues on your server. The DMV returns information about specific resources SQL Server had to wait for while processing queries. The counters in the DMV are cumulative since the last time SQL Server was started and the counters can only be reset by a service restart or by using a DBCC command. Since DMVs don't persist their data beyond a service restart, we need to come up with a way to collect this data and be able to run trending reports over time.

Collecting the data seems easy enough by simply selecting all rows into a permanent table. However, that raw data won't help us determine the time in which a particular wait type occurred. Think about it for a minute. If the raw data for the counters is cumulative, then how can you tell if a bunch of waits occurred within a span of a few minutes or if they occurred slowly over the past 6 months that SQL Server has been running? This is where we need to collect the data in increments.

First, we need to create a history table to store the data. The table will store the wait stat values as well as the difference (TimeDiff_ms, WaitingTasksCountDiff, WaitTimeDiff_ms, SignalWaitTimeDiff_ms) in those values between collection times.

CREATE TABLE dbo.WaitStatsHistory
(
     SqlServerStartTime DATETIME NOT NULL
    ,CollectionTime DATETIME NOT NULL
    ,TimeDiff_ms INT NOT NULL
    ,WaitType NVARCHAR(60) NOT NULL
    ,WaitingTasksCountCumulative BIGINT NOT NULL
    ,WaitingTasksCountDiff INT NOT NULL
    ,WaitTimeCumulative_ms BIGINT NOT NULL
    ,WaitTimeDiff_ms INT NOT NULL
    ,MaxWaitTime_ms BIGINT NOT NULL
    ,SignalWaitTimeCumulative_ms BIGINT NOT NULL
    ,SignalWaitTimeDiff_ms INT NOT NULL
    ,CONSTRAINT PK_WaitStatsHistory PRIMARY KEY CLUSTERED (CollectionTime, WaitType)
)WITH (DATA_COMPRESSION = PAGE);
GO

Next, we need to get a couple of timestamps when we collect each sample. The first is the SQL Server start time, which we need so we can identify when the service was restarted.

SELECT @CurrentSqlServerStartTime = sqlserver_start_time FROM sys.dm_os_sys_info;
GO

The second set is the previous start time and previous collection time, if they exist in the history table.

SELECT
     @PreviousSqlServerStartTime = MAX(SqlServerStartTime)
    ,@PreviousCollectionTime = MAX(CollectionTime)
FROM msdb.dbo.WaitStatsHistory;
GO

The last timestamp is the collection time. We’ll also use this timestamp to calculate the difference in wait stat values between each collection.

SELECT GETDATE() AS 'CollectionTime',* FROM sys.dm_os_wait_stats;
GO

We need to compare the current SQL Server start time to the previous start time from the history table. If they don’t match, then we assume the server was restarted and insert “starter” values. I call them starter values because we just collect the current wait stat values and insert 0 for each of the diff columns.

IF @CurrentSqlServerStartTime <> ISNULL(@PreviousSqlServerStartTime,0)
BEGIN
    -- Insert starter values if SQL Server has been recently restarted
    INSERT INTO dbo.WaitStatsHistory
    SELECT
         @CurrentSqlServerStartTime
        ,GETDATE()
        ,DATEDIFF(MS,@CurrentSqlServerStartTime,GETDATE())
        ,wait_type
        ,waiting_tasks_count
        ,0
        ,wait_time_ms
        ,0
        ,max_wait_time_ms
        ,signal_wait_time_ms
        ,0
    FROM sys.dm_os_wait_stats;
END
GO

If the timestamps are the same, we will collect the current wait stats and calculate the difference (in milliseconds) in collection time as well as the difference in values.

INSERT msdb.dbo.WaitStatsHistory
SELECT
     @CurrentSqlServerStartTime
    ,cws.CollectionTime
    ,DATEDIFF(MS,@PreviousCollectionTime,cws.CollectionTime)
    ,cws.wait_type
    ,cws.waiting_tasks_count
    ,cws.waiting_tasks_count - hist.WaitingTasksCountCumulative
    ,cws.wait_time_ms
    ,cws.wait_time_ms - hist.WaitTimeCumulative_ms
    ,cws.max_wait_time_ms
    ,cws.signal_wait_time_ms
    ,cws.signal_wait_time_ms - hist.SignalWaitTimeCumulative_ms
FROM CurrentWaitStats cws INNER JOIN dbo.WaitStatsHistory hist
    ON cws.wait_type = hist.WaitType
    AND hist.CollectionTime = @PreviousCollectionTime;
GO

You could filter the collection to only the specific wait stat counters you want to track by adding a WHERE clause, but I prefer to collect them all and then filter at the reporting end.

At this point, we’re ready to schedule the job. The script could be run at any interval, but I usually leave it to collect data once a day. If I notice a spike in a specific wait stat counter, then I could easily increase the job frequency to once every few hours or even once an hour. Having those smaller, more granular data samples will allow us to isolate which time frame we need to concentrate on.
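If you want to automate it with SQL Server Agent, a minimal sketch of the job setup might look like this. The wrapper procedure dbo.usp_CollectWaitStats is hypothetical; it's assumed to hold the collection section of the script posted at the end of this article.

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Collect Wait Stats History';
EXEC dbo.sp_add_jobstep
     @job_name = N'Collect Wait Stats History'
    ,@step_name = N'Collect wait stats'
    ,@subsystem = N'TSQL'
    ,@database_name = N'msdb'
    ,@command = N'EXEC dbo.usp_CollectWaitStats;'; -- hypothetical wrapper for the collection script
EXEC dbo.sp_add_jobschedule
     @job_name = N'Collect Wait Stats History'
    ,@name = N'Daily'
    ,@freq_type = 4          -- daily
    ,@freq_interval = 1      -- every 1 day
    ,@active_start_time = 0; -- midnight
EXEC dbo.sp_add_jobserver @job_name = N'Collect Wait Stats History';
GO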

For example, if we notice the CXPACKET wait suddenly spikes when collecting the data each day, then we could schedule the collection every hour to see if it’s happening during a specific window.

SELECT * FROM msdb.dbo.WaitStatsHistory
WHERE WaitType = 'CXPACKET';
GO


Finally, we can use Excel to format this raw data into an easy to read chart.


From this chart, we can see at 5PM there was a spike in CXPACKET waits, but a low number of tasks that waited. In this case, I would assume there is a single process running in parallel that caused these waits, and from there I could dig further into finding the individual query.
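One way to dig further, assuming the spike happens while you're watching, is to catch the offending requests live. This is just a quick sketch against sys.dm_exec_requests and is not part of the collection script:

SELECT
     r.session_id
    ,r.wait_type
    ,r.wait_time
    ,t.text AS query_text
FROM sys.dm_exec_requests r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.wait_type = 'CXPACKET';
GO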

Data compression is enabled on this table to help keep it small. You can easily turn it off by removing WITH (DATA_COMPRESSION = PAGE) from the CREATE TABLE statement. However, with page compression enabled, 24 collections (one per hour) only take up 775KB of space. Without compression, the same sample of data consumes about 2.2MB. If you plan to keep a lot of history, then it's best to leave page compression enabled.
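If you're unsure whether page compression is worth it for your own data volume, SQL Server can estimate the savings for you. A quick check against the history table created above might look like this:

EXEC sys.sp_estimate_data_compression_savings
     @schema_name = 'dbo'
    ,@object_name = 'WaitStatsHistory'
    ,@index_id = NULL
    ,@partition_number = NULL
    ,@data_compression = 'PAGE';
GO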

Hopefully, this script will help you keep track of your historical wait statistics, so you can have better knowledge of what has happened to your environment over time. The entire script is posted below. If you want to read further into what each wait statistic means, then check out Paul Randal’s article about wait stats. Additionally, if you want more info on using DMVs, check out Glenn Berry’s diagnostic queries.

/***********************************************
    Create the historical table
***********************************************/

USE msdb;
GO

-- Create the history table if it does not exist
IF OBJECT_ID('dbo.WaitStatsHistory') IS NULL
BEGIN
    CREATE TABLE dbo.WaitStatsHistory
    (
         SqlServerStartTime DATETIME NOT NULL
        ,CollectionTime DATETIME NOT NULL
        ,TimeDiff_ms INT NOT NULL
        ,WaitType NVARCHAR(60) NOT NULL
        ,WaitingTasksCountCumulative BIGINT NOT NULL
        ,WaitingTasksCountDiff INT NOT NULL
        ,WaitTimeCumulative_ms BIGINT NOT NULL
        ,WaitTimeDiff_ms INT NOT NULL
        ,MaxWaitTime_ms BIGINT NOT NULL
        ,SignalWaitTimeCumulative_ms BIGINT NOT NULL
        ,SignalWaitTimeDiff_ms INT NOT NULL
        ,CONSTRAINT PK_WaitStatsHistory PRIMARY KEY CLUSTERED (CollectionTime, WaitType)
    )WITH (DATA_COMPRESSION = PAGE);
END
GO

/***********************************************
    Schedule this section as an on-going job
***********************************************/

DECLARE
     @CurrentSqlServerStartTime DATETIME
    ,@PreviousSqlServerStartTime DATETIME
    ,@PreviousCollectionTime DATETIME;

SELECT @CurrentSqlServerStartTime = sqlserver_start_time FROM sys.dm_os_sys_info;

-- Get the last collection time
SELECT
     @PreviousSqlServerStartTime = MAX(SqlServerStartTime)
    ,@PreviousCollectionTime = MAX(CollectionTime)
FROM msdb.dbo.WaitStatsHistory;

IF @CurrentSqlServerStartTime <> ISNULL(@PreviousSqlServerStartTime,0)
BEGIN
    -- Insert starter values if SQL Server has been recently restarted
    INSERT INTO dbo.WaitStatsHistory
    SELECT
         @CurrentSqlServerStartTime
        ,GETDATE()
        ,DATEDIFF(MS,@CurrentSqlServerStartTime,GETDATE())
        ,wait_type
        ,waiting_tasks_count
        ,0
        ,wait_time_ms
        ,0
        ,max_wait_time_ms
        ,signal_wait_time_ms
        ,0
    FROM sys.dm_os_wait_stats;
END
ELSE
BEGIN
    -- Get the current wait stats
    WITH CurrentWaitStats AS
    (
        SELECT GETDATE() AS 'CollectionTime',* FROM sys.dm_os_wait_stats
    )
    -- Insert the diff values into the history table
    INSERT msdb.dbo.WaitStatsHistory
    SELECT
         @CurrentSqlServerStartTime
        ,cws.CollectionTime
        ,DATEDIFF(MS,@PreviousCollectionTime,cws.CollectionTime)
        ,cws.wait_type
        ,cws.waiting_tasks_count
        ,cws.waiting_tasks_count - hist.WaitingTasksCountCumulative
        ,cws.wait_time_ms
        ,cws.wait_time_ms - hist.WaitTimeCumulative_ms
        ,cws.max_wait_time_ms
        ,cws.signal_wait_time_ms
        ,cws.signal_wait_time_ms - hist.SignalWaitTimeCumulative_ms
    FROM CurrentWaitStats cws INNER JOIN dbo.WaitStatsHistory hist
        ON cws.wait_type = hist.WaitType
        AND hist.CollectionTime = @PreviousCollectionTime;
END
GO

Collecting Historical IO File Statistics

In a previous post, Collecting Historical Wait Statistics, I discussed how you can easily collect historical wait stats by using the DMV sys.dm_os_wait_stats. Well today, I'd like to cover the same concept, but this time collect historical IO file stats from the DMV, sys.dm_io_virtual_file_stats. However, I wanted to improve on the code to make it even easier to implement.

The data collection process is still implemented the same way.  First, we'll need to create a history table to store the data. The data is stored in time slices with the cumulative values as well as the difference (TimeDiff_ms, NumOfReadsDiff, NumOfWritesDiff, etc) in those values since the last collection time.

CREATE TABLE dbo.IoVirtualFileStatsHistory
(
     SqlServerStartTime DATETIME NOT NULL
    ,CollectionTime DATETIME NOT NULL
    ,TimeDiff_ms BIGINT NOT NULL
    ,DatabaseName NVARCHAR(128) NOT NULL
    ,DatabaseId SMALLINT NOT NULL
    ,FileId SMALLINT NOT NULL
    ,SampleMs INT NOT NULL
    ,SampleMsDiff INT NOT NULL
    ,NumOfReads BIGINT NOT NULL
    ,NumOfReadsDiff BIGINT NOT NULL
    ,NumOfBytesRead BIGINT NOT NULL
    ,NumOfBytesReadDiff BIGINT NOT NULL
    ,IoStallReadMs BIGINT NOT NULL
    ,IoStallReadMsDiff BIGINT NOT NULL
    ,NumOfWrites BIGINT NOT NULL
    ,NumOfWritesDiff BIGINT NOT NULL
    ,NumOfBytesWritten BIGINT NOT NULL
    ,NumOfBytesWrittenDiff BIGINT NOT NULL
    ,IoStallWriteMs BIGINT NOT NULL
    ,IoStallWriteMsDiff BIGINT NOT NULL
    ,IoStall BIGINT NOT NULL
    ,IoStallDiff BIGINT NOT NULL
    ,SizeOnDiskBytes BIGINT NOT NULL
    ,SizeOnDiskBytesDiff BIGINT NOT NULL
    ,FileHandle VARBINARY(8) NOT NULL
    ,CONSTRAINT PK_IoVirtualFileStatsHistory PRIMARY KEY CLUSTERED
        (CollectionTime, DatabaseName, DatabaseId, FileId)
) WITH (DATA_COMPRESSION = PAGE);
GO

The next step will be to get the start time of SQL Server, so that we can compare it to the previous collection. If the dates are different, then we must take that into account when calculating the diff values, because if SQL Server is restarted, all values in the DMV are reset back to zero. In that case, we know the diff values are actually the same as the current counters, because this is the first collection after a restart.

IF @CurrentSqlServerStartTime <> ISNULL(@PreviousSqlServerStartTime,0)
BEGIN
    -- If SQL started since the last collection, then insert starter values
    -- Must do DATEDIFF using seconds instead of milliseconds to avoid arithmetic overflow.
    INSERT INTO dbo.IoVirtualFileStatsHistory
    SELECT
         @CurrentSqlServerStartTime
        ,CURRENT_TIMESTAMP
        ,CONVERT(BIGINT,DATEDIFF(SS,@CurrentSqlServerStartTime,CURRENT_TIMESTAMP))*1000
        ,@DatabaseName
        ,@DatabaseId
        ,file_id
        ,sample_ms
        ,sample_ms
        ,num_of_reads
        ,num_of_reads
        ,num_of_bytes_read
        ,num_of_bytes_read
        ,io_stall_read_ms
        ,io_stall_read_ms
        ,num_of_writes
        ,num_of_writes
        ,num_of_bytes_written
        ,num_of_bytes_written
        ,io_stall_write_ms
        ,io_stall_write_ms
        ,io_stall
        ,io_stall
        ,size_on_disk_bytes
        ,size_on_disk_bytes
        ,file_handle
    FROM sys.dm_io_virtual_file_stats(@DatabaseId,NULL);
END
GO

You may notice the DATEDIFF is using "seconds" instead of "milliseconds".  This is because DATEDIFF only returns an INT value, and measured in milliseconds that maxes out at roughly 24 days of uptime before it hits an arithmetic overflow error. By converting it to seconds, we can avoid that error. All of the following data collections will do a DATEDIFF using milliseconds.
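As a quick sanity check of that limit, the maximum INT value expressed in milliseconds works out to just under 25 days:

-- 2,147,483,647 ms / 1000 / 60 / 60 / 24 ≈ 24.8 days
SELECT 2147483647 / 1000.0 / 60 / 60 / 24 AS MaxDaysBeforeOverflow;
GO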

If the current start time is the same as the previous collection, then we'll grab the difference in values and insert those into the history table.

WITH CurrentIoVirtualFileStats AS
(
    SELECT
         CURRENT_TIMESTAMP AS 'CollectionTime'
        ,@DatabaseName AS 'DatabaseName'
        ,*
    FROM sys.dm_io_virtual_file_stats(@DatabaseId,NULL)
)
INSERT INTO dbo.IoVirtualFileStatsHistory
SELECT
     @CurrentSqlServerStartTime
    ,CURRENT_TIMESTAMP
    ,CONVERT(BIGINT,DATEDIFF(MS,@PreviousCollectionTime,curr.CollectionTime))
    ,@DatabaseName
    ,@DatabaseId
    ,file_id
    ,sample_ms
    ,curr.sample_ms - hist.SampleMs
    ,num_of_reads
    ,curr.num_of_reads - hist.NumOfReads
    ,num_of_bytes_read
    ,curr.num_of_bytes_read - hist.NumOfBytesRead
    ,io_stall_read_ms
    ,curr.io_stall_read_ms - hist.IoStallReadMs
    ,num_of_writes
    ,curr.num_of_writes - hist.NumOfWrites
    ,num_of_bytes_written
    ,curr.num_of_bytes_written - hist.NumOfBytesWritten
    ,io_stall_write_ms
    ,curr.io_stall_write_ms - hist.IoStallWriteMs
    ,io_stall
    ,curr.io_stall - hist.IoStall
    ,size_on_disk_bytes
    ,curr.size_on_disk_bytes - hist.SizeOnDiskBytes
    ,file_handle
FROM CurrentIoVirtualFileStats curr INNER JOIN dbo.IoVirtualFileStatsHistory hist
    ON (curr.DatabaseName = hist.DatabaseName
        AND curr.database_id = hist.DatabaseId
        AND curr.file_id = hist.FileId)
    AND hist.CollectionTime = @PreviousCollectionTime;
GO

At this point, we're through collecting the raw data. However, as I mentioned earlier, I added a lot of functionality into this script. The script is actually a stored procedure that can run all of this code for you; including creation of the history table, data collection, historical data purging and finally reporting.

The stored procedure has 5 input parameters.

@Database - This is used to specify a single database, a list of databases, or a wildcard.
  • '*' is the default value which selects all databases
  • 'MyDatabase1' will process only that single database
  • 'MyDatabase1,MyDatabase2,MyDatabase3' will process the comma-delimited list of databases
  • 'USER_DATABASES' will process all user databases
  • 'SYSTEM_DATABASES' will process only the system databases.
@GenerateReport - Flag. When off, the stored procedure will collect data. When on, it will generate an aggregated report of the historical data.

@HistoryRetention - This is the number of days to keep in the history table. Default is 365.

@ExcludedDBs - A comma-delimited list of databases to exclude from processing. It should be used when @Database is set to '*'.

@Debug - Flag. When on, it will output the TSQL commands being executed.

Examples:

1. Collect data for all databases.

EXEC dbo.sp_CollectIoVirtualFileStats
     @Database = '*';
GO

2. Collect data for all databases except for AdventureWorks, and output all debug commands.

EXEC dbo.sp_CollectIoVirtualFileStats
     @Database = '*'
    ,@ExcludedDBs = 'AdventureWorks'
    ,@Debug = 1;
GO

3. Output an aggregated report of data collected so far for tempdb.

EXEC dbo.sp_CollectIoVirtualFileStats
     @Database = 'tempdb'
    ,@GenerateReport = 1;
GO

The report would look like this.


Finally, you can copy this report data into Excel and generate some easy to read charts.


From the chart, we can see there was a spike in write latency between 3PM and 4PM for tempdb. If we collect this data over time and identify a similar spike each day then we'd want to investigate further to find out what is causing it. But that can only be done if you're collecting these metrics and storing them for historical analysis. Hopefully, this stored procedure will help you be more proactive in collecting performance metrics for each of your servers.

The entire script is available on the downloads page.

My Experience Aboard SQL Cruise 2014

Where do I begin? First let me say, WOW what an experience!

How it All Began
When I first heard about SQL Cruise way back in 2012, I thought the idea of hosting training sessions aboard a cruise ship was a swell idea. However, talking my wife into going with me on the cruise or even letting me go on my own was next to impossible. Don’t get me wrong, my wife and I love cruising and we even took a cruise to our destination wedding in Bermuda. But no matter how I argued, my wife would not budge. We have a 2 year old daughter and it’s pretty hard for my wife to leave her for more than five minutes let alone leave her behind for an entire week. So I figured the SQL Cruise idea would have to be shelved for a couple of years until our daughter got a bit older.

Fast forward to November 2013. At this point in time, I was much more plugged into the SQL Server community; reading blogs, attending user groups, and even attending the PASS Summit in Charlotte, NC. From my many readings, I ran across a webinar that was being hosted by MSSQLTips and Sponsored by Dell Software. The speaker was Derek Colley and his topic was “Are Your SQL Servers Healthy?” which covered various aspects of proper SQL Server configurations, setup, and maintenance. But the biggest thing that caught my eye was that Dell Software would be giving away one ticket to SQL Cruise just for attending. I figured I’ll never win the raffle, but at least I’d get to hear more about SQL Server. Well a few days later, I get an email from a marketing representative from Dell Software congratulating me for being the winner of the SQL Cruise raffle. My response to him was “Are you being serious?” As it turned out he wasn't joking. Now all I had to do was convince my wife to go. It took a few weeks to get all of the logistics in place, but we were finally able to make it happen and accept the award from Dell Software. Just FYI, grandparents make the perfect babysitter for a week-long trip away from your child.

The Cruise
The cruise was aboard the Norwegian Epic, which sailed out of Miami, FL, and made four stops in the Caribbean: St Maarten, St Kitts, US Virgin Islands, and the Bahamas. This was the perfect itinerary because my wife and I had yet to go to any of those locations. For those of you that have never been on a cruise, you should try it at least once, even if it’s not SQL Cruise. That’s all it took for me to convince my wife to go on her first cruise, and that’s all it took for her parents to fall in love with it when they came on along for our wedding cruise. Although I will tell you that cruising with a group is lot more fun; especially when the group is a bunch of SQL nerds.

 

The Training
The training sessions were all scheduled for sea days. This means for the days the ship is sailing between ports, the SQL Cruisers were in class from about 8AM until about 5PM. This allowed everyone to have fun while in port and not have to miss any of the fun on the islands. I mean really, how could anyone not want to spend time at these types of destinations? The speakers for this cruise were some of the best in the industry. Kevin Kline (b|t), Grant Fritchey (b|t), Stacia Misner (b|t), Andrew Kelly (b|t), and Richard Douglas (b|t) covered various topics such as query optimization, performance monitoring, Power BI, backup strategies, and even Azure. The sessions ranged from about 30 minutes to 2 hours. There was always plenty of interaction between the cruisers and the speakers. This is a complete reversal of how things happened for me at the PASS Summit. At the Summit, I had to compete with several thousand other attendees for face-time with the speakers. It's never easy to ask a speaker a question when there are 50 other people in line ahead of you. On board SQL Cruise, it was never like that. If I had a question for one of them, then I could easily just ask them if they would go grab a beer and talk SQL. Trust me, it really was that easy. It wasn't just the speakers; I was able to do this with the other cruisers as well. I made a concerted effort to hang with different cruisers each day to get to know them and how they are using SQL Server in their environment.

 

 

The Networking and Friendships
I have to say, networking is the number one reason you should attend SQL Cruise. I had met most of the speakers before from other events, but I never had this much time to really get to know them. Aside from the training, there are other events like cocktail parties, group dinners, and office hours that give the cruisers every opportunity to talk about SQL. These extra events are where I got the most benefit. I not only got insight and advice from the trainers, but also from the other cruisers. You might not expect it, but several of the cruisers attending the training are certified masters in SQL Server. It's kind of hard to come up with a question that can't be answered by someone with those types of credentials. I just attended my first PASS Summit a few months ago, and even though there were thousands more people at that event, I found myself enjoying the confines of SQL Cruise better. There were only about 20 cruisers which made it extremely easy to get to know each one of them. Outside of class, we didn’t talk about SQL Server 24/7. I remember while at one of the beaches, I found myself in an hour long conversation with a group of cruisers talking about the video games we used to play 10 years ago. At the beginning of the cruise I didn't know much about anyone on the cruise other than their names, but after spending a week with them, I know we'll be friends for a long time. Or if I'm really lucky, maybe I'll get a chance to work with one of them on a future project.

The Sponsors
Nothing like this would be possible without sponsors such as Dell Software, SQLSentry, Red Gate, New Belgium Brewing, and B-Side Consulting. I have to give a big thank you to Dell Software since they were generous enough to give away a ticket. All the sponsors played a big part in the event, and they provide DBAs with top-notch tools for making our jobs a lot easier. Yes, even New Belgium Brewing. They provided some really good wheat beer, Snapshot, for the bon voyage party on the night before departure from Miami.

The Mastermind
Tim Ford (b|t) is the mastermind behind the SQL Cruise. I’m not sure when he first came up with the idea of SQL Server training aboard a cruise ship, but it definitely was the work of a pure genius. I must have thanked him a dozen times over the week for creating this event. And I’m pretty sure my wife thanked him just as much. On some days, I think she had more fun than I did, and she’s not even into technology. One other thing I noticed, is that Tim doesn't do all of this work by himself. His wife, Amy, seems like she plays a big part in the planning and coordination.

Final Thoughts
SQL Cruise really is the epitome of the "SQL Family". It has given me the opportunity to build friendships with the trainers and cruisers that I don’t think would have happened any other way. Because of that, its value is well beyond any dollar figure attached to it. I think I would have eventually convinced my wife to go on SQL Cruise, but thanks to Dell Software, I was able to make it happen a bit sooner. Now all I have to do is convince my wife to go again next year. Fingers crossed!


How Long is that SQL Command Going to Take?

Have you ever needed to restore a large database while someone is standing over your shoulder asking “How long is that going to take?”  If that hasn't happened to you yet, then it’s only a matter of time.

Let’s throw out all the reasons why you need to do the restore and just discuss the technical part. Obviously the easiest way to know how long the restore will take is to use the “WITH STATS” option in the restore database command. But let’s say in the heat of the moment you forgot that little piece of the statement. Now what?

In older versions of SQL Server, there was really no way to tell how long a database restore operation would take to complete. You could make a rough guesstimate that if it took SQL Server one hour to back up the database, then it’s likely the restore would take the same amount of time. But in reality, that’s just a guess, and the person standing over your shoulder probably wants a more accurate estimation.

First introduced in SQL Server 2005, DMVs give us a wealth of new information on the internals of SQL Server, and for our dilemma above, we can use sys.dm_exec_requests. This DMV returns one row for each session that is actively executing a command. One of the columns returned by the DMV is percent_complete, which returns the percent complete for the currently executing command.

USE master;
GO
SELECT
   session_id
  ,start_time
  ,status
  ,command
  ,percent_complete
FROM sys.dm_exec_requests
WHERE session_id = 56;
GO


It looks like the database restore is about 33% complete. Now you have a more accurate idea of how long it will take to complete.
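If you'd rather have SQL Server do the remaining-time math for you, the same DMV also exposes an estimated_completion_time column. Books Online labels it internal only, but it's reported in milliseconds and works well for a rough estimate; here's a quick sketch that turns it into a projected finish time:

SELECT
   session_id
  ,command
  ,percent_complete
  ,estimated_completion_time / 1000 AS estimated_seconds_remaining
  ,DATEADD(SECOND, CONVERT(INT, estimated_completion_time / 1000), GETDATE()) AS estimated_finish_time
FROM sys.dm_exec_requests
WHERE percent_complete > 0;
GO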

This may seem like a useless tidbit of information, since you can use the “WITH STATS” option in the restore database command to get the same information, but what happens when the command you're running doesn't have that option; for example, DBCC SHRINKFILE.

On the rare occasion when you need to shrink a database to free up disk space, SQL Server needs to physically move data pages from one part of the file to another. Depending on how much data needs to be moved, this could take a long time. Using the same TSQL statement from above, we can query the sys.dm_exec_requests DMV.


This is awesome! A percent complete value for every command executing on SQL Server? Not so fast. The percent_complete column in sys.dm_exec_requests only works for a few commands.

From Books Online:
  • ALTER INDEX REORGANIZE
  • AUTO_SHRINK option with ALTER DATABASE
  • BACKUP DATABASE
  • CREATE INDEX
  • DBCC CHECKDB
  • DBCC CHECKFILEGROUP
  • DBCC CHECKTABLE
  • DBCC INDEXDEFRAG
  • DBCC SHRINKDATABASE
  • DBCC SHRINKFILE
  • KILL (Transact-SQL)
  • RESTORE DATABASE
  • UPDATE STATISTICS
What a bummer that it doesn't work for every command. But from a DBA’s point of view, this list comprises quite a few of the administrative commands you’d use on a regular basis. These are commands that you would run rather than an end user, and knowing that you can relay a “percent complete” value back to them will assure them you are taking good care of their database.

For more details on this DMV, check out Books Online.
