Why Change Management

For my entire career I have been working as an external IT consultant for lots of companies, and they all had their own view on Change Management. Comparing them, I can categorize them into 4 types:

Type 1: Change ManageWhat?

This is the typical approach in smaller companies. Changes are not managed at all, not because they don't want to, but because they don't have the people, the budget, the know-how, the will… to do so. So changes are not managed and in most cases not even tested. The (not so) funny part is that in most of these companies I was hired because something was going wrong and they had no clue where to start looking or how to start solving the issues.

Type 2: Change Management, Yes we have to!

It must be something Belgian: we just tend to dislike rules and procedures, so if some external organisation obliges us to have a Change Management procedure, we'll have one, but… the changes are not really managed. I saw companies where approval was already given to proceed to the next step before there was an approval from the test team, changes approved before the developers had finished coding, and all kinds of exceptions to the procedure so that the change process had a minimal impact on the development teams. The only reason I found for these companies to have Change Management is that they needed to document every change for ISO or SOX compliance.

Type 3: Change Management = Cost Management

When working with smaller IT teams where people wear different hats (fire brigade, architect, designer, implementer…) it's difficult to manage the costs and resources. In most cases the original task of these teams is to support the IT environment. But as time goes by, the rest of the company will start thinking that maintaining, upgrading, adapting, replacing… is also normal support from the IT department. All these tasks are not foreseen in the budget; even worse, if you give the business free rein, they'll expect the IT department to also pay for extra hardware and licensing costs. So starting up a basic Change Management procedure can help manage the cost. From the moment a request is not in the task list of the support team, it is categorized as a Request For Change that needs approval from the IT manager. After that it's up to him/her to discuss with the other business unit managers from which budget the change will be paid.

Type 4: Change Management to manage the changes

If this is the reason why a company starts with Change Management, there is a great chance that they will succeed in creating a decent procedure. Saying we want to manage our changes is saying we want to know why, when, what… without saying we want to start a bureaucracy that discourages changes and improvement of the company. They will also keep changing and improving the Change Management process itself, resulting in a stable and cost-efficient IT environment.

Conclusion

For me the most relaxed way of working is in Type 4 environments. You know what is expected of you, and you know that the things you need to do have been tested, so the risk of problems when executing the change is relatively small. Type 3 can be the first step towards full Change Management and will already be a relief for the people in the IT department once their change tasks are planned. Types 1 and 2 are, for me, almost equal to each other; only, Type 2 takes more effort because of the paperwork, and because there are names in the documents there is a chance that some real testing was done.


Book review: Microsoft SQL Server 2008 Administration with Windows PowerShell

In almost every training I teach I get the question: "What about PowerShell?". Students want to know what they can do with PowerShell, how it's used and, most importantly, where they can learn it. I learned the (very) basics of PowerShell from articles on the internet, demos at events and so on. But now I think I found a winner for DBAs who want to start learning PowerShell. The book Microsoft SQL Server 2008 Administration with Windows PowerShell by Ananthakumar Muthusamy (blog) and Yan Pan (blog) is a perfect guide for DBAs who are starting with PowerShell or who already know PowerShell but want to use it to administer their SQL Servers. Beginning DBAs can also use the book because it's full of tips, tricks and explanations of how and why certain tasks are done.

The first 4 chapters bring the PowerShell ‘dummy’ up to speed with the very basics of PowerShell. Readers will learn how to use commands, create scripts and work with Functions, Parameters and so on. Extensive code examples help to master the basics fast.

In chapters 5 to 8 the writers dig deeper into the use of PowerShell and how to get information and get things done at the Windows level, covering the File System, Registry, Event Logs, Services and the WMI Provider.

Finally, in chapters 9 to 13 the reader learns what a DBA can do with Windows PowerShell, starting with the provider for Configuration Management, then moving on to the SQL Server PowerShell provider, policies and SMO.

After these chapters the book could have stopped. Everything about PowerShell and SQL Server needed to get a DBA started is covered, but the real added value of the book is about to start. First, chapter 14 advises on using and creating standards within SQL Server and PowerShell. Nothing new, nothing spectacular, but it's nice to see the writers care about standards and want to help and guide beginning coders to use them.

In chapters 15 to 21 all the knowledge of the previous chapters is put into practice. These chapters are no easy read, but they show the reader how to create a complete PowerShell solution to inventory, monitor and manage an entire SQL Server environment. And even if the reader doesn't want to create such a solution, the code examples are great for looking up how a certain task can be accomplished with PowerShell.

Conclusion

Microsoft SQL Server 2008 Administration with Windows PowerShell is not a book to read on holiday or in bed. But I think it's a must-have on every DBA's bookshelf, whether the DBA is a starter or a senior. Everybody will find value in this book.

SQL 2008 R2 upgrade error: Wait on the Database Engine recovery handle failed

While preparing for an upgrade of a complex SQL Server 2000 installation to SQL Server 2008 R2, I had to figure out a way of reusing the MSDB database of the original SQL Server 2000 installation on the new SQL Server 2008 R2 setup. The reason is that the third-party job scheduling tool refers to the jobs by their job_id, and neither scripting the jobs nor using the SSIS ‘Transfer Jobs Task’ gave me the possibility to keep the job_id. The easiest way to accomplish this is (a sketch of the backup and restore commands follows the list):

  • Back up the original SQL Server 2000 MSDB database
  • Restore the SQL Server 2000 MSDB database on a temporary SQL Server 2000 instance with the same patch level as the original
  • Do an in-place upgrade of the temporary SQL Server 2000 instance to SQL Server 2008 R2
  • Back up the upgraded SQL Server 2008 R2 MSDB database
  • Restore the upgraded SQL Server 2008 R2 MSDB database on the newly installed SQL Server 2008 R2 server
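
For reference, a rough sketch of the first two steps in T-SQL; the backup path is hypothetical:

-- On the original SQL Server 2000 instance: back up MSDB (the path is hypothetical)
BACKUP DATABASE msdb TO DISK = 'C:\Backup\msdb_sql2000.bak'
GO
-- On the temporary SQL Server 2000 instance: restore over the existing MSDB
-- (stop SQL Server Agent first; add WITH MOVE if the file locations differ)
RESTORE DATABASE msdb FROM DISK = 'C:\Backup\msdb_sql2000.bak' WITH REPLACE
GO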

Unfortunately, while trying out this approach, I stumbled upon an error during the in-place upgrade from SQL Server 2000 to SQL Server 2008 R2.

The SQL Server setup error logs are stored in ‘%programfiles%\Microsoft SQL Server\100\Setup Bootstrap\Log’. In this folder a ‘Summary.txt’ file holds the summary of the last install attempt, and the newest subfolder contains more detailed log files. Opening ‘SQLServer_ERRORLOG_somedatetime.txt’ and scrolling down gave me this error:

The database owner SID recorded in the master database differs from the database owner SID recorded in database ‘msdb’. You should correct this situation by resetting the owner of database ‘msdb’ using the ALTER AUTHORIZATION statement.

And of course, since the msdb database is owned by the SQL login ‘sa’, the SID of the ‘sa’ account on the originating server is not the same as the SID of the ‘sa’ account on the upgrade server. To correct the situation and let the upgrade continue, I ran the following commands from a command prompt.

Stop the running SQL Server Service:

net stop mssqlserver

Start the SQL Server service with the following flags so the upgrade scripts don’t start running again:

net start mssqlserver /f /T3608

Use a dedicated admin connection to connect to the SQL Server Instance and change the owner of the MSDB database back to the ‘sa’ login:

sqlcmd -E -A
USE msdb
GO
EXEC sp_changedbowner 'sa'
GO
EXIT

Use the net stop mssqlserver command again to stop the SQL Server Service.

Now I can restart the SQL Server 2008 R2 setup program, but instead of choosing a new installation I need to choose to REPAIR my latest installation. After the repair finishes I finally have my upgraded MSDB database. And when restoring the database to my target server, I must keep in mind to change the owner of the restored MSDB database to the ‘sa’ login of the new server.
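
On the target server this comes down to a single statement, as the error message above already suggests (a minimal sketch, to be run after the restore):

-- Reset the owner of the restored msdb database to the local 'sa' login
ALTER AUTHORIZATION ON DATABASE::msdb TO sa;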

I registered for SQL Server Days

For the fourth time in a row I will be visiting the Belgian SQL Server Days, organised by the Belgian SQL Server User Group and Microsoft Belux. The first 3 editions were 1-day events filled with SQL content for DBAs, developers and BI people, brought by a mixture of local SQL Server gurus working for the event partners, Microsoft employees and international SQL speakers. Looking at the agenda nothing has changed… except that the organisation has found so much content that they are spreading the event over 2 days, and starting from this year the event isn’t free anymore. But hey, where else can you find 2 days of in-depth SQL Server training for 79€ (if you register before October 1st)?

And besides all the knowledge you can gain, it’s always nice to go to a local event with only SQL people and meet colleagues, co-workers, students and tweeps in person.

See you there?

Executing multiple SQL Scripts

A customer asked if it was possible to execute multiple .sql scripts from a single command. I knew the SQLCMD utility could be used to work with input files and that a .bat file could be created executing SQLCMD commands one by one. Looking up the details of the SQLCMD utility I stumbled upon the :r <filename> command, which can be used to parse additional Transact-SQL statements and SQLCMD commands from the file specified by <filename> into the statement cache. This means I can create a ‘master’ script that calls the other scripts and only need to execute the master script with the SQLCMD utility. So I’m going to create a database, add some tables and insert some data using 3 different scripts.

First I create the CREATE_TABLES.sql script to create the tables (duh). Because the statements are parsed into the statement cache, it should be possible to declare a variable at the end of one script and use it at the start of the next, as long as no GO separates the declaration from its use (GO still ends a batch, and variables don’t survive across batches). So I declare and fill a variable to hold the table count:

PRINT 'Creating tables'
GO

USE Axel8s
GO

IF OBJECT_ID('Article') IS NOT NULL
DROP TABLE dbo.Article
GO
CREATE TABLE dbo.Article
 (
 Article_ID int IDENTITY (1,1) NOT NULL PRIMARY KEY,
 Name varchar (50),
 Remark varchar (50)
 )
GO

IF OBJECT_ID('Category') IS NOT NULL
DROP TABLE dbo.Category
GO
CREATE TABLE dbo.Category
 (
 Category_ID int IDENTITY (1,1) NOT NULL PRIMARY KEY,
 Article_ID int,
 DateAdded DATETIME
 )
GO

DECLARE @iTableCount int
SET @iTableCount = 2

The INSERT_DATA.sql script will insert some data and print the value of the variable declared in the CREATE_TABLES script:

PRINT 'TOTAL TABLES CREATED = ' + CAST(@iTableCount AS varchar)
GO
PRINT 'INSERTING DATA INTO Article'
GO

USE Axel8s
GO

INSERT INTO dbo.Article (Name, Remark)
 SELECT 'Orange', 'The fruit ones'
GO
INSERT INTO dbo.Article (Name, Remark)
 SELECT 'Coke', 'To drink'
GO
INSERT INTO dbo.Article (Name, Remark)
 SELECT 'Hamburger', 'No not the germans'
GO

Now it’s time to create a MASTER.sql script that will create the database and will call the other 2 scripts:

SET NOCOUNT ON
GO

PRINT 'CREATING DATABASE'
IF EXISTS (SELECT 1 FROM sys.databases WHERE name = 'Axel8s')
DROP DATABASE Axel8s
GO
CREATE DATABASE Axel8s
GO

:On Error exit

:r c:\Axel8s\CREATE_TABLES.sql
:r c:\Axel8s\INSERT_DATA.sql

PRINT 'DATABASE CREATION COMPLETED SUCCESSFULLY'

Almost everything is now in place for a double-click creation of my new database. The final piece is a .bat file with the following commands:

SQLCMD -S -E -dmaster -ic:\Axel8s\master.sql
PAUSE

And with a double-click on the create_Axel8s.bat file, my new database is created.


Creating an ‘Attach all databases’ script

Finally, customers are seeing more and more the importance of a decent test environment before things are put into production. So as a DBA I get more questions about how to set up those test environments and how to keep them in sync with the production environment. One trick I pulled off lately for a 100+ database environment was the following.

The database files on the production server were on a SAN and, lucky me, the files for system and user databases were on separate (Windows) disks. So a test server was installed (SQL Server 2008 R2, while production was SQL Server 2000) with the new company defaults to do the upgrade tests. The only thing we needed to take care of was not to assign drive letters that were already used in the current production environment.

With the basic server setup in place, it was time to get the user databases onto the server. At this point the SAN team comes into the picture. They cloned the production disks within seconds and were able to present them to the test server in the same order as on the production server. Switching back to the test server, the newly presented disks were assigned the same drive letters as on production. Finally it was time to get the databases online.

If there had been only 5 databases, I could have attached them manually or generated the attach scripts one by one on the production server. But because I had more than 100 databases to attach, it was time to do some scripting.

First I created a temporary table on the production server to gather the file information of the user databases:

CREATE TABLE #DBFiles
(
 db varchar(50),
 fileid int,
 name varchar (50),
 filename varchar (250)
)

With the temp table in place I can start gathering the file information from the user databases. Using the sp_MSforeachdb stored procedure I can loop my SELECT statement over every user database:

INSERT INTO #DBFiles
 EXEC sp_MSforeachdb '
  USE ?
  IF ''?'' <> ''master'' AND ''?'' <> ''model'' AND ''?'' <> ''msdb'' AND ''?'' <> ''tempdb'' AND ''?'' <> ''distribution''
  BEGIN
    SELECT ''?'', SF.fileid, SF.name, SF.filename FROM sysfiles SF
  END'

Running through the gathered data made clear that there were databases with, let's say, quite a history. Files were added and removed over the years, so the file_ids were not consistent any more. Some databases had over 16 files, more than you can attach with ‘sp_attach_db’. But while the production server was SQL Server 2000, the new test server was SQL Server 2008 R2. This means that, as long as my database files are all in the same locations as on production (which they are, since the drive letters match), I only need the location of the primary data file in a ‘CREATE DATABASE … FOR ATTACH’ statement; SQL Server will take care of the rest of the files. Having all the file information already in place in my temp table, I can simply run this query to generate a script to attach all the user databases:

SET NOCOUNT ON

SELECT 'CREATE DATABASE ' + db + ' ON (FILENAME = ''' + RTRIM(filename) + ''') FOR ATTACH;'
FROM #DBFiles
WHERE fileid = 1
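
For illustration, one line of the generated output would look something like this (the database name and file path below are made up):

-- hypothetical example of one generated statement
CREATE DATABASE SalesDB ON (FILENAME = 'F:\SQLData\SalesDB.mdf') FOR ATTACH;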

Setting the output of the above query to Text (CTRL + T) results in a nice script that I can copy and paste to my new test server. Running the script and waiting for 10 minutes is all I have to do before testing can start on the new test environment.

Reporting on fn_virtualfilestats

In my previous post about monitoring database file IO, I described how I monitor the IO for every file in every desired database on a certain server. After running the procedure at regular intervals for a week, I now have a lot of data ready to analyze. So let's get started. To make it easy to get the data I need into Excel, I first create a view on the tables I need. Note that I use LEFT to get the drive letter from the OSFileName because I want to see the concurrent load on my drives, that I use the DATEADD/DATEDIFF construction to round the snapshot time down to the minute, and that I filter out the NULL values because they only exist in the resultset of the first execution of the script:

CREATE VIEW ReportData
AS
SELECT M.DBName
 , M.intFileName
 , LEFT(M.OSFileName,1) AS Drive
 , FS.DNReads
 , FS.DNWrites
 , FS.DBRead
 , FS.DBWritten
 , FS.DIoSTall
 , DATEADD(mi, DATEDIFF(mi, 0, CAST(FS.SnapTime AS DATETIME)), 0) AS SnapTime
FROM DB2Monitor M
 INNER JOIN FileStats FS
  ON M.DatabaseID = FS.DatabaseID
  AND M.FileID = FS.FileID
WHERE DNReads IS NOT NULL

With the view in place I can start using Excel to insert a PivotChart. I create a new external connection to my ReportData view in my MonitorDB database:

Now that I have my connection, I can start designing my chart. For the layout I choose Stacked Column. Next I design the fields of the PivotTable: the Axis is made up of the SnapTime field, the Legend is grouped by Drive and intFileName, and for this report I choose the delta fields DNReads and DNWrites as Values. In the end it looks like this:


The chart is now completely unreadable because I was taking snapshots every 5 minutes for a week, so I use the Filter Pane to select some interesting, heavy-load moments. To find these I run the following query on my ReportData view:

SELECT TOP 20 Snaptime, SUM(DNReads) + SUM(DNWrites) AS TopUsage
FROM ReportData
GROUP BY Snaptime
ORDER BY TopUsage DESC

These Snaptime values are selected in the Filter Pane that looks like this:

And finally I have a nice chart to show to my colleagues and to use for further analysis of the disk usage.
