
Archive for December, 2009

How Can You Tell if a Database is in Pseudo Full/Bulk Logged Mode?

December 13, 2009

I was asked on Friday, “How do you tell if a database’s logging mode is reporting bulk or full, but it is still in simple?” As mentioned before, a database is not really in full / bulk logged mode until a full backup has been taken. Until that time the database is still running in a simple mode, sometimes referred to as pseudo-simple. It is not easy to spot, because the properties of the database will report full / bulk as appropriate and give no indication that it is not actually logging in the way it says.

The existence of a backup of the database is not a reliable enough mechanism for this, since the database can be backed up and then moved out of full / bulk logged mode into simple and back again. This breaks the backup and transaction log chain, but the database is still reporting full. To make it worse, there is a backup record showing in the history, giving it an air of legitimacy.

The backup records can be accessed from sys.sysdatabases and msdb.dbo.backupset; MSDN even has an example script showing how to see when a database was last backed up and by whom.

SELECT
    T1.Name as DatabaseName,
    COALESCE(CONVERT(varchar(12), MAX(T2.backup_finish_date), 101), 'Not Yet Taken') as LastBackUpTaken,
    COALESCE(MAX(T2.user_name), 'NA') as UserName
FROM sys.sysdatabases T1
LEFT OUTER JOIN msdb.dbo.backupset T2 ON T2.database_name = T1.name
GROUP BY T1.Name
ORDER BY T1.Name

To play around with the scripts you probably want a test database:

CREATE DATABASE [LogModeTest] ON PRIMARY
( NAME = N'LogModeTest', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\LogModeTest.mdf',
  SIZE = 3072KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'LogModeTest_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\LogModeTest_log.ldf',
  SIZE = 1024KB, MAXSIZE = 2048GB, FILEGROWTH = 10% )
COLLATE Latin1_General_CI_AI

With a minor alteration to the MSDN script you can get the backup history for this database:

SELECT
    T1.Name as DatabaseName,
    COALESCE(CONVERT(varchar(12), MAX(T2.backup_finish_date), 101), 'Not Yet Taken') as LastBackUpTaken
FROM sys.sysdatabases T1
LEFT OUTER JOIN msdb.dbo.backupset T2 ON T2.database_name = T1.name
WHERE T1.Name = 'LogModeTest'
GROUP BY T1.Name

The results show the database is not yet backed up:

DatabaseName                  LastBackUpTaken 
----------------------------- ---------------
LogModeTest                   Not Yet Taken

That is easy to fix, so let’s take a backup of the database and recheck the last backup value.

BACKUP DATABASE [LogModeTest]
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\LogModeTest.bak'
WITH NOFORMAT, NOINIT, NAME = N'LogModeTest-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10

DatabaseName                   LastBackUpTaken 
------------------------------ ---------------
LogModeTest                    12/13/2009

As expected, the date of the backup is now set. If we alter the recovery mode of the database to simple we will break the transaction log chain. To demonstrate that the backup information is an unreliable source, let’s change to simple, create a table and then return to the fully logged mode.

ALTER DATABASE [LogModeTest] SET RECOVERY SIMPLE WITH NO_WAIT
CREATE TABLE foo(id int identity)
ALTER DATABASE [LogModeTest] SET RECOVERY FULL WITH NO_WAIT

If we now attempt to back up the transaction log, SQL is going to throw an error.

BACKUP LOG [LogModeTest]
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\LogModeTest.bak'
WITH NOFORMAT, NOINIT, NAME = N'LogModeTest-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10

Msg 4214, Level 16, State 1, Line 1
BACKUP LOG cannot be performed because there is no current database backup.
Msg 3013, Level 16, State 1, Line 1
BACKUP LOG is terminating abnormally.

And if we check the database backup history using the MSDN script:

DatabaseName                   LastBackUpTaken
------------------------------ ---------------
LogModeTest                    12/13/2009

So the backup history continues to show a date of the last full backup even though the transaction log chain is now broken. SQL certainly knows the database has not had a full backup since swapping into fully logged mode, so any transaction log backup is invalid, thus the error.

There is an easier way to find out that you are in pseudo-simple mode, without trying to perform a transaction log backup:

SELECT name, COALESCE(Convert(varchar(30),last_log_backup_lsn), 'No Full Backup Taken') as BackupLSN 
FROM sys.databases
INNER JOIN sys.database_recovery_status on sys.databases.database_id = sys.database_recovery_status.database_id

Run this against your server and it lists which databases have had a full backup taken (shown by the existence of a backup LSN) and which have not had a full backup that could be used in recovery. If we then back up the database and recheck the values, the test database now records an LSN, showing it has moved out of pseudo-simple and into the full / bulk logged modes.
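To see this for the test database, take the full backup again (the same command and path as earlier) and re-run the LSN check restricted to the one database:

BACKUP DATABASE [LogModeTest]
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\LogModeTest.bak'
WITH NOFORMAT, NOINIT, NAME = N'LogModeTest-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10

SELECT name, COALESCE(CONVERT(varchar(30), last_log_backup_lsn), 'No Full Backup Taken') as BackupLSN
FROM sys.databases
INNER JOIN sys.database_recovery_status ON sys.databases.database_id = sys.database_recovery_status.database_id
WHERE name = 'LogModeTest'

The BackupLSN column should now show a numeric LSN rather than the 'No Full Backup Taken' marker.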

So that indicates whether we are in pseudo-simple mode or not, but it does not link back to the properties of the database to check the actual logging mode. You are primarily interested in databases that are not in simple mode in the first place, but are running in pseudo-simple due to the lack of a relevant full database backup. We can alter the query to handle this specific situation and the result is:

SELECT name, recovery_model_desc,
    COALESCE(CONVERT(varchar(30), last_log_backup_lsn), 'No Full Backup Taken') as BackupLSN
FROM sys.databases
INNER JOIN sys.database_recovery_status ON sys.databases.database_id = sys.database_recovery_status.database_id
WHERE sys.databases.recovery_model <> 3 -- 3 = SIMPLE
    AND last_log_backup_lsn is null

If you run that query against your database server and get any results, then you have databases that are not running in the recovery mode they indicate / that you think they are in, which would generally not be a good thing.

Rolling a Partition Forward – Part 2

December 10, 2009

The first part of this topic provided a mini-guide to loading data into a partitioned table, along with a few DMV-based statements that can help you automate the process. The unloading of the data should in theory be easier, but to do this in an automated fashion you are more reliant on the DMVs and system views to get to the right information.

The steps to unload a partition of data are:

  • Discover which filegroup the oldest partition is on.
  • Create a staging table on the same filegroup with an identical schema and indexes.
  • Switch the data out to the staging table.
  • Merge the partition function.
  • Archive / drop the data as appropriate.
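The steps above can be sketched end to end as follows. All object, column and filegroup names are placeholders, and the two-column schema is purely illustrative:

-- 1. Staging table on the filegroup holding the oldest partition,
--    with a schema and clustered index identical to the partitioned table
CREATE TABLE YourStagingTable (SaleDate datetime NOT NULL, Amount money NOT NULL) ON [YourOldestFileGroup]
CREATE CLUSTERED INDEX IX_YourStagingTable ON YourStagingTable (SaleDate) ON [YourOldestFileGroup]

-- 2. Switch the oldest partition out (a metadata-only operation)
ALTER TABLE YourPartitionedTable SWITCH PARTITION 1 TO YourStagingTable

-- 3. Merge away the now-empty oldest boundary
ALTER PARTITION FUNCTION YourPartitionFunctionName() MERGE RANGE ('20091201')

-- 4. Archive or simply drop the staged data
DROP TABLE YourStagingTable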

As in part 1, there are three sections of the process that are not so common, whilst the creation of a table and the archive / drop of the old data at the end is standard T-SQL that you will be using regularly.

Discover which Filegroup the Oldest Partition is On

When checking for the oldest filegroup, I have assumed that the basis of the rolling window is that the highest boundary is the most recent data, whilst the lowest boundary is the oldest. In essence, time is moving forward and the partition key ascends, not descends. The oldest boundary will therefore be boundary 1, so how do you get the name of the filegroup this partition is on? A somewhat complex join across a set of DMVs:

SELECT sys.filegroups.Name as FileGroupName
FROM sys.partition_schemes
INNER JOIN sys.destination_data_spaces ON sys.destination_data_spaces.partition_scheme_id = sys.partition_schemes.data_space_id
INNER JOIN sys.filegroups ON sys.filegroups.data_space_id = sys.destination_data_spaces.data_space_id
INNER JOIN sys.partition_range_values ON sys.partition_range_values.boundary_id = sys.destination_data_spaces.destination_id
    AND sys.partition_range_values.function_id = sys.partition_schemes.function_id
WHERE sys.partition_schemes.name = 'YourPartitionScheme'
    AND sys.partition_range_values.boundary_id = 1

This will return the name of the filegroup, which allows you to create the staging table for the partition switch-out on the correct filegroup.

Whilst the data space IDs alter in sequence depending on whether the partition function is left or right based, the boundary ID remains consistent, which is why it is used to discover the oldest partition rather than the destination_id / data_space_id.

Switch the Data Out to the Staging Table

Switching the data out is not complex; it is, in essence, just the reverse of the syntax for switching a partition in. Under the hood you are redirecting IAM pointers, so the switch is considered a meta-data command and is exceptionally fast.

ALTER TABLE YourPartitionedTable SWITCH PARTITION 1 TO YourStagingTable

The partition number used is in effect the boundary ID, and the oldest boundary in the rolling window is partition 1.

Merge the Partition Function

The last complex stage is the merging of the partition function; the command explicitly needs the value from the partition function that represents the partition. If you were doing this by hand you would know it, but automating the process requires discovering this information from the DMVs again.

SELECT value
FROM sys.partition_range_values
INNER JOIN sys.partition_functions ON sys.partition_functions.function_id = sys.partition_range_values.function_id
WHERE name = 'YourPartitionFunctionName' AND boundary_id = 1

Again, we are using the boundary value of 1 to extract only the oldest partition function value, but this can then be used in a partition function merge command.

ALTER PARTITION FUNCTION YourPartitionFunctionName() MERGE RANGE (YourBoundaryValue)
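Putting the value lookup and the merge together, here is a minimal sketch of the automated step. It assumes a datetime partition key, and all object names are placeholders; note that the boundary value in a MERGE RANGE can reference a variable, which is what makes this possible without dynamic SQL:

DECLARE @BoundaryValue datetime

-- Fetch the oldest boundary value of the partition function
SELECT @BoundaryValue = CONVERT(datetime, value)
FROM sys.partition_range_values
INNER JOIN sys.partition_functions ON sys.partition_functions.function_id = sys.partition_range_values.function_id
WHERE name = 'YourPartitionFunctionName' AND boundary_id = 1

-- Merge away the oldest, now switched-out, boundary
ALTER PARTITION FUNCTION YourPartitionFunctionName() MERGE RANGE (@BoundaryValue)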

 

Conclusion

Using the DMVs and appropriate stored procedures, the rolling window can be automated and does not require hand-crafted SQL to work; you just use the DMVs to get the key values you need to construct the harder parts of the process.

If you are following the guide on partition layout I wrote before, then the filegroup you have just removed the data from becomes the spare filegroup, to be used the next time data is imported. If you store this within the database, the next load will automatically know where to place the data and what to set the next used filegroup value to, closing the loop so to speak.

Rolling a Partition Forward – Part 1

December 9, 2009

I have covered how to lay out a partitioned table across filegroups previously, but have not gone through the steps of rolling a partitioned window. It sounds a simple process, but with all the filegroup preparation and pre-requisites needed for it to run smoothly, anyone starting with partitioned tables could probably use a little guide. As you are about to see the process is quite intricate, so I will go through the load process in this post and the unload in the next.

Because no one case fits all, I have made some assumptions / limitations to provide a guide, specifically:

  • The main partitioned table has a clustered index.
  • The layout is following the mechanism of keeping a staging filegroup and spare filegroup as detailed in the layout post.
  • The rollout process intends to remove the oldest data / partition.
  • The process is designed for large loads, not single inserts.

So let’s see what it takes to prepare and get data into a partitioned table:

  • Create a staging table on your dedicated ETL filegroup, with an identical column schema to your partitioned table.
  • Load the data into the staging table.
  • Move the staging table to the spare filegroup, using a clustered index creation. (The need for the spare was covered in the layout post.)
  • Add any additional non-clustered indexes required to match the partitioned table indexes.
  • Constrain the data so that it is considered trusted; the constraint must ensure all values are within the partition boundary you intend to place it within.
  • Set the partition scheme’s next used filegroup.
  • Split the partition function.
  • Switch the staging table into the main partitioned table.

That was all just to bulk load data into a partitioned table. It is a long list with plenty of opportunity for things to go wrong, but most of these steps use T-SQL that you will be very familiar with; it is only the last three items that use less common SQL and are harder to automate, since there are no built-in tools to do the work for you.

Setting the Next Used Filegroup

The intention when setting the filegroup is to declare where the partition scheme should locate data for the partition function split that is about to occur. Whilst you can discover what the previous setting might be, it is not advisable to rely on it; set it every time, just before performing a partition function split. The syntax for the command is:

ALTER PARTITION SCHEME YourPartitionSchemeName NEXT USED [YourSpareFG]

Splitting the Partition Function

This splitting of the partition function is in effect the creation of an extra dividing point on the number line / date line representing the partitioned table. If you split a partition that already contains data, the operation can be quite expensive, since it can be forced to move data between filegroups. It is therefore common in a rolling window scenario to split in order to handle the incoming data, which is always in advance of your existing data. For example, if you are storing sales data partitioned by the month/year of the sales date, and currently only hold up to November, you would not insert any data for December until the partition for December had been created.

The syntax is straightforward:

ALTER PARTITION FUNCTION YourPartitionFunctionName() SPLIT RANGE (YourBoundaryValue)

But when importing new data in an automated fashion, you might not know whether the new partition split has already been performed, so how can you check whether the new boundary value already exists in the partition function? The DMVs can provide the answer:

SELECT count(value) as ValueExists
FROM sys.partition_range_values
INNER JOIN sys.partition_functions ON sys.partition_functions.function_id = sys.partition_range_values.function_id
WHERE name = 'YourPartitionFunctionName' AND value = YourBoundaryValue

A returned value of 0 would indicate it did not exist, whilst a 1 would indicate a boundary value had already been created.
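Combining the existence check, the next used filegroup setting and the split gives a guarded version of the operation. This is a sketch assuming a datetime partition key; the object and filegroup names are placeholders:

DECLARE @NewBoundary datetime
SET @NewBoundary = '20091201'

IF NOT EXISTS (
    SELECT 1
    FROM sys.partition_range_values
    INNER JOIN sys.partition_functions ON sys.partition_functions.function_id = sys.partition_range_values.function_id
    WHERE name = 'YourPartitionFunctionName' AND CONVERT(datetime, value) = @NewBoundary
)
BEGIN
    -- Always set the next used filegroup immediately before splitting
    ALTER PARTITION SCHEME YourPartitionSchemeName NEXT USED [YourSpareFG]
    ALTER PARTITION FUNCTION YourPartitionFunctionName() SPLIT RANGE (@NewBoundary)
END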

Switching the Staging Table In

Switching the staging table into the newly created partition looks relatively easy but needs the partition number:

ALTER TABLE yourStagingTable SWITCH TO YourPartitionedTable PARTITION PartitionNumber

Where do you get the partition number from? The partition number is in effect the boundary ID, and partitions are numbered sequentially starting at 1 from the furthest left. If you know the boundary value you have set for the partition, you can get the boundary ID using the DMVs again:

SELECT boundary_id
FROM sys.partition_range_values
INNER JOIN sys.partition_functions ON sys.partition_functions.function_id = sys.partition_range_values.function_id
WHERE name = 'YourPartitionFunctionName' AND value = YourBoundaryValue

These additional DMVs give you access to the data you need to automate the process in stored procedures, finding out the boundary IDs in one step to be used in the next, and so on.
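Tying the last two steps together, a sketch of the automated switch-in, again assuming a datetime partition key and placeholder names; as with SPLIT / MERGE, the partition number expression in a SWITCH can reference a variable:

DECLARE @PartitionNumber int

-- Look up the boundary ID for the boundary value split earlier
SELECT @PartitionNumber = boundary_id
FROM sys.partition_range_values
INNER JOIN sys.partition_functions ON sys.partition_functions.function_id = sys.partition_range_values.function_id
WHERE name = 'YourPartitionFunctionName' AND CONVERT(datetime, value) = '20091201'

-- Switch the loaded, constrained staging table into the new partition
ALTER TABLE YourStagingTable SWITCH TO YourPartitionedTable PARTITION @PartitionNumber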

These are the trickier parts of the process to automate that need the help of the DMVs. In the next post I will go through the unloading of the old data.

How Can You Spot the Procedure Cache Being Flooded?

December 3, 2009

This comes from a question I had a couple of days ago. The SQL Server:Buffer Manager object provides the Page Life Expectancy performance counter, which indicates the current lifetime of a page within memory. As data pages and query object pages are added to the buffer pool they build up, and SQL will come under memory pressure as a result. The normal advice is that this figure should be above 300 seconds, indicating that a page stays in memory for at least 5 minutes.

This figure, however, covers both the data cache and the procedure cache, which means you cannot determine whether the pages being flushed are a result of churning data pages, or whether you are in a situation where ad hoc queries are flooding the procedure cache. You can of course look at the procedure cache using DMVs and watch the number of objects grow and then shrink, but this is not particularly scientific, nor is it measurable within a trace.

The page life expectancy can easily be traced within Perfmon, but how do you measure the procedure cache? Well, there are a couple of events you can trace in SQL Profiler; the primary one I would like to use does not seem to register the event properly, whilst the secondary one does at least work. The two events are SP:CacheRemove and SP:CacheInsert.

The SP:CacheRemove event has two Event Sub Classes listed in documentation produced by the SQL Programmability team: sub class 2 is for a deliberate procedure cache flush, such as a DBCC FREEPROCCACHE command, while sub class 1 is for when a compiled plan is removed due to memory pressure. In testing, the deliberate procedure cache flush does show up in the Profiler traces, with an event subclass value of ‘2 – Proc Cache Flush’, but after a number of tests I cannot ever get the event to be raised when the procedure cache is under memory pressure. If it did, then we would have exactly what I was after: an easy, traceable and recordable way to show a procedure cache under too much pressure.

The SP:CacheInsert event is more of a backup mechanism to show the procedure cache is being flooded, but only on the basis that you count the number of times the event shows up within a trace over a period of time. In essence, an SP:CacheInsert is only going to occur if a query does not have a matching query plan within the cache. A large number of these within a short period of time is also an indication that the procedure cache is potentially being flooded.

Combine a large number of SP:CacheInserts with a low Page Life Expectancy and you can reasonably suspect you have a procedure cache flooding problem.
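One further check, not part of the trace-based approach above but worth having to hand, is to look directly at the plan cache with the sys.dm_exec_cached_plans DMV; a large number of single-use ad hoc plans, and a large amount of memory tied up in them, is another symptom of flooding:

-- Count single-use ad hoc plans and the cache memory they consume
SELECT COUNT(*) as SingleUseAdhocPlans,
    SUM(CONVERT(bigint, size_in_bytes)) / 1024 / 1024 as CacheMB
FROM sys.dm_exec_cached_plans
WHERE objtype = 'Adhoc' AND usecounts = 1

Unlike the Profiler events this is only a point-in-time snapshot, but it is cheap to run and easy to record on a schedule.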

So there is a mechanism, of sorts, to determine whether a low page life expectancy comes from data page churn or query page churn, but if the SP:CacheRemove subclass 1 event actually worked it would be a lot easier. Once you know your plan cache is being flooded, you can then check whether forced parameterization is worth using to eliminate the issue.

Prodata SQL Academy Events

December 2, 2009

If you haven’t seen them advertised, Bob Duffy from Prodata is running a series of SQL Academy half-day training sessions in Dublin, hosted at the Microsoft Auditorium in their offices in Leopardstown. The events are level 300, which suits the half-day slot allocated for the sessions. Yesterday’s was about performance tuning and optimisation, so myself and a colleague took a short flight over and enjoyed the excellent Irish hospitality. The talk was recorded, so there will no doubt be a webcast published at some point by Technet in Ireland. The talk primarily went through using perfmon counters and wait states, and the available tools that can make this a lot easier by wrapping up and correlating results from different logging mechanisms.

I would recommend keeping an eye out for the webcast when it appears, since troubleshooting a production environment is all about using non-intrusive means to understand what is crippling the system: memory, CPU, IO etc. If you are not practised at this form of troubleshooting, it is very difficult to know which performance counters and wait states to observe amongst the thousands that exist, as well as which DMVs can give you the critical information to diagnose the problems. (It was quite interesting that the demonstration performance issue he was looking at was fundamentally a combination of a missing index and, more critically, a lack of query parameterisation since the database was in simple mode. I have previously written about the counters used to diagnose this problem and the symptoms you might encounter.)

The wait-state side of the talk was very interesting. I often use a combination of DMVs and perfmon in the field to diagnose problems, but have only used a certain amount of the wait-state information and do not delve into it as deeply; I will definitely be adding a few more wait states to the list for the future.

The next event is on February 16th and covers SQL Analysis Services – registration is already open.