Archive

Posts Tagged ‘SQL Server 2008’

SQL Immersion Event – Dublin 2010

April 13, 2010

I attended the SQL Immersion event last year in Dublin and can honestly say it was the best training course I have ever attended. The level of detail is phenomenal and the interaction with Paul and Kim is superb. I cannot recommend the course highly enough: anyone who is serious about SQL should make the effort to attend one of these, and I would go as far as to say fund it yourself if you have to. Prodata have not only managed to get Paul and Kim back to run it again, but have also scheduled two additional master class courses. I have a feeling these must draw more material from the SQL MCM course, which would be superb, but I will have to check whether the bank balance can handle doing them.

The Immersion event is again split into two tracks, DBA and Developer, although both sides of that fence benefit from a good in-depth understanding of the other, so I would ignore the distinction and go for the full ‘Immersion’: there is no reason a DBA shouldn’t understand indexes and index tuning in depth, or that a developer shouldn’t have a good understanding of the transaction log and the internal structures within SQL. I did the full course before and spent most evenings digging further into the day’s material, much of which has since become topics I have written about.

Early registration for the courses attracts a 15% discount, but using the promotion code SQLH you will get a 20% discount instead. On the full Immersion course that is a further ~100 euros off the price, which can’t be bad.

The Immersion event is running from the 28th June to 1st July, and registration is here.

The two additional master classes are being run the week after, and these are advertised as material that is not on the Immersion course. As mentioned, I’m not entirely sure what that is – given how much detail is in the Immersion course, it is going to have to be some very deep internals stuff.

The performance master class is being run on the 5th and 6th of July, registration is here, whilst the DR master class registration is here. Where’s that cheque book?

Query Parameterization and Wildcard Searches

April 6, 2010

Time flies when you’re busy, and it has been far too long since I last posted. To business, however: I noticed a problem with query parameterization the other day which does not make much sense at first glance. To demonstrate, I will use the AdventureWorks example database and a couple of simple queries. As with all parameterization issues, you need to know which mode the database is in, so I’ll begin by setting it to simple parameterization:

ALTER DATABASE AdventureWorks SET PARAMETERIZATION SIMPLE

And then run two queries, separately so that they are not considered a single batch.

Select * from HumanResources.Employee where loginid like '%a%'
Select * from HumanResources.Employee where loginid like '%b%'

Under simple parameterization it should not be too surprising to see that when the query cache is inspected, the queries have not been parameterized, and 2 entries exist within the cache. So what happens when the mode is changed to Forced?

ALTER DATABASE AdventureWorks SET PARAMETERIZATION FORCED

Clear down the query cache and try the two queries again, in the hope of a plan cache hit – and nothing has changed. Two query plans still show up in the cache and there was no parameterization. Perhaps it is the existence of the two wildcard characters? No – altering the wildcards makes no difference, and removing them entirely still results in a separate plan cache entry per query.
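
For reference, the inspection itself is nothing more exotic than the plan cache DMVs – a minimal sketch of the kind of check used here, with the text filter obviously down to taste:

DBCC FREEPROCCACHE  -- clears the whole plan cache: test servers only

-- run the two wildcard queries, then list what got cached
SELECT st.text AS sql_statement, cp.usecounts, cp.objtype
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%HumanResources.Employee%'
AND st.text NOT LIKE '%dm_exec_cached_plans%'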

Parameterization is not limited to dates and numbers – it works on strings without any problem – but clearly the introduction of the LIKE clause prevents the cache hit. This behaviour occurs on both SQL Server 2005 and 2008, which is a bit annoying.

So how can we get around this problem?

Well, bizarrely, by using a different syntax to mean the same thing. PatIndex works just like the LIKE clause and takes a wildcard pattern, but returns the position of the match. With a LIKE clause we are only interested in whether there is a match, whilst PatIndex gives us a character position; if the pattern is not found it returns zero, so the simple replacement is to use PatIndex and look for any value greater than zero.

Select * from HumanResources.Employee where patindex('%a%',loginid) > 0 
Select * from HumanResources.Employee where patindex('%b%',loginid) > 0

In simple mode this still produces two separate cache entries, but in forced mode you finally get a plan cache hit!

If only solving it were that simple… by using PatIndex the WHERE clause has become non-sargable. That makes no difference if you have a wildcard on either side of your expression, but if you only had a trailing wildcard then this would produce a very bad performance hit. The cost of an extra query plan in memory is unlikely to be more than the cost of the scans used to resolve the query, so faced with a few additional query plans in memory from wildcard queries, you may well be best to leave them there.
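
To illustrate the sargability point (a sketch, assuming a suitable index exists on LoginID – AdventureWorks ships with a unique index on that column): with only a trailing wildcard the LIKE form can be answered with an index seek, whereas wrapping the column in PatIndex forces the function to be evaluated for every row, i.e. a scan.

-- Sargable: a trailing wildcard allows an index seek on LoginID
Select * from HumanResources.Employee where loginid like 'adventure-works\d%'

-- Non-sargable: the function around the column forces every row to be evaluated
Select * from HumanResources.Employee where patindex('adventure-works\d%', loginid) > 0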

Locating Table Scans Within the Query Cache

March 10, 2010

Some time back, Pinal Dave published a blog entry with an example of an XML script that used XQuery to examine the query cache. The script in itself is a very useful example of using XQuery against query plans, but it doesn’t quite hit the mark as a performance tuning script, since it produces an information overload and doesn’t help locate the genuinely problematic queries.

Using it as inspiration, however, you might find the following useful when tracking down dodgy queries. A number of key items have been added:

  • The database the scan has occurred in.
  • The schema the scan has occurred in.
  • The table name the scan has been performed on.

Understandably these are very useful fields to expose, since they allow the results to be filtered to exclude tables that are of no interest because of their size (small dimension / lookup tables, for example).

WITH XMLNAMESPACES(DEFAULT N'http://schemas.microsoft.com/sqlserver/2004/07/showplan'),
CachedPlans (DatabaseName, SchemaName, ObjectName, PhysicalOperator, LogicalOperator, QueryText, QueryPlan, CacheObjectType, ObjectType)
AS
(
    SELECT
        Coalesce(
            RelOp.op.value(N'TableScan[1]/Object[1]/@Database', N'varchar(50)'),
            RelOp.op.value(N'OutputList[1]/ColumnReference[1]/@Database', N'varchar(50)'),
            RelOp.op.value(N'IndexScan[1]/Object[1]/@Database', N'varchar(50)'),
            'Unknown'
        ) as DatabaseName,
        Coalesce(
            RelOp.op.value(N'TableScan[1]/Object[1]/@Schema', N'varchar(50)'),
            RelOp.op.value(N'OutputList[1]/ColumnReference[1]/@Schema', N'varchar(50)'),
            RelOp.op.value(N'IndexScan[1]/Object[1]/@Schema', N'varchar(50)'),
            'Unknown'
        ) as SchemaName,
        Coalesce(
            RelOp.op.value(N'TableScan[1]/Object[1]/@Table', N'varchar(50)'),
            RelOp.op.value(N'OutputList[1]/ColumnReference[1]/@Table', N'varchar(50)'),
            RelOp.op.value(N'IndexScan[1]/Object[1]/@Table', N'varchar(50)'),
            'Unknown'
        ) as ObjectName,
        RelOp.op.value(N'@PhysicalOp', N'varchar(50)') as PhysicalOperator,
        RelOp.op.value(N'@LogicalOp', N'varchar(50)') as LogicalOperator,
        st.text as QueryText,
        qp.query_plan as QueryPlan,
        cp.cacheobjtype as CacheObjectType,
        cp.objtype as ObjectType
    FROM sys.dm_exec_cached_plans cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
        CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
        CROSS APPLY qp.query_plan.nodes(N'//RelOp') RelOp (op)
)
SELECT
    DatabaseName, SchemaName, ObjectName, PhysicalOperator,
    LogicalOperator, QueryText, CacheObjectType, ObjectType, QueryPlan
FROM CachedPlans
WHERE CacheObjectType = N'Compiled Plan'
    AND (PhysicalOperator = 'Clustered Index Scan'
         OR PhysicalOperator = 'Table Scan'
         OR PhysicalOperator = 'Index Scan')

The final alteration is to limit the results to only those query plans that include scans, although you could use the same approach to target hash matches or other potentially expensive operations that indicate there is a query plan / indexing opportunity worth investigating.

Using this script makes it far easier to run through the query cache, and it can easily be modified further to join to sys.dm_exec_query_stats via the plan_handle, so that you could also pull the execution count for the queries with the scans and use it to prioritize the performance tuning work.
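
As a sketch of that extension: the execution counts live in sys.dm_exec_query_stats, keyed by plan_handle, so exposing cp.plan_handle in the CTE’s column list would let you join the two. A standalone version of the idea:

-- Execution count per cached statement; join this to the CachedPlans CTE on plan_handle
-- once plan_handle has been added to the CTE's column list
SELECT qs.plan_handle, qs.execution_count, st.text AS QueryText
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.execution_count DESC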

Decoding a Simple Update Statement Within the Transaction Log

February 21, 2010

After decoding the delete statement within the transaction log, I thought I would tackle another statement – an update. I am attempting only the simple form of update, one that changes a single value within the fixed-width fields section of the row. Again, to assist in decoding the log entry, I have used the AdventureWorks database and know the values of the data before and after the modification, so it is easier to work out how the log entry is constructed.

As before, the database was backed up whilst in the full recovery model and the transaction log backed up to make it as small as possible. Once the database was ready I issued a simple statement against it:

update HumanResources.Employee
set MaritalStatus = 'M' where employeeid = 6

The employee with the ID of 6 has changed marital status – congratulations, employee ID 6.
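
The log entries discussed below were pulled with the undocumented fn_dblog function – a sketch of the sort of query involved (column names as they appear in the 2005/2008 output; strictly for poking around on test systems):

SELECT [Current LSN], Operation, Context, AllocUnitName,
[Offset in Row], [RowLog Contents 0], [RowLog Contents 1]
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_MODIFY_ROW'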

The first source of confusion is that two LOP_MODIFY_ROW entries show up against the LCX_CLUSTERED context for the HumanResources.Employee.PK_Employee_EmployeeID allocation unit, yet only one modification has been made.

What is the second modification?

Against each LOP_MODIFY_ROW there is an [Offset in Row] column, which indicates where within the row the modification has been made. The first modification is recorded at byte 24, the second at byte 57. Instead of a field number or name, the alteration is recorded at the byte level, so to figure out which columns have changed we must map out the structure of the row in terms of byte positions.

Offset Name Size in Bytes Value
0 Status Bits 2  
2 Column Number Offset 2  
4 EmployeeID 4 6
8 ContactID 4 1028
12 ManagerID 4 109
16 BirthDate 8 1965-04-19 00:00:00.000
24 MaritalStatus 2 S
26 Gender 2 M
28 HireDate 8 1998-01-20 00:00:00.000
36 SalariedFlag 1 1
37 VacationHours 2 40
39 SickLeaveHours 2 40
41 CurrentFlag 1 1
41 rowguid 16 E87029AA-2CBA-4C03-B948-D83AF0313E28
57 ModifiedDate 8 2010-02-20 18:13:03.710
65 Column Count 2  
67 Variable Column Count 2 3
69 Variable Offsets 6  
75 NationalIDNumber 16 24756624
91 LoginID 44 adventure-works\david0
135 Title 100 Marketing Manager

Again, because none of the columns are nullable, there is no nullability bitmap within the structure. This mapping is specific to this row; the offsets for the fixed portion of the row would be valid for all the records – it is only the variable-width columns that change per data row.

The two modifications are at bytes 24 and 57, which are clearly the Marital Status and the Modified Date.

Why is the modified date being changed? The AdventureWorks employee table has an update trigger on it, which sets the modified date column to the current date whenever a change is made to the row.
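
The trigger is roughly of this shape – a sketch of the behaviour rather than the exact AdventureWorks definition:

-- Sketch only: an update trigger that stamps ModifiedDate on any change.
-- Direct recursion is not an issue unless RECURSIVE_TRIGGERS is ON (it is off by default).
CREATE TRIGGER HumanResources.uEmployee_Sketch ON HumanResources.Employee
AFTER UPDATE
AS
BEGIN
    UPDATE e
    SET ModifiedDate = GETDATE()
    FROM HumanResources.Employee e
    INNER JOIN inserted i ON e.EmployeeID = i.EmployeeID
END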

For the marital status transaction log entry, there are two values we need to know – what it was before the change, and what it is after the change.

The Row Log Contents 0 and Row Log Contents 1 fields clearly provide that information:

Row Log Contents 0 : 0x53
Row Log Contents 1 : 0x4D

select convert(nchar,0x53)   -- returns S
select convert(nchar,0x4D)   -- returns M

So they are clearly the before and after results.

Checking the transaction log entry generated by the trigger, the row log contents are dates in hex format, which are not too difficult to deal with.

RowLog Contents 0 contains the previous modified date in hex, although the SQL Management Studio results show the hex has been shortened somewhat from the normal 8 bytes for a date. Padding the missing zeros back in gives us:

0x 00 00 00 00 35 95 00 00 = 0x 00 00 95 35 00 00 00 00  = 2004-07-31 00:00:00.000

The same rules apply to RowLog Contents 1, which has also been cut short:

0x A5 F1 D6 00 24 9D 00 00 =  00 00 9D 24 00 D6 F1 A5 =  2010-02-21 13:02:35.217

Which is, unsurprisingly, today’s date and the time at which I made the modification.
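
A quick way to sanity-check the byte-swapped values is to convert them straight back: for a datetime, the first four bytes hold the day count since 1900-01-01 and the last four the number of 1/300-second ticks since midnight.

select convert(datetime, 0x0000953500000000)   -- 2004-07-31 00:00:00.000
select convert(datetime, 0x00009D2400D6F1A5)   -- 2010-02-21 13:02:35.217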

The immediate question is: did the log contain two entries because there were two different causes of the change, or does it record a transaction log row per value modified? Since the Offset in Row value is within the transaction log row, you could guess at one entry per change.

To check, I restored the database to the starting position and issued a statement with two modifications:

update HumanResources.Employee
set MaritalStatus = 'M',
VacationHours = 60
where employeeid = 6

The transaction log alters significantly: the modified date LOP_MODIFY_ROW entry with an offset of 57 still gets added, but the alteration of the two values does not produce two LOP_MODIFY_ROW entries; instead there is a single LOP_MODIFY_COLUMNS. This appears to have a more complex format that is going to be tougher to crack, so it will have to wait for another day.

Why Are Date Tricks in SQL 2005 a Problem in Waiting?

January 26, 2010

One of the long-standing annoyances about the date functions in SQL is that a number of them do not behave as a developer would wish. The prime example I’m going to focus on is the DateDiff function. The way it operates is very contrary to what a developer might expect or want: the function counts the number of boundaries crossed for the specific unit selected, not the number of whole units between the dates. As an example:

declare @date1 datetime
declare @date2 datetime
set @date1 = '20100101 23:00:00'
set @date2 = '20100102 01:00:00'
select datediff(d, @date1, @date2)

And the result is 1. Since the unit selected was days, the boundary line is predictably at midnight, so even though the time span is only 2 hours it counts as 1 day – which is not intuitive. Now, this is all documented, so we cannot complain or grumble. If you wanted to know whether a full day had passed, you used hourly units instead and wrote the logic to deal with it.
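
For the same pair of dates, switching to hourly units gives the expected answer, which can then be compared against 24 to decide whether a whole day has really passed:

declare @date1 datetime
declare @date2 datetime
set @date1 = '20100101 23:00:00'
set @date2 = '20100102 01:00:00'
select datediff(hh, @date1, @date2)   -- 2 hour boundaries crossed
select case when datediff(hh, @date1, @date2) >= 24 then 1 else 0 end   -- 0, not a full day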

All of this leaves you with a pretty poor resolution, however: you can get the difference in hours, but minutes and seconds are not included – so you have to DateDiff on those units as well and do some maths. It really makes for a ham-fisted way of getting a duration.

So people work around the problem by converting the date to the numeric equivalent and manipulating that directly.

declare @date1 datetime
set @date1 = '20100101 12:15:30'
select convert(float,@date1)   -- 40177.5107638889

The decimal part represents the fraction of the day that has elapsed, which is not quite how the underlying binary storage works – that uses a number representing the number of 1/300ths of a second since the day started.

This format was very forgiving, though: if you wanted to add a day, instead of using DateAdd you could just add 1 to the number – very convenient.
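
For example, adding a day without DateAdd (converting back to datetime to show the result):

declare @date1 datetime
set @date1 = '20100101 12:15:30'
select convert(datetime, convert(float, @date1) + 1)   -- 2010-01-02 12:15:30, give or take float rounding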

It also makes it easy to create a pseudo-timespan by deducting one date’s numeric representation from another’s, although the code is somewhat long-winded. As a side note, make sure you convert to float and not real – real does not have sufficient accuracy for this to work.

declare @date1 datetime
declare @date2 datetime
set @date1 = '20100101 12:00:00'
set @date2 = '20100102 13:15:35'
declare @result float
set @result = convert(float,@date2) - convert(float,@date1)
declare @DurationDays float
declare @DurationTime float
declare @DurationHours float
declare @DurationMinutes float
declare @DurationSeconds float
set @DurationDays = floor(@result)
set @DurationTime = (@result - floor(@result))
set @DurationTime = @DurationTime * 86400
set @DurationHours = floor(@DurationTime / 3600)
set @DurationTime = @DurationTime - @DurationHours * 3600
set @DurationMinutes = floor(@DurationTime / 60)
set @DurationTime = @DurationTime - @DurationMinutes * 60
set @DurationSeconds = @DurationTime
select @DurationDays as Days, @DurationHours as Hours,
@DurationMinutes as Minutes, @DurationSeconds as Seconds

Days              Hours              Minutes           Seconds
----------------- ------------------ ----------------- -----------------
1                 1                  15                35.0000002188608

Bit of a hack – and was it really any shorter or better? Debatable. Whilst it can get timespan information out, when used in SQL 2008 with the new datetime2 types, the wheels fall off:

declare @date1 datetime2(7)
set @date1 = '20100101 12:00:00'
select convert(float,@date1)
Msg 529, Level 16, State 2, Line 3
Explicit conversion from data type datetime2 to float is not allowed.

And that is where the problem comes in – the new datetime2 types will not allow the date to be converted to a number, and a number of these developer tricks no longer work.

Most if not all of the tricks can be rewritten to use multiple date functions and some mathematical logic – and where the trick was embedded within a function or stored procedure, the change can be made without the calling code ever knowing. The less transparent move to the datetime2 types is where developers have embedded some of these tricks directly into ad-hoc SQL, which will fail if the type is altered. In an ideal world the code would never contain these techniques, of course, but we do not all live in that nirvana.
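
As a sketch of that kind of rewrite, the pseudo-timespan above can be built from a seconds-level DateDiff instead of the float conversion, which works equally well against datetime and datetime2 (at the cost of sub-second precision in this simple form):

declare @date1 datetime2(7)
declare @date2 datetime2(7)
set @date1 = '20100101 12:00:00'
set @date2 = '20100102 13:15:35'
declare @TotalSeconds bigint
set @TotalSeconds = datediff(second, @date1, @date2)
select @TotalSeconds / 86400 as Days,
(@TotalSeconds % 86400) / 3600 as Hours,
(@TotalSeconds % 3600) / 60 as Minutes,
@TotalSeconds % 60 as Seconds   -- 1 day, 1 hour, 15 minutes, 35 seconds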

So on the one hand datetime2 gives greater accuracy and can reduce storage, but on the other hand the tricks used in the past to deal with the inadequacies of the built-in date functions no longer work.

What I would really like is a revamp of the Date functions and the introduction of  a time span type – could be a long wait.

Case Sensitivity in the Query Plan Cache

January 17, 2010

A surprising facet of the query plan cache is that it matches ad-hoc queries not only on their text but also on the case of that text, which must match exactly. This is documented quite clearly on MSDN, although it is still surprising behaviour, and it does not change based on whether the collation of the server is case-sensitive or not.

The documentation gives a statement on case-sensitivity in the plan cache, but makes no mention of whether the behaviour changes under ‘Forced’ parameterization, which asks SQL to be more aggressive in extracting query literals to generate a plan cache hit – so I decided to have a look and see how it acted in ‘Forced’ vs ‘Simple’.

Whether the database was in ‘Simple’ or ‘Forced’, the behaviour did not change – but it turns out that the matching is not case-sensitive on keywords, just on the object names.

To show the case sensitivity behaviour I have used the AdventureWorks sample database as a testing ground. Prior to each test I cleared the procedure cache using DBCC FreeProcCache.

I then issued two very simple queries:

SELECT * from humanresources.employee
Select * from humanresources.employee

When the query cache is inspected, there are two entries – it remains case-sensitive.
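
The listings that follow come from a cache inspection along these lines – a sketch, with the text filter down to taste:

SELECT st.text AS sql_statement, qs.execution_count
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.text LIKE '%humanresources%employee%'
AND st.text NOT LIKE '%dm_exec_query_stats%'
ORDER BY st.text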

sql_statement                                 execution_count
--------------------------------------------- --------------------
SELECT * from humanresources.employee         1  
Select * from humanresources.employee         1

The query is so simple and has no parameters that I suspect the simple / forced parameterization routines do not activate.

If we add a parameter to the query, then parameterization activates and gets to work on the query string prior to the cache lookup. Both simple and forced can cope with such a simple query, so both performed the parameterization.

SELECT * FROM humanresources.employee WHERE employeeID = 1
Select * From humanresources.employee Where employeeID = 1

Inspect the query plan cache when running in forced:

sql_statement                                             execution_count
--------------------------------------------------------- --------------------
(@0 int)select * from humanresources . employee           2
where employeeID = @0

Inspect the query plan cache when running in simple:

sql_statement                                             execution_count
--------------------------------------------------------- --------------------
(@1 tinyint)SELECT * FROM [humanresources].[employee]     2
WHERE [employeeID]=@1

The results show a plan cache hit, but more importantly show up a rather obvious difference in the parameterization routines for each mode:

  • Simple changes keywords to upper case, Forced changes them to lowercase.
  • Simple places square brackets around the objects, forced does not.
  • Simple chooses to replace the literal with a tinyint, Forced uses an Int.
  • Simple starts the parameters at @1, forced starts at @0

The differences can be filed under bizarre, strange and just inconsistent, although they do both get the job done, which counts at the end of the day.

What is disappointing, then, is that the same is not true for the tables and fields named in the query. Changing the case of one of the objects prevents the cache hit again.

select * from humanresources.employee where EmployeeID = 1
Select * From humanresources.employee where employeeID = 1

Inspect the query cache (this one from forced mode):

sql_statement                                            execution_count
-------------------------------------------------------- --------------------
(@0 int)select * from humanresources . employee          1
where EmployeeID = @0
(@0 int)select * from humanresources . employee          1
where employeeID = @0

So we are back to the situation of no cache hit.

It seems very strange that the parameterization only ensures the casing of the keywords is consistent to give a better chance of a plan cache hit – if this is a case-insensitive server then it is a valid optimization to try to increase the chances of a plan cache hit.

The converse, you would think, is that it would be an inherently risky optimization on a case-sensitive database – but in fact it is an optimization that would never be needed or made. If anything, a case-sensitive database server has a better chance of generating a plan cache hit, since all the table names and field names have to exactly match the stored object names – and so the queries have a greater chance of matching each other.

It could clearly do more to try to produce a match, but I suspect the complications and edge cases, such as a database / server case-sensitive collation mismatch, account for why it might seem easier than it really would be to improve.

Dynamic Partitioning : What objects are using a Partition Schema? (SQL Tuesday #002)

January 12, 2010

As part of the look at dynamic partitioning, one of the first problems I have come across is finding which objects are currently placing data within a partition scheme – this can be tables, indexes on a table, or indexes on a view (which can also be partitioned).

This has tied in nicely with Adam Machanic’s SQL Tuesday in which we were to describe a confusing situation and possible solution.

Certainly it has been a bit confusing to get to the bottom of what should be a relatively trivial question: how do I programmatically determine which objects are using my partition scheme?

If I am going to auto-balance the partitions then I have to know which objects are using the partition scheme and will be affected by any balancing – we cannot assume it is a single object, since both the table and its non-clustered indexes will often be aligned on the same partition scheme, as could other tables and objects.

So the first place I chose to check was the system views for the partition objects, sys.partition_functions and sys.partition_schemes – with the partition schemes view being where you would expect to start.

SELECT * FROM sys.partition_schemes

 

name data_space_id type type_desc is_default function_id
psBalanceLeft 65601 PS PARTITION_SCHEME 0 65536

Unfortunately the results from the partition schemes view are spectacularly unhelpful: aside from a number of columns inherited from the data spaces system view, it only adds function_id – the ID of the partition function used by the scheme. It does at least include the name of the partition scheme, so that is definitely going to be needed at a later point.

The immediately useful-looking value is the function_id linking the scheme to the partition function, so I had a look inside the partition functions view to remind myself what it contains.

SELECT * FROM sys.partition_functions

 

name function_id type type_desc fanout boundary_values_on_right
pfBalanceLeft 65536 R RANGE 10 0

The output does not particularly lead anywhere useful – the function is most certainly not going to tell me which objects are assigned to it, since tables and indexes are assigned directly to the partition scheme, so this looks like a dead end. The only other option is to go to the data spaces system view:

SELECT ps.name, ds.*
FROM sys.partition_schemes ps
INNER JOIN sys.data_spaces ds on ps.data_space_id = ds.data_space_id

 

name name data_space_id type type_desc is_default
psBalanceLeft psBalanceLeft 65601 PS PARTITION_SCHEME 0

Not a stellar move – there are no obvious leads here.

So I can obtain the relation between the partition scheme and the storage, but that is as far as it goes. Given those two dead ends, I next considered the problem from the opposite direction – sys.partitions contains a row for each partition of every table and index in the database, which should provide another starting point.

SELECT * FROM sys.partitions

Some of the output was as follows:    

partition_id object_id index_id partition_number hobt_id rows
72057594040549376 53575229 2 1 72057594040549376 10
72057594040614912 53575229 2 2 72057594040614912 10
72057594040680448 53575229 2 3 72057594040680448 10
72057594040745984 53575229 2 4 72057594040745984 10
72057594040811520 53575229 2 5 72057594040811520 10
72057594040877056 53575229 2 6 72057594040877056 10
72057594040942592 53575229 2 7 72057594040942592 10
72057594041008128 53575229 2 8 72057594041008128 10
72057594041073664 53575229 2 9 72057594041073664 10
72057594041139200 53575229 2 10 72057594041139200 136
72057594041204736 149575571 1 1 72057594041204736 10
72057594041270272 149575571 1 2 72057594041270272 10
72057594041335808 149575571 1 3 72057594041335808 10
72057594041401344 149575571 1 4 72057594041401344 10
72057594041466880 149575571 1 5 72057594041466880 10
72057594041532416 149575571 1 6 72057594041532416 10
72057594041597952 149575571 1 7 72057594041597952 10
72057594041663488 149575571 1 8 72057594041663488 10
72057594041729024 149575571 1 9 72057594041729024 10
72057594041794560 149575571 1 10 72057594041794560 136

This definitely has my partition scheme in there somewhere, since I know I have 10 partitions and have set the row quantities up to be 10 rows for each of the first 9 partitions and 136 rows in the tenth – it is pretty visible.

I’ve also got an indexed view on the same table, which explains the duplicate set of values, and an NC index on the table, which explains the triplicate set of values I’ve not pasted in. This is in essence what I am after though: finding out which object_ids reference the partition scheme.

A couple of immediate dead ends have also appeared:    

  • The partition_id looks useful but is the unique ID of the partition record, not an ID relating to the partition scheme.
  • The hobt_id is the heap or b-tree pointer for that specific partition, so it is not going to help us – there are 10 hobt_ids per object on the scheme, all different.

It does however provide the object_id, which we know we can translate into a name very easily, and a partition_number column, which only ever exceeds 1 on a partitioned table. A throw-away query selecting only rows with a partition_number of 2 – to make sure only partitioned objects are returned – gave me the following:

SELECT o.Name, s.*
FROM sys.partitions s
INNER JOIN sys.objects o on s.object_id = o.object_id
WHERE partition_number = 2

 

Name partition_id object_id index_id partition_number
tblTestBalance 72057594039304192 53575229 1 2
foo 72057594039959552 85575343 0 2
tblTestBalance 72057594040614912 53575229 2 2
vwTestView 72057594041270272 149575571 1 2

So now I can see the partitioned objects from the other direction, but I have found no relation between the objects identified as partitioned and the partition schemas discovered earlier. 

There is also an index_id, documented as ‘the index within the object to which this partition belongs’. This is not the object_id of an index, but the index_id you would normally use within a DBCC IND command – 0 for heap, 1 for clustered index and so on – so there must be a relation to the sys.indexes view. When I thought about it, that made complete sense: the sys.indexes view is really badly named, since it does not hold a row per index, but a row per heap or index.

Not the best name in the world, but let’s move on – given we have both the object_id and the index_id, we can join on both or filter in the WHERE clause.

SELECT O.Name, p.*
FROM sys.partitions p
INNER JOIN sys.objects O on p.object_id = o.object_id
INNER JOIN sys.indexes I on O.object_id = I.object_id and p.index_id = I.index_id
WHERE partition_number = 2

This gives the same output as before, since I have not yet selected any fields from sys.indexes. Checking the column list, an immediate candidate jumped out: data_space_id. I already had an odd-looking data_space_id earlier, so can the index link to it successfully?

SELECT O.Name, ds.name
FROM sys.partitions p
INNER JOIN sys.objects O on p.object_id = o.object_id
INNER JOIN sys.indexes I on O.object_id = I.object_id and P.index_id = I.index_id
INNER JOIN sys.data_spaces ds on i.data_space_id = ds.data_space_id
WHERE partition_number = 2

Which gave the following results:   

name name
tblTestBalance psBalanceLeft
foo psBalanceLeft
tblTestBalance psBalanceLeft
vwTestView psBalanceLeft

I then rejoined in the partition schemes using the same join I did earlier:     

SELECT O.Name, ds.name
FROM sys.partitions p
INNER JOIN sys.objects O on p.object_id = o.object_id
INNER JOIN sys.indexes I on O.object_id = I.object_id and P.index_id = I.index_id
INNER JOIN sys.data_spaces ds on i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes ps on ds.data_space_id = ps.data_space_id

Running this gave 40 rows – 4 objects x 10 partitions – so it was filtered down to the partitioned objects but returned too many rows. I could have used a GROUP BY clause, but it seemed simpler to select a single partition number, and since partition number 1 will always exist, that was the obvious one to use.

I am now down all the way to the partition scheme, so it is time to select some of the more interesting columns found along the way that are applicable to the dynamic partitioning problem, the main ones being the object name and type.

The final solution I’ve arrived at to get to the tables / indexes and indexed views using a partition scheme is:     

SELECT O.Name as TableName, I.Name as IndexName, I.Type, I.type_desc as IndexType, ps.name as PartitionSchema
FROM sys.objects O
INNER JOIN sys.partitions p on P.object_id = O.object_id
INNER JOIN sys.indexes i on p.object_id = i.object_id and p.index_id = i.index_id
INNER JOIN sys.data_spaces ds on i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes ps on ds.data_space_id = ps.data_space_id
WHERE p.partition_number = 1

TableName IndexName Type_ID Type_Desc PartitionSchema
tblTestBalance PK_tblTestBalance 1 CLUSTERED psBalanceLeft
foo NULL 0 HEAP psBalanceLeft
tblTestBalance testncindex 2 NONCLUSTERED psBalanceLeft
vwTestView IxViewTest 1 CLUSTERED psBalanceLeft

This can trivially be filtered further to an individual partition scheme based on its name, but the output is giving me the information I am after – a list of which objects, and what type of object, are allocated to each partition scheme.
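
For example, limiting the output to the psBalanceLeft scheme used throughout this post is just one extra predicate on the final query:

SELECT O.Name as TableName, I.Name as IndexName, I.Type, I.type_desc as IndexType, ps.name as PartitionSchema
FROM sys.objects O
INNER JOIN sys.partitions p on P.object_id = O.object_id
INNER JOIN sys.indexes i on p.object_id = i.object_id and p.index_id = i.index_id
INNER JOIN sys.data_spaces ds on i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes ps on ds.data_space_id = ps.data_space_id
WHERE p.partition_number = 1
AND ps.name = 'psBalanceLeft'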

There should be an easier way to get to this information than joining 5 system views, but that seems to be the only way I could manage to solve the problem.
