Author Archive

Subtle change to ::fn_dblog in SQL 2008

July 15, 2010

Bizarrely, given that transaction log decoding is not really viable as a technique beyond curiosity value, it is quite surprising how many people visit just to understand or learn how to do it. Either a lot of data is being lost with no backups to fall back on, or it simply fascinates a large number of people.

Either way, subtle changes can sometimes pass you by, and old habits can make life a bit harder. One such recent change that I missed was an alteration to the ::fn_dblog function. I really should have spent some time investigating the new fields that had been added, but with the documentation being limited to say the least, it has not been a top priority, and my habit is to fire up SQL 2005 when checking something in the transaction log, so I had not noticed them.

One of the changes is the creation of a link between the contents of sys.dm_tran_current_transaction and the transaction log itself. The contents of the DMV have not altered, but the output from ::fn_dblog now includes a field called Xact ID, which contains the same value as the transaction_id output by the DMV.

That is significantly more convenient when you are poking around in the log trying to understand cause and effect.
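For example, with a transaction open you can see the link directly with something along these lines (a minimal sketch; the Xact ID column name is as it appears in the procedure below):

-- Run inside an open transaction that has already modified some data,
-- so there are log records carrying the current transaction's Xact ID.
SELECT f.[Current LSN], f.Operation, f.Context, f.[Transaction ID]
FROM ::fn_dblog(NULL, NULL) f
INNER JOIN sys.dm_tran_current_transaction t ON f.[Xact ID] = t.transaction_id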

A couple of simple stored procedures to save and return that filtered information look like this:

CREATE PROCEDURE sp_store_transaction
AS
BEGIN
  DECLARE @dm_transaction_id bigint
  SELECT @dm_transaction_id = transaction_id FROM sys.dm_tran_current_transaction
  IF OBJECT_ID (N'tempdb.dbo.##db_last_transaction') IS NOT NULL
    DROP TABLE ##db_last_transaction
  SELECT [transaction id] INTO ##db_last_transaction
  FROM ::fn_dblog(null,null)
  WHERE [Xact ID] = @dm_transaction_id
END
GO
CREATE PROCEDURE sp_get_last_transaction
AS
BEGIN
  SELECT *
  FROM ::fn_dblog(null,null)
  WHERE [Transaction ID] = (SELECT [transaction id] FROM ##db_last_transaction)
END
GO

And that is going to make looking at cause and effect far easier: call the first procedure anywhere within your transaction to store the transaction id recorded in the log, and the second afterwards to retrieve all the log entries for that stored transaction id. Something like:

BEGIN TRAN
UPDATE tbltest SET firstname = 'ab'

EXEC sp_store_transaction
COMMIT TRAN

sp_get_last_transaction

If you are wondering why I stored only the transaction id that ties the records together, rather than dumping the log records out immediately: if you dumped the records before the transaction committed or rolled back, you would not see the full effect of that action. You need to retrieve all the log rows associated with the transaction after it has finished.

SQL Server Hash Partitioning

May 31, 2010

It has been a while since the last post, primarily due to changing jobs and now spending most of my time on Oracle. It is always good to see the other side of the coin and what it has to offer, but I won't be abandoning SQL Server, that is for certain.

One of the more interesting features in Oracle, to me, is hash partitioning: the ability to partition a table across a defined number of partitions and have a hashing function decide which partition each row goes in. Why would that be handy? SQL Server partitioning is in effect range partitioning, in which you define the dividing points on the number line / alphabet. That suits partitioning based on a defined number range or date range, but does not suit partitioning on other types, such as a GUID.

The merits of such a partition could be debated, since with a decent index in place the benefits of partition elimination within the query plan can be limited. Regardless of those merits (and I am pretty sure it is not going to be performant at scale), how could SQL Server implement hash partitioning? On a side note, this could be considered semi-dynamic partitioning, in that the partition scheme can cope with additional data outside of the expected range, thanks to the hash function.

I’ve seen a few articles try and perform hash partitioning by pre-processing the insert statement, prior to insertion into the database, but what about something a bit more native?

To start with, we need to create a partition function and partition schema to support this endeavour, both are pretty easy to construct.

CREATE PARTITION FUNCTION [myPartitionFunction] (int)
AS RANGE LEFT FOR VALUES (100,200,300,400,500,600,700,800,900)
CREATE PARTITION SCHEME [myPartitionScheme] AS
PARTITION [myPartitionFunction] ALL TO ([FG1])

I've set up the partition scheme to assign all of the partitions to FG1 just for convenience; it could easily be spread across multiple filegroups, and instead of the 10 partitions these 9 boundary values create, it could be declared with up to 999 partitions to match the hash range used below.

There are a variety of hashing algorithms and functions, but given the range covered by the partition function, I have chosen to use a very simple modulo on the converted binary of the unique identifier. The only trick here is that we must create the function with schema binding, otherwise SQL will refuse to use the function later on when we persist the column and partition on it.

CREATE FUNCTION GuidHash (@guid_value uniqueidentifier) RETURNS int
WITH SCHEMABINDING AS
BEGIN
 RETURN abs(convert(bigint,convert(varbinary,@guid_value))) % 999
END

That is a pretty simple hashing function, but the point is to demonstrate it can be done, not to implement the best hashing algorithm or the most even distribution. The next step is to create the table, with the persisted computed column defined using the GuidHash function. If the function is not schema bound, an error is thrown at this stage.

CREATE TABLE MyTable(
  MyID UniqueIdentifier not null,
  SomeField Char(200),
  PartitionID as dbo.GuidHash(MyId) PERSISTED
)
ON myPartitionScheme(PartitionID)

The surprise here is that it accepts the table creation definition – since when would you expect a partitioned table's partitioning column to be a computed column?

Time to put an index on the table. Given the data is keyed off a unique identifier, it would not be unusual to place a non-clustered index on the table and use index alignment, i.e. place it on the same partition scheme.

CREATE NONCLUSTERED INDEX [ix_id] ON [dbo].[MyTable] (  
[MyID] ASC,  
[PartitionID] ASC
) ON [myPartitionScheme]([PartitionID])

Populate the table with some test data:

DECLARE @guid uniqueidentifier
SET @guid = newid()
INSERT INTO mytable (myid, somefield) VALUES (@guid, 'some text')
go 10000
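As a quick sanity check before comparing queries, the built-in $PARTITION function can be asked which partition a hashed GUID maps to – a sketch here, using the GUID that appears in the queries below:

SELECT dbo.GuidHash('D41CA3AC-06D1-4ACC-ABCA-E67A18245596') AS HashValue,
       $PARTITION.myPartitionFunction(dbo.GuidHash('D41CA3AC-06D1-4ACC-ABCA-E67A18245596')) AS PartitionNumber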

So what happens when we select a single row from our data? For convenience I looked up a value in the table and grabbed the GUID, then compared the two queries side by side:

SELECT * FROM mytable WHERE myid = 'D41CA3AC-06D1-4ACC-ABCA-E67A18245596' 
SELECT * FROM mytable WHERE (partitionid = dbo.guidhash ('D41CA3AC-06D1-4ACC-ABCA-E67A18245596') 
and myid = 'D41CA3AC-06D1-4ACC-ABCA-E67A18245596')

The comparison is interesting: in percentage terms the split was 85% to 15% for the batch, and the IO statistics read:

First Query : Scan count 10, Logical Reads 21
Second Query : Scan count 1, Logical Reads 3

So the hashing has clearly made the query faster, but that just means it was faster than the query that did not use the partition key, which should not be too surprising: partition elimination versus checking every partition should win, so all it shows is that partition elimination is occurring. How does it stack up against a normal table, i.e. have we gained anything? To test, we need to put 10k rows into an equivalent table, minus the computed column, index it and perform the same kind of select – all easy stuff, so I will only outline it below. The result of a select from the normal table?
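A sketch of that comparison setup (the table and index names here are illustrative):

CREATE TABLE MyTablePlain(
  MyID UniqueIdentifier not null,
  SomeField Char(200)
)
CREATE NONCLUSTERED INDEX [ix_plain_id] ON [dbo].[MyTablePlain] ([MyID] ASC)
GO
-- Populate with 10k rows, one random GUID per row, as before
DECLARE @guid uniqueidentifier
SET @guid = newid()
INSERT INTO MyTablePlain (myid, somefield) VALUES (@guid, 'some text')
go 10000
-- Then pick any GUID present in the table and compare the IO statistics of
-- SELECT * FROM MyTablePlain WHERE myid = '<a guid from the table>'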

Normal Table Query : Scan Count 1, Logical Reads 3

And when run side by side, the SSMS window reports a 50% split of work between the two queries within the batch – which is not surprising given the IO costs were listed as the same – so where is the catch? There is no such thing as a free lunch, and the additional cost here is the CPU to generate the PartitionID value for the hashed GUID, but as a technique to partition based on a GUID, it has some merits.

One final thing I did notice is that under simple parameterization, the GuidHash-based query does not parameterize, which would start having detrimental effects on the plan cache. Once the database was placed under forced parameterization, the query did parameterize appropriately – so you either want forced parameterization on, or to use stored procedures; I would vote for the latter.

As a technique it has some merits, but you have to remember to manually include the PartitionID predicate within each query and run the value through the hashing function – not ideal, but manageable.

Immutable Primary Key vs Immutable Clustered Key

April 19, 2010

It is often said that a primary key should be immutable, and this advice is echoed on a multitude of sites, some of which strengthen it to a 'law' – but we know that with databases absolutes are rare and it is very difficult to be 100% prescriptive. Yet there is rarely any mention of the clustering key being immutable alongside it, which strikes me as strange since it is just as important.

What happens within SQL Server to the row if you change the clustered key?

  • If you change the clustered key value the row must be physically moved and can result in page splits / fragmentation.
  • A change of the clustered key requires all the non-clustered indexes to be updated to reflect the clustering key choice.

And if you change the primary key?

  • A change of the primary key, has to be reflected in each of the other tables that use the key as a linking mechanism.
  • The primary key must still uniquely identify the row within the table.

Clearly different issues, but why does the primary key immutability get so much attention and not the clustering key? The default behaviour of SQL Server is that the primary key becomes the clustering key, so in essence all 4 points get applied, but you can choose a different primary key to the clustering key.

What sort of expense are we talking about if we allow the clustering key to not be immutable and started altering a row’s clustered key value? To get a better understanding of what is going on under the hood, I’ll construct an example and check the transaction log.

Creating a simple table in SQL and inserting a few rows to set up the test is pretty easy:

CREATE TABLE [dbo].[MyTable](
 [MyID] [int] IDENTITY(1,1) NOT NULL,
 [FirstName] [varchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
 [SecondName] [varchar](30) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
 CONSTRAINT [PK_Table_1] PRIMARY KEY CLUSTERED
(
 [FirstName] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]

Insert a few rows and then issue a simple update statement.

update mytable set firstname = 'TestFirstName', Secondname = 'TestSecondName' where MyID = 2

Inspect the transaction log and it is noticeable that the log does not contain a LOP_MODIFY_ROW or LOP_MODIFY_COLUMNS entry, but instead a LOP_DELETE_ROWS and a LOP_INSERT_ROWS. Rather than just modifying the data, SQL has removed the row and reinserted it. A few other items appear with the transaction: LOP_BEGIN_XACT and LOP_COMMIT_XACT, as you would expect, to start and commit the transaction, and a LOP_SET_BITS on the LCX_PFS, which is not surprising either, since we have potentially affected the free space level of the page the data was inserted into.
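For reference, the inspection itself is just a call to ::fn_dblog in the database holding the test table, along these lines (a sketch; the column names are as they appear in the 2005/2008 output):

SELECT [Current LSN], Operation, Context, [Transaction ID], [Log Record Length]
FROM ::fn_dblog(NULL, NULL)
ORDER BY [Current LSN]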

That maps exactly to what we expect from a high-level logical perspective: the row has to be moved and there is no LOP_MOVE_ROW operation, so the row is placed into the transaction log twice, as a before and an after image.

What happens if we perform an update that does not include the clustered key?

update mytable set Secondname = 'AnotherTestSecondName' where MyID = 3

This time the log only includes three entries: the LOP_BEGIN_XACT / LOP_COMMIT_XACT pair and a single LOP_MODIFY_ROW, which is more what you would expect.

Size-wise, the total log record length for the first alteration was 96 + 148 + 56 + 160 + 52 = 512 bytes; for the second it was only 96 + 144 + 52 = 292 bytes. So the clustered key alteration used more log space, and due to write-ahead logging that must all be hardened to disk, but the actual difference for a single row does not look too significant.

Well, whilst it does not look significant, you have to remember that the row being modified was very small. As previous examples have shown the LOP_DELETE_ROWS and LOP_INSERT_ROWS include the entire contents of the row being removed / added, so with a larger row the entire contents of the row would be added to the log twice, compared to the simple modification. That would start to get expensive.

So altering the clustering key is clearly expensive for the transaction log in comparison to a normal update, and this example did not have additional non-clustered indexes added to the table, which would also then require even more entries to deal with the removal and re-insertion of the non-clustered index values.

Given the choice I would make both immutable; the primary key should not be the only one to get special treatment – design the clustering key to be immutable as well.

SQL Immersion Event – Dublin 2010

April 13, 2010

I attended the SQL Immersion event last year in Dublin and can honestly say it was the best training course I have ever attended. The level of detail is phenomenal and the interaction with Paul and Kim is superb. I cannot recommend the course highly enough; anyone who is serious about SQL should make the effort to attend one of these – I would even go as far as to say fund it yourself if you have to. Prodata have not only managed to get Paul and Kim back to run it again, but have also scheduled 2 additional master class courses. I have a feeling these must include more material from the SQL MCM course, which would be superb, but I will have to check whether the bank balance can handle doing them.

The Immersion event is again split into two tracks, DBA and Developer, although both sides of that fence benefit from a good in-depth understanding of the other, so I would ignore the distinction and go for the full 'Immersion'. There is no reason a DBA should not understand indexes and index tuning in depth, or a developer should not have a good understanding of the transaction log and SQL Server's internal structures. I did the full course before and spent most evenings using the day's material to dig into new things, many of which have become topics I have written about here.

Early registration to the courses attracts a 15% discount, but using the promotion code SQLH you will get a 20% discount instead. On the full Immersion course that is a further ~100 euros off the price, which can’t be bad.

The Immersion event is running from the 28th June to 1st July, and registration is here.

The two additional master classes are being run the week after, and these are advertised as being material that is not on the Immersion course, but as mentioned – I’m not entirely sure what that is, and given how much detail is on the immersion course, that is going to have to be some very deep internals stuff.

The performance master class is being run on the 5th and 6th of July, registration is here, whilst the DR master class registration is here. Where’s that cheque book?

Query Parameterization and Wildcard Searches

April 6, 2010

Time flies when you're busy, and it has been far too long since I last posted. To business however: I noticed a problem with query parameterization the other day that does not make much sense at first glance. To demonstrate it, I will use the AdventureWorks example database and a couple of simple queries. As with all parameterization issues, you need to know which mode the database is in, so I'll begin by setting it to simple:

ALTER DATABASE AdventureWorks SET PARAMETERIZATION SIMPLE

And then run two queries, separately so that they are not considered a single batch.

Select * from HumanResources.Employee where loginid like '%a%'
Select * from HumanResources.Employee where loginid like '%b%'
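To see whether they were parameterized, the plan cache can be checked with the usual DMVs – a sketch, filtering on the table name to keep the output manageable:

SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%HumanResources.Employee%'
AND st.text NOT LIKE '%dm_exec_cached_plans%' -- exclude this inspection query itself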

Under simple parameterization it should not be too surprising to see that when the query cache is inspected, the queries have not been parameterized, and 2 entries exist within the cache. So what happens when the mode is changed to Forced?

ALTER DATABASE AdventureWorks SET PARAMETERIZATION FORCED

Clear down the plan cache and try the two queries again, in the hope of a plan cache hit – and nothing has changed. Two query plans still show in the cache and there was no parameterization. Perhaps it is the existence of the two wildcard characters? No – altering the wildcards makes no difference, and removing them entirely still results in each query generating a separate plan cache entry.

Parameterization is not limited to dates and numbers; it works on strings without any problem, but clearly the introduction of the LIKE clause prevents the cache hit. This behaviour occurs on both SQL Server 2005 and 2008, which is a bit annoying.

So how can we get around this problem?

Well, bizarrely, by just using a different syntax that means the same thing. PATINDEX works much like LIKE in that it takes a wildcard pattern, but it returns the position of the match. With LIKE we are only interested in whether there is a match, whilst PATINDEX gives us a character position; if the pattern is not found it returns zero, so the simple replacement is to use PATINDEX and look for any value greater than zero.

Select * from HumanResources.Employee where patindex('%a%',loginid) > 0 
Select * from HumanResources.Employee where patindex('%b%',loginid) > 0

In simple mode this still produces two separate cache entries, but in forced mode you finally get a plan cache hit!

If only solving it were that simple… by using PATINDEX the WHERE clause has become non-sargable. That makes no difference if you have a wildcard on either side of your expression, but if you only had a trailing wildcard then this would produce a very bad performance hit. The cost of the extra query plan in memory is unlikely to outweigh the cost of the scans needed to resolve the query, so faced with a few additional query plans in memory caused by wildcards, you might be best to leave them there.

Locating Table Scans Within the Query Cache

March 10, 2010

Some time back, Pinal Dave published a blog entry with an example script that used XQuery to examine the query cache. The script in itself is a very useful example of using XQuery against the query plans, but it does not quite hit the mark as a performance tuning script, since it produces an information overload and does not help locate those annoying query problems.

Using it as inspiration however, you might find this useful when tracking down dodgy queries. A number of key items have been added:

  • The database the scan has occurred in.
  • The schema the scan has occurred in.
  • The table name the scan has been performed on.

These are very useful fields to expose, since they allow the results to be filtered to exclude tables that are of no interest due to their size (small dimension / lookup tables, for example).

WITH XMLNAMESPACES(DEFAULT N'http://schemas.microsoft.com/sqlserver/2004/07/showplan'),
CachedPlans
(DatabaseName,SchemaName,ObjectName,PhysicalOperator, LogicalOperator, QueryText,QueryPlan, CacheObjectType, ObjectType)
AS
(
SELECT
Coalesce(RelOp.op.value(N'TableScan[1]/Object[1]/@Database', N'varchar(50)') , 
RelOp.op.value(N'OutputList[1]/ColumnReference[1]/@Database', N'varchar(50)') ,
RelOp.op.value(N'IndexScan[1]/Object[1]/@Database', N'varchar(50)') ,
'Unknown'
)
as DatabaseName,
Coalesce(
RelOp.op.value(N'TableScan[1]/Object[1]/@Schema', N'varchar(50)') ,
RelOp.op.value(N'OutputList[1]/ColumnReference[1]/@Schema', N'varchar(50)') ,
RelOp.op.value(N'IndexScan[1]/Object[1]/@Schema', N'varchar(50)') ,
'Unknown'
)
as SchemaName,
Coalesce(
RelOp.op.value(N'TableScan[1]/Object[1]/@Table', N'varchar(50)') ,
RelOp.op.value(N'OutputList[1]/ColumnReference[1]/@Table', N'varchar(50)') ,
RelOp.op.value(N'IndexScan[1]/Object[1]/@Table', N'varchar(50)') ,
'Unknown'
)
as ObjectName,
RelOp.op.value(N'@PhysicalOp', N'varchar(50)') as PhysicalOperator,
RelOp.op.value(N'@LogicalOp', N'varchar(50)') as LogicalOperator,
st.text as QueryText,
qp.query_plan as QueryPlan,
cp.cacheobjtype as CacheObjectType,
cp.objtype as ObjectType
FROM
sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
CROSS APPLY qp.query_plan.nodes(N'//RelOp') RelOp (op)
)
SELECT
DatabaseName,SchemaName,ObjectName,PhysicalOperator
, LogicalOperator, QueryText,CacheObjectType, ObjectType, queryplan
FROM
CachedPlans
WHERE
CacheObjectType = N'Compiled Plan'
and
(
PhysicalOperator = 'Clustered Index Scan' or PhysicalOperator = 'Table Scan' or
PhysicalOperator = 'Index Scan')

The final alteration is the limitation of the results to only those query plans that include scans, although you could use this to target hash matches or other potentially expensive operations that indicate there is a query plan / indexing opportunity to investigate.

This script makes it far easier to run through the query cache, and it can easily be modified further to join to sys.dm_exec_query_stats via the plan_handle, so you can also pull the execution count for the queries with scans and prioritize the performance tuning work accordingly – a sketch of that follows.
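As a sketch of that extension – assuming cp.plan_handle is added to the CTE's column list as PlanHandle – the final SELECT could become:

SELECT
DatabaseName, SchemaName, ObjectName, PhysicalOperator,
LogicalOperator, QueryText, qs.execution_count
FROM
CachedPlans cp
INNER JOIN sys.dm_exec_query_stats qs ON qs.plan_handle = cp.PlanHandle
WHERE
CacheObjectType = N'Compiled Plan'
AND PhysicalOperator IN ('Clustered Index Scan', 'Table Scan', 'Index Scan')
ORDER BY
qs.execution_count DESC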

A Strange Case of a Very Large LOP_FORMAT_PAGE

February 28, 2010

I have been involved in a long-running case with a client, trying to establish why an ETL process loading a couple of gigabytes of data would result in over 5 gigabytes of transaction log space being used.

The database was set to the bulk-logged recovery model, and confirmed as correctly being in that mode with appropriate full backups and transaction log backups occurring – so how could the log be expanding so rapidly for such a simple import? One clue was that this was only occurring on a single system, and no other system had ever witnessed the behaviour, so it had to be something environmental.

The transaction log output captured during a load window was sent to me, and some quick aggregation of the rows showed close to 280k LOP_FORMAT_PAGE entries – but instead of the normal log record size of 84 bytes, they were coming in at 8,276 bytes. The extra is log record overhead, but it basically meant that for each page being allocated by the load, a log record was added that was larger than the page itself.
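The aggregation itself is nothing special; run against a live system during the load window it would look something like this (a sketch):

SELECT Operation, COUNT(*) AS LogRecordCount, AVG([Log Record Length]) AS AvgRecordLength
FROM ::fn_dblog(NULL, NULL)
GROUP BY Operation
ORDER BY COUNT(*) DESC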

I contacted Paul Randal to ask why the log record would be this large and he identified the symptom as the database not actually operating in bulk-logged mode – which made sense given the log expansion, but there was little proof. Examining an individual LOP_FORMAT_PAGE entry, I could see a clear repeating pattern within the hex data, which would be consistent with a page containing rows.

One field of the data being imported had a known value for every row (the data related to a specific week number), and translating that value into hex, it was visible multiple times. Using the row format breakdown, and knowing that this value was field number 3, the row could be decoded and the actual values of an imported row reconstructed.

Using these values the main database was queried and that row did indeed exist – the row also contains 27 decimal values, so the chances of a random match are incredibly small. This proved that the log was recording every single row, even though the database was in bulk-logged mode.

There are a number of things that can prevent minimally logged operations from occurring, and on examining the list the problem became clear to the client's DBA: the main database backup was still running at the same time as the data load was scheduled. Bulk operations have to be fully logged while a backup is running, to ensure that the database can be returned to a consistent state once it has been restored.

It is the only time I have seen LOP_FORMAT_PAGE entries taking up this much space in a transaction log, and if anyone else gets stuck on a similar problem, Google should at least find something this time.

Decoding a Simple Update Statement Within the Transaction Log

February 21, 2010

After decoding a delete statement within the transaction log, I thought I would tackle another statement: an update, although I am attempting the simple version that only changes a single value within the fixed-width fields section. Again, to assist in decoding the log entry I have used the AdventureWorks database and know the values of the data before and after the modification, which makes it easier to work out how the log entry is constructed.

As before, the database has been backed up whilst in the full recovery model and the transaction log backed up to make it as small as possible. Once the database was ready, I issued a simple statement against it:

update HumanResources.Employee
set MaritalStatus = 'M' where employeeid = 6

The employee with the ID of 6 has changed marital status – congratulations, employee 6.

The first confusion is that two LOP_MODIFY_ROW entries show up against the LCX_CLUSTERED context for the HumanResources.Employee.PK_Employee_EmployeeID allocation unit, even though only one modification was made.
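The entries in question can be pulled out with a filtered ::fn_dblog call along these lines (a sketch; the column names are as they appear in the function's output):

SELECT Operation, Context, AllocUnitName, [Offset in Row],
[RowLog Contents 0], [RowLog Contents 1]
FROM ::fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_MODIFY_ROW'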

What is the second modification?

Against each LOP_MODIFY_ROW there is an [Offset in Row] column, which indicates where within the row the modification has been made. The first modification is recorded at byte 24, the second is at byte 57. Instead of providing a field number / name, the alteration is being recorded at a byte level, so to figure out which columns have changed, we must map out the structure in terms of byte positions for the row.

Offset  Name                   Size in Bytes  Value
0       Status Bits            2
2       Column Number Offset   2
4       EmployeeID             4              6
8       ContactID              4              1028
12      ManagerID              4              109
16      BirthDate              8              1965-04-19 00:00:00.000
24      MaritalStatus          2              S
26      Gender                 2              M
28      HireDate               8              1998-01-20 00:00:00.000
36      SalariedFlag           1              1
37      VacationHours          2              40
39      SickLeaveHours         2              40
41      CurrentFlag            1              1
41      rowguid                16             E87029AA-2CBA-4C03-B948-D83AF0313E28
57      ModifiedDate           8              2010-02-20 18:13:03.710
65      Column Count           2
67      Variable Column Count  2              3
69      Variable Offsets       6
75      NationalIDNumber       16             24756624
91      LoginID                44             adventure-works\david0
135     Title                  100            Marketing Manager

Again, because there are no nullable columns at all, there is no nullability bitmap within the structure. This mapping is specific to this row: the offsets for the fixed portion would be valid for all records in the table; only the variable-width columns change per data row.

The two modifications are at bytes 24 and 57, which are clearly MaritalStatus and ModifiedDate.

Why is the modified date being changed? The AdventureWorks employee table has an update trigger on it, which sets the ModifiedDate column to the current date whenever a change is made to the row.

For the MaritalStatus transaction log entry, there are two values we need to know: what it was before the change, and what it is after the change.

The RowLog Contents 0 and RowLog Contents 1 fields clearly provide that information:

Row Log Contents 0 : 0x53
Row Log Contents 1 : 0x4D
select convert(nchar,0x53) = S
select convert(nchar,0x4D) = M

So they are clearly the before and after results.

Checking the transaction log entry caused by the trigger, the RowLog contents are datetime values in hex format, which are not too difficult to deal with.

RowLog Contents 0 contains the previous modified date, although the SQL Server Management Studio results show the hex has been shortened somewhat from the normal 8 bytes for a datetime. Padding the missing zeros back in gives us:

0x 00 00 00 00 35 95 00 00 = 0x 00 00 95 35 00 00 00 00  = 2004-07-31 00:00:00.000

The same rules apply to RowLog Contents 1, which has also been cut short.

0x A5 F1 D6 00 24 9D 00 00 =  00 00 9D 24 00 D6 F1 A5 =  2010-02-21 13:02:35.217

Which is unsurprisingly today’s date and when I made the modification.
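Both conversions can be double-checked directly, using the byte-reversed values from above:

select convert(datetime, 0x0000953500000000)  -- 2004-07-31 00:00:00.000
select convert(datetime, 0x00009D2400D6F1A5)  -- 2010-02-21 13:02:35.217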

The immediate question is: did the log contain two entries because there were two different causes of the change (the update and the trigger), or does it record a log row per value modified? Since the Offset in Row value is part of the log record, you could guess at one entry per change.

To check, I restored the database to the starting position and issued a statement with two modifications:

update HumanResources.Employee
set MaritalStatus = 'M',
VacationHours = 60
where employeeid = 6

The transaction log alters significantly: the ModifiedDate LOP_MODIFY_ROW entry with an offset of 57 still gets added, but the alteration of the two values does not produce two LOP_MODIFY_ROW entries; instead there is a single LOP_MODIFY_COLUMNS. This appears to have a more complex format that is going to be tougher to crack, so it will have to wait for another day.

Prodata March Event Sign Up Available

February 7, 2010

Just a quick note to say the sign up for the March SQL Academy event run by Prodata is available here. The February event is coming shortly and I will be taking the flight over for some more Irish hospitality and in-depth discussions on SQL Server Analysis services. You can probably still sign up for the February event on the 16th, throw me a comment if you plan on going.


How Do You Decode A Simple Entry in the Transaction Log? (Part 2)

February 5, 2010

In the previous post we pulled out the raw information relating to a transaction log entry for a deletion of a single row.  In this post we will decode the binary section of the log entry for the deletion performed.

A reminder of the Row Log Contents for the Clustered Index log entry that we need to decode.

0x3000410001000000B90400001000000000000000406700004D004D0000000000CB8900004A15001E004AD0E1AA37C27449B4D593524773771800000000359500001000000003005D008500BD003100340034003100370038003000370061006400760065006E0074007500720065002D0077006F0072006B0073005C006700750079003100500072006F00640075006300740069006F006E00200054006500630068006E0069006300690061006E0020002D0020005700430036003000

So we have 189 bytes of information relating to the deletion; the question is, what is it? The most obvious answer has to be the record that has been deleted – plenty of fixed log information has already been exposed as named fields, so this generic variable-length value must be the row that was deleted.

How to decode it though? Applying the structure of a data row is a good place to start; the format is detailed very nicely in Kalen Delaney's SQL Server Internals book, so let's use that as a guide.

  • 2 Bytes: Status Bits
  • 2 Bytes: Offset to find the number of columns
  • X Bytes: Fixed-length columns
  • 2 Bytes: Total number of columns in the data row
  • 1 Bit per column, rounded up to whole bytes: Nullability bitmap
  • 2 Bytes: Number of variable-length columns within the data row
  • 2 Bytes per variable-length column: Row offset marking the end of each variable-length column
  • X Bytes: Variable-length columns

Ok, let’s start assigning the values from the log hex dump into the fields.

Status Bits (2 Bytes) : 3000
Offset to Find Number of Columns (2 Bytes) : 4100
Because of the little-endian byte ordering, the offset should be read as 0x0041, which is 65 in decimal. So there must be 61 bytes of fixed-width data, since we have already used 4 bytes.

Let’s remind ourselves of the table structure:

CREATE TABLE [HumanResources].[Employee](
 [EmployeeID] [int] IDENTITY(1,1) NOT NULL,
 [NationalIDNumber] [nvarchar](15) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
 [ContactID] [int] NOT NULL,
 [LoginID] [nvarchar](256) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
 [ManagerID] [int] NULL,
 [Title] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
 [BirthDate] [datetime] NOT NULL,
 [MaritalStatus] [nchar](1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
 [Gender] [nchar](1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
 [HireDate] [datetime] NOT NULL,
 [SalariedFlag] [dbo].[Flag] NOT NULL CONSTRAINT [DF_Employee_SalariedFlag]  DEFAULT ((1)),
 [VacationHours] [smallint] NOT NULL CONSTRAINT [DF_Employee_VacationHours]  DEFAULT ((0)),
 [SickLeaveHours] [smallint] NOT NULL CONSTRAINT [DF_Employee_SickLeaveHours]  DEFAULT ((0)),
 [CurrentFlag] [dbo].[Flag] NOT NULL CONSTRAINT [DF_Employee_CurrentFlag]  DEFAULT ((1)),
 [rowguid] [uniqueidentifier] ROWGUIDCOL  NOT NULL CONSTRAINT [DF_Employee_rowguid]  DEFAULT (newid()),
 [ModifiedDate] [datetime] NOT NULL CONSTRAINT [DF_Employee_ModifiedDate]  DEFAULT (getdate()),
 CONSTRAINT [PK_Employee_EmployeeID] PRIMARY KEY CLUSTERED

The fixed columns are as follows:

[EmployeeID] [int]  = 4
[ContactID] [int] = 4
[ManagerID] [int] = 4
[BirthDate] [datetime] = 8
[MaritalStatus] [nchar](1) = 2
[Gender] [nchar](1) = 2
[HireDate] [datetime] = 8
[SalariedFlag] [dbo].[Flag] = 1
[VacationHours] [smallint] = 2
[SickLeaveHours] [smallint] = 2
[CurrentFlag] [dbo].[Flag] = 0 - second bit.
[rowguid] [uniqueidentifier] = 16
[ModifiedDate] [datetime] = 8

AdventureWorks uses a user-defined type called Flag, which is just declared as a bit, and the two bit columns share a single byte. So that's 61 bytes of fixed-width storage – which is the value we expected.

We need to know the column order; we could assume it is the same as the table declaration, but we should double-check the actual column ordering to be certain.

select colorder, syscolumns.name
from syscolumns
inner join systypes on syscolumns.xusertype = systypes.xusertype
where id =object_id('HumanResources.Employee') and variable = 0
order by colorder

colorder name
-------- -------------
1        EmployeeID
3        ContactID
5        ManagerID
7        BirthDate
8        MaritalStatus
9        Gender
10       HireDate
11       SalariedFlag
12       VacationHours
13       SickLeaveHours
14       CurrentFlag
15       rowguid
16       ModifiedDate

So the order looks good, let’s start assigning values:

[EmployeeID] [int]  = 01000000 = 00000001 = 1
[ContactID] [int] = B9040000 = 000004B9 = 1209
[ManagerID] [int] = 10000000 = 00000010 = 16
[BirthDate] [datetime] = 0000000040670000 = 0000674000000000 = '1972-05-15 00:00:00.000'  - (Use Select convert(datetime, 0x0000674000000000) to check)
[MaritalStatus] [nchar](1) = 4D00 = 004D = 77 = Unicode for M (Use Select nchar(77) to check)
[Gender] [nchar](1) = 4D00 = 004D = 77 = M
[HireDate] [datetime] = 00000000CB890000 = 000089CB00000000 = '1996-07-31 00:00:00.000'
[SalariedFlag] [dbo].[Flag] = 4A = 01001010 in Binary, which will need further work.
[VacationHours] [smallint] = 1500 = 0015 = 21
[SickLeaveHours] [smallint] = 1E00 = 001E = 30
[CurrentFlag] [dbo].[Flag] = Contained within the First Flag Byte
[rowguid] [uniqueidentifier] = 4AD0E1AA37C27449B4D5935247737718
[ModifiedDate] [datetime] = 0000000035950000 = 0000953500000000 = '2004-07-31 00:00:00.000'
Number of Columns : 1000 = 0010 = 16 columns – which is correct.
Nullability Bitmap : 1 bit per Column, so 16/8 = 2 Bytes : 0000
Number of Variable Rows : 0300 = 0003 = 3 Variable Length Columns
Var Column End 1 : 5d00 = 005D = 93
Var Column End 2 : 8500 = 0085 = 133
Var Column End 3 : BD00 = 00BD = 189

The number of columns was at offset 65 and used 2 bytes, the nullability bitmap used 2 bytes, the variable column count used 2 bytes, and each of the three variable column offsets used a further 2 bytes, so the variable data starts at byte 65 + 12 = 77. The first variable-width column ends at 93, so it is using 16 bytes of space.

I should mention why the nullability bitmap was 2 bytes: the bitmap always carries a bit per column in the table – not a bit per nullable column – so the mix of nullable and not-null columns does not change its size; it is an all-or-nothing structure.

Raw Values : 3100 3400 3400 3100 3700 3800 3000 3700

Reverse Bytes : 0031 0034 0034 0031 0037 0038 0030 0037
Result : 14417807  (You can use Select NChar(0x0031) etc to check each value.)

Next up is 93 to 133, a gap of 40 bytes.

Raw Values    : 6100 6400 7600 6500 6E00 7400 7500 7200 6500 
2D00 7700 6F00 7200 6B00 7300 5C00 6700 7500 7900 3100
Reverse Bytes : 0061 0064 0076 0065 006E 0074 0075 0072 0065 
002D 0077 006F 0072 006B 0073 005C 0067 0075 0079 0031
Output : adventure-works\guy1
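Incidentally, a quicker way to sanity-check these variable-width decodes is to let SQL interpret the raw, unreversed bytes as nvarchar, since nchar/nvarchar data is stored little-endian anyway – a sketch for the LoginID bytes:

select convert(nvarchar(20), 0x61006400760065006E0074007500720065002D0077006F0072006B0073005C006700750079003100)
-- returns: adventure-works\guy1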

The final column should be 189 – 133 = 56 bytes long, which it is, and decoding it in the same way gives the following:

Raw Values    : 5000 7200 6F00 6400 7500 6300 7400 6900 6F00 6E00 2000 5400 6500 6300
6800 6E00 6900 6300 6900 6100 6E00 2000 2D00 2000 5700 4300 3600 3000
Reverse Bytes : 0050 0072 006F 0064 0075 0063 0074 0069 006F 006E 0020 0054 0065 0063
 0068 006E 0069 0063 0069 0061 006E 0020 002D 0020 0057 0043 0036 0030
Result : Production Technician - WC60

That is the row contents decoded, and the deleted row’s original values are now exposed.

As you can see from the complexity of decoding one part of a trivial delete operation within the log, this is not something you would wish to rely on or use as an audit trail. But hopefully it has provided some insight and added to the minimal documentation on how to approach the decoding process. I will try to decode more of the operations; insert and update shouldn't be too hard to manage, but success is not guaranteed.