
Posts Tagged ‘Internals’

How Do You Decode A Simple Entry in the Transaction Log? (Part 1)

February 3, 2010

There is always a lot of interest in reading log records within SQL Server, although once you start getting into it you soon realise that it is an undocumented quagmire and quickly decide that there are better things you can do with your time.

It has taken a non-trivial amount of time to decode, and the documented process is long, so it will be split into two parts. I should caveat this undertaking with two clear statements first:

  • There is no real reason to go poking around in the log except for ‘interest value’ – if you actually need to read log files properly then, as you are about to see, commercial applications will look very cheap in comparison to the time you would spend trying to do this. Even better – make sure you have proper backups so that you never need to try to retrieve rows from the log file!
  • There is very little external documentation to guide you in reading an entry. I am learning it by doing it, using what little documentation there is, along with some logical deduction and a good understanding of the internals from training – this is designed to act as an example of figuring a log entry out, but also to demonstrate that this is not trivial and not something you will want to do.

That said, let’s start by trying to work out a single log record.

Using the AdventureWorks example database, I have set it into fully logged mode and taken a backup to ensure that the transaction log is being used correctly. I’ve removed the foreign key constraints and triggers on the HumanResources.Employee table, since I want to be able to delete a row without tripping over the example FKs and the trigger that prevents deletion.

To get the log file values out, I am using the following snippet of SQL:

Select * from ::fn_dblog(null,null)

On initial inspection the log has only a handful of rows – altering the Diff Map, File Header etc. – so I issue the command to delete a single row from the table.

Delete from HumanResources.Employee where EmployeeId = 1

I then select the contents of the log again; I have copied some of the fields from the output:

Current LSN             Operation        Context            Transaction ID  Log Record Length  AllocUnitName
000001bd:000001d8:0002  LOP_BEGIN_XACT   LCX_NULL           0000:0000da58   96                 NULL
000001bd:000001d8:0003  LOP_DELETE_ROWS  LCX_MARK_AS_GHOST  0000:0000da58   148                HumanResources.Employee.AK_Employee_LoginID
000001bd:000001d8:0004  LOP_SET_BITS     LCX_DIFF_MAP       0000:00000000   56                 Unknown Alloc Unit
000001bd:000001d8:0005  LOP_DELETE_ROWS  LCX_MARK_AS_GHOST  0000:0000da58   124                HumanResources.Employee.AK_Employee_NationalIDNumber
000001bd:000001d8:0006  LOP_SET_BITS     LCX_DIFF_MAP       0000:00000000   56                 Unknown Alloc Unit
000001bd:000001d8:0007  LOP_DELETE_ROWS  LCX_MARK_AS_GHOST  0000:0000da58   120                HumanResources.Employee.AK_Employee_rowguid
000001bd:000001d8:0008  LOP_DELETE_ROWS  LCX_MARK_AS_GHOST  0000:0000da58   108                HumanResources.Employee.IX_Employee_ManagerID
000001bd:000001d8:0009  LOP_SET_BITS     LCX_DIFF_MAP       0000:00000000   56                 Unknown Alloc Unit
000001bd:000001d8:000a  LOP_DELETE_ROWS  LCX_MARK_AS_GHOST  0000:0000da58   288                HumanResources.Employee.PK_Employee_EmployeeID
000001bd:000001d8:000b  LOP_SET_BITS     LCX_PFS            0000:00000000   56                 HumanResources.Employee.PK_Employee_EmployeeID
000001bd:000001d8:000c  LOP_COMMIT_XACT  LCX_NULL           0000:0000da58   52                 NULL
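
For reference, pulling out just those fields rather than the full fn_dblog output is simply a column list – a quick sketch (the bracketed names are the fn_dblog column names):

Select [Current LSN], Operation, Context, [Transaction ID], [Log Record Length], AllocUnitName
from ::fn_dblog(null, null)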

That’s a lot of entries for a single delete – which is explained when you check the AllocUnitName: the delete also has to remove the entries within the indexes, and the AdventureWorks table I am working against does indeed have 5 indexes, 4 non-clustered and 1 clustered. So that is making a lot of sense. The whole operation is surrounded by a LOP_BEGIN_XACT and LOP_COMMIT_XACT, and we know from normal SQL terminology with SET XACT_ABORT ON / OFF that this is about transactional scope and whether a whole transaction rolls back if a single statement within it fails.

Let’s concentrate on the record deletion for the clustered index, with the largest log record length of 288. That is a long record for something that is, in theory, just marking the row as a ghost, which suggests the row contents are held within the log record. This is also backed up by the lengths of the other ghost record marks, which differ in size just as the index row sizes do.

What are the interesting fields available to us from the log record that we can pick up on straight away?

  • Transaction ID: All the deletions belong to the same transaction ID, so we know they are related / part of the same transactional operation.
  • Previous LSN: Each subsequent deletion in the list shows the previous one as its previous LSN, so we can see the chain of log records.
  • Page ID: We can tell which page was altered.
  • Slot ID: Which slot within the page was altered.
  • SPID: The LOP_BEGIN_XACT row shows the SPID that issued the command.
  • BeginTime: The LOP_BEGIN_XACT row shows the date and time when the command was issued.
  • Transaction Name: The LOP_BEGIN_XACT row shows the type of transaction; for this one it shows DELETE.
  • EndTime: The LOP_COMMIT_XACT row shows the end time of the transaction.

Interestingly, there is no indication as to who issued the command. There is a TransactionSID – which might turn out to be the account SID, but that will be a tangential investigation.
The final column to have a look at is the RowLog Contents; for the clustered index row deletion, RowLog Contents 0 has the larger set of data to work on.
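
To pull that column out on its own, something along these lines does the job (a sketch – ‘RowLog Contents 0’ is the fn_dblog column name, filtered here to the clustered index entry from the output above):

Select [RowLog Contents 0]
from ::fn_dblog(null, null)
where Operation = 'LOP_DELETE_ROWS'
and AllocUnitName = 'HumanResources.Employee.PK_Employee_EmployeeID'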

0x3000410001000000B90400001000000000000000406700004D004D0000000000CB8900004A15001E004AD0E1AA37C27449B4D593524773771800000000359500001000000003005D008500BD003100340034003100370038003000370061006400760065006E0074007500720065002D0077006F0072006B0073005C006700750079003100500072006F00640075006300740069006F006E00200054006500630068006E0069006300690061006E0020002D0020005700430036003000

If that does not make you go running for the hills, then you have a chance of decoding it.

That’s only 180 bytes, but this is the row contents from the log, not the actual log entry itself, so the fixed log fields are not within it. Let’s work on it. The structure of a data row is a good place to start, since we expect the structure of the row to be present within the log. The format is very nicely detailed in Kalen Delaney’s SQL Server Internals book, so let’s use that.

  • 2 Bytes: Status Bits
  • 2 Bytes: Offset to find the number of columns
  • X Bytes: Fixed Length Columns
  • 2 Bytes: Total Number of Columns in the data row
  • 1 Bit per column, rounded up: Nullability Bitmap
  • 2 Bytes: Number of Variable Length Columns within the data row
  • 2 Bytes per variable length column: Row Offset marking the end of each variable length column
  • X Bytes: Variable Length Columns

We can use this in Part 2 to decode the binary contents of the log record.
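
As a taster before Part 2, here is a rough sketch of reading the first two header fields from that RowLog Contents 0 value with plain T-SQL – the variable and the arithmetic are mine for illustration; it simply picks out the two status bytes and the two-byte little-endian offset described in the list above:

declare @row varbinary(max)
set @row = 0x30004100 -- just the first four bytes of the RowLog Contents 0 value shown earlier

select SUBSTRING(@row, 1, 2) as StatusBits,
       CAST(SUBSTRING(@row, 3, 1) as int) + (CAST(SUBSTRING(@row, 4, 1) as int) * 256) as OffsetToNumberOfColumns
-- returns 0x3000 and 65, i.e. the column count sits 65 bytes in, immediately after the fixed length columns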

How Can You Tell if a Database is in Pseudo Full/Bulk Logged Mode?

December 13, 2009

I was asked on Friday, “how do you tell if a database’s logging mode is reporting bulk or full, but it is actually still in simple?” – as mentioned before, a database is not really in full / bulk logged mode until a full backup has been taken. Until that time the database is still running in simple mode, sometimes referred to as pseudo-simple. It is not easy to spot, because the properties of the database will report full / bulk as appropriate and give no indication that it is not actually logging in the way it says.

The existence of a backup of the database is not a reliable enough mechanism for this, since the database can be backed up and then moved out of full / bulk logged mode into simple and back again. This breaks the backup and transaction log chain, but the database still reports full – to make it worse there is a backup record showing in the history, giving it an air of legitimacy.

The backup records can be accessed from sys.sysdatabases and msdb.dbo.backupset; MSDN even has an example script showing how to see when a database was last backed up and by whom.

SELECT
    T1.Name as DatabaseName,
    COALESCE(Convert(varchar(12), MAX(T2.backup_finish_date), 101), 'Not Yet Taken') as LastBackUpTaken,
    COALESCE(Convert(varchar(12), MAX(T2.user_name), 101), 'NA') as UserName
FROM sys.sysdatabases T1
LEFT OUTER JOIN msdb.dbo.backupset T2 ON T2.database_name = T1.name
GROUP BY T1.Name
ORDER BY T1.Name

To play around with the scripts you probably want a test database:

CREATE DATABASE [LogModeTest] ON  PRIMARY
( NAME = N'LogModeTest', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\LogModeTest.mdf' , SIZE = 3072KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
 LOG ON
( NAME = N'LogModeTest_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\LogModeTest_log.ldf' , SIZE = 1024KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
 COLLATE Latin1_General_CI_AI

With a minor alteration to the MSDN script you can get the backup history for this database:

SELECT
    T1.Name as DatabaseName,
    COALESCE(Convert(varchar(12), MAX(T2.backup_finish_date), 101), 'Not Yet Taken') as LastBackUpTaken
FROM sys.sysdatabases T1
LEFT OUTER JOIN msdb.dbo.backupset T2 ON T2.database_name = T1.name
WHERE T1.Name = 'LogModeTest'
GROUP BY T1.Name

The results show the database is not yet backed up:

DatabaseName                  LastBackUpTaken 
----------------------------- ---------------
LogModeTest                   Not Yet Taken

That is easy to fix, so let’s take a backup of the database and recheck the last backup value.

BACKUP DATABASE [LogModeTest] TO  DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\LogModeTest.bak' WITH NOFORMAT, NOINIT,  NAME = N'LogModeTest-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10

DatabaseName                   LastBackUpTaken 
------------------------------ ---------------
LogModeTest                    12/13/2009

As expected the date of the backup is now set. If we alter the logging mode of the database to simple we will break the transaction log chain. To demonstrate that the backup information is an unreliable source, let’s change to simple, create a table and then return to full logging.

ALTER DATABASE [LogModeTest] SET RECOVERY SIMPLE WITH NO_WAIT
CREATE TABLE foo(id int identity)
ALTER DATABASE [LogModeTest] SET RECOVERY FULL WITH NO_WAIT

If we now attempt to back up the transaction log, SQL is going to throw an error.

BACKUP LOG [LogModeTest] TO  DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\LogModeTest.bak' WITH NOFORMAT, NOINIT,  NAME = N'LogModeTest-Transaction Log  Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10 

Msg 4214, Level 16, State 1, Line 1
BACKUP LOG cannot be performed because there is no current database backup.
Msg 3013, Level 16, State 1, Line 1
BACKUP LOG is terminating abnormally.

And if we check the database backup history using the MSDN script:

DatabaseName                   LastBackUpTaken
------------------------------ ---------------
LogModeTest                    12/13/2009

So the backup history continues to show the date of the last full backup even though the transaction log chain is now broken. SQL certainly knows the database has not had a full backup since swapping back into fully logged mode, so any transaction log backup would be invalid – hence the error.

There is an easier way to find out whether you are in pseudo-simple mode, without trying to perform a transaction log backup:

SELECT name, COALESCE(Convert(varchar(30),last_log_backup_lsn), 'No Full Backup Taken') as BackupLSN 
FROM sys.databases
INNER JOIN sys.database_recovery_status on sys.databases.database_id = sys.database_recovery_status.database_id

Run this against your server and it lists which databases have had a backup taken (shown by the existence of a backup LSN) and which have not had a full backup that could be used in recovery. If we then back up the database and recheck the values, the test database now records an LSN, showing it has moved out of pseudo-simple and into the full / bulk logged modes.
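
That recheck is just the earlier commands again – take the full backup and re-run the LSN query, here with a WHERE clause added so only the test database is shown:

BACKUP DATABASE [LogModeTest] TO  DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\LogModeTest.bak' WITH NOFORMAT, NOINIT,  NAME = N'LogModeTest-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10

SELECT name, COALESCE(Convert(varchar(30),last_log_backup_lsn), 'No Full Backup Taken') as BackupLSN
FROM sys.databases
INNER JOIN sys.database_recovery_status on sys.databases.database_id = sys.database_recovery_status.database_id
WHERE name = 'LogModeTest'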

So that indicates whether we are in pseudo-simple mode or not, but it does not link back to the properties of the database to check what the actual logging mode is – you are primarily only interested in databases that are not in simple mode in the first place, but are running in pseudo-simple due to the lack of a relevant full database backup. We can alter the query to handle this specific situation and the result is:

SELECT name, recovery_model_desc, COALESCE(Convert(varchar(30),last_log_backup_lsn), 'No Full Backup Taken') as BackupLSN 
FROM sys.databases
INNER JOIN sys.database_recovery_status on sys.databases.database_id = sys.database_recovery_status.database_id
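-- recovery_model in sys.databases: 1 = FULL, 2 = BULK_LOGGED, 3 = SIMPLE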
WHERE sys.databases.recovery_model <> 3 AND last_log_backup_lsn is null

If you run that query against your database server and get any results then you have databases that are not running in the recovery mode they are indicating / that you think they are – which would generally not be a good thing.

Why is SQL Azure and Index Fragmentation a Bad Combination?

November 25, 2009

I’ve been thinking through and experimenting a bit more with some of the concepts in SQL Azure – specifically I was considering the impact of fragmentation on both the storage (in terms of the storage limit) and the maintenance. This is not a new issue – DBAs face fragmentation regularly and can deal with it in a variety of ways – but with SQL Azure the problem looks magnified by a lack of tools and working space. Whilst looking into this, I realised that there is an unfortunate consequence of not knowing how much data space your index is actually using.

Each table in SQL Azure has to have a clustered index if data is going to be inserted into it, and clustered indexes can suffer from fragmentation if chosen poorly. The combination of SQL Azure and time-honoured fragmentation has three consequences; fragmentation:

  • will occur and you have no way in which to measure it due to the lack of DMV support.
  • will create wasted space within your space allocation limit.
  • will reduce your performance.

You could work it out if you knew how much space you had actually used vs. the size of the data held, but we are unable to measure either of those values. If you have chosen the data compression option on the index then even those values would not give you a fragmentation ratio.

This leaves us with a situation in which you cannot know how fragmented you are, meaning either:

  • You schedule a regular index rebuild, or
  • You hope SQL Azure performs index rebuilds for you.

I’m not aware of SQL Azure doing this for you – and you do not have SQL Agent facilities either.

So this seems very wrong; the concept of SQL Azure is to take away a lot of the implementation detail and hassle from the subscriber – DR and failover are handled, etc. But there looks to be a gap through which certain items such as fragmentation are falling – I have not seen any documentation saying SQL Azure handles it (but there could be some hidden somewhere and I hope there is!) and neither are you given the right tools with which to program around it and handle it yourself.

What happens when you hit that size limit?

Msg 40544, Level 20, State 5, Line 1 The database has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions. Code: 524289 

That took a lot of time to get to (SQL Azure is not fast), but it was generated using a simple example that would also demonstrate fragmentation.

Create Table fragtest ( id uniqueidentifier primary key clustered,
padding char(3000)
) 

Very simple stuff – deliberately using a clustered key on a GUID to cause a decent level of fragmentation, and also using the fixed-width character padding field to ensure only 2 rows per page, maximising the page splits.

insert into fragtest values (newid(), replicate('a',1000))
go 200000

Because of the randomness of the newid() function, the level of fragmentation is not predictable but it will certainly occur – in my test I hit the wall at 196,403 records and failed with an out-of-space message.

Given the 2 rows per page and the number of rows, with ~0% fragmentation the data should be about ~767MB – considerably short of 1GB – so there is a significant level of fragmentation in there wasting space, about 23% of it. If you include the ~2k per page being wasted by the awkward row size then the actual raw data stored is roughly ~60% of the overall size, allowing for row overheads etc.
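
The back-of-envelope sums behind that ~767MB figure (my own check, using 2 rows per 8KB page):

select (196403 / 2) * 8 / 1024.0 as approx_data_mb -- ~98,201 pages x 8KB = ~767MB against the 1GB quota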

So there are two important points from this contrived example:

  • You can lose significant space from bad design.
  • Doing this backs you into a corner that you will not be able to get out of – this is the worst part.

How are you cornered? Well, try to work out how to get out of the situation and defrag the clustered index / free up the space. You could:

  • Attempt an index rebuild.
  • Try to rebuild it with SORT_IN_TEMPDB.
  • Drop the index.
  • Delete data.

The first three fail; SORT_IN_TEMPDB is not supported and would not have rescued the situation either, since you have no working space in which to write the newly sorted rows prior to removing the old ones. So do you really want to delete data? I don’t think we can consider that an option for now.

This all seems like a ‘rock’ and a ‘hard place’; whilst SQL Azure can support these data quantities, it seems prudent never to go close to them at all – and you are equally going to find it difficult to know whether you are close to them, since there is no way of measuring the fragmentation. The alternative is to manually rebuild indexes on a regular basis to control fragmentation, but then enough free space has to be left to allow you to rebuild your largest index without running out of space – reducing your data capacity significantly.

The corner is not entirely closed off; the way out would be to create another SQL Azure database within my account, select the data from database1.fragtest into database2.fragtest, and then drop the original table and transfer it back – not ideal, but it would work in an emergency.

I think the key is to design so that you never have to face this issue; keep your data quantities well under the SQL Azure size limits, and watch for the potential of tables becoming larger than the remaining space, preventing a re-index from occurring.

I’d be interested to know your thoughts on this one, and what other consequences of being close to the limit will come out.


The Sequence of an Index Uniquifier

October 20, 2009

During a training session today, I was asked about the structure of the uniquifier, and whether it was a straight identity column. Off the top of my head I couldn’t remember the exact structure; I considered it a 4 byte int, but was not sure whether it acted as a pure identity value or behaved in a more complex manner when incrementing, so I decided to investigate it tonight.

To start from the beginning, an index uniquifier is the term given to the field that is automatically generated by SQL Server when you create a clustered index but the index key is not specified as unique. Since each record in the table has to be uniquely identifiable, SQL will automatically assign a 4 byte field to the row to make it unique, commonly called the ‘uniquifier’. At this point I am sure English scholars will be frowning, pondering on the nature of the word and whether it qualifies as English; however, that is the term used, so we will run with it.

It is actually quite easy to see this field in action, let’s create a simple table:

CREATE TABLE dbo.unique_test
(
    firstname char(20) NOT NULL,
    surname char(20) NOT NULL
) ON [PRIMARY]
GO
CREATE CLUSTERED INDEX [ix_test] ON [dbo].[unique_test]
(
    [firstname] ASC
) ON [PRIMARY]

The clustered index is not unique, by design, so let’s start adding duplicate rows to see the effect:

insert into unique_test values ('John', 'Smith')
go 10

The table now contains 10 rows, each with the same details. This does not cause any undue concern, because each row is actually still unique – the way to show this is by using the DBCC IND and DBCC PAGE commands. I’ve cut the output down since it is so wide.

dbcc ind ('testdb','unique_test',1)
PageFID PagePID     IAMFID IAMPID      PageType
------- ----------- ------ ----------- --------
1       41          NULL   NULL        10      
1       174         1      41          1       

The output shows a data page numbered 174 for my example and the IAM page with an ID of 41. We can crack open the data page and view its contents very easily using DBCC PAGE.

dbcc traceon(3604)
dbcc page ('testdb',1,174,3)

The output is quite large, but in essence, the first record is stored with the following details: 

UNIQUIFIER = [NULL]
Slot 0 Column 1 Offset 0x4 Length 20
firstname = John
Slot 0 Column 2 Offset 0x18 Length 20
surname = Smith

 The second record:

Slot 1 Column 0 Offset 0x33 Length 4
UNIQUIFIER = 1                      
Slot 1 Column 1 Offset 0x4 Length 20
firstname = John                    
Slot 1 Column 2 Offset 0x18 Length 20
surname = Smith  

 The third record:

Slot 2 Column 0 Offset 0x33 Length 4
UNIQUIFIER = 2                      
Slot 2 Column 1 Offset 0x4 Length 20
firstname = John                    
Slot 2 Column 2 Offset 0x18 Length 20
surname = Smith                     

And so forth. The first record’s uniquifier is visible and clearly named within the data page, but set to NULL. The second copy of the same value receives a uniquifier of 1, the third copy receives a 2, etc. This count is maintained separately for each duplicated key, so inserting a new name multiple times will also receive its own counter, beginning at NULL and working upwards – 1, 2, 3 etc. So just because the uniquifier is 4 bytes, this does not limit the total number of rows in the table to ~2.1 billion, but it does logically limit the total number of duplicates of a single key to ~2.1 billion. I must confess to not having tested that limit; generating 2.1 billion rows of duplicate data is not trivial and a scrapbook calculation predicts 435 hours of processing on a virtual PC. I suspect the error message it raises when it hits the limit would be interesting.

If we remove all the rows from the table and then add 10 more, does the uniquifier reset? Easy to test, but the short answer was no; the uniquifier continued to rise, 10 through 19.

I was a bit suspicious of this, since any requirement for the uniquifier to rise / remember what existed before requires it to be stored somewhere – it has to survive a crash after all – but there is no apparent place where the current count is stored. If there were, you wouldn’t be storing just one value; you would be forced to store a value for each record key that had duplicates. This could run into thousands of separate counters being maintained per clustered key, so it just doesn’t make sense for it to be stored – it would be a very noticeable overhead.

When checking DBCC IND for the empty table, it insisted the table still had a data page, but the only contents of the data page was a single ghost record – a row that has been marked as deleted. The ghost record was for the ‘John Smith’ with the highest uniquifier from before. Was this a coincidence? The other ghost records had not hung around, so why did this one persist?

I dropped and recreated the table again, inserted 10 rows and then deleted them. Checking DBCC IND, the table still showed a HoBT IAM allocation page and a data page; the data page contained a single ghost record, the one with a uniquifier of 9 – the highest given out when the 10 duplicates were added. Even after waiting a considerable time the ghost record was not cleaned up, so it appears that it will not be deleted.

If I added another duplicate row, it picked up the next number in the sequence (10) and shortly afterwards the ghost record was removed from the page. Very convenient, and not a coincidence at all – the memory of the last uniquifier given out persists as a ghost record, even if all the duplicates for that key have been removed. What seems strange is this ghost record hanging about, persisting an entire record just to keep the duplicate count for that key, when no instances of it remain in the table.

It cannot possibly do this for every key, since the space overhead would become very noticeable again, so how does it choose what to persist – the last entry? Unfortunately it doesn’t appear that simple at all; after a number of tests it appeared to only be interested in keeping the ghost entry for the record that had the highest key value – so, alphabetically, the one closest to ‘Z’ in my example.

Conclusion? On the evidence, the other ghost records still persist for a short time, and deleting and then adding more duplicates can see the numbering continue from where it left off; but given a short time for the ghost records to be removed, the uniquifier will restart the sequence back at NULL, 1, 2 etc. Except in the case of the highest entry from the index perspective – that ghost record decides to stick around until there is another entry using the same key, continuing the sequence, at which point the ghost record finally disappears.

I can not think of any sensible reason why it would do this, can you?

Overall, the uniquifier is a cost overhead of not having a unique index, and at a cost of 4 bytes an int identity column makes a lot of sense – for all purposes it acts the same and serves the same purpose, but in a far more visible manner – so it really does not make much sense to rely on the uniquifier provided for you; take control and create your own.
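
As a quick sketch of that alternative (the table and index names here are just for illustration), an explicit identity column added to the clustered key does visibly what the uniquifier does invisibly:

CREATE TABLE dbo.unique_test2
(
    id int IDENTITY(1,1) NOT NULL,
    firstname char(20) NOT NULL,
    surname char(20) NOT NULL
) ON [PRIMARY]
GO
CREATE UNIQUE CLUSTERED INDEX [ix_test2] ON [dbo].[unique_test2] ([firstname] ASC, [id] ASC) ON [PRIMARY]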

What is the SQL Server 2008 DateTimeOffset Internal Structure?

October 13, 2009

After decoding the DateTime2 internal structure I thought I would take a quick look at the DateTimeOffset structure, since it should not present too many difficulties, and post it up quickly – but I was surprised by the initial result of a test. It follows the same basic premise: the time portion is followed by the date portion, the date is a day count from the epoch time of 0001/01/01, and the time is the number of intervals since midnight, where the interval is defined by the accuracy.

I was expecting the offset value itself to occupy the additional 2 bytes quoted, and not to affect the other values. Bad assumption – as soon as I cracked open a few examples I could immediately see that setting the offset also alters the underlying time / date component values.

Using the same comparison methods as before the underlying time value is clearly being adjusted:

'0001/01/01 00:00:00 -0:00' => 0x0700000000000000000000
'0001/01/01 00:00:00 -12:00' => 0x0700E034956400000030FD

So, even though an offset is being stored, the underlying time is also being altered to match, and the internal storage is using UTC as the reference point. This makes sense as the most valid reference that you could use.

'0001-01-01 12:00:00.0000000 +00:00' => 0x0700E03495640000000000
'0001-01-01 00:00:00.0000000 -12:00' => 0x0700E034956400000030FD

The same time / date is generated for the two values, but the last two bytes hold the offset that was used when the value was initially stored. The underlying storage of the time is clearly identical in both, so UTC is the common ground in which they each get stored.

The final 2 bytes for the offset are pretty easy to decode, since the pattern is the same as before with a slight twist. The offset records the number of minutes of offset in hex, with the first byte of the two being the least significant – just as with the time – so you swap the two bytes round and then read each byte as normal hex.

The twist is that for positive offsets the value increments 1, 2, 3 in hex as appropriate, but for negative values it starts decrementing, with -1 represented as ‘FFFF’. I’ve split the hex output into the individual components by adding some spaces to make it easier to read (accuracy code, time value, date value, offset used).

'2001-01-01 12:00:00.0000000 +00:01' => 0x07   009A717164   75250B  0100
'2001-01-01 12:00:00.0000000 +00:00' => 0x07   00E0349564   75250B  0000
'2001-01-01 12:00:00.0000000 -00:01' => 0x07   0026F8B864   75250B  FFFF

Since the supported offsets run only from +14 hours to -14 hours, there is no risk of the two ranges overlapping. When I think about this a bit more, it is acting as a signed number, -1 being all ones in binary, etc. – so the 2 bytes at the end are a signed int of the number of minutes of offset.
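
A quick way to see that is to read the last two bytes back as a signed number – a sketch, assuming the 11-byte converted output shown above for a datetimeoffset(7):

declare @dto datetimeoffset(7)
set @dto = '0001-01-01 00:00:00.0000000 -12:00'
declare @raw varbinary(max)
set @raw = CONVERT(varbinary(max), @dto)

-- swap the final two bytes round (0x30, 0xFD becomes 0xFD30) and read them as a signed smallint
select CAST(SUBSTRING(@raw, 11, 1) + SUBSTRING(@raw, 10, 1) as smallint) as offset_minutes -- returns -720, i.e. -12:00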

There are a number of time zones in the world that are not at exact hourly intervals from UTC; some are on the half hour, such as Caracas (-4:30) or Delhi (+5:30) to name a few, whilst Kathmandu (+5:45) is even more specific. In theory the format allows offsets specified to even greater levels of granularity, although I am not sure why you would wish to use it. Do you really want an offset of +3:17? That is potentially scary to consider as a valid input.

That has made me wonder why the accuracy was set so high, when in reality 15 minute intervals would have been sufficient; with a -14 to +14 range that is 113 different values inclusive, which could be accommodated within a single byte.
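
The 113 is just counting the quarter-hour steps across the supported range:

select ((14 * 4) * 2) + 1 -- -14:00 to +14:00 in 15 minute steps = 113 values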

So why spend 2 bytes on the format when 1 was enough? Was it just to be compatible with the ISO format in some way that required it? Not sure.

What is the SQL Server 2008 DateTime2 Internal Structure?

October 11, 2009

SQL Server 2008 has a number of new date time formats, but the one I am most interested in is DateTime2. The internal format of the SQL DateTime is commonly mistaken as 2×4 byte integers, with the latter integer being milliseconds since midnight. It is in fact the number of 1/300ths of a second since midnight, which is why the accuracy of DateTime within SQL Server has historically been 3.33ms. (If you really want to see it, crack it open by converting it to binary, adding 1 and re-converting – you add 3.33ms, not 1ms.)

So DateTime2 must use a different format, and as a weekend exercise with no purpose other than understanding the internals, I thought I’d take a look. I have not seen the information in BOL or posted elsewhere as yet, so it might be of use. I am starting with DateTime2(7), the maximum accuracy structure. The code used to crack it open each time is basically as follows:

declare @dt datetime2(7)
set @dt = '2000/01/01 00:00:00'
declare @bin varbinary(max)
set @bin = CONVERT(varbinary(max), @dt)

To make my life easier, SQL conveniently outputs the values as hexadecimal numbers. The results are not what you would expect.

0x07000000000007240B

The date, which traditionally occupied the first 4 bytes, is clearly occupying the last few bytes, so the format is not going to be obvious or simple. Interestingly the returned result is 9 bytes, but the length is quoted as 8 – it returns 8 when checked with DATALENGTH, so that first byte is somewhat odd to make an appearance. It is also suspiciously the accuracy value, and a few tests with a change of accuracy show that it changes with it. So the first pseudo-byte is the accuracy indicator.

To start figuring out some more, let’s take the time back to the beginning point, which in this case is not 1900/01/01 but 0001/01/01 which when converted gives us:

'0001/01/01 00:00:00' => 0x070000000000000000

Start incrementing the day portion and there is an obvious pattern, the 6th byte changes.

'0001/01/02 00:00:00' => 0x070000000000010000
'0001/01/03 00:00:00' => 0x070000000000020000
'0001/01/31 00:00:00' => 0x0700000000001E0000

As you try the 2nd month, to check where the month is, the same byte alters, so it represents days, not specific date parts. Is it the number of days since the beginning of the year? No.

'0001/02/01 00:00:00' => 0x0700000000001F0000

If it were, there would be an issue, since 1 byte does not represent enough values; as we can see, FF occurs on the 13th of September, and then it rolls over and puts a 1 in the 7th byte position.

'0001/09/13 00:00:00' => 0x070000000000FF0000
'0001/09/14 00:00:00' => 0x070000000000000100
'0001/09/15 00:00:00' => 0x070000000000010100

It rolls over, then carries on as before. This immediately suggests the next test, to roll over the year, and the pattern continues.

'0001/12/31 00:00:00' => 0x0700000000006C0100  
'0002/01/01 00:00:00' => 0x0700000000006D0100

So the format is just counting; we see it in the example as hex, but it is a straight number count with the bytes running left-to-right from least significant to most significant. Only 2 bytes are used so far, which do not represent enough day combinations, so bring the third byte in by going past 180 years:

'0180/06/06 00:00:00' => 0x070000000000FFFF00  
'0180/06/07 00:00:00' => 0x070000000000000001

So the final byte is then brought into play, and the number of combinations becomes 16,777,215 – that seems a lot better and is certainly going to cover the range required.

'2001/01/01 00:00:00' => 0x07000000000075250B

So that is the final 3 bytes decoded – a simple pattern – and it provides the template for how the time is also stored.
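
As a check, the same arithmetic used below for the time portion also works on those three date bytes – 75 25 0B, least significant first, so 0x0B = 11, 0x25 = 37, 0x75 = 117 – and gives the number of days from 0001/01/01 to 2001/01/01:

select (11 * 256 * 256) + (37 * 256) + 117
-----------
730485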

'0001/01/01 00:00:00.0000000' => 0x070000000000000000
'0001/01/01 00:00:00.0000001' => 0x070100000000000000
'0001/01/01 00:00:00.0000255' => 0x07FF00000000000000
'0001/01/01 00:00:00.0065535' => 0x07FFFF000000000000
'0001/01/01 00:00:00.0065536' => 0x070000010000000000

So, to check whether the format is the same:

'0001/01/01 00:00:00.9999999' => 0x077F96980000000000

Decode that again and it all matches:

select (152 * 256 * 256) + (150 * 256) + 127
-----------
9999999

When we tick over into 1 second exactly, the first byte increments by 1 again, so the time portion is still represented in 100ns intervals, with the normal system of each byte counting up 1 every time the previous byte rolls over. As we reach the limit of the first 3 bytes, it rolls into the 4th and then the 5th.

'0001/01/01 00:00:01.0000000' => 0x078096980000000000
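
Decoding the first three time bytes (80 96 98) the same way confirms it is exactly 10,000,000 lots of 100ns – one second:

select (152 * 256 * 256) + (150 * 256) + 128
-----------
10000000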

So the internal format of DateTime2(7) is decoded. Not difficult, but it is an interesting choice – it is now a straight binary number, with the least significant byte on the left and the most significant on the right (for each section). Within a byte, however, to convert it you still read the hex as normal.

The first 5 bytes record how many time unit intervals have passed since midnight, and the last 3 bytes record how many days have passed since 0001/01/01.

The time unit interval is dictated by the accuracy: 100ns for DateTime2(7), 1 microsecond for DateTime2(6), etc. The way in which you interpret it does not change, but the unit you multiply the time portion by alters based on the accuracy.

You could construct 2 dates that are identical at a binary level, but due to the field meta-data on accuracy, they do not represent the same date time.

declare @dt1 datetime2(6)
set @dt1 = '0001/01/01 00:00:00.000001'
declare @dt2 datetime2(7)
set @dt2 = '0001/01/01 00:00:00.0000001'

0x060100000000000000
0x070100000000000000 

And that is perhaps why, on output, the binary value is automatically prefixed with the datetime accuracy, so that they are not entirely identical? I’m not sure, but I would be interested to find out.