Archive

Posts Tagged ‘Transaction Log’

Which User Made That Change?

January 8, 2011 4 comments

If you have spent any time tinkering about in the transaction log, you will have already come across a bit of a problem when trying to work out what was done and by whom – the ‘what’ part I have decoded in a few posts, but the ‘whom’ part is a lot harder. As far as I can tell the log only contains the SPID of the user who opened the transaction, and does not give us any indication as to who that user really was.

From an actual investigative perspective this is a bit of a painful exercise: I can see a row was deleted, but to find out who or what did the deletion I would have to start examining either the SQL Server logs or the Windows Server logs. The default behaviour of SQL Server security, though, is to log only failed login attempts, so the successful ones will not show up by default – to get those appearing you need to change your SQL Server security settings. You can read these logs from SQL Server Management Studio, or via the xp_readerrorlog / sp_readerrorlog procedures, although the textual nature of the log entries makes them difficult to combine in a set-based manner – I can read the values by eye, but machine-reading them for any purpose is a bit of a pain. There is also the issue that the logs get cycled, and the older logs could well be completely offline.
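Pulling the login events back out of the error log looks something like this – a sketch, assuming the audit level has already been changed to record successful logins:

-- Sketch: search the current error log (0) of the SQL Server log type (1)
-- for entries containing the given text
exec sp_readerrorlog 0, 1, 'Login succeeded'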

So I would prefer an easier solution: keeping a record of the logins within the database regardless of the SQL Server security settings, in a form that allows me to query it in a set-based way. To start with, we will need a table to store the information available to us during the logon process:

create table master.dbo.spidArchive (
 LoginTime  datetime2(7)
 ,SPID   integer
 ,ServerName  nvarchar(100)
 ,LoginName  nvarchar(100)
 ,LoginType  nvarchar(100)
 ,LoginSID  nvarchar(100)
 ,ClientHost nvarchar(100)
 ,IsPooled  tinyint
)

The spidArchive table here is created in the master database so that it can cover the connections to any of the databases. You can see we have access to a lot of useful information – not just who opened the connection, but also which machine they logged in from. The next step is to get SQL Server to add a row to the table every time a login occurs – from SQL Server 2005 onwards we have had access to DDL triggers as well as DML triggers, including the ability to intercept a number of non-DML events such as the logon itself.

create trigger spidLogin on all server
after logon
as
 declare @eventdata xml;
 set @eventdata = EVENTDATA();

 INSERT INTO master.dbo.spidArchive
 (
  LoginTime
  ,SPID   
  ,ServerName  
  ,LoginName  
  ,LoginType  
  ,LoginSID  
  ,ClientHost 
  ,IsPooled
 )
 VALUES 
 (
  @eventdata.value('(/EVENT_INSTANCE/PostTime)[1]','datetime2(7)')
  ,@eventdata.value('(/EVENT_INSTANCE/SPID)[1]','nvarchar(100)')
  ,@eventdata.value('(/EVENT_INSTANCE/ServerName)[1]','nvarchar(100)')
  ,@eventdata.value('(/EVENT_INSTANCE/LoginName)[1]','nvarchar(100)')
  ,@eventdata.value('(/EVENT_INSTANCE/LoginType)[1]','nvarchar(100)')
  ,@eventdata.value('(/EVENT_INSTANCE/SID)[1]','nvarchar(100)')
  ,@eventdata.value('(/EVENT_INSTANCE/ClientHost)[1]','nvarchar(100)')
  ,@eventdata.value('(/EVENT_INSTANCE/IsPooled)[1]','tinyint')
 )

During the login process, the EVENTDATA() function returns a fixed-format XML fragment from which we can extract the values we need and insert them into the spidArchive table. Now that a log is being kept of every connection established to the server, we can use it to translate from a SPID to a user, even when the user is no longer connected – as long as we know the SPID and the time, we just need to look for the closest entry in the past for that SPID, and that indicates which user was logged on at that moment. This function should go in the master database again.

CREATE FUNCTION dbo.ConvertSpidToName(@SPID integer, @Date datetime2(7)) RETURNS nvarchar(100) AS
BEGIN
 DECLARE @name nvarchar(100)
 SELECT TOP(1) @name = LoginName
 FROM master.dbo.spidArchive
 WHERE SPID = @SPID AND LoginTime <= @Date
 ORDER BY LoginTime DESC;
 RETURN @name;
END

This function just performs the logic stated above, converting the SPID and datetime into the login name for the user. Once this infrastructure is in place we can use it directly in a call to ::fn_dblog(null,null) to translate the SPID column:

select master.dbo.ConvertSpidToName(log.SPID, log.[Begin Time]) as UserName, log.* from ::fn_dblog(null,null) log

What you will notice is that for the majority of log lines there is no user name displayed – this is because the SPID is only recorded against the LOP_BEGIN_XACT entry, the beginning of the transaction. This doesn’t really present a problem: from previous experiments we know all the entries for an individual transaction share a unique Transaction ID, which we can use to group them together. It becomes pretty trivial to join back to the log, connect the transaction entries to their LOP_BEGIN_XACT record, and produce the name on every row possible.

select master.dbo.ConvertSpidToName(log2.SPID, log2.[Begin Time]) as UserName, log.*
from ::fn_dblog(null,null) log
left join ::fn_dblog(null,null) log2 on log.[Transaction ID] = log2.[Transaction ID] and log2.Operation = 'LOP_BEGIN_XACT'

So overall it is not too hard to get the log entries attributed to the accounts that generated them.

A couple of final notes / caveats:

  • If your application uses a trusted sub-system approach this technique will of course not work, since the users will be logged into the application through an internal mechanism (such as a users table) and the application service connects using its own credentials – always a good thing, since the users then have no direct access to the database. In that kind of situation this is of no value; every connection will show up as the same user / source.
  • Within my code I chose datetime2(7) to be as accurate as possible on the connection timings. You could drop to plain datetime for SQL Server 2005, but with only 1/300th of a second accuracy there is a chance on a very busy server that you could see two entries for a single SPID at the same datetime – which would pose a bit of a problem.
  • The spidArchive table cannot be allowed to grow unconstrained – I have not included anything here for clearing down the table, but it is not difficult to conceive of it being archived off, or cleaned up weekly via a SQL Agent job; a minimal sketch of that kind of clean-up follows below.
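For completeness, the weekly clean-up could be as simple as the following sketch – the 30-day retention period is an arbitrary choice of mine:

-- Sketch: purge spidArchive rows older than an arbitrary 30 day retention window
delete from master.dbo.spidArchive
where LoginTime < dateadd(day, -30, sysdatetime())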


SQL Internals Viewer

November 27, 2010 1 comment

As Captain Oates once said, ‘I am just going outside and may be some time’ – it feels like quite a while since I had the time to sit down and write something.

I had a bit of time to take a look at the SQL Internals Viewer (http://internalsviewer.codeplex.com/). It has been out for some time, but I had never downloaded it to play around with and see how useful it is as a way of learning more about the internals.

The Page Viewer is excellent: the breakdown of a page into the component parts of a row and the display of the page data is a superb aid to anyone wanting to understand how data is stored on a page. Whilst you can use DBCC PAGE to get at all this information, presenting it in such a readable form will satisfy most people easily.
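For comparison, the manual route is along these lines – the database name and page reference here are only placeholders:

-- Sketch: dump a page by hand; trace flag 3604 sends DBCC output to the client
-- DBCC PAGE arguments are database, file id, page id, print option (3 = full detail)
dbcc traceon(3604)
dbcc page('AdventureWorks', 1, 100, 3)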

The page allocation map is a nice little addition, but is really just another way of showing you which pages belong to which object, etc.

The transaction log viewer, though, was the part I was really looking forward to seeing, primarily to help me decode more transactions – but it has been a bit disappointing. The level of detail shown is very limited, and provides no real benefit over just looking at the log directly, or using the last transaction trick I have previously posted.

As you can see from the screenshot, the level of detail is pretty light for a simple transaction – no breakdown of the log record itself is provided, which is a shame. Whilst it does give you some basic information and will help some people, I think that if you are at the stage where you are taking an interest in the transaction log, you are already beyond this point.

So as an educational / learning aid it is pretty good on the page internals side – anyone wanting an easier way to visualise page structure should still grab it. I would love to see more on the log side, but at present the project appears to be in hibernation, with no changes in some considerable time, so I suspect we will not see any enhancements now.

Blank Transactions

September 5, 2010 Leave a comment

Busy time with a new addition to the household – sleep is clearly an optional parameter these days – but on to one of those oddities you might see in the transaction log. On many occasions you will see transactions in the log that have no operations: the individual transaction entry just has a LOP_BEGIN_XACT followed by a LOP_COMMIT_XACT, with no operations recorded in between.

So what causes these?
The immediate thought is a straight:

Begin Transaction
Commit Transaction

If you try that and inspect the log, you will notice that it has not added this mysterious, zero-operation transaction. So that is not the cause.
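For reference, the experiments that follow assume a trivial test table along these lines – the original definition is not shown, so treat this as a sketch:

-- Hypothetical definition of the test table used in the following snippets
create table test (name varchar(10))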

How about a rolled back transaction? You should already know that this would not be the answer, since the log records the undo (compensation) operations performed during a rollback – a rolled back transaction is anything but empty. To show that in more detail, given the following simple snippet of SQL:

begin transaction
insert into test (name) values ('a')
rollback transaction

The log then shows the transaction beginning, performing the insert, and then rolling it back by deleting the record and recording it as an aborted transaction (the operations below are listed most recent first).

LOP_ABORT_XACT
LOP_DELETE_ROWS
LOP_INSERT_ROWS
LOP_BEGIN_XACT

So neither of the obvious choices is the cause. The reason seems a bit bizarre, but centres around whether a change was attempted and then ignored as unnecessary. Set up a test table, insert that single row with a value of ‘a’, and run the following statement:

begin transaction
update test set name = 'a'
rollback transaction

Now when you inspect the log there is a blank transaction: the start and end of the transaction are recorded, but no operations are shown. The same is true if the transaction is committed rather than rolled back.
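If you want to hunt for these yourself, a rough way is to look for transactions whose only log records are the begin / commit pair – a sketch:

-- Sketch: transactions with exactly two log records,
-- typically just LOP_BEGIN_XACT and LOP_COMMIT_XACT
select [Transaction ID], count(*) as LogRecords
from ::fn_dblog(null,null)
group by [Transaction ID]
having count(*) = 2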

If the code is altered slightly so that it deliberately cannot make a modification, though, the same does not hold true:

begin transaction
update test set name = 'a' where 1 = 0
commit transaction

Clearly the code is designed to make no modifications, so it is not surprising that no entry occurs in the transaction log. To make the test a bit fairer, let’s write the code so that it might make a modification, but happens not to:

begin transaction
update test set name = 'a' where name = 'b'
commit transaction

Still no entry in the transaction log.

So the distinction in the logging of these zero-op transactions is whether there was matching data to be altered: if a record was found but the alteration turned out to be unnecessary, we get a zero-op transaction appearing in the log. It does make you wonder why.

It also means that, from an auditing perspective, the attempt to modify the data was not logged – not because it was unsuccessful, but because the data was being altered to the values it already had.

Subtle change to ::fn_dblog in SQL 2008

July 15, 2010 2 comments

Bizarrely, given that transaction log decoding is not really viable as a technique beyond curiosity, it’s quite surprising how many people visit just to understand or learn how to do it. Either a lot of data is being lost with no backups to hand – or it simply fascinates a large number of people.

In either case, subtle changes can sometimes pass you by, and old habits can make life a bit harder. One such change that I missed was an alteration to the ::fn_dblog function. I really should have spent some time investigating the new fields that have been added – but with documentation being limited to say the least, it has not been a top priority, and my habit is to fire up SQL 2005 when checking something in the transaction log – so I had not noticed them.

One of the changes is the creation of a link between the contents of sys.dm_tran_current_transaction and the transaction log itself. The contents of the DMV have not altered, but the output from ::fn_dblog now includes a new field called [Xact ID] – this field contains the same value as the transaction_id output by the DMV.
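As a quick illustration of the link, the following sketch, run inside an open transaction, pulls back the log rows for that transaction:

-- Sketch: match the new [Xact ID] column in the log output against the DMV
select log.*
from ::fn_dblog(null,null) log
where log.[Xact ID] = (select transaction_id from sys.dm_tran_current_transaction)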

Now that is significantly convenient when you’re poking around in logs trying to understand cause and effect.

So here are a couple of simple stored procedures to assist in saving and retrieving that filtered information:

CREATE PROCEDURE sp_store_transaction
AS
BEGIN
  DECLARE @dm_transaction_id bigint
  SELECT @dm_transaction_id = transaction_id FROM sys.dm_tran_current_transaction
  IF OBJECT_ID (N'tempdb.dbo.##db_last_transaction') IS NOT NULL
    DROP TABLE ##db_last_transaction
  SELECT [transaction id] INTO ##db_last_transaction
  FROM ::fn_dblog(null,null)
  WHERE [Xact ID] = @dm_transaction_id
END
GO
CREATE PROCEDURE sp_get_last_transaction
AS
BEGIN
  SELECT *
  FROM ::fn_dblog(null,null)
  WHERE [Transaction ID] = (SELECT [transaction id] FROM ##db_last_transaction)
END
GO

And that is going to make looking at cause and effect far easier – call the first procedure anywhere within your transaction to store the transaction’s id from the log, and the second to retrieve the log entries for that stored transaction id. Something like:

BEGIN TRAN
UPDATE tbltest SET firstname = 'ab'

EXEC sp_store_transaction
COMMIT TRAN

EXEC sp_get_last_transaction

If you are wondering why I stored only the transaction id that ties the records together, rather than dumping the log records out themselves: if you dumped the records before the transaction committed or rolled back, you would not see the full effect of the action – you need to retrieve all the log rows associated with that transaction after it has finished.

Immutable Primary Key vs Immutable Clustered Key

April 19, 2010 2 comments

It is often said that a primary key should be immutable, and this advice is echoed on a multitude of sites, some of which strengthen it to a ‘law’ – but we know that with databases absolutes are rare and it is very difficult to be 100% prescriptive. Yet there is rarely any mention of the clustering key being immutable alongside it, which strikes me as strange since it is just as important.

What happens within SQL Server to the row if you change the clustered key?

  • If you change the clustered key value the row must be physically moved and can result in page splits / fragmentation.
  • A change of the clustered key requires all the non-clustered indexes to be updated to reflect the clustering key choice.

And if you change the primary key?

  • A change of the primary key has to be reflected in each of the other tables that use the key as a linking mechanism.
  • The primary key must still uniquely identify the row within the table.

Clearly different issues, but why does the primary key’s immutability get so much attention and not the clustering key’s? The default behaviour of SQL Server is that the primary key becomes the clustering key, so in essence all four points apply – but you can choose a clustering key that is different from the primary key.
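For illustration, separating the two is straightforward – a sketch with made-up table and column names:

-- Sketch: a nonclustered primary key alongside a different clustered key
create table dbo.OrderDemo (
 OrderID int identity(1,1) not null,
 OrderDate datetime not null,
 constraint PK_OrderDemo primary key nonclustered (OrderID)
)
create clustered index IX_OrderDemo_OrderDate on dbo.OrderDemo (OrderDate)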

What sort of expense are we talking about if we allow the clustering key to be mutable and start altering a row’s clustered key value? To get a better understanding of what is going on under the hood, I’ll construct an example and check the transaction log.

Creating a simple table in SQL and inserting a few rows to set up the test is pretty easy:

CREATE TABLE [dbo].[MyTable](
 [MyID] [int] IDENTITY(1,1) NOT NULL,
 [FirstName] [varchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
 [SecondName] [varchar](30) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
 CONSTRAINT [PK_Table_1] PRIMARY KEY CLUSTERED
(
 [FirstName] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]

Insert a few rows and then issue a simple update statement.
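Something along these lines will do for the set-up rows – the values are arbitrary:

-- Arbitrary sample rows; MyID is an identity so rows 1, 2 and 3 are created in order
insert into dbo.MyTable (FirstName, SecondName) values ('Alice', 'Smith')
insert into dbo.MyTable (FirstName, SecondName) values ('Bob', 'Jones')
insert into dbo.MyTable (FirstName, SecondName) values ('Carol', 'Brown')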

update mytable set firstname = 'TestFirstName', Secondname = 'TestSecondName' where MyID = 2

Inspect the transaction log and it is noticeable that it does not contain a LOP_MODIFY_ROW or LOP_MODIFY_COLUMNS entry, but instead a LOP_DELETE_ROWS and a LOP_INSERT_ROWS. Rather than just modifying the data, SQL Server has removed the row and reinserted it. A few other items appear within the transaction – LOP_BEGIN_XACT and LOP_COMMIT_XACT, as you would expect, to start and commit the transaction. There is also a LOP_SET_BITS on the LCX_PFS, which is not surprising either, since we have potentially affected the free-space level of the page the data was inserted into.

That maps exactly to what we would expect from a high-level logical perspective – the row has to be moved, and since there is no LOP_MOVE_ROW operation the move is logged as a delete followed by an insert. The result is that the row is placed into the transaction log twice, as a before and an after image.
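To see the pair of entries for yourself, a quick look at the log output is enough – a sketch:

-- Sketch: list the operations logged for the update
select [Current LSN], Operation, Context, [Transaction ID], [Log Record Length]
from ::fn_dblog(null,null)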

What happens if we perform an update that does not include the clustered key?

update mytable set Secondname = 'AnotherTestSecondName' where MyID = 3

This time the log only includes three entries – the LOP_BEGIN_XACT / LOP_COMMIT_XACT pair and a single LOP_MODIFY_ROW – which is more what you would expect.

Size-wise, the transaction log record lengths for the first alteration were 96 + 148 + 56 + 160 + 52 = 512 bytes; for the second, only 96 + 144 + 52 = 292 bytes. So the clustered key alteration used more log space – and because of write-ahead logging that extra volume has to be flushed to disk – but the actual difference for a single row does not look too significant.
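Those figures come straight from the [Log Record Length] column; summing per transaction gives the same picture – a sketch:

-- Sketch: total log bytes and record count per transaction
select [Transaction ID], count(*) as LogRecords, sum([Log Record Length]) as TotalBytes
from ::fn_dblog(null,null)
group by [Transaction ID]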

Well, whilst it does not look significant, you have to remember that the row being modified was very small. As previous examples have shown, the LOP_DELETE_ROWS and LOP_INSERT_ROWS records include the entire contents of the row being removed / added, so with a larger row the entire contents would go into the log twice, compared to the simple modification. That would start to get expensive.

So altering the clustering key is clearly expensive for the transaction log in comparison to a normal update – and this example did not have any additional non-clustered indexes on the table, which would require even more entries to deal with the removal and re-insertion of the non-clustered index values.

Given the choice I would make both immutable; the primary key should not be the only one to get special treatment – design the clustering key to be immutable as well.

A Strange Case of a Very Large LOP_FORMAT_PAGE

February 28, 2010 1 comment

I have been involved in a long-running case with a client, trying to establish why an ETL process loading a couple of gig of data would result in over 5 gig of transaction log space being used.

The database was set to, and confirmed as correctly being in, the bulk-logged recovery model, with appropriate full backups and transaction log backups occurring – so how could the log be expanding so rapidly for such a simple import? One clue was that this was only occurring on a single system, and that no other system had ever witnessed the behaviour, so it had to be something environmental.

The output from the transaction log captured during a load window was sent to me, and some quick aggregation of the rows showed close to 280k LOP_FORMAT_PAGE entries – but instead of the normal log record size of 84 bytes, they were coming in at 8276 bytes each. Allowing for the log record overhead, that basically meant that for each page being allocated by the load, a log entry was added that was larger than the page itself.
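The aggregation itself is nothing special – roughly along these lines, whether run against ::fn_dblog directly or against a table the captured output has been loaded into:

-- Sketch: count and average size of each operation type in the log
select Operation, count(*) as Entries, avg([Log Record Length]) as AvgBytes
from ::fn_dblog(null,null)
group by Operation
order by count(*) desc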

I contacted Paul Randal to ask why the log entry would be allocated this large, and he identified it as the database not actually behaving as bulk logged – which made sense given the log expansion, but there was little proof. Examining an individual LOP_FORMAT_PAGE entry I could see a clear repeating pattern within the hex data – which would be consistent with a page containing rows.

One field of the data being imported was a known value for every row (the data related to a specific week number), and translating that value into hex, it was visible multiple times within the entry. Using the breakdown of a row structure, and knowing this value was field number 3, the row could be decoded and the actual values of an imported row reconstructed.

Using these values the main database was queried, and that row did indeed exist – the data has 27 decimal values within it, so the chance of a random match is incredibly small. This proved that the log was recording every single row, even though the database was in bulk-logged mode.

There are a number of things that can prevent bulk-logged operations from being minimally logged, and on examining the list the problem became clear to the client’s DBA: the main database backup was still running at the same time as the data load was scheduled. Bulk operations have to be fully logged while a backup is running, to ensure that the database can be returned to a consistent state once it has been restored.

It is the only time I have seen LOP_FORMAT_PAGE entries taking up this much space in a transaction log, and if anyone else gets stuck on a similar problem, Google should at least find something this time.

Decoding a Simple Update Statement Within the Transaction Log

February 21, 2010 3 comments

After decoding a delete statement within the transaction log, I thought I would tackle another statement – an update, although I am attempting the simple version which only updates a single value within the fixed-width fields section. Again, to assist in decoding the log entry, I have used the AdventureWorks database and know the values of the data before and after the modification, so it is easier to work out how the log entry is constructed.

As before, the database has been backed up whilst in fully logged mode and the transaction log backed up to keep it as small as possible. Once the database was ready I issued a simple statement against it:

update HumanResources.Employee
set MaritalStatus = 'M' where employeeid = 6

The employee with the ID of 6 has changed marital status – congratulations, employee 6.

The first confusion is that two LOP_MODIFY_ROW entries show up against the LCX_CLUSTERED context, for the HumanResources.Employee.PK_Employee_EmployeeID allocation unit, but only one modification has been made.

What is the second modification?

Against each LOP_MODIFY_ROW there is an [Offset in Row] column, which indicates where within the row the modification was made. The first modification is recorded at byte 24, the second at byte 57. Instead of providing a field number / name, the alteration is recorded at the byte level, so to figure out which columns have changed we must map out the row structure in terms of byte positions.

Offset  Name                   Size in Bytes  Value
------  ---------------------  -------------  ------------------------------------
0       Status Bits            2
2       Column Number Offset   2
4       EmployeeID             4              6
8       ContactID              4              1028
12      ManagerID              4              109
16      BirthDate              8              1965-04-19 00:00:00.000
24      MaritalStatus          2              S
26      Gender                 2              M
28      HireDate               8              1998-01-20 00:00:00.000
36      SalariedFlag           1              1
37      VacationHours          2              40
39      SickLeaveHours         2              40
41      CurrentFlag            1              1
41      rowguid                16             E87029AA-2CBA-4C03-B948-D83AF0313E28
57      ModifiedDate           8              2010-02-20 18:13:03.710
65      Column Count           2
67      Variable Column Count  2              3
69      Variable Offsets       6
75      NationalIDNumber       16             24756624
91      LoginID                44             adventure-works\david0
135     Title                  100            Marketing Manager

Again, because there are no nullable columns at all, there is no nullability bitmap within the structure. This mapping is specific to this row, but the offsets for the fixed portion of the row are valid for every record; it is only the variable-width columns that change per data row.

The two modifications are at bytes 24 and 57, which are clearly the MaritalStatus and the ModifiedDate.

Why is the modified date being changed? The AdventureWorks employee table has an update trigger on it, which sets the modified date column to the current date whenever any change is made to the row.
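The trigger is roughly of this shape – a simplified sketch rather than the exact AdventureWorks definition:

-- Sketch: an update trigger that stamps ModifiedDate whenever a row is changed
create trigger uEmployeeSketch on HumanResources.Employee
after update
as
begin
 update e
 set ModifiedDate = getdate()
 from HumanResources.Employee e
 inner join inserted i on e.EmployeeID = i.EmployeeID
end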

For the MaritalStatus transaction log entry, there are two values we need to know: what it was before the change, and what it is after the change.

The RowLog Contents 0 and RowLog Contents 1 fields provide exactly that information:

RowLog Contents 0 : 0x53
RowLog Contents 1 : 0x4D

select convert(nchar, 0x53) -- returns S
select convert(nchar, 0x4D) -- returns M

So they are clearly the before and after results.

Checking the second log entry – the one generated by the trigger – the row log contents are dates in hex format, which are not too difficult to deal with.

RowLog Contents 0 contains the previous modified date, although the SQL Server Management Studio results show the hex has been shortened somewhat from the normal 8 bytes for a datetime. Padding the missing zeros back in gives us:

0x 00 00 00 00 35 95 00 00 = 0x 00 00 95 35 00 00 00 00  = 2004-07-31 00:00:00.000

The same rules apply to RowLog Contents 1, which has also been cut short.

0x A5 F1 D6 00 24 9D 00 00 =  00 00 9D 24 00 D6 F1 A5 =  2010-02-21 13:02:35.217

Which is, unsurprisingly, today’s date and the time I made the modification.
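Both decoded values can be sanity-checked directly, since converting an 8-byte binary to datetime treats the first four bytes as days since 1900-01-01 and the last four as 1/300-second ticks since midnight:

select convert(datetime, 0x0000953500000000) -- 2004-07-31 00:00:00.000
select convert(datetime, 0x00009D2400D6F1A5) -- 2010-02-21 13:02:35.217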

The immediate question is: did the log contain two entries because there were two different causes of the change, or does it record one transaction log row per value modified? Since the Offset in Row value is held within the transaction log row, you could guess at one entry per change.

To check, I restored the database to the starting position and issued a statement with two modifications:

update HumanResources.Employee
set MaritalStatus = 'M',
VacationHours = 60
where employeeid = 6

The transaction log alters significantly: the ModifiedDate LOP_MODIFY_ROW entry with an offset of 57 still gets added, but the alteration of the two values does not produce two LOP_MODIFY_ROW entries – instead there is a single LOP_MODIFY_COLUMNS. This appears to have a more complex format that is going to be tougher to crack, so it will have to wait for another day.