Interval Partitioning in SQL Server 2008

Another one of those features in Oracle that we do not natively have in SQL Server is interval partitioning, where new partitions are generated automatically as data arrives, based on a defined interval. Interval partitioning is yet another form of dynamic partitioning, so the thought was: could this be achieved within SQL Server?

My initial thought was to use an INSTEAD OF trigger, which would intercept the incoming values and, where appropriate, extend the partition function. An initial look into the documentation suggests it will not be trivial – the BoL states:

"Additionally, the following Transact-SQL statements are not allowed inside the
body of a DML trigger when it is used against the table or view that is the
target of the triggering action: CREATE INDEX, ALTER INDEX, DROP INDEX, DBCC
DBREINDEX, ALTER PARTITION FUNCTION…"

The issue there being the ALTER PARTITION FUNCTION restriction – any response to an incoming piece of data that is not covered by the existing partition boundaries will need to alter the partition information. Of course there are still ways around such restrictions, but when experimenting, it turns out the BoL is not correct to list the ‘ALTER PARTITION FUNCTION’ command in the restrictions.

There are a few caveats to the following snippets of code: I am not attempting to deal with the rolling-window effect, or the more complex storage patterns of a partitioned table, which I have written about in other posts. This is purely designed to demonstrate that it can be achieved, not to provide a complete solution to the problem – the partitions created on the fly will all go to the PRIMARY filegroup, for example.

So start with a very basic partition function and scheme – the names pfTest and psTest are used by the later snippets, and everything is mapped to PRIMARY to keep the example simple:

CREATE PARTITION FUNCTION pfTest (datetime)
AS RANGE LEFT FOR VALUES ('20100104' , '20100111' , '20100118', '20100125')

CREATE PARTITION SCHEME psTest
AS PARTITION pfTest ALL TO ([PRIMARY])

And generate a table on the partition scheme:

CREATE TABLE IntervalTest (
MyID int identity(1,1) not null,   
MyField Varchar(200),
MyDate datetime
) ON psTest(MyDate)
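As a quick sanity check – this query is my addition, not from the original post – the $PARTITION function shows which partition a given date maps to under pfTest (with RANGE LEFT, each boundary value belongs to the partition on its left):

```sql
-- With boundaries 20100104 / 11 / 18 / 25 and RANGE LEFT, the table
-- has 5 partitions; dates above the last boundary land in partition 5
SELECT $PARTITION.pfTest('20100101') AS p1,  -- <= 20100104
       $PARTITION.pfTest('20100115') AS p3,  -- > 20100111, <= 20100118
       $PARTITION.pfTest('20100201') AS p5;  -- > 20100125
```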

The next step is the INSTEAD OF trigger, which has to intercept the incoming data from the inserted table and extend the partition function if required:


-- (the trigger name trgIntervalTest is not given in the original post)
CREATE TRIGGER trgIntervalTest ON IntervalTest
INSTEAD OF INSERT
AS
BEGIN
  -- get the current maximum partition value
  DECLARE @max_part_dt datetime;
  DECLARE @max_inserted_dt datetime;
  DECLARE @weeks_to_add int;

  SET @max_inserted_dt = (SELECT max(MyDate) FROM inserted);

  SET @max_part_dt = (
    SELECT max(convert(datetime, rv.value))
    FROM sys.partition_functions f
    INNER JOIN sys.partition_range_values rv ON f.function_id = rv.function_id
    WHERE f.name = 'pfTest');

  IF (@max_inserted_dt > dateadd(D, 7, @max_part_dt))
  BEGIN
    -- need to potentially add multiple partition splits, depending on
    -- how many weeks in advance the new data is.
    -- get a whole number of weeks to add, to ensure that we cover
    -- the new data
    SET @weeks_to_add = ceiling(datediff(D, @max_part_dt, @max_inserted_dt) / 7.0);

    -- loop around, splitting the partition function at each new weekly
    -- value that needs to be covered
    WHILE @weeks_to_add > 0
    BEGIN
      -- increase the maximum partition date by 7 days and split the function
      SET @max_part_dt = dateadd(D, 7, @max_part_dt);

      -- mark the next filegroup to use (everything goes to PRIMARY here)
      -- and perform the split
      ALTER PARTITION SCHEME psTest NEXT USED [PRIMARY];
      ALTER PARTITION FUNCTION pfTest() SPLIT RANGE (@max_part_dt);

      SET @weeks_to_add = @weeks_to_add - 1;
    END
  END

  -- finally insert the values
  INSERT INTO IntervalTest (MyField, MyDate)
  SELECT MyField, MyDate
  FROM inserted;
END

The code is pretty self-explanatory, but I would point out that it only covers an insert, not an update – and this is not production code, but an experiment to see whether it could be done (contrary to the BoL). To productionize this would require significant work on exception handling, performance tuning, handling multiple-filegroup partitioning – the list goes on, but all of it is achievable.
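To illustrate the sort of exception handling needed, here is a minimal sketch – this wrapping is my assumption, not part of the trigger above – guarding a split with TRY/CATCH so a failure (for example, a concurrent session having already added the same boundary) can be handled deliberately rather than aborting the whole insert:

```sql
DECLARE @max_part_dt datetime = '20100201';  -- example boundary value

BEGIN TRY
    -- mark the next filegroup and split, exactly as in the trigger loop
    ALTER PARTITION SCHEME psTest NEXT USED [PRIMARY];
    ALTER PARTITION FUNCTION pfTest() SPLIT RANGE (@max_part_dt);
END TRY
BEGIN CATCH
    -- SQL Server 2008 has no THROW, so capture and re-raise the error;
    -- a production version would inspect ERROR_NUMBER() and swallow
    -- only the benign cases
    DECLARE @err nvarchar(2048) = ERROR_MESSAGE();
    RAISERROR(@err, 16, 1);
END CATCH
```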

A little test, inserting a couple of values dated ahead of the current partition range:

insert into IntervalTest (MyField, MyDate)
select 'b', '20100505' union all
select 'c', '20100606'

And a check of the partition function values now shows 23 values instead of the original 4 – the last inserted date (2010-06-06) is 132 days beyond the old maximum boundary of 2010-01-25, so ceiling(132 / 7.0) = 19 new weekly boundaries were added:
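The check itself can be done with a query along these lines (my reconstruction – the original post does not show the query used):

```sql
-- list the boundary values currently defined on pfTest
SELECT convert(datetime, rv.value) AS boundary_value
FROM sys.partition_functions f
INNER JOIN sys.partition_range_values rv ON f.function_id = rv.function_id
WHERE f.name = 'pfTest'
ORDER BY boundary_value;
```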

2010-01-04 00:00:00.000, 2010-01-11 00:00:00.000, 2010-01-18 00:00:00.000,
2010-01-25 00:00:00.000, 2010-02-01 00:00:00.000, 2010-02-08 00:00:00.000,
2010-02-15 00:00:00.000, 2010-02-22 00:00:00.000, 2010-03-01 00:00:00.000,
2010-03-08 00:00:00.000, 2010-03-15 00:00:00.000, 2010-03-22 00:00:00.000,
2010-03-29 00:00:00.000, 2010-04-05 00:00:00.000, 2010-04-12 00:00:00.000,
2010-04-19 00:00:00.000, 2010-04-26 00:00:00.000, 2010-05-03 00:00:00.000,
2010-05-10 00:00:00.000, 2010-05-17 00:00:00.000, 2010-05-24 00:00:00.000,
2010-05-31 00:00:00.000, 2010-06-07 00:00:00.000

It has clearly created the partitions to cover the data being inserted and then performed that insertion.

So it can be done, but the constant cost of intercepting every insert and update to provide this kind of dynamic partitioning is really not ideal. Whether it could be made sufficiently efficient to work at the sort of scale partitioning tends to be used at is debatable – I have a feeling it would struggle. I would need to be lent a suitably sized server and SAN to test that one and see whether it could be made efficient enough.

  1. sundhar
    August 24, 2012 at 1:51 pm

    Hi,
    This is Sundar.

    I have a small doubt about table partitioning.

    I created table partitions; after that, for one particular condition I got the following error, although queries work for conditions other than this one. Please review the following examples and let me know why this happens.

    When I try to run the following query against the partitioned table, I get the error below:
    select distinct Extract_Date from TERRA_FORCAST_INPUT_VERSION_TBL_PARTITIONS where Extract_Date >= '2012-08-01'

    Msg 823, Level 24, State 2, Line 1
    The operating system returned error 1(Incorrect function.) to SQL Server during a read at offset 0x00000000092000 in file ‘D:\SQLServerDatabaseFiles\HUB_NOR_NEW_PARTITIONS\HUB_NORM_SEP12.NDF’. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

    But the following executes without any error, and I don't understand why the first one fails:

    select distinct Extract_Date from TERRA_FORCAST_INPUT_VERSION_TBL_PARTITIONS where Extract_Date >= '2012-07-01' and Extract_Date <= '2012-07-31'

    For your reference, I have pasted my partition function & partition scheme below.


    /****** Object: PartitionFunction [PF_FORECAST_EXTRACTDATE] Script Date: 08/24/2012 13:43:33 ******/
    CREATE PARTITION FUNCTION [PF_FORECAST_EXTRACTDATE](date) AS RANGE LEFT FOR VALUES (N'2011-09-30T00:00:00.000', N'2011-10-31T00:00:00.000', N'2011-11-30T00:00:00.000', N'2011-12-31T00:00:00.000', N'2012-01-31T00:00:00.000', N'2012-02-29T00:00:00.000', N'2012-03-31T00:00:00.000', N'2012-04-30T00:00:00.000', N'2012-05-31T00:00:00.000', N'2012-06-30T00:00:00.000', N'2012-07-31T00:00:00.000', N'2012-08-31T00:00:00.000', N'2012-09-30T00:00:00.000', N'2012-10-31T00:00:00.000', N'2012-11-30T00:00:00.000', N'2012-12-31T00:00:00.000')

    Please suggest whether something is wrong with my creation of the partition function & scheme, and how to edit/delete it.

    I would appreciate any help you can give on the above query.


    • August 24, 2012 at 2:27 pm

      Hi Sundar,

      Given the error you have pasted, it looks like the problem is not with your Transact-SQL but with the data file. The error you quoted, “Msg 823, Level 24, State 2, Line 1
      The operating system returned error 1(Incorrect function.) to SQL Server during a read at offset 0x00000000092000 in file ‘D:\SQLServerDatabaseFiles\HUB_NOR_NEW_PARTITIONS\HUB_NORM_SEP12.NDF’.”, is a pretty serious one – level 24 is the highest severity of error that SQL Server will return to you, and it will no doubt have killed your connection in the process.

      You got the error whilst querying one part of the table but not another; that would be consistent with the corruption being on a specific part of the table / index, so that every time you try to read that part of the table / index, it fails. At this point you have to assume you have corruption within your database in some form.

      The golden rule at this point is not to panic and not to make any rash decisions. Consult with your DB administrators and choose an appropriate course of action. If this is a production system, then grab the DR manual immediately and again – don’t panic and do not make any rash decisions. (I couldn’t repeat that last part enough times.)
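      For completeness – this command is my addition, not part of the original reply – the consistency check that the 823 error message asks for is run as follows (the database name is a placeholder):

      ```sql
      -- full consistency check; ALL_ERRORMSGS reports every error found,
      -- NO_INFOMSGS suppresses the informational output
      DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
      ```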


