Archive
SQL Server Hash Partitioning
It’s been a while since the last post, primarily due to changing jobs and now spending most of my time on Oracle. It is always good to see the other side of the coin and what it has to offer, but I won’t be abandoning SQL Server, that is for certain.
One of the more interesting features to me in Oracle is hash partitioning – the ability to split a table across a defined number of partitions, with a hashing function deciding which partition each row lands in. Why would that be handy? SQL Server partitioning is in effect range partitioning, in which you define the dividing points on the number line / alphabet – which suits partitions based on a defined number range or date range, but does not suit partitioning on other types such as a GUID.
The merits of such a partition could be debated, since with a decent index in place the benefits of partition elimination within the query plan can be limited. Regardless of those merits (and I am pretty sure it is not going to be performant at scale), how could SQL Server implement hash partitioning? On a side note, this could be considered semi-dynamic partitioning, in that the partition scheme is able to cope with data outside of the expected range, thanks to the hash function.
I’ve seen a few articles try and perform hash partitioning by pre-processing the insert statement, prior to insertion into the database, but what about something a bit more native?
To start with, we need to create a partition function and partition scheme to support this endeavour; both are pretty easy to construct.
CREATE PARTITION FUNCTION [myPartitionFunction] (int)
AS RANGE LEFT FOR VALUES (100, 200, 300, 400, 500, 600, 700, 800, 900)

CREATE PARTITION SCHEME [myPartitionScheme]
AS PARTITION [myPartitionFunction] ALL TO ([FG1])
I’ve set up the partition scheme to assign all of the partitions to FG1 just for convenience; it could easily be set to multiple filegroups, and instead of 9 partitions this could be constructed with 999 partitions.
There are a variety of hashing algorithms and functions, but given the range covered by the partition function, I have chosen to use a very simple modulo on the converted binary of the unique identifier. The only trick here is that we must create the function with schema binding, otherwise SQL will refuse to use the function later on when we persist the column and partition on it.
CREATE FUNCTION GuidHash (@guid_value uniqueidentifier) RETURNS int
WITH SCHEMABINDING
AS
BEGIN
    RETURN abs(convert(bigint, convert(varbinary, @guid_value))) % 999
END
That is a pretty simple hashing function, but the point is to demonstrate it can be done, not to implement the best hashing algorithm with the most even distribution. The next step is to create the table, with the persisted column defined using the GuidHash function. If the function is not schema bound, you get an error thrown at this stage.
CREATE TABLE MyTable(
    MyID UniqueIdentifier NOT NULL,
    SomeField Char(200),
    PartitionID AS dbo.GuidHash(MyID) PERSISTED
) ON myPartitionScheme(PartitionID)
The surprise here is that it accepts the table creation definition – since when would you expect a partitioned table’s partitioning column to be a computed column?
Time to put an index on the table. Given the data is keyed off a unique identifier, it would not be unusual to place a non-clustered index on the table and use index alignment, i.e. place it on the same partitioning scheme.
CREATE NONCLUSTERED INDEX [ix_id] ON [dbo].[MyTable]
(
    [MyID] ASC,
    [PartitionID] ASC
) ON [myPartitionScheme]([PartitionID])
Populate the table with some test data:
DECLARE @guid uniqueidentifier
SET @guid = newid()
INSERT INTO mytable (myid, somefield) VALUES (@guid, 'some text')
go 10000
So what happens when we select a single row from our data? For convenience I looked up a value in the table and grabbed the GUID, then compared the two queries side by side:
SELECT * FROM mytable WHERE myid = 'D41CA3AC-06D1-4ACC-ABCA-E67A18245596'

SELECT * FROM mytable
WHERE partitionid = dbo.guidhash('D41CA3AC-06D1-4ACC-ABCA-E67A18245596')
  AND myid = 'D41CA3AC-06D1-4ACC-ABCA-E67A18245596'
The comparison is interesting: in percentage terms it was 85% to 15% for the batch, and the IO statistics read:
First Query  : Scan count 10, Logical Reads 21
Second Query : Scan count 1, Logical Reads 3
So the hashing has clearly made the query faster – but that just means it was faster than the query which did not use the partition key, which should not be too surprising: partition elimination versus checking every partition is always going to win, so all it shows is that partition elimination is occurring. So how does it stack up against a normal table, i.e. have we actually gained anything? To test, we need to put 10k rows into an equivalent table minus the computed column, index it and perform the same kind of select – all easy stuff, and a rough sketch of the setup is shown below.
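The comparison setup I used was roughly this (a sketch – the table and index names are mine, and the GUID in the final select is one looked up from the freshly inserted data, just as before):

CREATE TABLE MyTableNormal(
    MyID UniqueIdentifier NOT NULL,
    SomeField Char(200)
)
go
CREATE NONCLUSTERED INDEX [ix_id_normal] ON [dbo].[MyTableNormal] ([MyID] ASC)
go
-- insert 10k rows, one new GUID per batch iteration
DECLARE @guid uniqueidentifier
SET @guid = newid()
INSERT INTO MyTableNormal (MyID, SomeField) VALUES (@guid, 'some text')
go 10000
-- replace the literal with a GUID looked up from the table after the insert
SELECT * FROM MyTableNormal WHERE MyID = 'D41CA3AC-06D1-4ACC-ABCA-E67A18245596'

The result of the select from the normal table: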
Normal Table Query : Scan Count 1, Logical Reads 3
And when run side by side, the SSMS window reports a 50% split of work between the two queries within the batch – which is not surprising given the IO costs were listed as the same – so where is the catch? There is no such thing as a free lunch, and the additional cost here is the CPU to generate the PartitionID value for the hashed GUID, but as a technique to partition based on a GUID, it has some merits.
One final thing that I did notice is that under simple parameterization, the GuidHash-based query does not parameterize, which would start having detrimental effects on the plan cache; once the database was placed under forced parameterization, the query did parameterize appropriately – so you either want forced parameterization on or to use stored procedures – I would vote for the latter.
As a technique it has some merits, but you have to remember to manually include the PartitionID column within each query and run the value through the hashing function – which is not ideal, but manageable, particularly if the access is wrapped up in a stored procedure along the lines sketched below.
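A minimal sketch of wrapping the lookup in a stored procedure (the procedure name is mine, not part of the original design):

CREATE PROCEDURE dbo.GetMyTableByGuid
    @myid uniqueidentifier
AS
BEGIN
    SET NOCOUNT ON
    -- include the hashed PartitionID so partition elimination can occur
    SELECT *
    FROM dbo.MyTable
    WHERE PartitionID = dbo.GuidHash(@myid)
      AND MyID = @myid
END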
Query Parameterization and Wildcard Searches
Time flies when you’re busy, and it has been far too long since I last posted. To business however: I noticed a problem with query parameterization the other day which does not make much sense at first glance. To demonstrate, I will use the AdventureWorks example database and a couple of simple queries. As with all parameterization issues, you need to make sure you know which mode the database is in, so I’ll begin by setting it to simple mode:
ALTER DATABASE AdventureWorks SET PARAMETERIZATION SIMPLE
And then run two queries, separately so that they are not considered a single batch.
Select * from HumanResources.Employee where loginid like '%a%'
Select * from HumanResources.Employee where loginid like '%b%'
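To see what landed in the cache I used a query along these lines (a sketch against the standard DMVs; the text filter is just mine, to narrow the output to the test queries):

SELECT st.text AS sql_statement, cp.usecounts AS execution_count
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE cp.objtype IN ('Adhoc', 'Prepared')
  AND st.text LIKE '%HumanResources.Employee%'
  AND st.text NOT LIKE '%dm_exec_cached_plans%'
ORDER BY st.text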
Under simple parameterization it should not be too surprising to see that when the query cache is inspected, the queries have not been parameterized, and 2 entries exist within the cache. So what happens when the mode is changed to Forced?
ALTER DATABASE AdventureWorks SET PARAMETERIZATION FORCED
Clear down the query cache and try the two queries again, in the hope of a plan cache hit – and nothing has changed. Two query plans still show in the cache and there was no parameterization. Perhaps it is the existence of the two wildcard characters? No – altering the wildcards makes no difference, and removing them entirely still results in each query generating a separate plan cache entry.
Parameterization is not limited to dates and numbers; it will work on strings without any problem, but clearly the introduction of the LIKE clause prevents the cache hit. This behaviour occurs on both SQL Server 2005 and 2008 – which is a bit annoying.
So how can we get around this problem?
Well, bizarrely, by just using a different syntax to mean the same thing. PATINDEX works just like the LIKE clause and takes a wildcard, but returns the position of the match. With a LIKE clause we are just interested in a match, whilst PATINDEX gives us a character position; if the pattern is not found it returns zero, so the simple replacement is to use PATINDEX and look for any value greater than zero.
Select * from HumanResources.Employee where patindex('%a%', loginid) > 0
Select * from HumanResources.Employee where patindex('%b%', loginid) > 0
In simple mode this still produces two separate cache entries, but in forced mode you finally get a plan cache hit!
If only solving it were that simple… by using PATINDEX the where clause has become non-sargable – which makes no difference if you have a wildcard on either side of your expression, but if you only had a trailing wildcard then this would produce a very bad performance hit. The cost of the extra query plans in memory is unlikely to be more than the cost of using scans to resolve the query, so faced with a few additional query plans in memory from wildcard searches, you might be best to leave them there.
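To illustrate the sargability point: with only a trailing wildcard the LIKE form can still seek on an index over loginid (assuming one exists), whereas the PATINDEX form wraps the column in a function and forces a scan – so the substitution is only a fair swap when you have a leading wildcard anyway. A sketch, with an illustrative pattern of my own:

-- trailing wildcard only: LIKE can use an index seek on loginid
Select * from HumanResources.Employee where loginid like 'adventure-works\a%'

-- same filter via PATINDEX: non-sargable, so the index can only be scanned
Select * from HumanResources.Employee where patindex('adventure-works\a%', loginid) > 0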
Case Sensitivity in the Query Plan Cache
A surprising facet of the query plan cache is that it matches ad-hoc queries not only on their text, but the case of the query text must exactly match as well. This is documented quite clearly on MSDN although it is a bit of a surprising behaviour. It also does not change based on whether the collation of the server is case-sensitive or not.
The documentation gives a statement on case-sensitivity in the plan cache, but no mention of whether the behaviour changes under ‘Forced’ parameterization, which asks SQL to be more aggressive in extracting query literals and generating a query cache hit – so I decided to have a look and see how it acted in ‘Forced’ vs ‘Simple’.
Whether the database was in ‘Simple’ or ‘Forced’ the behaviour did not change – but it turns out that it is not case-sensitive on keywords, just on the object names.
To show the case sensitivity behaviour I have used the AdventureWorks sample database as a testing ground. Prior to each test I cleared the procedure cache using DBCC FreeProcCache.
I then issued two very simple queries:
SELECT * from humanresources.employee
Select * from humanresources.employee
When the query cache is inspected, there are two entries – it remains case-sensitive.
sql_statement                              execution_count
-----------------------------------------  ---------------
SELECT * from humanresources.employee      1
Select * from humanresources.employee      1
The query is so simple and has no parameters that I suspect the simple / forced parameterization routines do not activate.
If we add a literal predicate to the query, then the parameterization activates and gets to work on the query string prior to the cache lookup. Both simple and forced can cope with such a simple query, so both performed the parameterization.
SELECT * FROM humanresources.employee WHERE employeeID = 1
Select * From humanresources.employee Where employeeID = 1
Inspect the query plan cache when running in forced:
sql_statement                                                           execution_count
----------------------------------------------------------------------  ---------------
(@0 int)select * from humanresources . employee where employeeID = @0   2
Inspect the query plan cache when running in simple:
sql_statement                                                                 execution_count
----------------------------------------------------------------------------  ---------------
(@1 tinyint)SELECT * FROM [humanresources].[employee] WHERE [employeeID]=@1   2
The results show a plan cache hit, but more importantly show up a rather obvious difference in the parameterization routines for each mode:
- Simple changes keywords to upper case; Forced changes them to lower case.
- Simple places square brackets around the object names; Forced does not.
- Simple types the literal as a tinyint; Forced uses an int.
- Simple starts the parameters at @1; Forced starts at @0.
The differences can be filed under bizarre, strange and just inconsistent, although they do both get the job done, which counts at the end of the day.
What is then disappointing is that the same is not true for the tables and fields named in the query. Changing the case of one of the objects prevents the caching again.
select * from humanresources.employee where EmployeeID = 1
Select * From humanresources.employee where employeeID = 1
Inspect the query cache (this one from forced mode):
sql_statement                                                           execution_count
----------------------------------------------------------------------  ---------------
(@0 int)select * from humanresources . employee where EmployeeID = @0   1
(@0 int)select * from humanresources . employee where employeeID = @0   1
So we are back to the situation of no cache hit.
It seems very strange that the parameterization only ensures the casing of the keywords is consistent to give a better chance of a query plan cache hit – on a case-insensitive server it would be a valid optimization to normalise the object names as well, to further increase the chances of a plan cache hit.
The converse, you would think, is that it would be an inherently risky optimization on a case-sensitive database – but in fact it is an optimization that would never be needed or made: if anything, a case-sensitive server has a better chance of a query plan cache hit, since all the table names and field names have to exactly match the stored object names, and so the queries have a greater chance of matching each other.
It could clearly do more to try to give a match, but I suspect the complications and edge cases, such as a database / server collation case-sensitivity mismatch, account for why this is harder to improve than it might first appear.
How Can You Spot the Procedure Cache Being Flooded?
This comes from a question I had a couple of days ago – the SQL Server: Buffer Manager: Page Life Expectancy performance counter indicates the current lifetime of a page within memory. As data pages and query object pages are added to the buffer pool they will of course build up, and SQL will come under memory pressure as a result. The normal advice is that this figure should be above 300 seconds, indicating that a page stays in memory for at least 5 minutes.
This figure however includes both the data cache and the procedure cache – which means you cannot determine whether the pages being flushed are the result of churning data pages, or whether you are in a situation where ad hoc queries are flooding the procedure cache. You can of course look at the procedure cache using DMVs and watch the number of objects grow and then shrink, but this is not particularly scientific, nor is it measurable within a trace.
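For reference, the kind of DMV query I mean is sketched below; running it repeatedly and watching the counts move is exactly the unscientific approach described above:

SELECT objtype,
       COUNT(*) AS cached_objects,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS cache_mb
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY cache_mb DESC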
The page life expectancy can easily be traced within Perfmon, but how do you measure the procedure cache? Well, there are a couple of events you can trace in SQL Profiler; the primary one I would like to be working does not seem to properly register, whilst the secondary one does at least work. The two events are SP:Cache Remove and SP:Cache Insert.
The SP:Cache Remove event has two Event Sub Classes listed in documentation produced by the SQL Programmability team: sub class 2 is for a deliberate procedure cache flush, such as a DBCC FreeProcCache command, and sub class 1 is for when a compiled plan is removed due to memory pressure. In testing, the deliberate procedure cache flush does show up in the profiler traces, with an event subclass value of ‘2 – Proc Cache Flush’ – but after a number of tests, I have never been able to get the event to be raised when the procedure cache is under memory pressure. If it did, we would have exactly what I was after: an easy, traceable and recordable way to show a procedure cache under too much pressure.
The SP:Cache Insert is more of a backup mechanism to show the procedure cache is being flooded, but only on the basis that you would count the number of times this event shows up within a trace over a period of time. In essence a SP:Cache Insert is only going to occur if a query does not have a matching query plan within the cache. A large number of these within a short period of time is also going to be an indication that the procedure cache is potentially being flooded.
Combine a large number of SP:Cache Inserts with a low Page Life Expectancy and you can suspect you definitely have a procedure cache flooding problem.
So there is a kind of mechanism to determine whether a low page life expectancy comes from data page churn or query page churn, but if the SP:Cache Remove subclass 1 event actually worked, it would be a lot easier. Once you know your plan cache is being flooded, the next check is whether forced parameterization is worth using to eliminate the issue.
PDC09 : Day 3 – SQL Azure and Codename ‘Houston’ announcement
The PDC is just about over, the final sessions have finished and the place is emptying rapidly. The third day included a lot of good information about SQL Azure, the progress made to date as well as the overall direction – including a new announcement by David Robinson, Senior PM on the Azure team, about a project codenamed ‘Houston’.
During the sessions today the 10GB limit on a SQL Azure database was mentioned a number of times, but each time it was caveated with the suggestion that this is purely the limit right now, and it will be increased. To get around this limit you can partition your data across multiple SQL Azure databases, as long as your application logic understands which database to get the data from. There was no intrinsic way of creating a view across the databases, but it immediately made me consider that if you were able to use the linked server feature of the full SQL Server, you could link to multiple Azure databases and create a partitioned view across them – got to try that out when I get back to the office, but I do not expect it to work.
SQL Azure handles all of the resilience, backup, DR modes etc, and it remains hidden from you – although when connected to the SQL Azure database you do see a ‘Master’ database present. It is not really a ‘Master’ in the same way that we think of one, and it quickly becomes apparent how limited that version of the ‘Master’ really is – it exists purely to give you a place to create logins and databases. It could have been called something else to make it a bit clearer, but one of the SQL Azure team said it was to keep compatibility with other 3rd party applications that expect there to be a master.
SQL Azure supports transactions as mentioned before, but given the 10GB limit currently on a database you will be partitioning your data across databases. That will be a problem, because the system does not support distributed transactions, so any atomic work that is to be committed on multiple databases at once is going to have to be controlled manually / crufted in code, which is not ideal and a limitation to be aware of.
Equally, cross-database joins came up as an area with problems – they can be made, but it appears there are performance issues. I am interested to start running some more tests there and see whether you can mimic a partitioned view across databases using joins. The recommendation was to duplicate reference data between databases to avoid joins, so lookup tables would appear in each database, in effect removing the cross-database join.
On the futures list:
- The ability to have dynamic partition splits looked interesting, regular SQL server does not have this facility within a partitioned table – so if Azure can do it across databases then this might come up on the SQL roadmap as a feature – that could be wishful thinking.
- Better tooling for developers and administrators – that is a standard future roadmap entry.
- Ability to Merge database partitions.
- Ability to Split database partitions.
So SQL Azure has grown up considerably and continues to grow; in the hands-on labs today I got to have more of a play with it and start testing more of the subtle limitations and boundaries that are in place. Connecting to an Azure database via SQL Server Management Studio is trivial and the object explorer contains a cut-down version of the normal object tree, but includes all the things you would expect such as tables, views and stored procedures.
Some limitations of the lack of master and real admin access become apparent pretty fast: no DMV support, no ability to check your current size, and no ability to change a significant number of options – in fact, the bulk of the options are not even exposed.
Two of my personal favourites I took an immediate look at, maxdop and parameterization.
- Maxdop is set at 1; although you cannot see the setting, attempting to override it throws an error from the query window telling you that it is not permitted. Do not plan on parallel query execution, you will not get it.
- I attempted to test query parameterisation using the date literal trick and the query appeared to remain parameterized, as though the database is in ‘Forced’ parameterisation mode (which makes it more likely to hit parameter sniffing problems). I have not been able to concretely prove it as yet, but the early indication is that the setting is ‘Forced’.
One other interesting concept was that a table had to have a clustered index – it is not optional if you want to get data into the table. It did not stop me from creating a table without a clustered index, but I had not attempted to populate data into it to see this limit in action – a case of too much to do and so little time.
On one of the final talks about SQL Azure, David Robinson announced a project codenamed ‘Houston’ (there will be so many ‘we have a problem’ jokes on that one), which is basically a Silverlight equivalent of SQL Server Management Studio. The concept comes from SQL Azure being within the cloud – if the only way to interact with it is by installing SSMS locally then it does not feel like a consistent story.
From the limited preview, it only contains the basics, but it clearly lets you create tables, stored procedures and views, edit them, and even add data to tables in a grid view reminiscent of Microsoft Access. The UI was based around the standard ribbon bar, object window on the left and working pane on the right. It was lo-fi to say the least, but you could see conceptually where it could go – given enough time it could become a very good SSMS replacement, though I doubt it will be taken that far. There were Import and Export buttons on the ribbon with what looked to be ‘Excel’-like icons, but nothing was said or shown of them. Date-wise, ‘targeting sometime in 2010’, so this has some way to go and is not even in beta as yet.
So that was PDC09, excellent event, roll on the next one!
Inconsistent Date Literal Parameterization Behaviour
I have mentioned query parameterization before – the process by which SQL extracts literal values from a query and in effect re-writes the query to use parameters, so that it can get a query plan cache hit and avoid recompiling the plan, which costs both time and CPU. There are a lot of good articles and book chapters that cover the topic.
What has confused me for a while is seeing date literals being parameterized in one query and not in another, even though both databases have parameterization set to ‘Simple’ mode. When a date literal is not parameterized the chance of getting a query plan cache hit is obviously very low, which has performance impacts. The problem to date has been that I was unable to ascertain the commonality and requirements that allowed one query to be parameterized while another kept the literal. I was paying too much attention to the query and, as it turns out, not enough to the table.
Well, after a number of hours getting to know the brick wall very well, I finally tracked it down, and oddly it had nothing to do with the query I was submitting – I could reproduce it reliably by adding an unrelated non-clustered index. That is confusing to say the least, and I cannot yet think of any reason why; is it a bug or just ‘weird and odd’?
The following steps reproduce the issue in both SQL Server 2005 and 2008.
Create a database with default settings, nothing specific, then as the second step create our test table – a simple structure will suffice.
CREATE TABLE [dbo].[paramtest](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [somedate] [datetime] NOT NULL,
    [somefield] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    CONSTRAINT [PK_paramtest] PRIMARY KEY CLUSTERED
    (
        [id] ASC
    ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
We need some data to work against just to make sure we are selecting results, and we can insert these trivially as follows:
insert into [paramtest] values (getdate(), 'a')
go 10000
So we now have 10k rows within the table, and a clustered primary key on the identity column.
The test starts by freeing up the procedure cache and then running the select statement; the datetime I used was roughly in the middle of the range of values inserted, but it is not a deciding factor in the query plan results.
dbcc freeproccache
select * from paramtest where somedate > '2009-11-12 21:14:50.000'
Using a standard query plan cache extraction query, the part of the XML plan we are interested in is the StmtSimple element.
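A sketch of the kind of extraction query I mean – any of the usual DMV-based plan cache queries will do, and the filter on 'paramtest' is just mine to narrow the output:

SELECT st.text, qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE st.text LIKE '%paramtest%'
  AND st.text NOT LIKE '%dm_exec_cached_plans%'

The StmtSimple element from the resulting plan XML: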
<StmtSimple StatementText="(@1 varchar(8000))SELECT * FROM [paramtest] WHERE [somedate]>@1" StatementId="1" StatementCompId="1" StatementType="SELECT" StatementSubTreeCost="0.0379857" StatementEstRows="3831.48" StatementOptmLevel="TRIVIAL">
From it you can see the literal value was extracted as @1 with a type of varchar(8000) and the query altered to use this parameter – exactly the behaviour we would expect from simple parameterization.
The next step is to create a non-clustered index on the varchar ‘somefield’ – completely unrelated to the date literal being used, and something that should have no impact on the query at all.
CREATE NONCLUSTERED INDEX [ix_test] ON [dbo].[paramtest]
(
    [somefield] ASC
) WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY]
Free up the procedure cache again and rerun the query:
dbcc freeproccache
select * from paramtest where somedate > '2009-11-12 21:14:50.000'
Extract the query plan from the cache again, but this time it is noticeably different – the parameterisation has not occurred and the literal has remained in the statement text.
<StmtSimple StatementText="select * from paramtest where somedate > '2009-11-12 21:14:50.000'" StatementId="1" StatementCompId="1" StatementType="SELECT" />
To revert to the old behaviour, drop the index:
DROP INDEX [ix_test] ON [dbo].[paramtest] WITH (ONLINE = OFF)
Then clear the cache again and run the query
dbcc freeproccache
select * from paramtest where somedate > '2009-11-12 21:14:50.000'
And we are back to being parameterized.
So the addition of a single non-clustered index on a field unrelated to the one being queried prevents simple parameterization from extracting the date literal – which makes very little sense. Knowing what causes the effect, you can play around with it a lot more: even placing the additional non-clustered index on the identity field, which already has the clustered primary key, results in the parameterization failing. If this behaviour is by design, then it makes for an interesting design decision, or limitation, in the parameterization.
As soon as the database is in ‘Forced’ parameterization mode the literal is converted every time, so this looks specific to simple mode – not explainable, just demonstrable.
Simple vs Forced – Query Parameterization
On the drive back from a relaxing week off I decided to write about query parameterization – the process where the query optimizer takes the literal values used within an ad-hoc query’s predicates and turns them into parameters on the fly.
This optimised query is then the one used to check the query cache, and not the original query string, so it can result in a far higher percentage of query plan cache hits than would otherwise occur on ad-hoc queries.
There are two modes to this feature, and neither of them is ‘off’, you automatically have ‘simple’ parameterization turned on, and can increase this to ‘forced’ mode for your database if you desire. The difference in the two modes is that the simple mode will only deal with relatively simple literal values within the predicates, and will not consider every literal a candidate for parameterization.
So, overall it sounds a good thing, but there are some known problems, usually discussed under the banner of parameter sniffing – the query plan generated for one value of the parameter is not necessarily the optimal plan for another. Consider a search on a personnel database where the EmployeeAge is between 95 and 99. Assuming an index is in place, you can already expect the number of matching rows to be low to nil, so the index will not tip and an index seek is the expected behaviour.
If the next time the query is executed the value is from 20 to 60, the number of people matching that value would be very high, yet the query plan in cache has already chosen the method by which the query will be executed, and it is no longer the most optimal.
From a user’s perspective this will lead to what appears to be random / non-deterministic behaviour – sometimes the query will be very fast and other times it will be slow. There will not necessarily be a pattern to the behaviour, because the plan might be removed from cache under memory pressure, or invalidated for a wide variety of other reasons, the simplest being a data statistics invalidation.
So with the knowledge that parameter sniffing can cause issues, when should you consider investing the time into testing this setting? The time and effort to test could be considerable, so I have written up how I would come to that decision.
The first thing I check is the CPU utilisation on the SQL Server, in theory there should be some kind of issue forcing this to be considered and one of the issues that can occur from insufficient parameterization is an increase in the CPU utilisation due to a high query compilation rate. There are a lot of other reasons that can cause a high CPU such as incorrect indexing leading to table scans etc, but for the purpose of this explanation, we can assume that has already been checked and was not enough.
The second thing I would check is this pair of performance counters:
SQL Server: SQL Statistics: Auto-Param Attempts/sec
SQL Server: SQL Statistics: Batch Requests/sec
I could also check the plan cache hit ratio or the SQL Compilations/sec figure, but having the Auto-Param Attempts and Batch Requests per second allows you to do a rough calculation of the ratio of incoming batches to queries that auto-parameterization is being attempted on (successfully or not). The higher the ratio, the more ad-hoc queries are already being affected by the ‘simple’ mode parameterization.
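These counters can be read in Perfmon, or pulled from within SQL Server itself – a sketch is below. Note that both values are cumulative, so you need to sample twice over a known interval and take the difference to get per-second rates:

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN ('Auto-Param Attempts/sec', 'Batch Requests/sec')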
If I see a high ratio, the next check would be on the plan cache statistics, primarily:
SQL Server: Plan Cache: Cache Object Counts for SQL Plans
SQL Server: Plan Cache: Cache Pages for SQL Plans
This is to try to gauge how many plans are sitting in the cache and whether the cache is under memory pressure. If you have a high number of SQL plan objects / cached pages then you can calculate whether you are under memory pressure. You can also look at the buffer manager’s page life expectancy value, but this can be a bit misleading since data pages will also be affecting it.
At this point I would start pulling the query plan cache and checking the query text for a sample of plans, to see whether a high number of literals remain within the submitted queries, since these are the ones that simple parameterisation failed to convert to parameters. When pulling the query plan cache, sorting the entries by SQL text makes it easy to spot near-identical queries – differing only in their unconverted literals – that have each resulted in a separate query plan. The sort of query I use for this check is sketched below.
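A minimal sketch – the ordering by statement text does the work of putting near-identical entries next to each other:

SELECT st.text AS sql_statement, cp.usecounts, cp.size_in_bytes
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE cp.objtype = 'Adhoc'
ORDER BY st.text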
There is no prescriptive guidance on how many of these near identical queries it takes before you consider it, but clearly a single duplicate is not an issue, whilst thousands of them could be quite an issue and be resulting in a high number of unnecessary SQL plan compilations, which is increasing the CPU utilisation of the server.
As you can tell, this setting is certainly not one that should be changed without considerable forethought and then thorough testing to ensure your situation is actually worth the switch – preferably with some well-instrumented testing in a realistic test environment, to ensure the benefits you gain from the increased level of parameterisation are not lost to occurrences of non-optimal plan cache hits.