
Query Optimizer

Presenting at the SoCal Code Camp

I am speaking again this month, this time at the SoCal Code Camp at Cal State Fullerton. SoCal Code Camp is a community-driven event where developers come and learn from their peers, and it is scheduled for Saturday, January 29th and Sunday, January 30th. I will be presenting two sessions on Saturday: “Inside the SQL Server 2008 Data Collector” at 8:45 am, and “Top 10 SQL Server Query Optimizer Topics for Better Performance” at 4:00 pm, both in room UH-335.

I will also be participating in the SQL Server Q&A session along with Denny Cherry, Lynn Langit, Bret Stateham, Ben Aminnia, Ike Ellis, Andrew Karcher and Thomas Mueller. This session will be hosted in room UH-335 at 2:45 pm.

For more information regarding sessions, schedule, and directions, visit the SoCal Code Camp website. I hope to see you there,


Speaking at the Los Angeles SQL Server Professionals User Group

I haven’t updated this blog in a long time, so I wanted to put in a quick post about a session that I will be presenting at the Los Angeles SQL Server Professionals User Group this Thursday, January 20th. The session, “Top 10 Query Optimizer Topics for Better Performance”, is the same topic I presented a couple of months ago at the PASS Summit in Seattle.

The meeting will be hosted at the UCLA campus and will start at 6:30 PM with Allen Berezovsky, who will talk about FILESTREAM in SQL Server. My session will follow. More details and directions can be found at the Los Angeles SQL Server Professionals Group website.

I hope to see you there,

Benjamin


My Book, “Inside the Query Optimizer”, available at the PASS Summit

My book, “Inside the SQL Server Query Optimizer”, is almost finished, and we will have a conference edition of it available at the PASS Summit. The final version of the book, published by Red Gate Books, will be available on Amazon by Christmas.

For more details on the contents, I am including the Preface of the book next.


Preface

The Query Optimizer has always been one of my favorite SQL Server topics, which is why I started blogging about it and submitting related presentations to PASS. And so it would have continued, except that, after several blog posts discussing the Query Optimizer, Red Gate invited me to write a book about it. This is that book.

I started learning about the Query Optimizer by reading the very few SQL Server books which discussed the topic, and most of them covered it only very briefly. Yet I pressed on, and later, while trying to learn more about the topic, I found an extremely rich source of information in the form of the many available research papers. It was hard to fully grasp them at the beginning, as academic papers can be difficult to read and understand, but soon I got used to them, and was all the more knowledgeable for it.

Having said that, I feel that I’m in a bit of a minority, and that many people still see the Query Optimizer just as a black box where a query is submitted and an amazing execution plan is returned. It is also seen as a very complex component, and rightly so. It definitely is a very complex component, perhaps the most complex in database management software, but there is still a lot of great information about the Query Optimizer that SQL Server professionals can benefit from.  

The Query Optimizer is the SQL Server component that tries to give you an optimal execution plan for your queries and, just as importantly, tries to find that execution plan as quickly as possible. A better understanding of what the Query Optimizer does behind the scenes can help you to improve the performance of your databases and applications, and this book explains the core concepts behind how the SQL Server Query Optimizer works. With this knowledge, you’ll be able to write better queries, provide the Query Optimizer with the information it needs to produce efficient execution plans, and troubleshoot the cases when the Query Optimizer is not giving you a good plan.

With that in mind, and in case it’s not obvious, the content of this book is intended for SQL Server professionals: database developers and administrators, data architects, and basically anybody who submits more than just trivial queries to SQL Server. Here’s a quick overview of what the book covers:

The first chapter, Introduction to Query Optimization, starts with an overview on how the SQL Server Query Optimizer works and introduces the concepts that will be covered in more detail in the rest of the book. A look into some of the challenges query optimizers still face today is covered next, along with a section on how to read and understand execution plans. The Chapter closes with a discussion of join ordering, traditionally one of the most complex problems in query optimization.

The second chapter talks about the Execution Engine, and describes it as a collection of physical operators that perform the functions of the query processor. It emphasizes how these operations, implemented by the Execution Engine, define the choices available to the Query Optimizer when building execution plans. This Chapter includes sections on data access operations, the concepts of sorting and hashing, aggregations, and joins, to conclude with a brief introduction to parallelism.

Chapter 3, Statistics and Cost Estimation, shows how the quality of the execution plans generated by the Query Optimizer is directly related to the accuracy of its cardinality and cost estimations. The chapter describes statistics objects in detail, and includes some sections on how statistics are created and maintained, as well as how they are used by the Query Optimizer. We’ll also take a look at how to detect cardinality estimation errors, which may cause the Query Optimizer to choose inefficient plans, together with some recommendations on how to avoid and fix these problems. Just to round off the subject, the chapter ends with an introduction to cost estimation.

Chapter 4, Index Selection, shows how SQL Server can speed up your queries and dramatically improve the performance of your applications just by using the right indexes. The Chapter shows how SQL Server selects indexes, how you can provide better indexes, and how you can verify your execution plans to make sure these indexes are correctly used. We’ll talk about the Database Engine Tuning Advisor and the Missing Indexes feature, which will show how the Query Optimizer itself can provide you with index tuning recommendations.

Chapter 5, The Optimization Process, is the Chapter that goes right into the internals of the Query Optimizer and introduces the steps that it performs without you ever knowing. This covers everything from the moment a query is submitted to SQL Server until an execution plan is generated and is ready to be executed, including steps like parsing, binding, simplification, trivial plan and full optimization. Important components which are part of the Query Optimizer architecture, such as transformation rules and the memo structure, are also introduced.

Chapter 6, Additional Topics, includes a variety of subjects, starting with the basics of update operations and how they also need to be optimized just like any other query, so that they can be performed as quickly as possible. We’ll have an introduction to Data Warehousing and how SQL Server optimizes star queries, before launching into a detailed explanation of Parameter sniffing, along with some recommendations on how to avoid some problems presented by this behavior. Continuing with the topic of parameters, the Chapter concludes by discussing auto-parameterization and forced parameterization.

Chapter 7 describes Hints, and warns that, although hints are a powerful tool which allows you to take explicit control over the execution plan of a query, they need to be used with caution and only as a last resort when no other option is available. The chapter covers the most-used hints, and ends with a couple of sections on plan guides and the USE PLAN query hint.

Before we get started, please bear in mind that this book contains many undocumented SQL Server statements. These statements are provided only as a way to explore and understand the Query Optimizer and, as such, should not be used in a production environment. Use them wisely, and I hope you enjoy learning about this topic as much as I do.

Benjamin Nevarez

Presenting at the SoCal Rock & Roll Code Camp

I will be presenting two sessions at the SoCal Rock & Roll Code Camp this Saturday. This is a community-driven event with over 100 sessions, hosted at the University of Southern California (USC) on both Saturday, October 23rd and Sunday, October 24th. My sessions will be “Inside the SQL Server 2008 Data Collector” at 12:15 pm, and “Top 10 SQL Server Query Optimizer Topics for Better Performance” at 1:30 pm, both in room VKC-105.

For more information regarding sessions, schedule, and directions, visit the SoCal Rock & Roll Code Camp website.


Speaking at the Orange County SQL Server Professionals User Group

I will be speaking at the Orange County SQL Server Professionals User Group this Thursday, October 7th, 2010. The topic is “Top 10 Query Optimizer Topics for Better Performance”. So if you are in the Orange County or Los Angeles area, please stop by and say hello.

The meeting starts at 6:30 PM. More details and directions can be found here:

Orange County SQL Server Professionals User Group

http://www.sqloc.com

An Introduction to Cost Estimation

Last year, when I presented my session regarding the Query Optimizer at the PASS Summit, I was asked how the estimated CPU and I/O costs in an execution plan are calculated, that is, where a specific value like 1.13256 comes from. All I was able to say at the time was that Microsoft does not publish how these costs are calculated.

Since I am currently working on a related project, I thought that perhaps I could look into this question again and show an example. But since there are dozens of operators, I decided to take a look at a simple one: the Clustered Index Scan operator. I captured dozens of XML plans, used XQuery to extract their cost information, and after some analysis I was able to obtain a basic formula for this specific operator.
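As a rough sketch of that extraction step, a query along the following lines can pull the estimated cost attributes of every operator out of the plans currently in the plan cache (the attribute names come from the showplan XML schema, and this assumes the plans of interest are still cached):

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
    n.value('@PhysicalOp', 'varchar(50)') AS PhysicalOp,   -- operator name
    n.value('@EstimateRows', 'float') AS EstimateRows,     -- cardinality estimate
    n.value('@EstimateCPU', 'float') AS EstimateCPU,       -- estimated CPU cost
    n.value('@EstimateIO', 'float') AS EstimateIO          -- estimated I/O cost
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
CROSS APPLY qp.query_plan.nodes('//RelOp') AS x(n)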

But first, a quick introduction to cost estimation: the cost of each operator depends on its algorithm. Each operator is associated with a CPU cost, and some of them will also have an I/O cost; the total cost of the operator is the sum of these two. An operator like a Clustered Index Scan has both CPU and I/O costs, while some other operators, like a Stream Aggregate, have only a CPU cost. It is interesting to note that this cost used to mean the estimated time in seconds that a query or operator would take to execute on a particular reference machine; in recent versions of SQL Server this cost should no longer be interpreted as seconds, milliseconds, or any other unit.

To show the example, let us look at the largest table in AdventureWorks, Sales.SalesOrderDetail. Run the following query and look at the estimated CPU and I/O costs for the Clustered Index Scan operator, as shown in the next figure.

SELECT * FROM Sales.SalesOrderDetail
WHERE LineTotal = 35

[Figure: execution plan showing the estimated CPU and I/O costs of the Clustered Index Scan operator]
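If you prefer to look at the raw numbers, you can also capture the XML plan for this query without executing it, for example:

SET SHOWPLAN_XML ON
GO
SELECT * FROM Sales.SalesOrderDetail
WHERE LineTotal = 35
GO
SET SHOWPLAN_XML OFF
GO

The EstimateCPU, EstimateIO and EstimatedTotalSubtreeCost attributes of each RelOp element contain the same cost information shown in the figure.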

For a Clustered Index Scan operator, I observed that the CPU cost is 0.0001581 for the first record, plus 0.0000011 for each additional record. In this specific case we have an estimated number of records of 121,317, as shown in the figure above, so the CPU cost can be calculated as 0.0001581 + 0.0000011 * (121317 – 1), or 0.133606, which is the value shown as Estimated CPU Cost. In a similar way, I noticed that the minimum I/O cost is 0.003125 for the first database page, and that it then grows in increments of 0.00074074 for every additional page. Since the Clustered Index Scan operator scans the entire table, I can use the following query to find the number of database pages, which returns 1,234.

SELECT in_row_data_page_count, row_count
FROM sys.dm_db_partition_stats
WHERE object_id = object_id('Sales.SalesOrderDetail')
AND index_id = 1

In this case we have 0.003125 + 0.00074074 * (1234 – 1), or 0.916458, which is the value shown as Estimated I/O Cost.

Finally, we add both costs, 0.133606 + 0.916458, to get 1.05006, which is the total estimated cost of the operator. In the same way, adding the costs of all the operators gives the total cost of the plan. In this case, the cost of the Clustered Index Scan, 1.05006, plus the cost of the first Compute Scalar operator, 0.01214, the second Compute Scalar operator, 0.01213, and the cost of the Filter operator, 0.0582322, gives the total cost of the plan, 1.13256, as shown next.

[Figure: execution plan showing the individual operator costs and the total plan cost of 1.13256]
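If you want to verify this arithmetic yourself, it can be scripted directly. Keep in mind that the constants below are values I observed for this operator; they are not documented by Microsoft and may change between versions:

DECLARE @rows bigint = 121317, @pages bigint = 1234
SELECT
    0.0001581 + 0.0000011 * (@rows - 1) AS EstimatedCpuCost,    -- 0.133606
    0.003125 + 0.00074074 * (@pages - 1) AS EstimatedIoCost,    -- 0.916458
    0.0001581 + 0.0000011 * (@rows - 1)
        + 0.003125 + 0.00074074 * (@pages - 1) AS OperatorCost  -- 1.05006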

Presenting at the PASS Summit 2010

I am honored to be selected to present at the PASS Summit for the third time. This November in Seattle I will be presenting the following two sessions:

Top 10 Query Optimizer Topics for Better Query Performance

This session will show you how a better understanding of how the Query Optimizer works can help you improve the performance of your queries. I will show you the top 10 Query Optimizer topics that can give you the most benefit, focusing both on the concepts and on practical solutions. The SQL Server Query Optimizer is a cost-based optimizer whose job is to analyze the possible execution plans for a query, estimate the cost of these plans, and select the one with the lowest cost. So a better knowledge of how the Query Optimizer works can help both database developers and administrators to get better performance from their databases. Several areas of the query processor will be covered, everything from troubleshooting query performance problems and identifying what information the Query Optimizer needs to do a better job, to the extreme cases where, because of its limitations, the Query Optimizer may not give you a good plan and you may need to take a different approach.

Inside the SQL Server 2008 Data Collector

The SQL Server 2008 Data Collector provides low-overhead data collection functionality to store historical performance and diagnostics information about your SQL Server instances. See how you can use this information to troubleshoot problems and to provide trend analysis for the performance of your SQL Server instance. In addition to showing the basics and architecture of the new Data Collector, this session focuses on the predefined system data collection sets provided by SQL Server 2008, which automatically collect data on disk usage, instance activity, and query statistics. You will learn about the Disk Usage collection set, which gathers statistics regarding the growth of the data and transaction log database files; explore the Server Activity collection set, which focuses on server activity and resource utilization; and learn about the Query Statistics collection set, which collects data regarding the queries running in your instance.

See you in Seattle!


Auto-Parameterization in SQL Server

The SQL Server Query Optimizer might decide to parameterize some queries in those cases where the value of a specific parameter does not impact the choice of an execution plan; that is, in the cases where, no matter which parameter value is used, the plan returned will be the same. This is a very conservative policy and SQL Server will only use it when it is safe to do so, so that the performance of the queries is not negatively impacted. In this case the parameterized plan can be reused by similar queries which differ only in the value of their parameters. This feature, which helps to avoid optimization time and cost, is called auto-parameterization and was introduced with SQL Server 7.0.

For example, on the AdventureWorks database, the next two SQL statements will produce different execution plans and will not be parameterized, even though the queries are exactly the same and only the parameter values are different. In this case the Query Optimizer decides that it is not safe to auto-parameterize them.

SELECT * FROM Sales.SalesOrderDetail
WHERE ProductID = 897

SELECT * FROM Sales.SalesOrderDetail
WHERE ProductID = 870

On the other hand, the following query will be auto-parameterized.

SELECT * FROM Sales.SalesOrderHeader
WHERE SalesOrderID = 43669

In this last example the column SalesOrderID is the primary key of the SalesOrderHeader table, so it is guaranteed to be unique. In addition, because the query predicate is using an equality operator on a unique column, there will always be at most one record returned by this query. In this case SQL Server decides that it is safe to parameterize this plan, which uses a clustered index seek operator. You can verify whether your query is using a parameterized plan by inspecting the plan cache, as in the following query.

SELECT text
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
WHERE text LIKE '%Sales%'

This code will output the following auto-parameterized query, which shows placeholders like @1 for the parameter values.

(@1 int)SELECT * FROM [Sales].[SalesOrderHeader] WHERE [SalesOrderID]=@1

Finally, a feature to parameterize queries more aggressively, called forced parameterization, was introduced with SQL Server 2005. This feature is disabled by default and can be enabled at the database level (or applied to an individual query through a plan guide using the PARAMETERIZATION FORCED query hint). By enabling forced parameterization you can reduce the frequency of query optimizations, but you may also get suboptimal plans for some instances of some queries, so you should do extensive analysis and testing of your application to verify that your performance is in fact being improved. To differentiate it from forced parameterization, auto-parameterization is also referred to as simple parameterization.
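As a quick sketch, this is how forced parameterization could be enabled and disabled for the AdventureWorks database (again, test carefully before changing this setting on a real system):

-- Enable forced parameterization at the database level
ALTER DATABASE AdventureWorks SET PARAMETERIZATION FORCED
GO
-- Return to the default behavior (simple parameterization)
ALTER DATABASE AdventureWorks SET PARAMETERIZATION SIMPLE
GO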

Optimizing Join Orders

The SQL Server Query Optimizer needs to make two important decisions regarding joins: the selection of a join order and the choice of a join algorithm. The selection of a join algorithm is the choice between a Nested Loops Join, Merge Join or Hash Join operator, but I will leave that topic for some other post. In this post I will talk about join orders.

A join combines records from two tables based on some common information, but since a join works with only two tables at a time, a query requesting data from n tables must be executed as a sequence of n – 1 joins. Although the results of a query are the same regardless of the join order, the order in which the tables are joined greatly influences the cost and performance of the query. The Query Optimizer must find the optimal sequence of joins between the tables used in the query. Finding the optimal join order is one of the most difficult problems in query optimization, and one that has been the subject of extensive research since the seventies.

This basically means that the Query Optimizer defines the join order; it is not defined by your query. To see an example run the following query against the AdventureWorks database and choose to display the execution plan by clicking Include Actual Execution Plan.

SELECT FirstName, LastName
FROM Person.Contact AS C
    JOIN Sales.Individual AS I
        ON C.ContactID = I.ContactID
    JOIN Sales.Customer AS Cu
        ON I.CustomerID = Cu.CustomerID
WHERE Cu.CustomerType = 'I'

It will show the following execution plan.

[Figure: execution plan showing the join order chosen by the Query Optimizer]

Note that the Query Optimizer is not using the same join order specified in the query; it found a more efficient one. The query asks to join the Contact table with the Individual table, and then join the result with the Customer table, but if you inspect the three Clustered Index Scan operators, you will see that the plan first joins the Customer and Individual tables and then joins the result with the Contact table.
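If you want to see the difference yourself, one way is to force the textual join order with a hint and compare the resulting plan, and its estimated cost, against the original (a sketch; FORCE ORDER should normally be used only for this kind of experimentation):

SELECT FirstName, LastName
FROM Person.Contact AS C
    JOIN Sales.Individual AS I
        ON C.ContactID = I.ContactID
    JOIN Sales.Customer AS Cu
        ON I.CustomerID = Cu.CustomerID
WHERE Cu.CustomerType = 'I'
OPTION (FORCE ORDER)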

The problem of finding an efficient join order for a query is difficult because of the number of possible permutations that the Query Optimizer needs to analyze. Because of the commutative and associative properties of joins there could be many different possible join orders in a query and this number increases exponentially with the number of tables. In fact, with just a handful of tables the number of possible join orders could be in the thousands or even millions. Obviously, it is impossible for any query optimizer to look at all those combinations: it would take too long.

Queries are represented as trees in the query processor and some names like left-deep, right-deep and bushy trees are commonly used to identify the shapes of the order of joins in these trees. Left-deep and bushy trees for a join of four tables are shown in the next figure.

[Figure: left-deep and bushy trees for a join of four tables]

For example, the left-deep tree shown could be JOIN( JOIN( JOIN(A, B), C), D) and the bushy tree could be JOIN(JOIN(A, B), JOIN(C, D)). Left-deep trees are also called linear trees or linear processing trees. The set of bushy trees includes the sets of both left-deep and right-deep trees. The following table lists the number of possible join orders considering left-deep and bushy trees.

Tables    Left-Deep Trees    Bushy Trees
1         1                  1
2         2                  2
3         6                  12
4         24                 120
5         120                1,680
6         720                30,240
7         5,040              665,280
8         40,320             17,297,280
9         362,880            518,918,400
10        3,628,800          17,643,225,600
11        39,916,800         670,442,572,800
12        479,001,600        28,158,588,057,600

The number of left-deep trees is calculated as n!, or n factorial, where n is the number of tables in the join. The factorial is the product of all the positive integers less than or equal to n; for example, 5!, or 5 factorial, is 5 x 4 x 3 x 2 x 1, or 120. The number of bushy trees is more complicated to calculate and is given by (2n – 2)!/(n – 1)!.
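As a quick check of these formulas, here is a sketch of a recursive CTE that reproduces the table above; it uses the ratio between consecutive bushy counts, bushy(n + 1) = bushy(n) * 2n * (2n – 1) / n, to avoid computing the large factorials directly:

WITH counts AS (
    -- one table: a single left-deep tree and a single bushy tree
    SELECT CAST(1 AS bigint) AS n,
           CAST(1 AS bigint) AS left_deep,
           CAST(1 AS bigint) AS bushy
    UNION ALL
    SELECT n + 1,
           left_deep * (n + 1),              -- left-deep trees = n!
           bushy * (2 * n) * (2 * n - 1) / n -- bushy trees = (2n - 2)!/(n - 1)!
    FROM counts
    WHERE n < 12
)
SELECT n AS Tables, left_deep AS [Left-Deep Trees], bushy AS [Bushy Trees]
FROM counts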

So how does the Query Optimizer analyze all these possible join orders? The answer is: it does not. Analyzing all the possible join orders for a complex query could take too long, so the Query Optimizer must find a balance between the optimization time and the quality of the resulting plan. Rather than performing an exhaustive search, it uses heuristics to guide the search process. For example, a common heuristic is to avoid considering bushy trees during optimization, but I will leave some of these details for some other post.

How OPTIMIZE FOR UNKNOWN Works

One of the questions I have been asked several times is how OPTIMIZE FOR UNKNOWN works. OPTIMIZE FOR is a query hint introduced with SQL Server 2005 that can help with the parameter sniffing problem, and it requires you to specify a value for a parameter. For an introduction to the parameter sniffing problem, you can see my previous post here. On the other hand, OPTIMIZE FOR UNKNOWN, which was introduced in SQL Server 2008, does not require you to specify a value for a parameter.
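For reference, the original SQL Server 2005 form of the hint requires an explicit value and looks something like this (a sketch using the same query that appears later in this post):

SELECT * FROM Sales.SalesOrderDetail
WHERE ProductID = @pid
OPTION (OPTIMIZE FOR (@pid = 709))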

A traditional way to avoid the parameter sniffing problem, especially in previous versions of SQL Server, was to use local variables. But entirely avoiding parameter sniffing is not always a good solution. As I mentioned in my previous article, from the point of view of query optimization, parameter sniffing is a good thing. When the Query Optimizer knows the value of a parameter, it can use the statistics histogram to estimate the number of records that may be returned by a query, and using the histogram will give you the best estimate possible. But when you use local variables, SQL Server is not able to use the histogram anymore; instead, it uses the information in the density vector of the statistics object. OPTIMIZE FOR UNKNOWN works in pretty much the same way.

To better understand how OPTIMIZE FOR UNKNOWN works, let us first see the case when a parameter value is known. Create the following stored procedure.

CREATE PROCEDURE test (@pid int)
AS
SELECT * FROM Sales.SalesOrderDetail
WHERE ProductID = @pid

Running this stored procedure and requesting a plan shows 188 estimated records, as can be seen in the following execution plan, which uses both an index seek and a key lookup operator.

EXEC test @pid = 709

[Figure: execution plan using an index seek and a key lookup, with an estimate of 188 rows]

In this case SQL Server is able to use the histogram and estimate that 188 records will be returned. The Query Optimizer uses that estimate to make a decision about the plan to generate. Use the following statement to inspect the statistics object used by this stored procedure.

DBCC SHOW_STATISTICS('Sales.SalesOrderDetail', IX_SalesOrderDetail_ProductID)

Running the statement shows the following information (due to space limitations, only the information needed for this post is displayed, including the first line of the density vector and the first three lines of the histogram).

All density   Average Length Columns
0.003759399   4              ProductID

RANGE_HI_KEY RANGE_ROWS    EQ_ROWS       DISTINCT_RANGE_ROWS
707          0             3083          0
708          0             3007          0
709          0             188           0

In this case SQL Server used the histogram to find the value of ProductID, 709, defined as RANGE_HI_KEY, and found the estimated number of rows, 188, defined as EQ_ROWS.

Let us now change the stored procedure to use local variables.

ALTER PROCEDURE test (@pid int)
AS
DECLARE @lpid int
SET @lpid = @pid
SELECT * FROM Sales.SalesOrderDetail
WHERE ProductID = @lpid

Run the procedure again using the same value.

EXEC test @pid = 709

This time we get a different plan, using a clustered index scan, as shown next.

[Figure: execution plan using a clustered index scan, with an estimate of 456.079 rows]

Local variables are not known at optimization time, so the Query Optimizer is not able to use the value 709 for the optimization, as it did before. In fact, this time it does not matter which value you use: you will always get the same estimated number of rows and the same plan. The estimated number of rows is 456.079.

Now use the version of the stored procedure with OPTIMIZE FOR UNKNOWN.

ALTER PROCEDURE test (@pid int)
AS
SELECT * FROM Sales.SalesOrderDetail
WHERE ProductID = @pid
OPTION (OPTIMIZE FOR UNKNOWN)

You will notice that it behaves the same as with local variables: it gets an estimated number of rows of 456.079 and produces the same plan, using a clustered index scan.

But now let us see how SQL Server obtains the value 456.079 and what the reasoning behind it is.

Density is defined as 1 / number of distinct values. The SalesOrderDetail table has 266 distinct values for ProductID, so the density is calculated as 1 / 266, or 0.003759399, as shown earlier in the statistics object. One assumption in the statistics mathematical model used by SQL Server is the uniformity assumption: since in this case SQL Server cannot use the histogram, the uniformity assumption says that, for any given value, the data distribution is the same. To obtain the estimated number of records, SQL Server multiplies the density by the current total number of records: 0.003759399 * 121,317, or 456.079, as shown in the plan. This is also the same as dividing the total number of records by the number of distinct values: 121,317 / 266, which also gives 456.079.
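The arithmetic is simple enough to verify directly; the distinct and total counts below are the AdventureWorks values used in this post:

DECLARE @distinct_values float = 266, @total_rows float = 121317
SELECT 1.0 / @distinct_values AS Density,              -- 0.003759399
       @total_rows / @distinct_values AS EstimatedRows -- 456.079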

Finally, I would conclude by saying that the benefit of using OPTIMIZE FOR UNKNOWN is that you always get the same execution plan; just make sure that the chosen plan benefits most of the instances of your query.