

High Performance SQL Server: Second Edition


I am very excited to announce that the second edition of my book, High Performance SQL Server, is now available on Amazon, and in this post I include the Introduction of the book, which explains what this book is about and describes the content of each of its eleven chapters. In summary, if you are familiar with the first edition, this second edition has more than 200 new pages and includes two new chapters, the first covering SQL Server on Linux and the second covering Intelligent Query Processing. All the other chapters have been updated and expanded to cover the new features and enhancements in both SQL Server 2017 and SQL Server 2019.

I’ve been blogging and presenting about query tuning and optimization for years. I even wrote a couple of books about this topic: Inside the SQL Server Query Optimizer and Microsoft SQL Server 2014 Query Tuning & Optimization. Query tuning and optimization are extremely important for the performance of your databases and applications.

Equally important is having a well-designed and configured system in the first place. SQL Server's default configuration can work fine for some applications, but mission-critical and high-performance applications demand thoughtful design and configuration. Well-written and tuned queries will not shine if the system is not properly configured. For example, queries will not use processor resources optimally if the maximum degree of parallelism setting is not configured correctly. Database performance will suffer if a database uses the default file autogrowth settings or if the storage is not properly configured. A misconfigured tempdb database may show contention on many busy systems. Even the query optimizer will struggle with a bad database design or badly written queries. These are just some of the configuration problems common in real production systems.
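As a quick illustration of the kind of instance-level setting involved, the maximum degree of parallelism can be inspected and changed with sp_configure. A minimal sketch; the value 8 is only a placeholder, since the right number depends on your hardware, NUMA layout, and workload:

```sql
-- Enable advanced options so 'max degree of parallelism' is visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- The value 8 is just an example; choose it based on the number of
-- logical processors and the NUMA configuration of your server
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```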

In addition, even when a well-designed application goes to production, performance tuning does not end there. Monitoring and troubleshooting are an extremely important part of an application and database life cycle, since performance problems will eventually arise. Workloads may change, hopefully for the better (e.g., an application having to deal with an unexpected increase in business transactions). Sometimes those changes will require a redesign or new configurations.

So this is, in fact, an iterative process: design and configuration, followed by implementation, monitoring, and troubleshooting, which in turn may lead to new designs or configurations, more monitoring, and so on. In addition, collecting performance data, creating a baseline, and performing trend analysis are an important part of a production implementation, not only to troubleshoot problems but also to anticipate issues and understand future growth and additional requirements. It is essential to estimate and trend those changes proactively instead of waking up to a system suddenly having trouble handling changing workloads or, even worse, facing a downtime that could have been avoided. There are several tools to help with this process, including the Query Store, a tool introduced with SQL Server 2016.

I spend a good part of my daily job working on all these items, so I decided to write a book about them. I wanted to cover everything you need to know about performance in SQL Server that does not require you to know about query tuning, work with execution plans, or “fight” the query optimizer. There are so many areas to cover, and more are being added as new features and technologies appear in SQL Server, such as In-Memory OLTP, columnstore indexes, and the aforementioned Query Store.

This book covers all currently supported versions of SQL Server, with a major focus on SQL Server 2019. Although this is a performance book from a practical point of view, understanding SQL Server internals is very important too. The best way to troubleshoot something is to know how it works and why things happen. Consequently, I focus on database engine internals where required.

Finally, this book complements my query tuning and optimization books. If you are a database developer or a SQL Server professional who cares about query performance, you can benefit from reading those books as well. If you are a database administrator, a database architect, or a system administrator and you want to improve the performance of your system without query tuning, you can read only this book.

Since implementing a high performance database server is an iterative process, consisting of design and configuration followed by implementation, monitoring, and performance troubleshooting, I have decided to separate the chapters of the book into four parts: SQL Server Internals, Design and Configuration, Monitoring, and Performance Tuning and Troubleshooting. I tried to assign each chapter to one of these areas, but some chapters inevitably overlap more than one category.

As mentioned earlier, understanding SQL Server internals is important for optimizing a system and troubleshooting database problems, so this book starts by explaining how the SQL Server database engine works, covering everything that happens in the system from the moment a connection is made to a database until a query is executed and the results are returned to the client. Chapter 1 includes topics such as the Tabular Data Stream (TDS) and network protocols used by SQL Server, SQLOS, and the work performed by the SQL Server relational engine, focusing on query processing and the most common query operators. A new chapter in this second edition, Chapter 2, covers SQL Server on Linux. Starting with SQL Server 2017, SQL Server is available on Linux and can also run in Docker containers. This chapter covers the SQL Server on Linux architecture and how the database engine works on this operating system.

Part 2, Design and Configuration, includes Chapters 3 and 4. Chapter 3 explains a number of instance-level configuration settings that can greatly impact the performance of your SQL Server implementation. As an interesting fact, it shows how some trace flags originally introduced to solve a particular problem are now implemented as SQL Server configuration defaults.

Chapter 4 covers tempdb configuration, which is especially important because this database is shared by all the user and system databases on a SQL Server instance. The chapter focuses on tempdb latch contention on allocation pages and on tempdb disk spilling, a performance issue that occurs when not enough memory is available for some query processor operations.
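A common mitigation for allocation-page latch contention is to use multiple, equally sized tempdb data files. A minimal sketch; the file names, sizes, and the T: path are placeholders for your own environment:

```sql
-- Pre-size the existing data file and avoid small autogrowth increments
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, SIZE = 8GB, FILEGROWTH = 1GB);

-- Add a second, equally sized data file; repeat as appropriate
-- for the number of logical processors on the instance
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = N'T:\tempdb2.ndf', SIZE = 8GB, FILEGROWTH = 1GB);
```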

Part 3, which focuses on monitoring SQL Server, covers analyzing wait statistics and the Query Store. Waits happen in a SQL Server instance all the time. Chapter 5 introduces the waits performance methodology, which can be used to troubleshoot performance problems, especially when other methods are not able to pinpoint a performance issue.
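As a starting point for the waits methodology, the accumulated wait statistics of an instance can be inspected with a query along these lines (in practice the list of benign wait types to filter out is much longer than the three shown here):

```sql
-- Top waits by accumulated wait time since the instance started
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       signal_wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')  -- benign waits
ORDER BY wait_time_ms DESC;
```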

Chapter 6 covers the Query Store, a very promising query performance feature introduced with SQL Server 2016. The Query Store can help you to collect query and plan information along with their runtime statistics, which you can use to easily identify query performance–related problems and even force an existing execution plan. The chapter closes by mentioning some related new features such as the Live Query Statistics and the SQL Server Management Studio plan comparison tool.

Part 4, Performance Tuning and Troubleshooting, includes the five remaining chapters of the book and covers topics such as in-memory technologies, indexing, intelligent query processing, and disk and storage.

In-memory technologies are introduced in Chapter 7 and include In-Memory OLTP and columnstore indexes. Both features suffered severe limitations in their original releases, so this chapter covers how these technologies work and how they have since been improved. The chapter ends with operational analytics, which combines both technologies to allow analytical queries to be executed in real time on an OLTP system. In-memory technologies promise to be the future of relational database technologies.

Chapter 8 shows how proactively collecting and persisting performance information can be extremely beneficial for understanding how a specific system works, creating a baseline, and detecting when performance deviates from a desirable or expected behavior. The chapter also covers the most critical performance counters, dynamic management objects, and events, along with some of the tools used to display and collect such data. This chapter clearly overlaps with Part 3, Monitoring.

Indexing, a required topic for database performance, is covered in Chapter 9, which explains how indexes work and why they are important in both OLTP and data warehouse environments. The chapter emphasizes using SQL Server tools that help create indexes, such as the missing indexes feature and the more sophisticated Database Engine Tuning Advisor.
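For example, the missing indexes feature mentioned above exposes its suggestions through a set of dynamic management objects, which can be queried along these lines; treat the output as suggestions to validate, not indexes to create blindly:

```sql
-- Missing index suggestions, roughly ordered by potential benefit
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g
    ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats s
    ON g.index_group_handle = s.group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;
```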

Another new chapter in this second edition of the book, Chapter 10, provides an overview of intelligent query processing, a collection of features introduced with SQL Server 2017 and designed to improve the performance of your queries. These features provide performance enhancements with no application changes and little or no effort required.

Finally, SQL Server storage is explained in Chapter 11. Disk has traditionally been the slowest part of a database system, but newer technologies such as flash-based storage offer great performance improvements and are becoming a de facto enterprise standard as their cost continues to decline. The chapter also points out that storage optimization is not only about using the fastest disk possible but also about minimizing disk usage by applying the methods covered in several chapters of the book, such as proper indexing or some query-tuning techniques.


The SQL Server Query Store

Have you ever found a plan regression after a SQL Server upgrade and wanted to know what the previous execution plan was? Have you ever had a query performance problem because a query unexpectedly got a new execution plan? At the last PASS Summit, Conor Cunningham unveiled a new SQL Server feature that can be helpful in solving performance problems related to these and other changes in execution plans.

This feature, called the Query Store, can help you with performance problems related to plan changes and will be available soon in SQL Azure and later in the next version of SQL Server. Although it is expected to be available in the Enterprise Edition of SQL Server, it is not yet known if it will be available in Standard or any other editions. To understand the benefits of the Query Store, let me talk briefly about the query troubleshooting process.

Why is a Query Slow?

Once you have detected that a performance problem is caused by a slow query, the next step is to find out why. Obviously, not every problem is related to plan changes; there could be multiple reasons why a query that has been performing well is suddenly slow. Sometimes it is related to blocking or to a problem with other system resources. Something else may have changed, but the challenge is to find out what. Many times we don't have a baseline of system resource usage, query execution statistics, or performance history, and usually we have no idea what the old plan was. It may be the case that some change, for example in data, schema, or query parameters, made the query processor produce a new plan.

Plan Changes

At the session, Conor used the Picasso Database Query Optimizer Visualizer tool, although he didn't mention it by name, to show why plans for the same query change, and explained that different plans can be selected for the same query based on the selectivity of its predicates. He even mentioned that the query optimizer team uses this tool, which was developed by the Indian Institute of Science. An example of the visualization:

Picasso Database Query Optimizer Visualizer


Each color in the diagram is a different plan, and each plan is selected based on the selectivity of the predicates. An important fact is that when a boundary is crossed in the graph and a different plan is selected, most of the time the cost and performance of both plans should be similar, as the selectivity or estimated number of rows changed only slightly. This could happen, for example, when a new row that qualifies for the predicate is added to a table. However, in some cases, mostly due to limitations in the query optimizer's cost model that leave it unable to model something correctly, the new plan can have a large performance difference compared to the previous one, creating a problem for your application. By the way, the plans shown in the diagram are the final plans selected by the query optimizer; don't confuse them with the many alternatives the optimizer had to consider in order to select each one.

An important fact, in my opinion, which Conor didn't cover directly, was the change of plans due to regressions after cumulative updates (CUs), service packs, or version upgrades. A major concern that comes to mind with changes inside the query optimizer is plan regressions. The fear of plan regressions has been considered the biggest obstacle to query optimizer improvements. Regressions are problems introduced after a fix has been applied to the query optimizer, sometimes referred to as the classic “two or more wrongs make a right.” This can happen when, for example, two bad estimations, one overestimating a value and the second one underestimating it, cancel each other out, luckily producing a good estimate. Correcting only one of these values may now lead to a bad estimation that negatively impacts plan selection, causing a regression.

What Does the Query Store Do?

Conor mentioned that the Query Store can help with the following:

  1. Store the history of query plans in the system;
  2. Capture the performance of each query plan over time;
  3. Identify queries that have “gotten slower recently”;
  4. Allow you to force plans quickly; and,
  5. Make sure this works across server restarts, upgrades, and query recompiles.

So this feature not only stores the plans and related query performance information, but can also help you to easily force an old query plan, which in many cases can solve a performance problem.

How to Use the Query Store

You need to enable the Query Store by using the ALTER DATABASE CURRENT SET QUERY_STORE = ON; statement. I tried it in my current SQL Azure subscription, but the statement returned an error as it seems that the feature is not available yet. I contacted Conor and he told me that the feature will be available soon.

Once the Query Store is enabled, it will start collecting the plans and query performance data and you can analyze that data by looking at the Query Store tables. I can currently see those tables on SQL Azure but, since I was not able to enable the Query Store, the catalogs returned no data.

You can analyze the collected information either proactively, to understand the query performance changes in your application, or retroactively, when you have a performance problem. Once you identify the problem, you can use traditional query tuning techniques to try to fix it, or you can use the sp_query_store_force_plan stored procedure to force a previous plan. The plan has to be captured in the Query Store to be forced, which obviously means it is a valid plan (at least it was when it was collected; more on that later) and was generated by the query optimizer before. To force a plan you need the plan_id, available in the sys.query_store_plan catalog. Once you look at the different metrics stored, which are very similar to what is stored, for example, in sys.dm_exec_query_stats, you can decide to optimize for a specific metric, like CPU or I/O. Then you can simply use a statement like this:

EXEC sys.sp_query_store_force_plan @query_id = 1, @plan_id = 1;

This is telling SQL Server to force plan 1 on query 1. Technically you could do the same thing using a plan guide, but it would be more complicated and you would have to manually collect and find the required plan in the first place.
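To find the query_id and plan_id values to pass to the procedure, you can join the Query Store catalogs; a sketch, assuming the catalog and column names documented for the feature:

```sql
-- Queries and their plans, worst average duration first
SELECT qt.query_sql_text,
       q.query_id,
       p.plan_id,
       rs.avg_duration
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p
    ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs
    ON p.plan_id = rs.plan_id
ORDER BY rs.avg_duration DESC;
```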

How Does the Query Store Work?

Forcing a plan actually uses the plan guide mechanism in the background. Conor mentioned that “when you compile a query, we implicitly add a USE PLAN hint with the fragment of the XML plan associated with that statement.” So you no longer need to use a plan guide. Also keep in mind that, as when using a plan guide, you are not guaranteed to get exactly the forced plan, but at least something similar to it. For a reminder of how plan guides work, take a look at this article. In addition, you should be aware that there are some cases where forcing a plan does not work, a typical example being when the schema has changed, i.e., when a stored plan uses an index that no longer exists. In this case SQL Server cannot force the plan; it will perform a normal optimization and record the fact that the plan-forcing operation failed in the sys.query_store_plan catalog.


Every time SQL Server compiles or executes a query, a message is sent to the Query Store. This is shown next.


Query Store Workflow Overview

The compile and execution information is kept in memory first and then saved to disk, depending on the Query Store configuration (the data is aggregated according to the INTERVAL_LENGTH_MINUTES parameter, which defaults to one hour, and flushed to disk according to the DATA_FLUSH_INTERVAL_SECONDS parameter). The data can also be flushed to disk if there is memory pressure on the system. In any case you will be able to access all of the data, both in memory and on disk, when you query the sys.query_store_runtime_stats catalog.
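Putting the configuration pieces together, a minimal sketch for enabling the Query Store and adjusting those two parameters; the values shown are only examples:

```sql
ALTER DATABASE CURRENT SET QUERY_STORE = ON;
ALTER DATABASE CURRENT SET QUERY_STORE
    (DATA_FLUSH_INTERVAL_SECONDS = 900,   -- flush to disk every 15 minutes
     INTERVAL_LENGTH_MINUTES = 60);       -- aggregate statistics per hour

-- Verify the current Query Store configuration
SELECT actual_state_desc, flush_interval_seconds, interval_length_minutes
FROM sys.database_query_store_options;
```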


The collected data is persisted on disk and stored in the user database where the Query Store is enabled (its settings are stored in sys.database_query_store_options). The Query Store catalogs are:

sys.query_store_query_text – Query text information
sys.query_store_query – Query text plus the plan-affecting SET options used
sys.query_store_plan – Execution plans, including history
sys.query_store_runtime_stats – Query runtime statistics
sys.query_store_runtime_stats_interval – Start and end times for intervals
sys.query_context_settings – Query context settings information

Query Store views

Runtime statistics capture a whole slew of metrics, including the average, last, min, max, and standard deviation. Here is the full set of columns for sys.query_store_runtime_stats:

runtime_stats_id plan_id runtime_stats_interval_id
execution_type execution_type_desc first_execution_time last_execution_time count_executions
avg_duration last_duration min_duration max_duration stdev_duration
avg_cpu_time last_cpu_time min_cpu_time max_cpu_time stdev_cpu_time
avg_logical_io_reads last_logical_io_reads min_logical_io_reads max_logical_io_reads stdev_logical_io_reads
avg_logical_io_writes last_logical_io_writes min_logical_io_writes max_logical_io_writes stdev_logical_io_writes
avg_physical_io_reads last_physical_io_reads min_physical_io_reads max_physical_io_reads stdev_physical_io_reads
avg_clr_time last_clr_time min_clr_time max_clr_time stdev_clr_time
avg_dop last_dop min_dop max_dop stdev_dop
avg_query_max_used_memory last_query_max_used_memory min_query_max_used_memory max_query_max_used_memory stdev_query_max_used_memory
avg_rowcount last_rowcount min_rowcount max_rowcount stdev_rowcount

Columns in sys.query_store_runtime_stats

This data is captured only when query execution ends. The Query Store also considers the query's SET options, which can impact the choice of an execution plan, as they affect things like the results of evaluating constant expressions during the optimization process. I cover this topic in a previous post.
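To see how a given plan has performed over time, you can join the runtime statistics with their intervals, using the columns listed above; plan_id = 1 is just an example value:

```sql
-- Per-interval runtime statistics for one plan
SELECT i.start_time,
       i.end_time,
       rs.count_executions,
       rs.avg_duration,
       rs.avg_logical_io_reads
FROM sys.query_store_runtime_stats rs
JOIN sys.query_store_runtime_stats_interval i
    ON rs.runtime_stats_interval_id = i.runtime_stats_interval_id
WHERE rs.plan_id = 1
ORDER BY i.start_time;
```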


This will definitely be a great feature and something I'd like to try as soon as possible (by the way, Conor's demo showed “SQL Server 15 CTP1,” but those bits are not publicly available). The Query Store can be useful for upgrades, whether a CU, a service pack, or a SQL Server version, as you can analyze the information collected by the Query Store before and after the upgrade to see if any query has regressed. (And if the feature is available in lower editions, you could even do this in a SKU upgrade scenario.) Knowing this can help you take specific action depending on the problem, and one of those solutions could be to force the previous plan, as explained before.

SQL Server 2014 Query Tuning & Optimization Book Code

Today I was finally able to finish extracting the code of my new book, Microsoft SQL Server 2014 Query Tuning & Optimization, after a few readers requested it recently; I had been busy traveling over the last few weeks. The book contains a large number of SQL queries, all of which are based on the AdventureWorks2012 database, except for Chapter 9, which uses the AdventureWorksDW2012 database. All the code has been tested on SQL Server 2014. Note that these sample databases are not included by default in your SQL Server installation but can be downloaded from the CodePlex website.

To download the code, click here: Query Tuning and Optimization Code.