Query Store – next generation tool for every DBA

Along with the release of SQL Server 2016 CTP 3 comes a preview of a brand new feature for on-premises databases – the Query Store. This feature enables performance monitoring and troubleshooting through a log of executed queries.

This blogpost will cover the aspects of this new feature including:

  • Introduction
  • How to activate it
  • Configuration options
  • What information is found in the Query Store
  • How to use the feature
  • What’s in it for me


The new Query Store feature provides everyone responsible for SQL Server performance and troubleshooting with insight into the actual queries and their query plans. It replaces the old way of setting up tracing, logging and event handling with a standard, out-of-the-box feature.

It enables you to find causes of performance differences due to changes in query plans. It also captures historic data – queries, plans and runtime statistics – and keeps these for later review, divided into configured time slots.

All in all, this feature enables you to monitor, capture and analyze performance issues on the server with a few standard settings.

How to activate it

The feature can be enabled in two ways – from SSMS with mouse clicks or from T-SQL statements.

Enable Query Store from Management Studio

From the object explorer window, right click the database and select Properties.

Here, click the Query Store page and change 'Enable' to True:


Enable Query Store from T-SQL statement

In a new query window the following statement enables Query Store on the database ‘QueryStoreDB’:
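A minimal form of that statement (standard syntax; no options specified, so the defaults apply):

```sql
-- Enable the Query Store for the QueryStoreDB database
ALTER DATABASE QueryStoreDB
SET QUERY_STORE = ON;
```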


Configuration options

The Query Store has a series of configuration options. All of them can be set from the SQL Server Management Studio with clicks or with T-SQL Statements.

OPERATION_MODE – This can be READ_WRITE or READ_ONLY and states whether the Query Store collects new data (READ_WRITE) or stops collecting and only holds the current data (READ_ONLY).

CLEANUP_POLICY – Specifies through the STALE_QUERY_THRESHOLD_DAYS the number of days for the query store to retain data.

DATA_FLUSH_INTERVAL_SECONDS – Specifies how often data collected by the Query Store is asynchronously persisted to disk.

MAX_STORAGE_SIZE_MB – Sets the maximum size of the total data in the Query Store. If and when the limit is reached, the OPERATION_MODE is automatically changed to READ_ONLY and no more data is collected.

INTERVAL_LENGTH_MINUTES – Sets the fixed time window over which runtime execution statistics are aggregated.

SIZE_BASED_CLEANUP_MODE – When the data in the Query Store gets close to the configured number in MAX_STORAGE_SIZE_MB this option can control the automatic cleanup process.

QUERY_CAPTURE_MODE – Controls whether the Query Store captures all queries, or only relevant queries based on execution count and resource consumption.

MAX_PLANS_PER_QUERY – The maximum number of execution plans maintained for queries.

From SQL Server Management Studio the window looks like below when the Query Store is enabled. At the bottom of this window you can also see the current disk usage.

The T-SQL syntax for setting the Query Store options is as follows:

ALTER DATABASE <database name>
SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
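As a fuller sketch, all of the options described above can be set in one statement – the database name and the option values below are illustrative, not recommendations:

```sql
-- Illustrative values only; tune these to your workload
ALTER DATABASE QueryStoreDB
SET QUERY_STORE
(
    OPERATION_MODE              = READ_WRITE,
    CLEANUP_POLICY              = (STALE_QUERY_THRESHOLD_DAYS = 30),
    DATA_FLUSH_INTERVAL_SECONDS = 900,
    MAX_STORAGE_SIZE_MB         = 100,
    INTERVAL_LENGTH_MINUTES     = 60,
    SIZE_BASED_CLEANUP_MODE     = AUTO,
    QUERY_CAPTURE_MODE          = AUTO,
    MAX_PLANS_PER_QUERY         = 200
);
```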

What information can be found in the Query Store

Specific queries in SQL Server normally have evolving execution plans over time, due to e.g. schema changes, changes in statistics, indexes etc. The plan cache also evicts execution plans under memory pressure. The result is that query performance troubleshooting can be non-trivial and time consuming to resolve.

The Query Store retains multiple execution plans per query. It can therefore be used to enforce a certain execution plan for a specific query. This is called plan forcing (see below for the stored procedure to do this).

Prior to SQL Server 2016 the query hint 'USE PLAN' was used for this, but now it is a fairly easy task to enforce a specific execution plan on the query processor.
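The plan forcing itself is done with the sp_query_store_force_plan system procedure – the ids below are placeholders you would first look up in the Query Store views:

```sql
-- Force a specific plan for a query (ids are placeholders)
EXEC sp_query_store_force_plan @query_id = 48, @plan_id = 49;

-- Undo the forcing again
EXEC sp_query_store_unforce_plan @query_id = 48, @plan_id = 49;
```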

More scenarios for using the Query Store:

  • Find and fix queries that have a regression in performance due to plan changes
  • Overview of how often and in which context a query has been executed, helping the DBA on performance tuning tasks
  • Overview of the historic plan changes for a given query
  • Identify top n queries (by time, CPU time, IO etc.) in the past x hours
  • Analyze the use of resources (IO, CPU and memory)

The Query Store contains two stores – a plan store and a runtime stats store. The Plan Store persists the execution plan information and the Runtime Stats Store persists the execution statistics information. Information is written to the two stores asynchronously to optimize performance.

The space used to hold the runtime execution information can grow over time, so the data is aggregated over a fixed time window as per setting made in the configuration.

When Query Store is enabled in the database, a set of system views is ready to be queried:

  • sys.query_store_query_text
  • sys.query_store_query
  • sys.query_store_plan
  • sys.query_store_runtime_stats
  • sys.query_store_runtime_stats_interval
  • sys.query_context_settings
  • sys.database_query_store_options

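The current configuration and disk usage can be checked directly from the sys.database_query_store_options view:

```sql
-- State, configuration and space usage of the Query Store
SELECT actual_state_desc,
       desired_state_desc,
       current_storage_size_mb,
       max_storage_size_mb,
       readonly_reason
FROM sys.database_query_store_options;
```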
Furthermore, a series of system stored procedures can be called:

  • sp_query_store_flush_db
  • sp_query_store_force_plan
  • sp_query_store_unforce_plan
  • sp_query_store_remove_plan
  • sp_query_store_remove_query
  • sp_query_store_reset_exec_stats

How to use Query Store

The Query Store comes with 4 standard reports as shown below:

All standard reports can be modified in several ways to fit your personal needs. This is done via drop-down selections and point-and-click.

The Regressed Queries report gives an overview of the top 25 most resource consuming queries in the last hour, including the execution plan, a time table showing when and how long the query ran, etc.

The Overall Resource Consumption report shows 4 charts as standard, based on duration, execution count, CPU time and logical reads.

The Top Resource Consuming Queries report uses the same format as Regressed Queries, only non-aggregated and with more details.

The Tracked Queries report shows detailed data for a selected query – here you need to find and remember the query id. This can be found, among other ways, with the queries below against the Query Store system views.

The data from the Query Store can be accessed from the above described system views. Examples of usage can be found below.

Top 5 queries with the longest average execution time in the last hour

SELECT TOP 5 qt.query_sql_text, q.query_id, 
    qt.query_text_id, p.plan_id, rs.avg_duration
FROM sys.query_store_query_text AS qt 
   JOIN sys.query_store_query AS q 
      ON qt.query_text_id = q.query_text_id 
   JOIN sys.query_store_plan AS p 
      ON q.query_id = p.query_id 
   JOIN sys.query_store_runtime_stats AS rs 
      ON p.plan_id = rs.plan_id
WHERE rs.last_execution_time > DATEADD(hour, -1, GETUTCDATE())
ORDER BY rs.avg_duration DESC;

Last 10 queries executed in the database

SELECT TOP 10 qt.query_sql_text, q.query_id, 
    qt.query_text_id, p.plan_id, rs.last_execution_time
FROM sys.query_store_query_text AS qt 
JOIN sys.query_store_query AS q 
    ON qt.query_text_id = q.query_text_id 
JOIN sys.query_store_plan AS p 
    ON q.query_id = p.query_id 
JOIN sys.query_store_runtime_stats AS rs 
    ON p.plan_id = rs.plan_id
ORDER BY rs.last_execution_time DESC;

Queries with more than one execution plan

SELECT q.query_id, qt.query_sql_text, p.plan_id
    ,p.query_plan AS plan_xml
FROM (SELECT COUNT(*) AS plan_count, q.query_id 
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
GROUP BY q.query_id
HAVING COUNT(distinct plan_id) > 1) AS qm
JOIN sys.query_store_query AS q
    ON qm.query_id = q.query_id
JOIN sys.query_store_plan AS p
    ON q.query_id = p.query_id
JOIN sys.query_store_query_text qt 
    ON qt.query_text_id = q.query_text_id
ORDER BY q.query_id, p.plan_id;

What’s in it for me

Well I hope that the answer to this is pretty obvious to you after you have read this post 🙂

The Query Store enables any person responsible for database performance to monitor, analyze and keep track of queries, execution plans and resource usage through system views or standard reports from the SQL Server Management Studio.


This new Query Store feature is a great add-on for the DBA (or accidental DBA) who needs analytical data in a standard form and query statistics available for troubleshooting.

This blogpost is based on the latest CTP of SQL Server 2016 (CTP 3.0) which can be downloaded here:


Many-to-many in SSAS Tabular


With the release of SQL Server 2016 CTP 3.0 also comes the ability to test the new functionality of Many-to-Many in SSAS Tabular.

This blogpost will cover the aspects of the many-to-many feature from SQL Server 2016 – including:

  • Prerequisites
  • The old way
  • The new way

This post is based on data from the AdventureWorksDW2012 database.


In order to test the new many-to-many feature from SQL Server 2016 SSAS Tabular you’ll need to download the latest CTP from Microsoft – it can be found here:


Also you’ll need the Visual Studio 2015 and the add-in for Business Intelligence:


Choose the SSDT October 2015 Preview in Visual Studio for download.

After a bit of waiting for the installation, you are ready to test the functionality.

The old way

Before showing the new (and, to me, right) way to do many-to-many in SSAS Tabular, let me first show you how it was done prior to SQL Server 2016 CTP 3.0.

Thanks to the two brilliant guys from SQLBI, Marco Russo (T, L) and Alberto Ferrari (T, L), we've had the approach below for quite a while now.

First of all, you need to build a bridge table with the column that links the two tables, and build a model like the one illustrated below.

The m2mKey is a concatenation of SalesOrderNumber and SalesOrderLineNumber, as Tabular still does not have the ability to join on two columns at the same time.
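As a sketch of how that bridge could be built on the relational side – using AdventureWorksDW2012's FactInternetSalesReason table, with '-' as an assumed separator:

```sql
-- Hypothetical source view for the bridge table.
-- FactInternetSalesReason holds one row per order line per sales reason.
CREATE VIEW dbo.vBridgeSalesReason AS
SELECT SalesOrderNumber
         + '-' + CAST(SalesOrderLineNumber AS varchar(5)) AS m2mKey,
       SalesReasonKey
FROM dbo.FactInternetSalesReason;
```

A matching m2mKey column is of course needed on the fact table side as well, so the bridge can relate to it.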


Back then, all measures that needed to take DimSalesReason into account had to be rewritten with some DAX code:

Sum of UnitPrice:=CALCULATE(SUM([UnitPrice]);vBridgeSalesReason)

Then the output will look something like this:


The new way

With the CTP 3.0 release and the SSDT add-on for Visual Studio 2015, this gets as easy as 1-2-3.

First of all, it is now possible to build a data model directly, without any bridge tables, like this:


Note the highlighted area – here you can see the many-to-many relationship. This is modelled when creating the relationship in the model like this:


Remember to set the Filter Direction to << To Both Tables >>.

And that is it!

The result without doing DAX formulas: