In today’s installment of the SQL Server Performance Tuning series we talk about TempDb - the public toilet of SQL Server, as I like to describe that special database. Every one of us uses TempDb regularly. Some people use it directly, some people use it indirectly. Today I want to give you an overview of TempDb usage in SQL Server, and I will give you some best practices on how to configure TempDb for better performance.
Every one of us uses TempDb. That’s a fact we have to live with. Therefore it is also very important to configure TempDb accordingly - to get good performance out of it. TempDb stores the following kinds of objects:
- User Objects
- Internal Objects
- Version Store
Let’s have a more detailed look at these objects. When we talk about user objects, we talk about temp tables, table variables, and table valued functions. Temp tables come in two flavors in SQL Server: local ones and global ones. Local temp tables are created with the prefix “#” and are scoped to the session that created them. As soon as you close your session, the local temp table is also deallocated in TempDb. The nice thing about local temp tables is that you can overcome locking & blocking problems with them, because every session gets its own, dedicated temp table.
There are also global temp tables in SQL Server, which are created with the prefix “##”. These temp tables are accessible across all sessions, because they are created globally. And finally SQL Server provides you with table variables, which are also persisted physically in TempDb, and are scoped to the batch in which you have defined the table variable. It’s a misconception that table variables are in-memory constructs in SQL Server. They are always persisted in TempDb. Tables returned from table valued functions are also always persisted in TempDb. Therefore it’s also very important to configure TempDb accordingly when using all these various objects in SQL Server.
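The three kinds of user objects can be sketched in T-SQL like this (the table and column names are made up for illustration):

```sql
-- Local temp table: visible only to the creating session,
-- deallocated in TempDb when the session closes
CREATE TABLE #LocalOrders (OrderID INT, Amount MONEY);

-- Global temp table: visible to all sessions until the creating
-- session closes and no other session still references it
CREATE TABLE ##GlobalOrders (OrderID INT, Amount MONEY);

-- Table variable: scoped to the current batch,
-- but still allocated physically in TempDb
DECLARE @Orders TABLE (OrderID INT, Amount MONEY);
INSERT INTO @Orders VALUES (1, 100.00);
```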
Besides you - as a developer or DBA - SQL Server itself always uses TempDb for storing internally created objects. When you run consistency checks with DBCC CHECKDB or DBCC CHECKTABLE, SQL Server allocates work tables in TempDb. Sort or hash operations in execution plans that spill over are also persisted physically in TempDb. When you work with cursors, or even with Service Broker, you consume space in TempDb. If you rebuild indexes with the option SORT_IN_TEMPDB, you use TempDb. TempDb is used just about everywhere in SQL Server!
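For example, an index rebuild can be told to do its intermediate sort work in TempDb instead of the user database (the index and table names here are just placeholders):

```sql
-- Rebuild an index and perform the intermediate sort in TempDb
-- instead of inside the user database
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (SORT_IN_TEMPDB = ON);
```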
In addition to internal objects, SQL Server also supports a so-called Version Store, which is used when you work with optimistic concurrency in SQL Server, or when you perform online index operations. Internally SQL Server divides the Version Store into two distinct stores: a Common Version Store for triggers, Snapshot Isolation, Read Committed Snapshot Isolation, and Multiple Active Result Sets (MARS), and an Online Index Rebuild Version Store that is used by online index operations.
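Optimistic concurrency - and with it the Version Store in TempDb - is enabled per database, for example like this (the database name is a placeholder):

```sql
-- Read Committed Snapshot Isolation: readers get the last committed
-- row version from the Version Store instead of blocking on writers
ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON;

-- Alternatively, allow sessions to explicitly request Snapshot Isolation
ALTER DATABASE SalesDb SET ALLOW_SNAPSHOT_ISOLATION ON;
```

Keep in mind that as soon as one of these options is enabled, every data modification in that database generates row versions in TempDb.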
Running TempDb in the default configuration isn’t really a good idea. The default configuration of TempDb gives you just one data file, and one transaction log file. The data file has an initial size of 8 MB, and 1 MB for the transaction log in SQL Server 2014. Both files are set to an auto growth factor of 10%. This configuration leads to several problems:
- Too many auto growth operations over time
- Log file fragmentation
- Latch contention
Let’s have a more detailed look at these problems. With the default initial size of 8 MB, your TempDb will have to grow over time with expensive auto growth operations. If you know that your TempDb needs x MBs in size, you should set that as the initial size, because during startup of SQL Server, TempDb is always recreated from the model database. That way you can avoid the auto growth operations. If you rely on an auto growth setting, you should also use a fixed size instead of a percentage value. This allows you to estimate how long an auto growth operation will take. With a percentage value, the operation takes longer and longer over time, because the growth amount depends on the current size of your file.
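Setting the initial size and a fixed auto growth increment could look like this (the logical file names tempdev and templog are the defaults, and the sizes are just placeholders - check sys.master_files for your actual file names and pick sizes based on your workload):

```sql
-- Give TempDb a sensible initial size and a fixed growth increment
-- instead of the default 8 MB / 10% configuration
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 1024MB, FILEGROWTH = 256MB);
```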
You also have to size the transaction log of TempDb carefully, because auto growth operations are very expensive there. SQL Server can’t use Instant File Initialization for any transaction log. This means that your database can’t accept transactions during the auto growth of the transaction log. Auto growth operations on the transaction log are simply a no-go for performance-critical systems.
And finally you can also hit Latch Contention problems in TempDb, because by default there is only one data file available. When SQL Server allocates new objects in TempDb, SQL Server has to read special allocation pages (SGAM, GAM, PFS). These pages must be latched during reading and writing. Therefore you can get contention problems on these hot pages in TempDb when you run a workload that is highly dependent on TempDb.
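You can spot this kind of contention by looking for PAGELATCH waits on pages in TempDb (database_id 2), for example with a sketch like this:

```sql
-- Sessions currently waiting on page latches in TempDb.
-- resource_description has the form database:file:page, so
-- a leading '2:' means the page belongs to TempDb.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
```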
The solution to that problem is to use multiple data files for TempDb, because SQL Server will then use a round-robin allocation algorithm across the data files, which also reduces the latch contention problems. If you use multiple data files, you also have to make sure to set the initial size (and a possible auto growth factor) to the same values, so that the files grow at the same time.
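Adding an additional, equally sized data file could look like this (the file path and sizes are placeholders; a common guideline is to start with one data file per CPU core, up to eight, and measure from there):

```sql
-- Add a second data file with the same size and growth settings
-- as the first, so allocations are distributed round-robin
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDb\tempdev2.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);
```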
In today’s installment of the SQL Server Performance Tuning series we have talked about a special database in SQL Server - TempDb. As you have seen, everyone always uses TempDb in SQL Server - directly or indirectly. Therefore it is very important to size and plan TempDb accordingly. In the second section I gave you some advice on how to configure TempDb. In one week you will receive the final installment of the training plan, where I will talk about database maintenance. Have fun in the meantime!