Week 19: Lock Escalation

Here we are with yet another installment of this series. If you are still with me, there are good things on the agenda. This time I write a bit about Lock Escalation, one of the phenomena you can run into on SQL Server. A so-called lock hierarchy is built deep into the belly of the SQL Server engine.

The Lock Hierarchy

When I wrote about pessimistic concurrency, I also wrote that SQL Server uses Shared and Exclusive Locks at row level. Unfortunately, that is only part of the picture. The complete picture requires that you also look at the granularity at which locks are taken. Locks can be taken at every level from database down to row.


As soon as a database is taken into use, the session against the database is granted a Shared Lock on the database itself. This lock means that no one can drop the database or restore it from a backup as long as the lock is active. Those operations are simply blocked by the active session. In addition to the Shared and Exclusive Locks at row level, there are also so-called Intent Locks at table and page level.

  • A Shared Lock at row level is accompanied by an (IS) Intent Shared Lock on the table and the page the row belongs to
  • An Exclusive Lock at row level is accompanied by an (IX) Intent Exclusive Lock on the table and the page the row belongs to

Intent Locks are used to signal that an incompatible lock has already been granted at a lower level of the lock hierarchy. They are mostly a performance optimization for the relational engine. Without them, the Lock Manager (an internal service in SQL Server) would have to check every lock at the lower levels to ensure that a lock at a higher level can be granted at all. If, for example, an IX Lock exists on a table, an X Lock cannot be granted on the same table, because some rows in it are already locked exclusively.
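
You can observe this hierarchy yourself through the sys.dm_tran_locks DMV. A minimal sketch, assuming a table dbo.MyTable with an Id column and some column to update (the names are illustrative):

BEGIN TRANSACTION;

UPDATE dbo.MyTable SET SomeColumn = 42 WHERE Id = 1;

-- Expect an X lock on the row (KEY or RID), IX locks on the page and the table,
-- and an S lock on the database for the session itself.
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;

ROLLBACK;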

Unfortunately, this multi-level locking approach also costs resources on the SQL Server. Each individual lock "costs" 96 bytes of the SQL Server's memory (RAM). It is therefore important for SQL Server that queries take as few locks as at all possible.

Lock Escalations

An example always helps the telling, so picture the following scenario: an UPDATE statement that hits 1 million rows spread over 200,000 data pages. In this example SQL Server must take 1 million locks on the rows, 200,000 IX locks on the data pages and a single IX lock on the table. These 1,200,001 locks alone take up roughly 110 MB of memory in the Lock Manager – for this one query. That kind of memory consumption is undesirable, which is why SQL Server has an extra feature, Lock Escalation, that kicks in at more than 5,000 locks on one level – here at row level. SQL Server then escalates the individual row locks to one coarser-grained lock on the table:

  • Individual X locks are escalated to 1 table-level X lock
  • Individual S locks are escalated to 1 table-level S lock

The illustration below shows what the locks look like before and after Lock Escalation.

Memory consumption is brought down considerably by this approach, but it also affects the possibilities for parallel queries against the database. An X lock on a table means that no one else can read from or write to that table – an S lock on a table means that the table is read-only. The query throughput of the database drops.

SQL Server triggers Lock Escalation when there are more than 5,000 locks on the same level. This threshold is hard-coded into the belly of SQL Server and unfortunately cannot be changed through configuration or hacks. Since SQL Server 2008 it has, however, been possible to control Lock Escalation somewhat through an ALTER TABLE DDL statement.

ALTER TABLE MyTableName
SET
(
   LOCK_ESCALATION = TABLE -- or AUTO or DISABLE
)

By default, lock escalation from rows always goes up to the table (LOCK_ESCALATION = TABLE as above). If you set it to AUTO, SQL Server escalates to partition level instead – but be careful with this option, as it can lead to extra deadlocks if the partitions are accessed in arbitrary order. The DISABLE option removes lock escalation on the table entirely – with the increased RAM consumption mentioned earlier. The good question is then how to most efficiently run an UPDATE or DELETE query against more than 5,000 rows in a table.

Here are 3 suggestions:

  1. Update or delete the rows in smaller batches of fewer than 5,000 rows – e.g. through a WHILE loop, as sketched below
  2. Use partition switching – if the table is partitioned
  3. Temporarily set LOCK_ESCALATION = DISABLE – but watch the RAM consumption in that period
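
A minimal sketch of suggestion 1 with a DELETE, assuming a table dbo.MyLargeTable and an illustrative predicate. Batches of 4,000 rows stay below the 5,000-lock threshold, and in autocommit mode each batch is its own transaction, so the locks are released between batches:

DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (4000) FROM dbo.MyLargeTable
    WHERE CreatedDate < '2015-01-01';

    SET @rows = @@ROWCOUNT; -- 0 when no matching rows are left
END;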

Summary – Lock Escalation

Lock Escalation is a kind of safety net in SQL Server. With good reason, but it also introduces a number of side effects. It is therefore important to think carefully when writing T-SQL code that handles more than 5,000 rows in the same transaction. Perhaps the work should be done in smaller batches instead of one big UPDATE or DELETE query.

Next week I continue on locks and blocking in SQL Server and take on deadlocks – what they are and how SQL Server works with them.

Remember to sign up for the mailing list below so you don't miss the next installment of this blog series. I promise to only send you an email when there is news in this series.

Week 17: Pessimistic Concurrency

The 5th month starts out with locking, blocking and deadlocking on SQL Server.

SQL Server uses two models to define how parallel queries are executed. By that I don't mean a single query executed across multiple threads, but two different queries executing at the same time.

In this post I will give you an overview of the isolation levels that belong to the pessimistic model (pessimistic concurrency) for parallel queries. Next week comes a walkthrough of the optimistic model.

Pessimistic Isolation Levels

Pessimistic concurrency means that SELECT queries block INSERT, UPDATE and DELETE queries, and vice versa. SQL Server uses locks for this purpose.

  • SELECT queries are granted Shared Locks (S)
  • INSERT, UPDATE and DELETE queries are granted Exclusive Locks (X)

The two lock types are incompatible with each other. This means that you cannot read from and write to the same row at the same time. If this is attempted, a so-called blocking scenario occurs. When you set a specific isolation level on a transaction, you directly influence how long SELECT queries hold on to their S locks. You cannot influence X locks – they are always granted to INSERT, UPDATE and DELETE.

By default, all queries are executed in the isolation level Read Committed. Read Committed means that SQL Server takes an S lock on a row for the time it takes to read it. As soon as the row has been read and processed, the S lock is released immediately. With a Scan operator (single-threaded), only 1 S lock is held at any given time. Because of this, a subsequent transaction can change the row right afterwards. If you then read the same row again within the same transaction, a so-called Non-Repeatable Read occurs: you read the same row several times and receive different values each time.
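
A minimal sketch of a non-repeatable read, assuming a table dbo.Accounts (names and values are illustrative):

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

BEGIN TRANSACTION;

SELECT Balance FROM dbo.Accounts WHERE AccountId = 1; -- returns e.g. 100

-- Another session can update the row at this point, because the S lock
-- was released as soon as the row had been read.

SELECT Balance FROM dbo.Accounts WHERE AccountId = 1; -- may return e.g. 200

COMMIT;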

If you want a different approach to reading data, you can use a stricter isolation level such as Repeatable Read. This isolation level makes it possible to hold the S lock for as long as the transaction is working with the row. Consequently, no X locks can be granted on it (and thus no changes can be made) in that period – that would create a scenario of blocking queries. This approach has pros and cons. On the one hand you get more consistent data within the same transaction, on the other hand you will see more blocking scenarios, because SELECT queries hold their S locks for longer.

You can be even stricter and choose the isolation level Serializable. With this approach – the strictest SQL Server offers – you can completely avoid so-called Phantom Records. A phantom record appears and disappears again when you access the same dataset several times. To avoid phantom records, SQL Server uses a so-called Key-Range Locking technique, locking the subset of rows the user first retrieves.

No other parallel query can then work with those rows, so it is impossible to run INSERT, UPDATE and DELETE against them. Such queries are simply blocked. A supporting index on the search predicate that defines the rows is needed. With that supporting index, SQL Server can lock the individual index keys. Without it, SQL Server has to lock the entire table, which has a very negative impact on every other query touching the table.

Finally, SQL Server can be configured to use the isolation level Read Uncommitted. With this isolation level, no S locks are taken at all when reading data. It is therefore possible to read data that is not yet committed to the database and is still in-flight in a transaction. This is also called Dirty Reads. If such a transaction performs a rollback, you will have read data that never existed in the database and cannot be reproduced in any way. Dirty reads occur, among other things, when you use the famous NOLOCK query hint.
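
A minimal sketch of a dirty read, again assuming the illustrative dbo.Accounts table:

-- Session 1:
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = 999 WHERE AccountId = 1;
-- ... not yet committed ...

-- Session 2 (while session 1 is still open):
SELECT Balance FROM dbo.Accounts WITH (NOLOCK) WHERE AccountId = 1;
-- Returns 999 even though session 1 may still roll back.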

Pessimistic isolation levels are not that complicated – or what do you think? 😀 The isolation level simply tells SQL Server how long an S lock should be held when reading data. Based on that setting, it is decided what is and is not allowed while data is being read. I have tried to summarize it in the illustration below.

Pessimistic Concurrency

In addition to setting a specific isolation level, it is also possible for SQL Server to temporarily raise the isolation level of a query to guarantee the correctness of the data and the result set.

Summary – Pessimistic Concurrency

With this post, and the start of the 5th month, I have tried to give an overview of the different pessimistic isolation levels in SQL Server. With this foundation you also have your first knowledge for troubleshooting locking and blocking scenarios on SQL Server. SELECT blocks INSERT, UPDATE and DELETE – and INSERT, UPDATE and DELETE block SELECT.

Next week I take on 2 other isolation levels in SQL Server that are used for optimistic concurrency.

Remember to sign up for the mailing list below so you don't miss the next installment of this blog series. I promise to only send you an email when there is news in this series.

Week 1: How a SQL Server executes a query

This is the first post in a longer series on SQL performance tuning. Before we jump straight into all the juicy details of performance tuning on a SQL Server, it is important to have the foundation in place.

In this post I therefore take a closer look at how a SQL Server executes a query. It is an important part of understanding the SQL Server's methods, as the coming posts will build on it.

As shown above, SQL Server is internally split into a Relational Engine and a Storage Engine. The largest part of the Relational Engine is the Query Optimizer (often just called the optimizer). The sole task of the optimizer is to generate a physical execution plan (more on this in a later post) for the queries we send to the SQL Server.

Reading data

All requests – or queries – that we send to the SQL Server pass through the Protocol Layer to the Command Parser. The Command Parser checks the code we send – e.g. whether it is valid T-SQL and whether all referenced tables and columns exist in the database. The result of this work is a so-called Query Tree, a tree structure representing the submitted query. The optimizer then uses this tree structure to generate an execution plan.

The finished execution plan is then passed on to the Query Executor, whose task is to perform the actions the execution plan prescribes. Before that happens, the received execution plan is stored in the Plan Cache – this is where SQL Server can reuse execution plans from earlier. This technique of storing and reusing execution plans is at the same time a very powerful and a very dangerous concept in SQL Server. More on that in a later post about the Plan Cache.

After the execution plan is stored in the cache, the query executor starts communicating with the storage engine and executes each small part of the execution plan – called operators.

When data is accessed through the execution plan (this is the only place we can get hold of data), it is the Access Methods that communicate with the Buffer Manager to read specific pages – more about pages in the coming posts; for now it is enough to know that a page is an 8 KB buffer in which data and indexes are stored. The Buffer Manager manages the Buffer Pool, where pages are cached. The buffer pool is the physical use of memory (RAM) that you can see SQL Server consuming in the operating system.

If a page is already stored in the buffer pool, that page is returned immediately. When this happens it is a Logical Read in SQL Server. If a page is not yet in the buffer pool, the buffer manager issues an asynchronous I/O request against the physical disk and into the buffer pool. This is a Physical Read. During this asynchronous operation the query waits until the operation is finished. These waits are called Waits, and a later post will cover this topic and Wait Statistics.
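
You can observe the difference yourself with SET STATISTICS IO. A minimal sketch, assuming a table dbo.MyTable (DBCC DROPCLEANBUFFERS empties the buffer pool and should only be used on a test system):

DBCC DROPCLEANBUFFERS;
SET STATISTICS IO ON;

SELECT * FROM dbo.MyTable; -- first run: physical reads from disk
SELECT * FROM dbo.MyTable; -- second run: logical reads from the buffer pool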

As soon as a page has been read into the buffer pool, it is passed on to the process that requested it. When the execution plan is done, the result of the processed data is returned to the user or application through the Protocol Layer.

Changing data

When you work with T-SQL statements that change existing data (INSERT, DELETE, UPDATE and MERGE), the storage engine also works with the Transaction Manager. The task of this process is to write records to the transaction log describing the operations taking place in the transaction. As soon as those records are written to the log, the transaction can complete. This also means that SQL Server can only ever be as fast as its transaction log.

Pages that have been changed in the server's memory are written to disk through the so-called CHECKPOINT process. By default the checkpoint process runs roughly every minute and writes dirty pages from the buffer manager to disk. A dirty page is a page with changes that have not yet been written to disk.

Once a page has been written to disk, it is marked as "clean" again.
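
A checkpoint can also be triggered manually with a single statement, which can be handy in test scenarios:

CHECKPOINT;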

Summary – Performance Tuning

As you have probably already gathered from this introduction to SQL performance tuning, a great many things happen inside SQL Server when you execute a query. Many of the words and terms used in this post will be made clearer in later posts.

If this has whetted your appetite for reading more about how SQL Server handles queries and data processing, I recommend some of the blogs below:

Next time I go through a bit more about data pages and their role in SQL Server.

If you have comments or anything else, you are welcome to write them below.

Importing flatfiles to a SQL server with a varying number of columns

Ever been as frustrated as I am when importing flat files to a SQL Server and the format suddenly changes in production?

The most commonly used integration tools (like SSIS) are very dependent on correct and consistent metadata when working with flat files.

I’ve come up with a solution that I would like to share with you.

When implemented, the process of importing flat files with changing metadata is handled in a structured and, most importantly, flawless way – even if the columns change order or existing columns go missing.

Background

When importing flatfiles to SQL server almost every standard integration tool (including TSQL bulkload) requires fixed metadata from the files in order to work with them.

This is quite understandable, as the process of data transportation from the source to the destination needs to know where to map every column from the source to the defined destination.

Let me make an example:

A source flatfile table like below needs to be imported to a SQL server database.
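
An illustrative personlist.csv could look like this (the sample data is invented for this walkthrough):

Name;Gender;Age;City;Country
Brian;Male;35;Copenhagen;Denmark
Jane;Female;32;Aarhus;Denmark
John;Male;40;London;England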

This file could be imported to a SQL Server database (in this example named FlatFileImport) with below script:

create table dbo.personlist (
	[name] varchar(20),
	[gender] varchar(10),
	[age] int,
	[city] varchar(20),
	[country] varchar(20)
);

BULK INSERT dbo.personlist
FROM 'c:\source\personlist.csv'
WITH
(
	FIRSTROW = 2,
	FIELDTERMINATOR = ';',  --CSV field delimiter
	ROWTERMINATOR = '\n',   --Use to shift the control to next row
	TABLOCK,
	CODEPAGE = 'ACP'
);

select * from dbo.personlist;

The result:

If the column ‘Country’ were removed from the file after the import had been set up, the process of importing the file would either break or produce wrong results (depending on the tool used to import the file). The metadata of the file has changed.

-- import data from file with missing column (Country)
truncate table dbo.personlist;
 
BULK INSERT dbo.personlist
FROM 'c:\source\personlistmissingcolumn.csv'
WITH
(
	FIRSTROW = 2,
	FIELDTERMINATOR = ';',  --CSV field delimiter
	ROWTERMINATOR = '\n',   --Use to shift the control to next row
	TABLOCK,
	CODEPAGE = 'ACP'
);
 
select * from dbo.personlist;

With this example, the import seems to go well, but upon browsing the data, you’ll see that only one row is imported and the data is wrong.

The same would happen if the columns ‘Gender’ and ‘Age’ were to switch places. The import might not break, but the mapping of the columns to the destination would be wrong, as the ‘Age’ column would go to the ‘Gender’ column in the destination and vice versa. This is due to the order and datatype of the columns. If the columns had the same datatype and the data could fit in the columns, the import would go fine – but the data would still be wrong.

-- import data from file with switched columns (Age and Gender)
truncate table dbo.personlist;
 
BULK INSERT dbo.personlist
FROM 'c:\source\personlistswitchedcolumns.csv'
WITH
(
	FIRSTROW = 2,
	FIELDTERMINATOR = ';',  --CSV field delimiter
	ROWTERMINATOR = '\n',   --Use to shift the control to next row
	TABLOCK,
	CODEPAGE = 'ACP'
);

When importing the same file, but this time with an extra column (Married) – the result would also be wrong:

-- import data from file with new extra column (Married)
truncate table dbo.personlist;
 
BULK INSERT dbo.personlist
FROM 'c:\source\personlistextracolumn.csv'
WITH
(
	FIRSTROW = 2,
	FIELDTERMINATOR = ';',  --CSV field delimiter
	ROWTERMINATOR = '\n',   --Use to shift the control to next row
	TABLOCK,
	CODEPAGE = 'ACP'
);
 
select * from dbo.personlist; 

The result:

The above examples are made with pure TSQL code. If this were done with an integration tool like SQL Server Integration Services, the errors would be different: the SSIS package would throw errors and not be able to execute the data transfer.

The cure

When using the above BULK INSERT functionality from TSQL, the import process often goes well, but the data ends up wrong when the source file is changed.

There is another way to import flatfiles. This is using the OPENROWSET functionality from TSQL.

In section E of the example scripts from MSDN, it is described how to use a format file. A format file is a simple XML file that contains information about the source file's structure – including columns, datatypes, row terminator and collation.

Generating the initial format file for a given source is rather easy when setting up the import.

But what if the generation of the format file could be done automatically and the import process would be more streamlined and manageable – even if the structure of the source file changes?

From my GitHub project you can download a home brewed .NET console application that solves just that.

If you are unsure of the .EXE file's content and origin, you can download the code and build your own version of the GenerateFormatFile.exe application.
Also note that I'm not a hardcore .NET developer, so someone might have a different way of doing this. You are very welcome to contribute to the GitHub project in that case.

The application demands inputs as below:

Example usage:

generateformatfile.exe -p c:\source\ -f personlist.csv -o personlistformatfile.xml -d ;

The above command generates a format file in the directory c:\source\ and names it personlistformatfile.xml.

The content of the format file is as follows:
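
A sketch of what the generated XML format file could contain for the personlist.csv example (the lengths and exact attributes are illustrative):

<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <FIELD ID="1" xsi:type="CharTerm" TERMINATOR=";" MAX_LENGTH="20"/>
    <FIELD ID="2" xsi:type="CharTerm" TERMINATOR=";" MAX_LENGTH="10"/>
    <FIELD ID="3" xsi:type="CharTerm" TERMINATOR=";" MAX_LENGTH="12"/>
    <FIELD ID="4" xsi:type="CharTerm" TERMINATOR=";" MAX_LENGTH="20"/>
    <FIELD ID="5" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="20"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="1" NAME="name" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="2" NAME="gender" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="3" NAME="age" xsi:type="SQLINT"/>
    <COLUMN SOURCE="4" NAME="city" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="5" NAME="country" xsi:type="SQLVARYCHAR"/>
  </ROW>
</BCPFORMAT>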

The console application can also be called from TSQL like this:

-- generate format file
declare @cmdshell varchar(8000);
set @cmdshell = 'c:\source\generateformatfile.exe -p c:\source\ -f personlist.csv -o personlistformatfile.xml -d ;'
exec xp_cmdshell @cmdshell;

If by any chance the xp_cmdshell feature is not enabled on your local machine – then please refer to this post from Microsoft: Enable xp_cmdshell

Using the format file

After generation of the format file, it can be used in TSQL script with OPENROWSET.

Example script for importing the ‘personlist.csv’

-- import file using format file
select *  
into dbo.personlist_bulk
from  openrowset(
	bulk 'c:\source\personlist.csv',  
	formatfile='c:\source\personlistformatfile.xml',
	firstrow=2
	) as t;
 
select * from dbo.personlist_bulk;

This loads the data from the source file to a new table called ‘personlist_bulk’.

From here the load from ‘personlist_bulk’ to ‘personlist’ is straightforward:

-- load data from personlist_bulk to personlist
truncate table dbo.personlist;
 
insert into dbo.personlist (name, gender, age, city, country)
select * from dbo.personlist_bulk;
 
select * from dbo.personlist;
 
drop table dbo.personlist_bulk;

Load data even if source changes

The above approach works if the source is the same every time it is loaded. But with a dynamic approach to the load from the bulk table to the destination table, it can be ensured that the load works even if the source file changes in both width (number of columns) and column order.

For some the script might seem cryptic – but it is only a matter of generating a list of column names from the source table that corresponds with the column names in the destination table.

-- import file with different structure
-- generate format file
if object_id('dbo.personlist_bulk') is not null drop table dbo.personlist_bulk;
 
declare @cmdshell varchar(8000);
set @cmdshell = 'c:\source\generateformatfile.exe -p c:\source\ -f personlistmissingcolumn.csv -o personlistmissingcolumnformatfile.xml -d ;'
exec xp_cmdshell @cmdshell;
 
 
-- import file using format file
select *  
into dbo.personlist_bulk
from  openrowset(
	bulk 'c:\source\personlistmissingcolumn.csv',  
	formatfile='c:\source\personlistmissingcolumnformatfile.xml',
	firstrow=2
	) as t;
 
-- dynamic load data from bulk to destination
declare @fieldlist varchar(8000);
declare @sql nvarchar(4000);
 
select @fieldlist = 
				stuff((select 
					',' + QUOTENAME(r.column_name)
						from (
							select column_name from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'personlist'
							) r
							join (
								select column_name from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'personlist_bulk'
								) b
								on b.COLUMN_NAME = r.COLUMN_NAME
						for xml path('')),1,1,'');
 
print (@fieldlist);
set @sql = 'truncate table dbo.personlist;' + CHAR(10);
set @sql = @sql + 'insert into dbo.personlist (' + @fieldlist + ')' + CHAR(10);
set @sql = @sql + 'select ' + @fieldlist + ' from dbo.personlist_bulk;';
print (@sql)
exec sp_executesql @sql
 

The result is a TSQL statement that looks like this:

truncate table dbo.personlist;
insert into dbo.personlist ([age],[city],[gender],[name])
select [age],[city],[gender],[name] from dbo.personlist_bulk;

The exact same approach can be used with the other source files in this demo. The result is that the destination table is correct and loaded with the right data every time – and only with the data that corresponds with the source. No errors will be thrown.

From here there are some remarks to be taken into account:

  1. As no errors are thrown, the source files could be empty and the data in the destination table could end up blank. This is to be handled by processes outside this demo.

Further work

As this demo and post show, it is possible to handle dynamically changing flat source files. Changing columns, column order and other changes can be handled in an easy way with a few lines of code.

Going from here, a suggestion could be to set up processes that compare the two tables (bulk and destination) and throw an error if a certain number of the columns are not present in the bulk table, or if a certain number of columns are new.

It is also possible to auto generate missing columns in the destination table based on columns from the bulk table.

Only your imagination sets the boundaries here.

Summary – importing flatfiles to a SQL server

With this blogpost I hope to have given you inspiration to build your own import structure of flatfiles in those cases where the structure might change.

As seen above, the approach needs some .NET skills – but once it is done and the console application has been built, it is a matter of reusing the same application across the different integration solutions in your environment.

Happy coding 🙂

External links:

BULK INSERT from MSDN: https://msdn.microsoft.com/en-us/library/ms188365.aspx

OPENROWSET from MSDN: https://msdn.microsoft.com/en-us/library/ms190312(v=sql.130).aspx

XP_CMDSHELL from MSDN: https://msdn.microsoft.com/en-us/library/ms175046.aspx

GitHub link: https://github.com/brianbonk/GenerateFormatFile/releases/tag/v2.0

Undelete object from database


Have you ever deleted an object from a database by mistake or by some other error? Sometimes you can undelete it.

Then you should read on in this short post.

A good co-worker of mine recently lost one of the views on the development database. He called me for help.

Fortunately the database was in FULL RECOVERY mode – so I could extract the object from the database log and send the script to him for his further work that day. I think I saved him a whole day of work…

The undelete object script

Here is the script I used:

select
	-- the object definition text starts at byte 33 of the logged row
	convert(varchar(max), substring([RowLog Contents 0], 33, len([RowLog Contents 0]))) as [Script]
from
	fn_dblog(NULL, NULL)
where 1=1
	and [Operation] = 'LOP_DELETE_ROWS'
	and [Context] = 'LCX_MARK_AS_GHOST'
	and [AllocUnitName] = 'sys.sysobjvalues.clst'

Ready, SET, go – how does the SQL Server handle recursive CTEs?

This blogpost will cover some of the basics of recursive CTEs and explain the approach taken by the SQL Server engine.

First of all, a quick recap on what a recursive query is.

Recursive queries are useful for building hierarchies, traversing datasets, generating arbitrary rowsets, etc. The recursive part (simply put) means joining a rowset with itself an arbitrary number of times.

A recursive query is defined by an anchor set (the base rowset of the recursion) and a recursive part (the operation that should be done over the previous rowset).

The basics in recursive CTE

A recursive query helps in a lot of scenarios. For instance, where a dataset is built as a parent-child relationship and the requirement is to “unfold” this dataset and show the hierarchy in a ragged format.

A recursive CTE has a defined syntax and can be written in general terms like this – and don't run away because of the general syntax, a lot of examples (in real code) will come:

select result_from_previous.*
 from result_from_previous
 union all
 select result_from_current.*
 from set_operation(result_from_previous, mytable) as result_from_current

Or rewritten in another way:

select result_from_previous.*
 from result_from_previous
 union all
 select result_from_current.*
 from result_from_previous
 join mytable
 on condition(result_from_previous)

Another way to write the query (using cross apply):

select result_from_current.*
from result_from_previous
cross apply (
select result_from_previous.*
union all
select *
from mytable
where condition(result_from_previous.*)
) as result_from_current

The last one – with the cross apply – is row based and a lot slower than the other two. It iterates over every row from the previous result and computes the scalar condition (which returns true or false). The same row then gets compared to each row in mytable and the current row of result_from_previous. When these conditions hold, the query can be rewritten as a join – which is why you should not use cross apply for recursive queries.

The reverse – from join to cross apply – is not always true. To see why, we need to look at the algebra of distributivity.

Distributivity algebra

Most of us have already learned that below mathematics is true:

X × (Y + Z) = (X × Y) + (X × Z)

But below is not always true:

X ^ (Y × Z) = (X ^ Z) × (X ^ Y)

Or said with words, distributivity means that the order of operations is not important. The multiplication can be done after the addition and the addition can be done after the multiplication. The result will be the same no matter what.

This arithmetic can be used to generate the relational algebra – it’s pretty straight forward:

set_operation(A union all B, C) = set_operation(A, C) union all set_operation(B, C)

The identity above holds, just like the first identity in the arithmetic.

So the union all over the operations is the same as the operations over the union all. This also implies that you cannot use operators like top, distinct or outer join (more exceptions exist). The distribution is not the same between top over union all and union all over top. Microsoft has done a lot of good thinking in the recursive approach to reach one ultimate goal – forbid operators that do not distribute over union all.

With this information and knowledge our baseline for building a recursive CTE is now in place.

The first recursive query

Based on the intro and the above algebra we can now begin to build our first recursive CTE.

Consider a sample rowset (sampletree):

id  parentId  name
1   NULL      Ditlev
2   NULL      Claus
3   1         Jane
4   2         John
5   3         Brian
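
A minimal sketch to create this sample table (the datatypes are assumed):

CREATE TABLE dbo.sampletree (
    id       int NOT NULL,
    parentId int NULL,
    name     varchar(50) NOT NULL
);

INSERT INTO dbo.sampletree (id, parentId, name)
VALUES (1, NULL, 'Ditlev'),
       (2, NULL, 'Claus'),
       (3, 1, 'Jane'),
       (4, 2, 'John'),
       (5, 3, 'Brian');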

From above we can see that Brian refers to Jane who refers to Ditlev. And John refers to Claus. This is fairly easy to read from this rowset – but what if the hierarchy is more complex and unreadable?

A sample requirement could be to “unfold” the hierarchy in a ragged hierarchy so it is directly readable.

The anchor

We start with the anchor set (Ditlev and Claus). In this dataset the anchor is defined by parentId being null.

This gives us an anchor-query like below:

recursive CTE 1
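
In code, a minimal sketch of the anchor query could look like this, assuming the sampletree table above (the blank parentName column is explained below):

SELECT t.id, t.parentId, t.name,
       CAST('' AS varchar(50)) AS parentName
FROM dbo.sampletree t
WHERE t.parentId IS NULL;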

Now on to the next part.

The recursive

After the anchor part, we are ready to build the recursive part of the query.

The recursive part is actually the same query with small differences. The main select is the same as in the anchor part, but we need to add a self-join to the CTE in the recursive part's select statement.

Before we dive more into the total statement – I’ll show the statement below. Then I’ll run through the details.

recursive CTE 2
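
A sketch of the full statement, assuming the sampletree table above:

WITH tree AS (
    -- the anchor part: the two root members
    SELECT t.id, t.parentId, t.name,
           CAST('' AS varchar(50)) AS parentName
    FROM dbo.sampletree t
    WHERE t.parentId IS NULL

    UNION ALL

    -- the recursive part: a self-reference to the CTE name "tree"
    SELECT t.id, t.parentId, t.name,
           CAST(tree.name AS varchar(50)) AS parentName
    FROM dbo.sampletree t
    JOIN tree ON t.parentId = tree.id
)
SELECT * FROM tree;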

Back to the self-reference. Notice the two red underlines in the code. The top one indicates the CTE's name and the second one indicates the self-reference. The CTE is joined directly in the recursive part in order to perform the logic of the statement. The join is done between the recursive result's parentId and the id in the anchor result. This gives us the possibility to get the name column from the anchor statement.

Notice that I’ve also put in another blank field in the anchor statement and added the parentName field in the recursive statement. This gives us the “human readable” output where I can find the hierarchy directly by reading from left to right.

To get data from the above CTE I just have to make a select statement from this:

recursive CTE 3

And the results:

recursive CTE 4

I can now directly read that Jane refers to Ditlev and Brian refers to Jane.

But how is this done when the SQL engine executes the query – the next part tries to explain that.

The SQL engine's handling

Given the full CTE statement above I’ll try to explain what the SQL engine does to handle this.

The documented semantics is as follows:

  1. Split the CTE into anchor and recursive parts
  2. Run the anchor member creating the first base result set (T0)
  3. Run the recursive member with Ti as an input and Ti+1 as an output
  4. Repeat step 3 until an empty result set is returned
  5. Return the result set. This is a union all set of T0 to Tn

So let me try to rewrite the above query to match this sequence.

The anchor statement we already know:

recursive CTE 5

First recursive query:

recursive CTE 6

Second recursive query:

recursive CTE 7

The n recursive query:

The union all statement:
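
A sketch of the rewrite with T0, T1 and T2 unioned together, assuming the sampletree table from earlier (T3 would be empty for this data, which is what stops the execution):

-- T0: the anchor
SELECT t0.id, t0.parentId, t0.name
FROM dbo.sampletree t0
WHERE t0.parentId IS NULL

UNION ALL

-- T1: rows whose parent is in T0
SELECT t1.id, t1.parentId, t1.name
FROM dbo.sampletree t1
JOIN dbo.sampletree t0 ON t1.parentId = t0.id AND t0.parentId IS NULL

UNION ALL

-- T2: rows whose parent is in T1
SELECT t2.id, t2.parentId, t2.name
FROM dbo.sampletree t2
JOIN dbo.sampletree t1 ON t2.parentId = t1.id
JOIN dbo.sampletree t0 ON t1.parentId = t0.id AND t0.parentId IS NULL;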

This gives us exactly the same result as we saw before the rewrite:

Notice that the statement I've named Tn above is actually empty. This illustrates the empty result set that makes the SQL engine stop its execution of the recursive CTE.

This is how I would describe the SQL engine's handling of a recursive CTE.

Based on this very simple example, I guess you already can think of ways to use this in your projects and daily tasks.

But what about the performance and execution plan?

Performance

The execution plan for the original recursive CTE looks like this:

The top part of this execution plan is the anchor statement and the bottom part is the recursive statement.

Notice that I haven’t made any indexes in the table, so we are reading on heaps here.

But what if the data is more complex in structure and depth? Let's try to base the answer on an example:

In the attached SQL code you'll find a script that generates 20,000+ rows in a new table called complextree. This data is from a live solution and contains medical procedure names in a hierarchy. The data is used to show the relationships between medical procedures performed by the Danish hospital system. It is both deep and complex in structure. (Sorry for the Danish letters in the data…)

When we run a recursive CTE on this data – we get exactly the same execution plan:

This is also what I would expect, as the amount of data very seldom impacts the generated execution plan when reading from heaps.

The query runs on my PC for 25 seconds.

Now let me put an index on the table and see the performance and execution plan.

The index is only put on the parentDwId column which, as we know from this article, is the join column of the recursive part.
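
A sketch of the index (the column name parentDwId comes from the attached script):

CREATE NONCLUSTERED INDEX ix_complextree_parentDwId
ON dbo.complextree (parentDwId);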

The query now runs to completion in 1 second and generates this execution plan:

The top line is still the anchor and the bottom part is the recursive part. Notice that the SQL engine now uses the non-clustered index to perform the execution, and the performance gain is noticeable.

Conclusion

I hope that you’ve now become more familiar with the recursive CTE statement and are willing to try it on your own projects and tasks.

The basics are fairly straightforward – but beware that the query can become complex and hard to debug as the demands on data and output grow. But don't be scared. As I always say – "Don't do a complex query all at once, start small and build it up as you go along".

Happy coding.

External links:

The with operator in T-SQL: https://technet.microsoft.com/en-us/library/ms175972.aspx

Recursive CTE’s from MSDN: https://msdn.microsoft.com/en-us/library/ms186243.aspx

Wikipedia on distributivity: https://en.wikipedia.org/wiki/Distributive_property

Use of hierarchyid in SQL Server

I attended a TDWI conference in May 2016 in Chicago. There I got a hint about the hierarchyid datatype in SQL Server, which can optimize and even replace the good old parent/child hierarchy pattern.

Until then I (and several others in the class) hadn't heard of the hierarchyid datatype in SQL Server. So I had to find out about it and learn it.

Here’s a blogpost covering some of the aspects of the datatype hierarchyid – including:

  • Introduction
  • How to use it
  • How to optimize data in the table
  • How to work with data in the hierarchy-structure
  • Goodies

Introduction

The hierarchyid datatype was introduced in SQL Server with version 2008. It is a variable-length system datatype. The datatype can be used to represent a given element's position in a hierarchy – e.g. an employee's position within an organization.

The datatype is extremely compact. The storage depends on the average fanout (fanout = the average number of children per node). For small fanouts (0-7) the typical storage is about 6 * logA(n) bits, where A is the average fanout and n is the total number of nodes in the tree. By this formula, a node in an organization with 100,000 employees and an average fanout of 6 takes around 38 bits – rounded up to 5 bytes of total storage for the hierarchy structure.

With the datatype's limit of 892 bytes, there is still a lot of room for extremely complex and deep structures.

When representing the values to and from the hierarchyid datatype the syntax is:

[level id 1]/[level id 2]/..[level id n]

Example:

1/7/3

The values between the '/' characters can be of decimal types, e.g. 0.1, 2.3, etc.

Given two specific nodes a and b in the hierarchy, a < b means that b comes after a in a depth-first traversal of the tree structure. Any search and comparison on the tree is done this way by the SQL engine.

The datatype directly supports deletions and inserts through the GetDescendant method (see later for the full list of methods using this feature). This method enables generation of siblings to the right and to the left of any given node – even between two siblings. NOTE: inserting a new node between two siblings produces values that are slightly less compact.

Hierarchyid in SQL Server – how to use it

Given an example of data – see the complete SQL script at the end of this post to generate the example data used in this post.

hierarchyid in SQL Server 1

The Num field is a simple ascending counter for each level member in the hierarchy.

There are some basic methods to be used when building the hierarchy with the hierarchyid datatype.

GetRoot method

The GetRoot method returns the hierarchyid of the root node in the hierarchy – represented by EmployeeId 1 in the example above.

The code and result could look like this:

hierarchyid in SQL Server 2
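
A minimal sketch of such a query, assuming a table dbo.EmployeeOrg with the columns EmployeeId, ManagerId, Num, Name and a hierarchyid column OrgPath (the names are assumptions based on the example):

SELECT EmployeeId, Name, OrgPath
FROM dbo.EmployeeOrg
WHERE OrgPath = hierarchyid::GetRoot();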

The value ‘0x’ from the OrgPath field is the representation of the string ‘/’ giving the root of the hierarchy. This can be seen using a simple cast to varchar statement:

hierarchyid in SQL Server 3

Building the new structure with the hierarchyid datatype using a recursive SQL statement:

hierarchyid in SQL Server 4
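
A sketch of such a statement, using the assumed dbo.EmployeeOrg table from above and the Num field to build the path:

WITH org AS (
    -- the root gets the path '/'
    SELECT EmployeeId, ManagerId, Num,
           CAST('/' AS varchar(100)) AS Path
    FROM dbo.EmployeeOrg
    WHERE ManagerId IS NULL

    UNION ALL

    -- each child appends its Num counter and a trailing '/'
    SELECT e.EmployeeId, e.ManagerId, e.Num,
           CAST(org.Path + CAST(e.Num AS varchar(10)) + '/' AS varchar(100))
    FROM dbo.EmployeeOrg e
    JOIN org ON e.ManagerId = org.EmployeeId
)
UPDATE e
SET OrgPath = CAST(org.Path AS hierarchyid)
FROM dbo.EmployeeOrg e
JOIN org ON org.EmployeeId = e.EmployeeId;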

Notice the building of the path after the union all. This complies with the above-mentioned syntax for building the hierarchy structure to be converted to a hierarchyid datatype.

If I were to build the path for EmployeeId 10 (Name = ‘Mads’) in the example above, it would look like this: ‘/2/2/’. A select statement converting the hierarchyid field OrgPath for the same record reveals the same thing:

hierarchyid in SQL Server 5

Notice the use of the ToString method here – another built-in method of the hierarchyid in SQL Server.

GetLevel method

The GetLevel method returns the current node's level, with the top of the hierarchy indexed as 0:

hierarchyid in SQL Server 6
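
A minimal sketch, using the same assumed table:

SELECT EmployeeId, Name,
       OrgPath.ToString() AS Path,
       OrgPath.GetLevel() AS OrgLevel
FROM dbo.EmployeeOrg;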

GetDescendant method

This method returns a new hierarchyid based on the two parameters child1 and child2.

The use of these parameters is described in the BOL HERE.

Below are some short examples of the usage.

Getting a new hierarchyid when a new employee reporting to the top manager is hired:

hierarchyid in SQL Server 7
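
A sketch of how that could look; the pattern of finding the last child first follows the GetDescendant documentation (table and column names are still assumptions):

DECLARE @manager hierarchyid = hierarchyid::GetRoot();
DECLARE @lastChild hierarchyid;

SELECT @lastChild = MAX(OrgPath)
FROM dbo.EmployeeOrg
WHERE OrgPath.GetAncestor(1) = @manager;

-- NULL as the second parameter places the new node after @lastChild
SELECT @manager.GetDescendant(@lastChild, NULL).ToString() AS NewOrgPath;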

Getting a new hierarchyid when a new hire reports to Jane in the hierarchy:

hierarchyid in SQL Server 8

Dynamically inserting new records in the hierarchy table – this can easily be converted into a stored procedure:

hierarchyid in SQL Server 9
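
A sketch of such a dynamic insert, again with assumed names and an invented new employee:

DECLARE @managerPath hierarchyid, @lastChild hierarchyid;

SELECT @managerPath = OrgPath FROM dbo.EmployeeOrg WHERE Name = 'Jane';

SELECT @lastChild = MAX(OrgPath)
FROM dbo.EmployeeOrg
WHERE OrgPath.GetAncestor(1) = @managerPath; -- direct reports of Jane

-- remaining columns omitted for brevity
INSERT INTO dbo.EmployeeOrg (EmployeeId, Name, OrgPath)
VALUES (11, 'NewHire', @managerPath.GetDescendant(@lastChild, NULL));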

Notice the new GetAncestor method, which takes one parameter (the number of steps up the hierarchy) and returns that level's hierarchyid – in this case just 1 step up the hierarchy.

More methods

There are several more methods to use when working on a hierarchy table – as found on BOL:

GetDescendant – returns a new child node of a given parent. Takes two parameters.

GetLevel – returns the given level for a node (0 index)

GetRoot – returns a root member

ToString – converts a hierarchyid datatype to a readable string

IsDescendantOf – returns a boolean telling whether a given node is a descendant of a given parent

Parse – converts a string to a hierarchyid

Read – used implicitly by the ToString method. Cannot be called from T-SQL

GetReparentedValue – returns a node's value as if it had been moved under a new root; used when moving a given node

Write – returns a binary representation of the hierarchyid. Cannot be called by the T-SQL statement.

Optimization

As in many other scenarios in SQL Server, the usual approach to indexing and optimization can be used.

To support the usual and most used queries, I would create the two indexes below on the example table:

hierarchyid in SQL Server 10
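
A sketch of the two indexes – a depth-first index directly on OrgPath and a breadth-first index on a persisted level column (the names are assumptions):

-- depth-first: keeps the rows of a subtree next to each other
CREATE UNIQUE INDEX ix_employeeorg_depth
ON dbo.EmployeeOrg (OrgPath);

-- breadth-first: keeps the rows of the same level next to each other
ALTER TABLE dbo.EmployeeOrg
ADD OrgLevel AS OrgPath.GetLevel() PERSISTED;

CREATE INDEX ix_employeeorg_breadth
ON dbo.EmployeeOrg (OrgLevel, OrgPath);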

But with this like with any other indexing strategy – base it on the given scenario and usage.

Goodies

So why use this feature and all the coding work that comes with it?

Well – from my perspective – it has just become very easy to quickly get all elements either up or down from a given node in the hierarchy.

Get all descendants from a specific node

If I would like to get all elements below Jane in the hierarchy I just have to run this command:

hierarchyid in SQL Server 11
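
A minimal sketch using the IsDescendantOf method, same assumed table:

DECLARE @jane hierarchyid;
SELECT @jane = OrgPath FROM dbo.EmployeeOrg WHERE Name = 'Jane';

SELECT EmployeeId, Name, OrgPath.ToString() AS Path
FROM dbo.EmployeeOrg
WHERE OrgPath.IsDescendantOf(@jane) = 1; -- note: also returns Jane's own row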

Think of the work you would have to do if this were a non-hierarchy-structured table using only parent/child and recursive SQL, and the structure was very complex and deep.

I know what I would choose.

Conclusion

As seen above, the hierarchyid datatype can be used to give order to the structure of a hierarchy in a way that is both efficient and fairly easily maintained.

If one were to optimize the structure even further, the EmployeeId and the ManagerId columns could be dropped, as the OrgPath is now just as distinct as the EmployeeId and can replace it. The ManagerId is only used to build the structure – and that information is now also given by the OrgPath.

Happy coding…

External references:

Hierarchyid from MSDN

Using hierarchyid from TechNet

Update SCD Type 2 dimension in one single transaction using only T-SQL


Recently I got a request inside my organization to make sure that an SCD Type 2 dimension would keep track of changes, due to requirements from the business.

This needed to be done in a single transaction in pure T-SQL code.

So – what to do and how to do it. Here’s one way.

The source table looks like this:

SCD type 2 dimension

The request was to keep track of changes in the ManagerId according to CaseId.

I’ve created a SCD2 table like this:

CREATE TABLE [dbo].[CaseProjectManagerHistory](
	[dwid] [bigint] IDENTITY(1,1) NOT NULL,
	[CaseId] [int] NULL,
	[ManagerId] [int] NULL,
	[dwDateFrom] [date] NULL,
	[dwDateTo] [date] NULL,
	[dwIsCurrent] [bit] NULL,
	[dwChangeDate] [date] NULL
)


The fields are as follows:
dwid: Identifier for the table
CaseId: The caseid for the rows
ManagerId: The managerid for the row
dwDateFrom: The date from where the row is actual
dwDateTo: The date to where the row is actual
dwIsCurrent: Boolean that tells if the row is the current one or not
dwChangeDate: The date of the change (if the row has changed since the first write)

If you need to catch up on the history types in a dimension – take a look at Kennie's blogpost HERE.

First of all I started out with a merge statement that would insert all the new values not already in the table and update the ones that needed updating.

Something like this:

merge dbo.CaseProjectManagerHistory as target
	using (select CaseId, ManagerId, cast(getdate() as date) as startDate, datefromparts(2199,1,1) as endDate, 1 as [current], cast(getdate() as date) as changeDate from dbo.[Case]) as source
	on target.CaseId = source.CaseId
	when not matched by target
		then
			insert (CaseId, ManagerId, dwDateFrom, dwDateTo, dwIsCurrent, dwChangeDate)
			values (source.CaseId, source.ManagerId, source.startDate, source.endDate, source.[current], source.changeDate)
	when matched 
		and target.dwIsCurrent = 1
		and exists (select source.CaseId, source.ManagerId
					except
					select target.CaseId, target.ManagerId)
		and target.dwChangeDate <= source.ChangeDate
		and source.changeDate < target.dwDateTo
		then
			update set dwIsCurrent = 0, target.dwChangeDate = source.changeDate, target.dwDateTo = dateadd(d,-1,source.startDate)


Those of you who haven’t tried and worked with a merge-statement – you can get the 101 from BOL here.

But this merge statement only inserts new rows and closes (updates) existing rows. The rows that were updated still need a new current version inserted into the table in order to fully follow the SCD Type 2 rules.

This can be done by using the OUTPUT clause of the merge statement and then using the output rows to insert into the same table.

It will look like this:

insert into dbo.CaseProjectManagerHistory_demo (CaseId, ManagerId, dwDateFrom, dwDateTo, dwIsCurrent, dwChangeDate)
select CaseId, ManagerId, startDate, endDate, [current], changeDate 
from (
	merge dbo.CaseProjectManagerHistory_demo as target
	using (
		select 
			CaseId
			,ManagerId
			,cast(getdate() as date) as startDate
			,datefromparts(2199,1,1) as endDate
			,1 as [current]
			,cast(getdate() as date) as changeDate 
		from 
			dbo.[Case]
		where 1=1
			and caseid in (2005,2013,2015,2016,2019,2021,2023,2025,2027,2028)
			) as source
	on target.CaseId = source.CaseId
	when not matched by target -- insert new rows
		then
			insert (CaseId, ManagerId, dwDateFrom, dwDateTo, dwIsCurrent, dwChangeDate)
			values (source.CaseId, source.ManagerId, source.startDate, source.endDate, source.[current], source.changeDate)
	when matched -- update existing rows
		and target.dwIsCurrent = 1
		and exists (select source.CaseId, source.ManagerId -- only rows whose values differ from the target
					except
					select target.CaseId, target.ManagerId)
		and target.dwChangeDate <= source.ChangeDate
		and source.changeDate < target.dwDateTo
		then
		update set dwIsCurrent = 0, target.dwChangeDate = source.changeDate, target.dwDateTo = dateadd(d,-1,source.startDate) 
		output $action ActionOut, source.CaseId, source.ManagerId, source.startDate, source.endDate, source.changeDate, source.[current]) as mergeOutput
where mergeOutput.ActionOut = 'UPDATE';

The merge statement's OUTPUT action is used to insert the same rows into the history table once more, this time as the new current version. The only change is the ‘end date’.

Happy coding!

Note: I did a short presentation of this at my workplace a few weeks ago, where Kennie (l, b, t) told me that there is a bug in the merge statement that needs to be taken into account. Read more about it here.

Row level security in SQL Server 2016

With the release of SQL Server 2016 comes many great new features. One of these is the implementation of row level security in the database engine.

This blogpost will cover the aspects of this new feature – including:

  • Setup
  • Best practice
  • Performance
  • Possible security leaks

Introduction

The row level security feature was released earlier this year to Azure – following Microsoft’s cloud-first release concept.

A big issue with the SQL Server engine in the past was that it only understands tables and columns. You therefore had to simulate row security using secured views, stored procedures or table-valued functions. The problem there was making sure that there was no way to bypass them.

With SQL Server 2016, this is no longer an issue.

Now the SQL Server engine handles the security policy in a central controlled area.

row level security 1

Setup and best practice

Row-level security is based on a special inline table-valued function. This function returns either a single row with a 1, or no rows, based on the user's rights to that specific row.

Let us take an example:

First of all, I’ll create a database and some users to test with:

CREATE DATABASE RowFilter;
GO
 
USE RowFilter;
GO
 
CREATE USER userBrian WITHOUT LOGIN;
CREATE USER userJames WITHOUT LOGIN;
GO

A table with example data, and grant select to the new users:

CREATE TABLE dbo.SalesFigures (
[userCode] NVARCHAR(10),
[sales] MONEY)
GO
 
INSERT  INTO dbo.SalesFigures
VALUES ('userBrian',100), ('userJames',250), ('userBrian',350)
GO
 
GRANT SELECT ON dbo.SalesFigures TO userBrian
GRANT SELECT ON dbo.SalesFigures TO userJames
GO


Now we’ll add a filter predicate function as below:

CREATE FUNCTION dbo.rowLevelPredicate (@userCode as sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS rowLevelPredicateResult
WHERE @userCode = USER_NAME();
GO


This illustrates that the current user must have associated records in order to get any results. Notice that the function does not have access to the rows themselves.

Furthermore the function can contain joins and lookup tables in the where clause – but beware of the performance hit here. Look further down this post for more info.

The last thing to do is to add a filter predicate to the table dbo.SalesFigures:

CREATE SECURITY POLICY UserFilter
ADD FILTER PREDICATE dbo.rowLevelPredicate(userCode)
ON dbo.SalesFigures
WITH (STATE = ON);
GO


That’s it.

Let’s test the results with the users added before:

 EXECUTE AS USER = 'userBrian';
SELECT * FROM dbo.SalesFigures;
REVERT;
GO


This gives me 2 rows:

EXECUTE AS USER = 'userJames';
SELECT * FROM dbo.SalesFigures;
REVERT;
GO
 


This gives me 1 row:

The execution plan shows a new filter predicate when this row level security is added:

To clean up the examples:

USE master;
DROP DATABASE RowFilter;


Performance

Some might ask, “what about the performance – isn’t there a performance hit in this use of functions?”

The short answer is “It depends”.

If you only use a direct filter on the table, there is very little to no impact on performance. The filter is applied directly to the table like any other filter. Compared to the old way of doing row filters with stored procedures or table-valued functions, this new approach performs better.

If you plan to use lookup tables or joins in the predicate function, then you must pay attention to the helper tables' indexes and how fast they can deliver data to the function. If the tables are large and slow performing (without indexes etc.), then you will experience bad performance in the row filter function. But that's just like any other lookup or join you might do in your solutions.

Best practices

There are some best practices given from Microsoft:

  • It is highly recommended to create a separate schema for the RLS objects (predicate function and security policy).
  • The ALTER ANY SECURITY POLICY permission is intended for highly-privileged users (such as a security policy manager). The security policy manager does not require SELECT permission on the tables they protect.
  • Avoid type conversions in predicate functions to avoid potential runtime errors.
  • Avoid recursion in predicate functions wherever possible to avoid performance degradation. The query optimizer will try to detect direct recursions, but is not guaranteed to find indirect recursions (i.e., where a second function calls the predicate function).
  • Avoid using excessive table joins in predicate functions to maximize performance.

Possible security leaks

This new row filter context can cause information leakage through some carefully crafted queries.

The above example can be breached with the following query:

SELECT 1/([sales]-250) FROM dbo.SalesFigures
WHERE Usercode = 'userJames'

This will give an error: Divide by zero error encountered.

This tells the user trying to access the table that userJames has a sale of 250. So even though the row filter prevents users from accessing data they are not allowed to see, attackers can still try to infer the data in the table using the method above.

Conclusion

The row level security feature has been a much-wanted feature for quite a while, and now that it is in place and planned for the RTM version of SQL Server 2016, DBAs and other people working with security can use it out of the box.

I hope this post makes a great start for you if you would like to try out the row level security feature. Currently the feature is available in the latest CTP version (2.2) – which can be downloaded here: https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2016
