Archive for the ‘Tables’ Category

The temporal table is one of the best features introduced in SQL Server 2016. It lets you view the data stored in a table as it existed at any point in time. Normally, when you execute an update or delete statement on a table, the old data is overwritten, and any subsequent query returns only the latest data. With a temporal table, however, you can view both the latest data and the old data. How? Let me explain how it works first.

  • How does a Temporal Table work?
      • What happens when you insert record(s) into a Temporal Table?

When you insert data into a temporal table, it is stored in the temporal table just like in a normal table, and the history table is NOT affected, as shown below.

[Image: Temporal Table.1.7]

      • What happens when you update/delete record(s) in a Temporal Table?

When you update or delete data in a temporal table, the existing records are FIRST MOVED into the history table to record the changes, and only then is the data changed in the temporal table.

[Image: Temporal Table.1.8]

      • What happens when you query record(s) from a Temporal Table?

When you query records from a temporal table, the temporal table is smart enough to decide whether to return the data from the temporal table or from the history table; you do NOT have to write any joins or subqueries between the temporal table and the history table.

[Image: Temporal Table.1.9]

  • Create Temporal Table

When you create a temporal table, it automatically creates a history table (if you already have an existing history table, you can link it to the temporal table instead). As you can see below, it is a usual table creation script with additional columns. These additional columns (fields) are specific to the temporal table's period definition and are hidden as well, so when you query the table these columns will not appear in the result. I also turned on system versioning and declared the history table name "dbo.tbl_Product_History", so the temporal table will create the history table with that name. If you do not declare a history table name, SQL Server will create a default-named history table for the temporal table.

CREATE DATABASE SampleDB
GO
USE SampleDB
GO
--DROP TABLE tbl_Product
--GO
CREATE TABLE tbl_Product
(
Product_ID int NOT NULL PRIMARY KEY CLUSTERED,
Product_Name varchar(50) NOT NULL,
Rate numeric(18,2),

/*Temporal Specific Fields - Period Definition */
[Valid From] datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
[Valid Till] datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
PERIOD FOR SYSTEM_TIME ([Valid From] ,[Valid Till])
)
WITH
/* Temporal Specific - System-Versioning Configuration*/
(SYSTEM_VERSIONING = ON
(HISTORY_TABLE = dbo.tbl_Product_History, DATA_CONSISTENCY_CHECK = ON)
);
GO
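To confirm that the table was created as a system-versioned temporal table, and to find which history table it is linked to, you can query the catalog views. This is a quick sketch; the temporal_type_desc and history_table_id columns of sys.tables are available in SQL Server 2016 and above.

```sql
--Verify the temporal table and locate its history table.
USE SampleDB
GO
SELECT t.name AS temporal_table,
       t.temporal_type_desc,
       h.name AS history_table
FROM sys.tables t
LEFT JOIN sys.tables h ON t.history_table_id = h.object_id
WHERE t.temporal_type = 2 --SYSTEM_VERSIONED_TEMPORAL_TABLE
GO
```

The query should list tbl_Product with temporal_type_desc = SYSTEM_VERSIONED_TEMPORAL_TABLE and tbl_Product_History as its history table.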

[Image: Create Temporal Table]

  • Data Manipulation Language (DML) Statements on a Temporal Table
    Let's execute some Data Manipulation Language (DML) statements on the temporal table and view the impact on both the temporal table and the history table.

      • INSERT STATEMENT

    As I explained earlier, when you execute an INSERT statement on a temporal table, there is NO impact on the history table. Let's execute a couple of INSERT statements and observe the results.

    USE SampleDB
    GO
    INSERT INTO dbo.tbl_Product VALUES
    (1,'Product A', 300)
    ,(2,'Product B', 400)
    GO
    
    SELECT * FROM tbl_Product
    GO
    SELECT * FROM dbo.tbl_Product_History
    GO
    

    [Image: Temporal Table.1.1]

      • UPDATE STATEMENT

As I explained earlier, when you execute an UPDATE statement on a temporal table, the OLD version of the data is moved to the history table and the temporal table keeps the latest data ONLY. Let's execute an UPDATE statement and observe the results.

USE SampleDB
GO
UPDATE tbl_Product SET Rate =Rate/2
WHERE Product_ID IN (1,2)
GO

SELECT * FROM tbl_Product
GO
SELECT * FROM dbo.tbl_Product_History
GO

[Image: Temporal Table.1.2]

      • DELETE STATEMENT

As I explained earlier, when you execute a DELETE statement on a temporal table, the OLD version of the data is moved to the history table and the temporal table keeps the latest data ONLY. Let's execute a DELETE statement and observe the results.

USE SampleDB
GO
DELETE FROM tbl_Product WHERE Product_ID = 2
GO

SELECT * FROM tbl_Product
GO
SELECT * FROM dbo.tbl_Product_History
GO

[Image: Temporal Table.1.3]

      • SELECT STATEMENT

The SELECT statement is interesting on a temporal table because it knows exactly what you want and internally links to the history table to fulfill your requirement. Given below are some queries that we will run on the temporal table, and you will observe that the temporal table (NOT the history table) returns the current state as well as earlier states of the table. Isn't it amazing?

Let's browse the TEMPORAL TABLE; it will show the latest state of the table, just like a NORMAL TABLE.

USE SampleDB
GO
--Current State of the table
SELECT * FROM tbl_Product
GO

[Image: Temporal Table.1.4]

Let's browse the TEMPORAL TABLE FOR SYSTEM_TIME AS OF '2015-06-27 21:33:50.9002439'. It will give you the state of the table at that moment, and you will be shocked to see that the temporal table returns exactly the result that was available BEFORE the UPDATE statement. Wow!!!

USE SampleDB
GO
SELECT * FROM tbl_Product
FOR SYSTEM_TIME AS OF '2015-06-27 21:33:50.9002439'
GO

[Image: Temporal Table.1.5]

Let's browse the TEMPORAL TABLE FOR SYSTEM_TIME AS OF '2015-06-27 21:43:31.2982847'. You will AGAIN be shocked to see that the temporal table returns exactly the result that was available BEFORE the DELETE statement. Wow!!!

USE SampleDB
GO
SELECT * FROM tbl_Product
FOR SYSTEM_TIME AS OF '2015-06-27 21:43:31.2982847'
GO

[Image: Temporal Table.1.6]
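Besides AS OF, the FOR SYSTEM_TIME clause supports additional sub-clauses (FROM ... TO, BETWEEN, CONTAINED IN, and ALL). As a quick sketch, ALL returns every row version, current and historical, in a single result set; the hidden period columns must be listed explicitly because SELECT * does not return them.

```sql
USE SampleDB
GO
--View the complete history of every row (current and past versions combined).
SELECT *, [Valid From], [Valid Till]
FROM tbl_Product
FOR SYSTEM_TIME ALL
GO
```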

Conclusion:
I quickly reviewed the temporal table and found it very interesting and exciting. I am sure it will change the way databases are designed, especially data warehouses, because we no longer need to create separate audit tables and record each state manually using stored procedures or triggers; the TEMPORAL TABLE does it for us behind the scenes, automatically.

Let me know your experience about temporal table.

Reference: Channel9


I have written multiple articles on memory-optimized tables and their handling. I have also been working on their core area, namely performance, and I continued my research to see whether they really improve performance. As per my research, I found a massive improvement in performance (apart from a few limitations).

Before proceeding with the performance comparison, I would like to create a sample to compare the performance of a disk-based table and a memory-optimized table.

  • Sample for Memory-Optimized Table:

Let me create a sample memory-optimized table and insert bulk data into it to measure the performance.
Given below is the script.

--Given below scripts are compatible with SQL Server 2014 and above.
USE hkNorthwind
GO
--Create a memory optimized table
CREATE TABLE [dbo].[tbl_product_Master_MO]
(
[Product ID] INT NOT NULL PRIMARY KEY NONCLUSTERED HASH
WITH (BUCKET_COUNT = 100000),
[Product Name] [nvarchar](100) NULL,
[Creation Datetime] [datetime] NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
--Insert 65536 records into the memory optimized table
--This script may take a few minutes to insert the records.

USE hkNorthwind
GO
;WITH N1 (n) AS (SELECT 1 UNION ALL SELECT 1),
N2 (n) AS (SELECT 1 FROM N1 AS X, N1 AS Y),
N3 (n) AS (SELECT 1 FROM N2 AS X, N2 AS Y),
N4 (n) AS (SELECT 1 FROM N3 AS X, N3 AS Y),
N5 (n) AS (SELECT ROW_NUMBER() OVER(ORDER BY X.n)
FROM N4 AS X, N4 AS Y)
INSERT INTO tbl_product_Master_MO
SELECT n,'Number' + Convert(varchar(10),n),GETDATE()
from N5
GO

--Create a natively compiled stored procedure to boost the memory-optimized table's performance.
--(CREATE PROCEDURE must be the first statement in its batch, hence the GO above.)
CREATE PROCEDURE dbo.usp_product_master
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN
ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
SELECT [Product ID],[Product Name],[Creation Datetime]
FROM dbo.[tbl_product_Master_MO]

END
GO
  • Sample for Disk-Based Table:

Let me create a sample disk-based table and insert bulk data into it.
Given below is the script.

--Create a disk based table (Normal table)
USE [hkNorthwind]
GO
CREATE TABLE [dbo].[tbl_product_Master_DB](
[Product ID] [int] NOT NULL,
[Product Name] [nvarchar](100) NULL,
[Creation Datetime] [datetime] NULL
) ON [PRIMARY]
GO

--Insert 65536 records into the disk based table.
--This script may take a few minutes to insert the records.
USE hkNorthwind
GO
;WITH N1 (n) AS (SELECT 1 UNION ALL SELECT 1),
N2 (n) AS (SELECT 1 FROM N1 AS X, N1 AS Y),
N3 (n) AS (SELECT 1 FROM N2 AS X, N2 AS Y),
N4 (n) AS (SELECT 1 FROM N3 AS X, N3 AS Y),
N5 (n) AS (SELECT ROW_NUMBER() OVER(ORDER BY X.n)
FROM N4 AS X, N4 AS Y)
INSERT INTO tbl_product_Master_DB
SELECT n,'Number' + Convert(varchar(10),n),GETDATE()
from N5
GO

Let's proceed with the different categories of performance comparison between the normal (disk-based) table and the memory-optimized table.

  • Query Cost Related to the Batch:

Let us start with the query cost relative to the batch.
Given below is the script and its output.
Note: The results below were tested on ~300K records.

USE hkNorthwind
GO
--Memory optimized table
Select * from [tbl_product_Master_MO]

--Disk based table
Select * from [tbl_product_Master_DB]
GO
--OUTPUT

[Image: diskbased_vs_memory_optimized.1.1]

As you can see, the memory-optimized table took only 7% of the batch cost, while the disk-based table took 93% for the same task.

  • Time Statistics

Let's turn on the time statistics and view the performance comparison.
Given below is the script.


SET STATISTICS TIME ON
USE hkNorthwind
GO
--Given below is the memory-optimized, natively compiled stored procedure.
--We use this stored procedure to browse the memory-optimized table;
--it gives a boost to the memory-optimized table's performance.
EXEC usp_product_master
GO

--Disk based table
Select * from [tbl_product_Master_DB]
GO
SET STATISTICS TIME OFF
--OUTPUT

--For the memory optimized table
SQL Server Execution Times:
CPU time = 93 ms, elapsed time = 1706 ms.

--For the disk based table
SQL Server Execution Times:
CPU time = 234 ms, elapsed time = 3251 ms.

  • IO Statistics 

Let's turn on the IO statistics, and you will be amazed to see that there is NO IO involvement for the memory-optimized table;
this is one reason it delivers such high performance.
Given below is the script.

SET STATISTICS IO ON
USE hkNorthwind
GO
--Memory optimized table
Select * from [tbl_product_Master_MO]
GO
--Disk based table
Select * from [tbl_product_Master_DB]
GO
SET STATISTICS IO OFF
--OUTPUT

--For the memory optimized table query
No result.

--For the disk based table query
(327680 row(s) affected)
Table 'tbl_product_Master_DB'. Scan count 1, logical reads 1962, physical reads 0, read-ahead reads 1584, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

[Image: diskbased_vs_memory_optimized.1.2]

Conclusion:
I am very impressed with the memory-optimized table's performance, but it still needs some improvement regarding BLOB data types and validations.
Given below is a consolidated summary giving you an at-a-glance performance review of disk-based vs. memory-optimized tables.

S.No | Type                            | Memory Optimized Table | Disk Based Table
-----|---------------------------------|------------------------|-------------------
1    | Query Cost Related to the Batch | 7%                     | 93%
2    | Statistics Time (CPU)           | 93 ms                  | 234 ms
3    | Statistics IO                   | NO IO involvement      | 1962 logical reads

Let me know about your test results.


Statistics are very helpful when it comes to performance, because the query optimizer uses them to create query plans that improve query performance. I recently implemented statistics on a memory-optimized table, and given below are my findings.

S.No | Disk Based Tables                              | Memory Optimized Tables
-----|------------------------------------------------|------------------------------------------------------------------
1    | Statistics are updated automatically.          | Statistics are NOT updated automatically.
2    | Uses sampled statistics by default.            | Has no default statistics; you must specify the FULLSCAN option.
3    | Supports both sampled statistics and FULLSCAN. | Supports ONLY the FULLSCAN option.

Following are the steps you must be cognizant of while implementing statistics on memory-optimized tables:

  1. First of all, create the memory-optimized tables and indexes.
  2. After that, insert (update, delete) data in the memory-optimized tables.
  3. Once you are done with the data manipulation, update statistics on the memory-optimized tables. (Do not do this step during peak hours.)
  4. The last step is to create the natively compiled stored procedures that access the memory-optimized tables.

Given below is the script to update memory optimized table statistics.

USE hkNorthwind
GO
UPDATE STATISTICS dbo.tbl_Product_Master WITH FULLSCAN, NORECOMPUTE
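Step 4 of the list above can be sketched as follows. The procedure name is an assumption for illustration; the column names come from the tbl_Product_Master table used elsewhere on this blog.

```sql
--A natively compiled stored procedure that accesses the memory-optimized table
--after its statistics have been updated (the procedure name is illustrative).
USE hkNorthwind
GO
CREATE PROCEDURE dbo.usp_Get_Product_Master
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
SELECT [Product ID],[Product Name],[Creation Datetime]
FROM dbo.tbl_Product_Master
END
GO
```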


A memory-optimized table is a new concept for maintaining records in a table with high performance. I already discussed this new type of table in an earlier article. I kept researching this new feature and came across an issue: I created a memory-optimized table, inserted a few records into it, did some other research work on the test database, and then shut down my machine. The next morning, when I turned on my machine, I found no data in the memory-optimized table. At first I thought I had deleted the data by mistake and forgotten about it, so I repeated the process, but the following morning the same thing happened again. I could not comprehend what on earth was going on! I was puzzled, so I started researching and finally resolved it.

Let me explain it step by step. (Never apply these steps to a production database.)

Step 1:
Create a memory-optimized table in a database that is enabled for memory-optimized tables (i.e., one with a memory-optimized data filegroup).

--This script is compatible with SQL Server 2014 and above.
USE hkNorthwind
GO
--DROP TABLE tbl_Product_Master
--GO
CREATE TABLE tbl_Product_Master
(
[Product ID] INT not null primary key nonclustered hash
with (bucket_count = 1024),
[Product Name] NVARCHAR(100),
[Creation Datetime] datetime
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO

Step 2:
Insert records into the memory-optimized table.

--This script is compatible with SQL Server 2014 and above.
USE hkNorthwind
GO
INSERT INTO tbl_Product_Master VALUES (1, 'SQL Server 2012',GETDATE())
INSERT INTO tbl_Product_Master VALUES (2, 'SQL Server 2014',GETDATE())
GO

Step 3:
Browse the memory-optimized table.

--This script is compatible with SQL Server 2014 and above.
USE hkNorthwind
GO
SELECT * FROM tbl_Product_Master
GO

[Image: Memory Optimized table myth.1]

Step 4:
Now either restart the database server or shut down your test machine and turn it on again.
Never do this exercise on a production server without taking proper precautions.
Browse the table again.

--This script is compatible with SQL Server 2014 and above.
USE hkNorthwind
GO
SELECT * FROM tbl_Product_Master
GO

[Image: Memory Optimized table myth.2]

Step 5:
Oops, now you can see the data is gone. The reason is a parameter we passed when creating the memory-optimized table: DURABILITY. If you specify DURABILITY = SCHEMA_ONLY, the schema is durable but the data is not, so once you restart the database server, you lose your data. Remember, whenever you create a memory-optimized table and want to keep the data durable (permanent), always use WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA).

Note: If you create the memory-optimized table without passing the DURABILITY parameter, it defaults to DURABILITY = SCHEMA_AND_DATA.
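If you want to check which durability option an existing memory-optimized table was created with, you can query sys.tables; the is_memory_optimized and durability_desc columns are available in SQL Server 2014 and above.

```sql
USE hkNorthwind
GO
--List memory-optimized tables and their durability setting
--(SCHEMA_ONLY or SCHEMA_AND_DATA).
SELECT name, durability_desc
FROM sys.tables
WHERE is_memory_optimized = 1
GO
```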

Let's recreate the memory-optimized table; given below is the script. Then repeat the same process from Step 2 onward, and your data will remain with you.

--This script is compatible with SQL Server 2014 and above.
USE hkNorthwind
GO
--DROP TABLE tbl_Product_Master
--GO
CREATE TABLE tbl_Product_Master
(
[Product ID] INT not null primary key nonclustered hash
with (bucket_count = 1024),
[Product Name] NVARCHAR(100),
[Creation Datetime] datetime
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

Please let me know if you come across this issue and its resolution.

Reference: MSDN


Recently, I was doing research on memory-optimized tables and found that most data types are supported, while a few are not. But even for the unsupported data types, there are workarounds. In an earlier article, I wrote about how to deal with BLOB data types even though they are not supported. I will share workarounds for the rest of the unsupported data types in my upcoming posts.

Given below is the list of supported data types:

  • bigint
  • int
  • tinyint
  • smallint
  • real
  • smallmoney
  • money
  • float
  • decimal
  • numeric
  • bit
  • uniqueidentifier
  • smalldatetime
  • datetime
  • binary
  • nchar
  • sysname

Given below is the list of unsupported data types:

  • image
  • text
  • sql_variant
  • ntext
  • varbinary(Max)  (Limitation is 8060 bytes per row)
  • varchar(Max)  (Limitation is 8060 bytes per row)
  • timestamp
  • nvarchar(Max) (Limitation is 8060 bytes per row)
  • xml

Let me know if you have workarounds for the unsupported data types.


Memory Optimized Table is a new concept introduced in SQL Server Hekaton (the In-Memory OLTP engine of SQL Server 2014). A memory-optimized table is still a table, but the major difference between memory-optimized tables and disk-based tables is that a memory-optimized table's data resides in memory, so it does not require pages to be read into cache from disk. The performance of a memory-optimized table is much higher than that of a disk-based table; I shall discuss its performance in an upcoming article.

Let me create a memory-optimized table. But before proceeding, we should know that whenever you create a memory-optimized table, the database must have a memory-optimized data filegroup.
Let's first create a database with a memory-optimized data filegroup. Given below is the script.

CREATE DATABASE Sample_DB
ON
PRIMARY(NAME = [Sample_DB],
FILENAME = 'C:\DATA\Sample_data.mdf', size=500MB)
,FILEGROUP [hekaton_demo_fg] CONTAINS MEMORY_OPTIMIZED_DATA(
NAME = [hekaton_demo_dir],
FILENAME = 'C:\DATA\Sample_dir')
LOG ON (name = [hekaton_demo_log]
, Filename='C:\DATA\Sample_log.ldf', size=500MB)
;
GO

Once you have created the database with the memory-optimized data filegroup, you can create the memory-optimized table. Given below is the script.

USE Sample_DB
GO
CREATE TABLE tbl_sample
(
[ID] INT NOT NULL PRIMARY KEY NONCLUSTERED HASH
WITH (BUCKET_COUNT = 100000),
[Name] VARCHAR(50) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
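Once created, the memory-optimized table can be used like any other table. Here is a quick sanity check (the sample values are illustrative):

```sql
USE Sample_DB
GO
--Insert a sample row and read it back.
INSERT INTO tbl_sample ([ID], [Name]) VALUES (1, 'Sample Row')
GO
SELECT [ID], [Name] FROM tbl_sample
GO
```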
