Problem :
There is a common problem in SQL Server: once you grant a user access to a table, that user can view the entire table's data. If you then want to restrict that user from viewing some critical data in the table, it is not possible out of the box; there are workarounds, but those workarounds can make your life as a DBA difficult. The GOOD NEWS is that this IS possible out of the box in SQL Server 2016, through a feature called Row Level Security (RLS). In this article, I will demonstrate step by step how to implement RLS in your database environment.

Before proceeding to the solution, I would like to create a sample.

Sample :
I will insert data for two branches (Branch A & Branch B) into a sample table, and later on restrict each branch administrator to viewing only their own branch's data, while ONLY db_owner (or any other role you choose to give full access to the table) can view all the data in the table.

CREATE DATABASE Sample_DB
GO
USE Sample_DB
GO
--DROP TABLE tbl_Sample
--GO
CREATE TABLE tbl_Sample
(
 [ID] int,
 [Name] varchar(50),
 [BranchName] varchar(50)
)
GO
INSERT INTO tbl_Sample VALUES (1,'Ken','Branch A')
INSERT INTO tbl_Sample VALUES (2,'Selena','Branch A')
INSERT INTO tbl_Sample VALUES (3,'Mae','Branch B')
INSERT INTO tbl_Sample VALUES (4,'Tom','Branch B')
INSERT INTO tbl_Sample VALUES (5,'Thomas','Branch B')
GO
SELECT * FROM tbl_Sample
GO

[Screenshot: Row Level Security (RLS) 1.1]

Solution :

Step 1 :
Let me create the Branch A and Branch B administrator roles in the sample database. Given below is the script.

USE Sample_DB
GO
EXEC sp_addrole 'Branch A Admin'
GO
EXEC sp_addrole 'Branch B Admin'
GO
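
Note : sp_addrole is maintained for backward compatibility only. On SQL Server 2012 and above, the same roles can be created with CREATE ROLE, as sketched below.

USE Sample_DB
GO
CREATE ROLE [Branch A Admin];
GO
CREATE ROLE [Branch B Admin];
GO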

[Screenshot: Row Level Security (RLS) 1.2]

Step 2 :
In the step above, I created the roles for branches A & B. Now let me create two users for the respective branches, who will be able to view only their own branch's data in the table. Given below is the script.

USE Sample_DB
GO
CREATE USER USER_A WITHOUT LOGIN; -- Branch A Admin
GO
CREATE USER USER_B WITHOUT LOGIN; -- Branch B Admin
GO

[Screenshot: Row Level Security (RLS) 1.3]

Step 3 :
Let's add USER_A & USER_B (created in Step 2) as members of the respective roles (created in Step 1). Given below is the script.

USE Sample_DB
GO
EXEC sp_addrolemember 'Branch A Admin', 'USER_A'
GO
EXEC sp_addrolemember 'Branch B Admin', 'USER_B'
GO
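
Note : like sp_addrole, sp_addrolemember is deprecated; on SQL Server 2012 and above you can use ALTER ROLE instead, as sketched below.

USE Sample_DB
GO
ALTER ROLE [Branch A Admin] ADD MEMBER USER_A;
GO
ALTER ROLE [Branch B Admin] ADD MEMBER USER_B;
GO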

Step 4:
Grant SELECT access on the table (tbl_Sample) to the users (USER_A & USER_B). After granting SELECT access, they (USER_A, USER_B) can view the entire data in the table (tbl_Sample). Given below is the script.

USE Sample_DB
GO
GRANT SELECT ON tbl_Sample TO USER_A;
GO
GRANT SELECT ON tbl_Sample TO USER_B;
GO

Step 5:
Let's find out the member principal IDs. This is a unique ID corresponding to each member of a role. Using the member principal ID, we will later in the article allow each user to view ONLY their respective data in the table. Given below is the script.

USE Sample_DB
GO
SELECT role_principal_id,
USER_NAME(role_principal_id) As [Role],
member_principal_id,
USER_NAME(member_principal_id) As [Member]
FROM sys.database_role_members
WHERE USER_NAME(role_principal_id)
IN ('Branch A Admin','Branch B Admin')
GO

[Screenshot: Row Level Security (RLS) 1.4]

Step 6:
Let's create a user-defined table where we enter the respective branch name and the member principal ID found in the previous step (Step 5). In effect, we are assigning each branch to its respective user via the database principal ID (member principal ID). Given below is the script.

USE Sample_DB
GO
--DROP TABLE dbo.tbl_user
--GO
CREATE TABLE dbo.tbl_user
(
 [ID] int,
 [Name] varchar(50),
 [BranchName] varchar(50),
 [DatabasePrincipalID] int
)
GO
--Use the member_principal_id values returned in Step 5 (7 and 8 in this example)
INSERT INTO dbo.tbl_user VALUES (1,'USER A','Branch A',7)
INSERT INTO dbo.tbl_user VALUES (2,'USER B','Branch B',8)
GO

SELECT * FROM dbo.tbl_user
GO
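
If you prefer not to hard-code the principal IDs, you can resolve them at insert time with the built-in DATABASE_PRINCIPAL_ID() function. Given below is a sketch of the same inserts.

INSERT INTO dbo.tbl_user VALUES (1,'USER A','Branch A',DATABASE_PRINCIPAL_ID('USER_A'))
INSERT INTO dbo.tbl_user VALUES (2,'USER B','Branch B',DATABASE_PRINCIPAL_ID('USER_B'))
GO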

[Screenshot: Row Level Security (RLS) 1.5]

Step 7:
Let's create a separate schema (Security) which we will use for the RLS objects (the predicate function & security policy) later in this article. Note : It is a best practice to create RLS objects in a separate schema. Given below is the script.

USE Sample_DB
GO
CREATE SCHEMA Security;
GO

Step 8:
Now it is time to create a predicate function (an inline table-valued function) that restricts each user's access to their own branch's data, while db_owner keeps full access to the table. Given below is the script.

USE Sample_DB
GO
--DROP FUNCTION [Security].fn_securitypredicate
--GO
CREATE FUNCTION [Security].fn_securitypredicate(@BranchName varchar(50))
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_securitypredicate_result
    FROM dbo.tbl_user a
    WHERE (a.DatabasePrincipalID = DATABASE_PRINCIPAL_ID()
           AND a.BranchName = @BranchName)
       OR IS_MEMBER('db_owner') = 1
GO

Step 9:
Once the predicate function (inline table-valued function) has been created, let's create a security policy and add the predicate function (created in the step above) as a filter predicate. Please make sure that STATE is set to ON. Given below is the script.

USE Sample_DB
GO
--DROP SECURITY POLICY [Security].[BranchSecurityPolicy]
--GO
CREATE SECURITY POLICY [Security].[BranchSecurityPolicy]
ADD FILTER PREDICATE [Security].fn_securitypredicate([BranchName])
ON [dbo].[tbl_Sample]
WITH (STATE= ON);
GO
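
Note : a filter predicate only restricts what users can read; it does not stop USER_A from inserting or updating rows for Branch B. If you also want to restrict writes, RLS supports block predicates, and the same predicate function can be reused. Given below is a sketch (not part of the demo above).

USE Sample_DB
GO
ALTER SECURITY POLICY [Security].[BranchSecurityPolicy]
ADD BLOCK PREDICATE [Security].fn_securitypredicate([BranchName])
ON [dbo].[tbl_Sample] AFTER INSERT;
GO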

Step 10:
Now we are done with the setup. Let's test each user's access to the table (tbl_Sample). Given below is the script.

  • USER_A can view ONLY Branch A data as shown below.
USE Sample_DB
GO
EXECUTE('Select * from tbl_Sample;') AS USER ='USER_A'
GO

[Screenshot: Row Level Security (RLS) 1.6]

  • USER_B can view ONLY Branch B data as shown below.
USE Sample_DB
GO
EXECUTE('Select * from tbl_Sample;') AS USER ='USER_B'
GO

[Screenshot: Row Level Security (RLS) 1.7]

  • Admin, who is a member of db_owner, can view the entire table data as shown below.
USE Sample_DB
GO
EXECUTE('Select * from tbl_Sample;') AS USER ='Admin'
GO

[Screenshot: Row Level Security (RLS) 1.8]

Conclusion :
As you can see above, by creating a few RLS objects we can restrict each user's access at the row level, even though the user has SELECT access on the table.
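
If you ever need to lift the restriction temporarily (for example, during a maintenance window), the security policy can be switched off and back on without dropping any objects. Given below is a minimal sketch.

USE Sample_DB
GO
ALTER SECURITY POLICY [Security].[BranchSecurityPolicy]
WITH (STATE = OFF);
GO
--Re-enable it afterwards
ALTER SECURITY POLICY [Security].[BranchSecurityPolicy]
WITH (STATE = ON);
GO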

Reference : MSDN

Policy Based Management (PBM) is one of the greatest features shipped with SQL Server 2008. It makes a DBA's life easier when it comes to implementation and compliance, as well as changing any database/server property, especially when you need to implement or check some setting across databases/servers. Recently, we set the Delayed Durability option to FORCED across one of our clients' databases using Policy Based Management. Delayed durability is actually one of the best features shipped with SQL Server 2014, and what makes it the best is its ease of use versus the performance gain: you can enable it with a single statement and get a huge performance boost.

Note: Delayed durability is disabled by default.
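
For reference, this is the single statement in question (the database name below is just a placeholder):

USE master
GO
ALTER DATABASE [YourDatabase]
SET DELAYED_DURABILITY = FORCED;
GO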

In this article, I will show you step by step how to implement any database/server setting across databases/servers with a few clicks, with the help of Policy Based Management.

Step 1:
Open SQL Server Management Studio (SSMS), select Management >> Policy Management >> Conditions, and right-click on it to select New Condition… as shown below.

[Screenshot: PBM - Delayed Durability 1.0]

Step 2:
Now we are in the condition-building screen, so let's build the first condition, which targets all user databases that are online. Given below is the configuration. Do not forget to give this condition a proper name.

[Screenshot: PBM - Delayed Durability 1.1]

Step 3:
Let's repeat Step 1 and open the condition-creation screen. This time we will create the condition that checks the database setting Delayed Durability = FORCED. Given below is the configuration.

[Screenshot: PBM - Delayed Durability 1.2]

Step 4:
Now we are done with the conditions, so the next step is to build the policy. In order to build the policy, select Management >> Policy Management >> Policies and right-click on it to select New Policy… as shown below.

[Screenshot: PBM - Delayed Durability 1.4]

Step 5:
Now we are in the policy-creation screen. Here we need to give a proper policy name and then select the conditions created in Step 2 and Step 3, shown below with red and blue borders respectively. In other words, we are building a policy that checks whether delayed durability is FORCED in all user databases that are online.

[Screenshot: PBM - Delayed Durability 1.6]

Usually I set the Evaluation Mode to "On Demand", as shown above, because it is the easiest way to evaluate and fix any discrepancy across databases/servers; however, you can select On Schedule as well. Then press OK.

Step 6:
Now the policy has been created and you can see it under Policies. Just right-click on it and select Evaluate to evaluate the policy as shown below.

[Screenshot: PBM - Delayed Durability 1.7]

Step 7:
You are now in the evaluation screen, and all the user databases show in red because none of them has delayed durability set to FORCED.

[Screenshot: PBM - Delayed Durability 1.8]

Step 8:
Let's click on View Details to see the difference in database settings as shown below. You can see that the policy expects Delayed Durability = FORCED but the actual value is DISABLED.

[Screenshot: PBM - Delayed Durability 1.9]

Step 9:
Let's select the databases where you want to change the delayed durability setting to FORCED and press the Apply button. A confirmation message appears; press the Yes button.

[Screenshot: PBM - Delayed Durability 1.10]

Step 10:
Once the policy has been implemented, Policy Based Management will automatically evaluate the policy again and show a green icon as shown below.

[Screenshot: PBM - Delayed Durability 1.11]

Let me know if you have implemented any database/server settings using Policy Based Management.

Sometimes it is very important to know when your database was dropped, as well as who dropped it. Obviously, if you have set up database backups properly, you can easily recover the database from the last backup, but how do you find out who dropped/deleted it? Today I came across this issue, started my research, and found some solutions to recover this information using a trace. However, I developed a script that will help you find who dropped a database, and at what time, by using the SQL Server transaction log.

Note : Please do not use this script for any malicious purpose.

Script :

--This script is compatible with SQL Server 2005 and above.
USE master
GO
--DROP PROCEDURE Recover_Dropped_Database_Detail_Proc
--GO
CREATE PROCEDURE Recover_Dropped_Database_Detail_Proc
@Date_From DATETIME='1900/01/01',
@Date_To DATETIME ='9999/12/31'
AS
;WITH CTE AS (
--Extract the dropped database name from the log record
SELECT REPLACE(SUBSTRING(A.[RowLog Contents 0],9
,LEN(A.[RowLog Contents 0])),0x00,0x) AS [Database Name]
,[Transaction ID]
FROM fn_dblog(NULL,NULL) A
WHERE A.[AllocUnitName] = 'sys.sysdbreg.nc1' AND
A.[Transaction ID] IN (
--Find the transactions that dropped a database (dbdestroy)
SELECT DISTINCT [TRANSACTION ID] FROM sys.fn_dblog(NULL, NULL)
WHERE Context IN ('LCX_NULL') AND Operation IN ('LOP_BEGIN_XACT')
AND [Transaction Name] LIKE '%dbdestroy%'
AND CONVERT(NVARCHAR(11),[Begin Time]) BETWEEN @Date_From AND @Date_To))

--Join back to the log to get the drop time and the user (via the transaction SID)
SELECT
 A.[Database Name]
,B.[Begin Time] AS [Dropped Date & Time]
,C.[name] AS [Dropped By User Name]
FROM CTE A
INNER JOIN fn_dblog(NULL,NULL) B
ON A.[Transaction ID] = B.[Transaction ID]
AND Context IN ('LCX_NULL') AND Operation IN ('LOP_BEGIN_XACT')
AND [Transaction Name] LIKE '%dbdestroy%'
INNER JOIN sys.sysusers C ON B.[Transaction SID] = C.[Sid]

GO
EXEC Recover_Dropped_Database_Detail_Proc
GO
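
Both parameters are optional; to narrow the search to a specific window, pass an explicit date range (the dates below are just an example):

EXEC Recover_Dropped_Database_Detail_Proc
 @Date_From = '2015/01/01',
 @Date_To   = '2015/12/31'
GO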

Let me know if you have come across this issue and how you solved it.

On 26th May, we had a very informative session at the SQL Server User Group meet-up, presented by Mr. Mohammed Owais (CTO at CAZAR), about "Backups – not as simple as you think". He covered almost every aspect, from full backups to tail-log backups. A very nice question was raised by the audience: "how do you check the status of a backup/restore, along with its percentage, via T-SQL?" In most organizations there is more than one DBA, sometimes geographically dispersed, and if one of them starts a backup/restore, how will the others come to know that a backup/restore is in progress, using T-SQL?

Given below is the script which will give you the backup/restore progress along with the exact percentage and the login name of whoever is running the backup/restore.

USE master
GO
SELECT
  A.session_id As [Session ID]
, login_name As [Login Name]
, [command] As [Command]
, [text] AS [Script]
, [start_time] As [Start Time]
, [percent_complete] AS [Percentage]
, DATEADD(SECOND,estimated_completion_time/1000, GETDATE())
as [Estimated Completion time]
, [program_name] As [Program Name]
FROM sys.dm_exec_requests A
CROSS APPLY sys.dm_exec_sql_text(A.sql_handle) B
INNER JOIN sys.dm_exec_sessions C ON A.session_id=C.session_id
WHERE A.command in ('BACKUP DATABASE','RESTORE DATABASE')
GO

1 : While Taking Backup

USE master;
GO
BACKUP DATABASE AdventureWorks2012
TO DISK = 'C:\Data\AdventureWorks2012.Bak'
   WITH FORMAT,
      MEDIANAME = 'SQLServerBackups',
      NAME = 'Full Backup of AdventureWorks2012';
GO

[Screenshot: raresql-Backup-Restore.1.2]

2 : While Restoring Backup

USE master;
GO
RESTORE DATABASE AdventureWorks2012
FROM DISK = 'C:\Data\AdventureWorks2012.BAK'
WITH NORECOVERY
GO
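
Note: WITH NORECOVERY leaves the database in the RESTORING state so that further log backups can be applied. Once you are done, bring it online with a recovery-only restore:

RESTORE DATABASE AdventureWorks2012 WITH RECOVERY
GO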

[Screenshot: raresql-Backup-Restore.1.1]

Let me know if you have come across this issue and how you solved it.

Yet again I had an opportunity to present Introduction to Policy Based Management yesterday (26 May 2015) at the SQL Server User Group meet-up @ Microsoft Office, Dubai, UAE. As mentioned earlier, this event is normally held every last Tuesday of the month. The meet-up was organised by the SQL Server User Group (www.uaessug.com) and usually consists of two comprehensive and informative sessions that focus on the latest SQL Server technology, best practices, user experience, and tips & tricks. It is a good platform to network with SQL Server experts face to face. I was a speaker at yesterday's meet-up along with Mr. Mohammed Owais (CTO at CAZAR).

Following topics were covered in yesterday’s session:

  • Backups – not as simple as you think, presented by Mr. Mohammed Owais
  • Introduction to Policy Based Management, presented by me

We focused on how you can build a restore strategy instead of just a backup strategy 🙂 and discussed the different types of backups and how they can be handy in case of disaster. We also discussed how you can manage your policies across servers with just the click of a button.

I hope yesterday's session was informative as usual. Our next meet-up will be held in June 2015. Hope the coming sessions will witness more participants.

Some glimpses of the session:

Mr. Owais busy explaining backups.

He explained why tail-log backups are important.

He further explained how to boost the performance of backups.

Me presenting Introduction to Policy Based Management.

Explaining PBM with the help of demos.

Last but not least, lunch 🙂

Thank you guys. See you next month.

TRY_CONVERT is one of the conversion functions shipped with SQL Server 2012. This function converts an expression from one data type to another. The beauty of this function is that if the conversion fails, it returns NULL as the result, and because of this behavior it has an edge over the CONVERT function. I have been using this function wherever conversion is required since SQL Server 2012. However, I received an error message this morning when I was trying to use the TRY_CONVERT function in SQL Server 2014, as shown below.
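
As a quick illustration of that NULL behavior (this assumes a database at compatibility level 110 or higher, which is exactly what the rest of this article is about):

SELECT TRY_CONVERT(int, 'abc') AS Result; --Returns NULL instead of raising an error
GO
SELECT CONVERT(int, 'abc'); --Raises a conversion error
GO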

Let me explain this error in detail :

Message Number: 195

Severity : 15

Error Message: ‘TRY_Convert’ is not a recognized built-in function name.

Error Generation:

Let me create a sample using the TRY_CONVERT function to demonstrate this error.

USE SampleDB
GO
SELECT TRY_CONVERT(datetime2, '12/31/2010') AS Result;
GO
SELECT @@VERSION
GO

Msg 195, Level 15, State 10, Line 5
‘TRY_CONVERT’ is not a recognized built-in function name.

Microsoft SQL Server 2014 – 12.0.2000.8 (X64)

[Screenshot: Try_Convert 1.2]

Oops… I am unable to use TRY_CONVERT even though I am using SQL Server 2014, as shown above.

Resolution:

The resolution is very simple. What you are actually doing is using the TRY_CONVERT function in a database whose compatibility level is lower than 110, even though you are using SQL Server 2014. Let's fix this issue step by step.

Step 1: Check compatibility

USE SampleDB;
GO
SELECT compatibility_level
FROM sys.databases WHERE name = 'SampleDB';
GO
--OUTPUT

compatibility_level
——————-
100

(1 row(s) affected)

Step 2: Change compatibility
As you can see above, the compatibility level of the database is below 110; let's change it so we can enjoy the core functionality of SQL Server 2014.
Note : Test the compatibility change on your test/development database first, before proceeding to the production database.

USE master
GO
ALTER DATABASE SampleDB
SET COMPATIBILITY_LEVEL = 120;
GO

Step 3: Use TRY_CONVERT

USE SampleDB
GO
SELECT TRY_CONVERT(datetime2, '12/31/2010') AS Result;
GO
--OUTPUT

[Screenshot: Try_Convert 1.1]

Conclusion :

Remember, whenever you use new SQL Server functions that require a specific version/compatibility level, you MUST check the version/compatibility level before implementing those functions.

SET NOEXEC is one of my favorite SET statements. I am sure that when I explain it in detail, you will love it too. We often come across situations where we need to execute batch SQL statements (insert, update, delete) on production databases for deployment/troubleshooting etc. Sometimes, due to a syntax error, some parts of the batch execute and the rest DO NOT, which creates a mess/headache when you have to correct the data. You are left with no option but to restore the latest backup, fix the script, and execute it again.

The solution to this problem is SET NOEXEC. It compiles the SQL query and validates the syntax. The best/fun part is that it DOES NOT EXECUTE the SQL query. So even if the SQL query has errors, it DOES NOT affect the data and you are SAFE.

Let me create a sample to demonstrate how it works.
Sample :

USE AdventureWorks2014
GO
--DROP TABLE tbl_sample
--GO
--Create a sample table
CREATE TABLE tbl_sample
(
 [ID] INT,
 [Letter] VARCHAR(1)
)
GO
--Insert a record in the table
INSERT INTO tbl_sample VALUES (1,'A')
GO
--Browse the data
SELECT * FROM tbl_sample
GO

[Screenshot: NoEXEC 1.1]

Now, let me turn on NOEXEC, execute a few statements, and view the result set.

USE AdventureWorks2014
GO
-- SET NOEXEC to ON
SET NOEXEC ON;

--Update the table
UPDATE tbl_sample SET [Letter]='B' WHERE [ID]=1
GO
--Delete the table
DELETE FROM tbl_sample  WHERE [ID]=1
GO
-- SET NOEXEC to OFF
SET NOEXEC OFF;
GO

[Screenshot: NoEXEC 1.2]

The output shows "Command(s) completed successfully." It seems that it not ONLY compiles but executes as well. Let's browse the data and check the impact of the above queries.

USE AdventureWorks2014
GO
--Browse the data to check the impact.
SELECT * FROM tbl_sample
GO

[Screenshot: NoEXEC 1.1]

As you can see above, the SQL statements (UPDATE & DELETE) DID NOT IMPACT the sample table's data because NOEXEC was TURNED ON. Hence, it is PROVED that it ONLY compiles the SQL statements and gives you either an error or a "Command(s) completed successfully." message, but DOES NOT execute them.

Note : Once you have compiled your statements, DO NOT FORGET to turn NOEXEC off.
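
To see the compile-time check in action, here is a minimal sketch with a deliberate syntax error; NOEXEC reports the error without touching any data:

USE AdventureWorks2014
GO
SET NOEXEC ON;
GO
--Deliberate typo (FORM instead of FROM): a syntax error is reported, nothing runs
SELECT * FORM tbl_sample
GO
SET NOEXEC OFF;
GO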

Paging became quite simple and easy to script and manage with the OFFSET & FETCH NEXT keywords in SQL Server 2012 & above. I wrote a fairly detailed article about it earlier and have implemented it in most of my solutions wherever required. However, when you implement paging in your script, you face a big challenge: finding the total number of records in that particular result set.

Given below are three methods you can use to get the total row count when using OFFSET / FETCH NEXT.
Before proceeding with the solutions, let me create a sample.

Sample :

USE AdventureWorks2014
GO
-- Create sample table
CREATE TABLE [dbo].[SalesOrderDetail](
[SalesOrderID] [int] NOT NULL,
[SalesOrderDetailID] [int] NOT NULL,
[CarrierTrackingNumber] [nvarchar](25) NULL,
[OrderQty] [smallint] NOT NULL,
[ProductID] [int] NOT NULL,
[SpecialOfferID] [int] NOT NULL,
[UnitPrice] [money] NOT NULL,
[UnitPriceDiscount] [money] NOT NULL,
[LineTotal] [numeric](38, 6) NOT NULL,
[rowguid] [uniqueidentifier] NOT NULL,
[ModifiedDate] [datetime] NOT NULL
) ON [PRIMARY]
GO

-- Insert bulk data into the sample table
-- It may take a few minutes depending upon the server performance
INSERT INTO [dbo].[SalesOrderDetail]
SELECT * FROM [Sales].[SalesOrderDetail]
GO 100

-- Verify the data
SELECT * FROM [dbo].[SalesOrderDetail]
GO

Method 1 : Using COUNT(*) OVER()

USE AdventureWorks2014
GO
DECLARE
  @PageSize INT = 10,
  @PageNum  INT = 1;

SELECT
  [SalesOrderID]
, [SalesOrderDetailID]
, [CarrierTrackingNumber]
, [OrderQty]
, [ProductID]
, [SpecialOfferID]
, [TotalCount]= COUNT(*) OVER()
FROM [dbo].[SalesOrderDetail]
ORDER BY SalesOrderID
  OFFSET (@PageNum-1)*@PageSize ROWS
  FETCH NEXT @PageSize ROWS ONLY;
GO
--OUTPUT

[Screenshot: row count using Offset 1.1]

Method 2 : Using Common Table Expression

USE AdventureWorks2014
GO
DECLARE
    @PageSize INT = 10,
    @PageNum  INT = 1;

;WITH Main_CTE AS(
	SELECT [SalesOrderID]
	, [SalesOrderDetailID]
	, [CarrierTrackingNumber]
	, [OrderQty]
	, [ProductID]
	, [SpecialOfferID]
	FROM [dbo].[SalesOrderDetail]
)
, Count_CTE AS (
	SELECT COUNT(*) AS [TotalCount]
	FROM Main_CTE
)
SELECT *
FROM Main_CTE, Count_CTE
ORDER BY Main_CTE.SalesOrderID
    OFFSET (@PageNum-1)*@PageSize ROWS
    FETCH NEXT @PageSize ROWS ONLY
GO
--OUTPUT

[Screenshot: row count using Offset 1.1]

Method 3 : Using Cross Apply

USE AdventureWorks2014
GO
DECLARE @PageSize INT = 10,
        @PageNum  INT = 1;

 SELECT
  [SalesOrderID]
, [SalesOrderDetailID]
, [CarrierTrackingNumber]
, [OrderQty]
, [ProductID]
, [SpecialOfferID]
, [TotalCount]
FROM [dbo].[SalesOrderDetail]

CROSS APPLY (SELECT COUNT(*) TotalCount
FROM [dbo].[SalesOrderDetail] ) [Count]
ORDER BY SalesOrderID
OFFSET (@PageNum-1)*@PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY
GO
--OUTPUT

[Screenshot: row count using Offset 1.1]

All of the above methods give you the same result set. Let's compare their performance, shown below.

S.No | Method                  | CPU Time | Elapsed Time
-----|-------------------------|----------|-------------
1    | COUNT(*) OVER()         | 30654 ms | 40372 ms
2    | Common Table Expression | 11762 ms | 7665 ms
3    | Cross Apply             | 11794 ms | 7373 ms
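
If you would like to measure these numbers in your own environment, one way to capture CPU and elapsed time is SET STATISTICS TIME; given below is a minimal sketch.

USE AdventureWorks2014
GO
SET STATISTICS TIME ON;
GO
--Run any of the three paging queries here
SET STATISTICS TIME OFF;
GO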

Conclusion :
On the basis of the above results, I would recommend that you use either the Common Table Expression or the Cross Apply method to get faster results.

Note : The above queries were tested on ~12 million records.

Yesterday (24 March 2015), I had the opportunity to present Extended Events at the SQL Server User Group meet-up. The event was held at the Microsoft Office in Dubai, UAE, and is generally held every last Tuesday of the month. The SQL Server User Group (www.uaessug.com) has been organizing it extremely well for the last year. The meet-up usually consists of two comprehensive and informative sessions that focus on the latest SQL Server technology, best practices, user experience, and tips & tricks. I usually attend this meet-up because I get the opportunity to network with SQL Server experts face to face instead of over Facebook chat :). I was a speaker at yesterday's meet-up along with Mr. Mike Mohan (CFO at Microsoft).

Following topics were covered in yesterday’s session:

  • Real Life Power BI, presented by Mr. Mike Mohan – Microsoft
  • Introduction to Extended Events, presented by me

We focused on how a non-technical user can make the best use of Power BI and its integration on different devices, best practices, and troubleshooting (the scariest part), with live demos throughout the event to demonstrate the practical implementation of Power BI & Extended Events.

I hope yesterday's session was informative as usual. Our next meet-up will be held in April 2015. Hope the coming sessions will witness more participants.

Some glimpses of the session:

Mr. Mike busy explaining Power BI.

He showed us how to generate a Power Map and its auto-zoom feature.

He showed us how to combine many graphs on one page.

Displaying the look and feel of Power BI on multiple devices.

Me presenting SQL Server Extended Events.

Showing the components of Extended Events.

Thank you guys. See you next month.

SQL Server 2014 shipped with lots of exciting features and enhancements, which I share with you on my blog from time to time. Today, I will discuss a new enhancement that will minimize your code. This enhancement is in CREATE TABLE (Transact-SQL): you can now create a NONCLUSTERED index within the CREATE TABLE statement itself, so you no longer need a separate CREATE INDEX statement afterwards. However, if you want to follow the old method, you can continue doing so; it has NOT been discontinued.

Given below are both methods, demonstrating the enhancement.

NEW Method : (Create the NONCLUSTERED index within the CREATE TABLE statement)
In this method, we will create the NONCLUSTERED index within the CREATE TABLE statement. This script is compatible with SQL Server 2014 and above.

USE AdventureWorks2014
GO
--DROP TABLE Employee
--GO
CREATE TABLE Employee
(
[Emp_ID] int NOT NULL,
[LastName] varchar(255) NOT NULL,
[FirstName] varchar(255),
[Address] varchar(255),
[City] varchar(255),
[PostalCode] nvarchar(15),
CONSTRAINT pk_Emp_ID PRIMARY KEY ([Emp_ID]),
INDEX IX_Employee_PostalCode NONCLUSTERED (PostalCode) 
);

GO
--OUTPUT

[Screenshot: How to create a nonclustered index within the create table 1.1]

OLD Method : (Create the NONCLUSTERED index after creating the table)
In this method, we will create the NONCLUSTERED index AFTER the table has been created. This script is compatible with SQL Server 2005 and above.

USE AdventureWorks2012
GO
--DROP TABLE Employee
--GO
CREATE TABLE Employee
(
[Emp_ID] int NOT NULL,
[LastName] varchar(255) NOT NULL,
[FirstName] varchar(255),
[Address] varchar(255),
[City] varchar(255),
[PostalCode] nvarchar(15),
CONSTRAINT pk_Emp_ID PRIMARY KEY ([Emp_ID])
)
GO

CREATE NONCLUSTERED INDEX IX_Employee_PostalCode
ON dbo.Employee (PostalCode)
GO

[Screenshot: How to create a nonclustered index within the create table 1.1]

Conclusion
As you can see above, both methods give you the same output; however, the new method reduces the lines of code. Let me know your feedback about this new enhancement.