Archive for January, 2014

Most companies maintain several environments between development and production, such as Development, Integration, Validation, Quality Assurance, User Acceptance and Production, to improve the quality of the product delivered to the customer. But once you have different environments, a very common issue appears: database inconsistency between them. Sometimes the script manager misses a few scripts during deployment, and because of this one or more environments can run into serious problems, or even crash. And if it is the user acceptance or production environment, it can badly damage the image of your product. Sometimes we write user defined functions and stored procedures to compare two databases. These solutions are good and can show you what to fix, but they are not smart solutions, because they cannot fix the issues automatically.
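For example, a very small manual comparison can be written against INFORMATION_SCHEMA (a minimal sketch; DatabaseA and DatabaseB are placeholder names), but it only lists column-level differences and does not generate any synchronization script:

-- Columns that exist (or differ) in DatabaseA but not in DatabaseB
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM DatabaseA.INFORMATION_SCHEMA.COLUMNS
EXCEPT
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM DatabaseB.INFORMATION_SCHEMA.COLUMNS;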
Recently, I came across an issue where we had many environments, and each of them was generating different database inconsistency errors. So I did some research and came across a tool named ApexSQL Diff, a schema comparison tool. I tested it and synced all the environments in a few minutes with just a few clicks. My customer was very happy and asked how I managed to do it in a few minutes when it seemed it would take a few weeks to sort out. I recommended this solution to the customer and told him it is worth the money. I approached the ApexSQL team to purchase this product for my customers and get a discount, and thought of getting the same discount for my blog readers. And finally I got it :). You can use the coupon code ApexSQLDiffPro25 to get a 25% discount on ApexSQL Diff.

Note : There is a FREE community version of ApexSQL Diff for SQL Express and Windows Azure SQL Databases.

Let me explain step by step how it works.

STEP 1:
Once you open ApexSQL Diff, click the New button; it asks for the source and destination of the SQL Server databases to compare (Backup, Script folder, Snapshot, Source control), as shown in the image below.

apexsqldiff.1.1

STEP 2:
Once you select the source and destination databases to compare, just press the Compare button at the bottom of the screen. It takes a while and then gives you all the schema differences between both databases. It supports the new objects shipped with SQL Server 2012 as well, as shown in the image below.
apexsqldiff.1.2

It also provides additional sorting, formatting (alignment & grouping) and filter options, like the object filter, difference filter & script difference view, to give you full control over the differences, as shown in the image below.

apexsqldiff.1.2.1

STEP 3:
Once you have the differences, just press the Sync button; it can sync both databases and can also give you the script (in case you want to deploy it yourself), as shown in the picture below.

apexsqldiff.1.3

Last but not least, it does not stop there: it compares again to check whether any script was missed during the upgrade/update and shows you the differences (if any).
I like this product and I recommend that you try it and see the results for yourself. Its trial version is available here.
You can get further details about the product from http://blog.apexsql.com/category/apexsql-diff/. Let me know your opinion, whether positive or not.



Viewing the definition of a module is one of the activities we need very frequently in SQL Server, whether for customization or for debugging. Today, I was debugging one of the stored procedures, so I thought of writing this article.

There are multiple ways to view the definition of a module. I will mention two of them here and you can choose either of them.

METHOD 1 :
Given below is the solution using OBJECT_DEFINITION (one of the built-in metadata functions in SQL Server).

USE AdventureWorks2012 -- Do not forget to change the database name here
GO
SELECT OBJECT_DEFINITION (OBJECT_ID('[HumanResources].[vEmployee]'))
-- Do not forget to change the schema and object name above
AS ObjectDefinition;
GO
--OUTPUT

definition of modules.1.1

METHOD 2 :
Given below is the solution using sys.sql_modules (one of the object catalog views in SQL Server).

USE AdventureWorks2012 -- Do not forget to change the database name here
GO
SELECT definition
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('HumanResources.vEmployee');
-- Do not forget to change the schema and object name above
GO
--OUTPUT

definition of modules.1.1

CONCLUSION :
As you can see, both methods give you the same result set. However, I personally prefer Method 1 because it requires less code. Let me know which method you prefer.
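
As a side note, if you need the definitions of all modules in the database at once, Method 2 extends naturally; given below is a small sketch joining sys.sql_modules to sys.objects.

USE AdventureWorks2012
GO
-- List every module (procedure, view, function, trigger) with its definition.
SELECT SCHEMA_NAME(o.schema_id) AS [Schema]
      ,o.name                   AS [Object Name]
      ,o.type_desc              AS [Object Type]
      ,m.definition             AS [Object Definition]
FROM sys.sql_modules AS m
INNER JOIN sys.objects AS o
        ON o.object_id = m.object_id
ORDER BY [Schema], [Object Name];
GO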


We usually use temporary tables to keep temporary data in stored procedures; they can be local temporary tables, global temporary tables or table variables. Recently, I was working on the metadata of stored procedures in a client database using sp_describe_first_result_set (a new system stored procedure shipped with SQL Server 2012). Ooopps….. there was an error. Let me explain the error in detail. This error message is available in SQL Server 2012 and above.

Message Number: 11526

Severity : 16

Error Message: The metadata could not be determined because statement ‘%.*ls’ in procedure ‘%.*ls’ uses a temp table.

Error Generation:
Let me create a stored procedure to demonstrate this error.

USE tempdb
GO
--DROP PROCEDURE [dbo].[usp_sample]
--GO
CREATE PROCEDURE [dbo].[usp_sample]
AS
BEGIN
CREATE TABLE #temp_Sample
(
 [ID] INT,
 [NAME] VARCHAR(32)
)
INSERT INTO #temp_Sample VALUES (1,'Bob')

SELECT * FROM #temp_Sample
END
GO

EXEC [dbo].[usp_sample]
GO
--OUTPUT

error_message_11526.1.1

You can see that I have created a stored procedure, executed it and got the result set. Now I would like to view the metadata of the above stored procedure. Given below is the script to view the metadata using sp_describe_first_result_set.

USE tempdb
GO
EXEC sp_describe_first_result_set @tsql = N'usp_sample'
GO
--OUTPUT

Msg 11526, Level 16, State 1, Procedure sp_describe_first_result_set, Line 1
The metadata could not be determined because statement ‘INSERT INTO #temp_Sample VALUES (1,’Bob’)’ in procedure ‘usp_sample’ uses a temp table.

Ooopps…… I am unable to execute it.

Resolution:
The reason behind this error is that you used a local temporary table in the stored procedure, and you cannot view the metadata of such a procedure using sp_describe_first_result_set. So what to do? Very simple, guys: just convert the local temporary table to a table variable and then view the metadata.
Given below is the corrected script.

USE tempdb
GO
--DROP PROCEDURE [dbo].[usp_sample]
--GO
CREATE PROCEDURE [dbo].[usp_sample]
AS
BEGIN
DECLARE @temp_Sample TABLE
(
 [ID] INT,
 [NAME] VARCHAR(32)
)
INSERT INTO @temp_Sample VALUES (1,'Bob')

SELECT * FROM @temp_Sample
END
GO

EXEC sp_describe_first_result_set @tsql = N'usp_sample'
GO
--OUTPUT

error_message_11526.1.2

Conclusion :
Remember, whenever you use a local temporary table in a stored procedure and would like to view the metadata of that stored procedure using sp_describe_first_result_set, always convert the local temporary table to a table variable.
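
As a side note, once the procedure uses a table variable, you can also read its metadata by object id using sys.dm_exec_describe_first_result_set_for_object; given below is a quick sketch.

USE tempdb
GO
-- Returns the first result set's metadata for the (fixed) procedure.
SELECT name, system_type_name, is_nullable
FROM sys.dm_exec_describe_first_result_set_for_object(OBJECT_ID('dbo.usp_sample'), 0);
GO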


I came across this requirement when I was automating one of my services for a customer; it had to run on the first Sunday of every month. I already had a solution for this and had used it in many places, but since my customer is using SQL Server 2012, I wanted to write the script in a different way, with less code. So I started scripting and finally developed it using the EOMONTH function (a new function shipped with SQL Server 2012).

Let me create a sample to demonstrate the solution.

USE tempdb
GO
--DROP TABLE tbl_Sample
--GO
CREATE TABLE tbl_Sample
(
 [ID] INT,
 [Date] DATETIME
)
GO
INSERT INTO tbl_Sample VALUES (1,'2013-01-05')
INSERT INTO tbl_Sample VALUES (2,'2013-02-01')
INSERT INTO tbl_Sample VALUES (3,'2013-05-05')
INSERT INTO tbl_Sample VALUES (4,'2013-07-20')
INSERT INTO tbl_Sample VALUES (5,'2013-08-28')

GO

SOLUTION 1 : Using EOMONTH
Given below is the script.

USE tempdb
GO
SELECT DATEADD(DAY,8-
 DATEPART(WEEKDAY,DATEADD(DAY,1,EOMONTH([Date])))
,EOMONTH([Date])) AS [First Sunday of the next month]
FROM tbl_Sample
GO
--OUTPUT

firstsundayofthemonth.1.1

SOLUTION 2 : Using Traditional Method
Given below is the script.

USE tempdb
GO
SELECT CONVERT(DATE,DATEADD(DAY,8-
 DATEPART(WEEKDAY,DATEADD(MONTH,DATEDIFF(MONTH, 0, [Date])+1,0))
,DATEADD(MONTH,DATEDIFF(MONTH, 0, [Date])+1, -1)))
AS [First Sunday of the next month]
FROM tbl_Sample
GO
--OUTPUT

firstsundayofthemonth.1.1
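
Note that both formulas above rely on the session's DATEPART(WEEKDAY) numbering Sunday as 7 (for example, SET DATEFIRST 1); with other DATEFIRST or language settings they may land on a different day. Given below is a sketch of a setting-independent variant that counts days from the known Monday 1900-01-01 instead of using DATEPART(WEEKDAY).

USE tempdb
GO
-- Day 0 (1900-01-01) was a Monday, so DATEDIFF(DAY, 0, d) % 7 = 6 means d is a Sunday.
SELECT DATEADD(DAY,
        (6 - DATEDIFF(DAY, 0, DATEADD(DAY, 1, EOMONTH([Date]))) % 7 + 7) % 7,
        DATEADD(DAY, 1, EOMONTH([Date]))) AS [First Sunday of the next month]
FROM tbl_Sample
GO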


In one of my earlier articles related to triggers, I wrote about how to get the list of triggers along with their different properties using sys.triggers & OBJECTPROPERTY. It was quite a big script to get these details. Recently, when I was working on some system level stored procedures, I found a shortcut to get the list of triggers along with their schema using sp_MSforeachtable & sp_helptrigger.

Given below is the script.

--This script is compatible with SQL Server 2005 and above.
USE AdventureWorks2012
GO
EXEC sp_MSforeachtable
 @command1 = 'sp_helptrigger ''?'''
,@whereand = 'And Object_id In (Select parent_id From sys.triggers)';
--OUTPUT

listoftrigger.1.1
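
If you prefer a single set-based result (and want to avoid the undocumented sp_MSforeachtable), a compact query against sys.triggers such as the sketch below returns similar information.

USE AdventureWorks2012
GO
-- DML triggers with their parent object and schema in one result set.
SELECT SCHEMA_NAME(o.schema_id) AS [Schema]
      ,o.name                   AS [Table Name]
      ,t.name                   AS [Trigger Name]
      ,t.is_disabled
      ,t.is_instead_of_trigger
FROM sys.triggers AS t
INNER JOIN sys.objects AS o
        ON o.object_id = t.parent_id
WHERE t.parent_class = 1; -- object (table/view) triggers only
GO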


String or binary data would be truncated (error number 8152) is a very common error. It usually happens when we try to insert data into a string (varchar, nvarchar, char, nchar) column and the data is longer than the size of the column. So you need to check the data size against the column width, identify which column is creating the problem and fix it. This is very simple if you are dealing with a table that has only a few columns, but it becomes a nightmare if you are dealing with an INSERT INTO query with a huge number of columns and you need to check them one by one. I received this query from one of my blog readers, Mr Ram Kumar, asking if there is a shortcut to resolve this issue that also gives the column name along with the data creating the problem. I started searching for a solution but could not find a proper one, so I developed the one below.
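For a table with only a few string columns, the manual check mentioned above is easy to script; given below is a small sketch that reads the defined width of each string column from sys.columns (it references the demo table tbl_sample created below), so you can compare those widths against the values you are inserting.

-- Defined width (in bytes) of every string column of the table.
SELECT c.name       AS [Column Name]
      ,t.name       AS [Data Type]
      ,c.max_length AS [Max Length (bytes)]
FROM sys.columns AS c
INNER JOIN sys.types AS t
        ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.tbl_sample')
AND   t.name IN ('varchar','nvarchar','char','nchar');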
Before proceeding with the solution, I would like to create a sample to demonstrate the problem.

SAMPLE :

--This script is compatible with SQL Server 2005 and above.
--DROP TABLE tbl_sample
--GO
CREATE TABLE tbl_sample
(
 [ID] INT,
 [NAME] VARCHAR(10)
)
GO
INSERT INTO tbl_sample VALUES (1,'Bob Jack Creasey')
GO
INSERT INTO tbl_sample ([ID],[NAME]) VALUES (2,'Frank Richard Wedge')
GO
--OUTPUT

Msg 8152, Level 16, State 14, Line 1
String or binary data would be truncated.
The statement has been terminated.
Msg 8152, Level 16, State 14, Line 2
String or binary data would be truncated.
The statement has been terminated.

SOLUTION :
Given below is the stored procedure that finds the exact column name(s) and the data that exceeds the column width.

--DROP PROCEDURE usp_String_or_binary_data_truncated
--GO
CREATE PROCEDURE usp_String_or_binary_data_truncated
@String VARCHAR(MAX)
AS

DECLARE @VARCHAR AS VARCHAR(MAX)
DECLARE @Xml AS XML
DECLARE @TCount AS INT
SET @String= REPLACE(REPLACE(REPLACE(REPLACE(@String,'''','')
             ,'[',''),']',''),CHAR(13) + CHAR(10),'')
SET @Xml = CAST(('<A>'+REPLACE(@String,'(','</A><A>')
           +'</A>') AS XML)

SELECT @TCount=COUNT(*)
FROM @Xml.nodes('A') AS FN(A)

;WITH CTE AS
     (SELECT
     (CASE
	 WHEN (CHARINDEX('INSERT INTO',A.value('.', 'varchar(max)'))>0)
     THEN 1
     WHEN CHARINDEX('VALUES',A.value('.', 'varchar(max)'))>0
     THEN 2
	 WHEN (CHARINDEX('INSERT INTO',A.value('.', 'varchar(max)'))=0
     AND CHARINDEX('VALUES',A.value('.', 'varchar(max)'))=0)
     AND @TCount=2  THEN 2
	 WHEN (CHARINDEX('INSERT INTO',A.value('.', 'varchar(max)'))=0
     AND CHARINDEX('VALUES',A.value('.', 'varchar(max)'))=0)
     AND @TCount=3  THEN 3
     END) AS [Batch Number],
     REPLACE(REPLACE(A.value('.', 'varchar(max)')
     ,'INSERT INTO',''),'VALUES ','') AS [Column]
     FROM @Xml.nodes('A') AS FN(A))

, [CTE2] AS
(
    SELECT
    [Batch Number],
    CAST('<M>' + REPLACE([Column], ',' , '</M><M>')
    + '</M>' AS XML)
    AS [Column name And Data]
    FROM [CTE]
)
,[CTE3] AS
(
    SELECT [Batch Number],
    ROW_NUMBER() OVER(PARTITION BY [Batch Number]
    ORDER BY [Batch Number] DESC) AS [Row Number],
    Split.a.value('.', 'VARCHAR(MAX)') AS [Column name And Data]
    FROM [CTE2]
    CROSS APPLY [Column name And Data].nodes('/M') AS Split(a))

SELECT
 ISNULL(B.[Column name And Data],C.name) AS [Column Name]
,A.[Column name And Data] AS [Column Data]
,C.max_length As [Column Length]
,DATALENGTH(A.[Column name And Data])
AS [Column Data Length]

FROM [CTE3] A
LEFT JOIN [CTE3] B
ON A.[Batch Number]=2 AND B.[Batch Number]=3
AND A.[Row Number] =B.[Row Number]
LEFT JOIN sys.columns C
ON C.object_id =(
	SELECT object_ID(LTRIM(RTRIM([Column name And Data])))
	FROM [CTE3] WHERE [Batch Number]=1
)
AND (C.name = B.[Column name And Data]
OR  (C.column_id =A.[Row Number]
And A.[Batch Number]<>1))
WHERE a.[Batch Number] <>1
AND DATALENGTH(A.[Column name And Data]) >C.max_length
AND C.system_type_id IN (167,175,231,239)
AND C.max_length>0

GO

EXAMPLE :
Now, you simply need to double up the single quotes inside your INSERT INTO statement (so that it can be passed as a string parameter) and pass it to the stored procedure.
Given below is the sample.

EXEC usp_String_or_binary_data_truncated 'INSERT INTO tbl_sample VALUES (1,''Bob Jack Creasey'')'
GO
EXEC usp_String_or_binary_data_truncated 'INSERT INTO tbl_sample ([ID],[NAME]) VALUES (2,''Frank Richard Wedge'')'
GO
--OUTPUT

string or binary data truncated.1.1

As you can see above, it returned only the column name(s) whose data size exceeds the column width.
Do let me know if you have come across a situation like this and resolved it in a different way.


SQL Server database maintenance plays an important role in improving database performance. One of the important aspects of maintenance is the SQL Server transaction log: even if you configured it properly, you should keep an eye on it and maintain it from time to time. To do that, you need to know the total size of the log, the used size and the used percentage. There are multiple ways to get these details; one of the well-known ones is the DBCC SQLPERF command, which I have been using since SQL Server 2005. It works fine, but the only problem with this DBCC command is that if you need to manipulate its result set further, you first have to place the result set in a temporary table.
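For reference, this is roughly what the DBCC SQLPERF approach looks like when you want to filter or sort the output (a sketch; the temporary table mirrors the LOGSPACE result set):

-- Capture DBCC SQLPERF(LOGSPACE) in a temporary table, then manipulate it.
CREATE TABLE #LogSpace
(
 [Database Name]      SYSNAME,
 [Log Size (MB)]      FLOAT,
 [Log Space Used (%)] FLOAT,
 [Status]             INT
);
INSERT INTO #LogSpace
EXEC ('DBCC SQLPERF(LOGSPACE)');

SELECT * FROM #LogSpace
ORDER BY [Log Size (MB)] DESC;

DROP TABLE #LogSpace;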

A few days ago, I was working on database log sizes and looking for an alternative solution. After some research, I found an undocumented dynamic management view shipped with SQL Server 2012, namely sys.dm_db_log_space_usage. It gives the log size of the database and other details in bytes; you can convert them to MB and manipulate the result set further (no temporary table needed). Given below is the script to demonstrate how it works.

USE AdventureWorks2012
GO
SELECT
 DB_NAME(database_id) AS [Database Name],
 ROUND(CONVERT(FLOAT,total_log_size_in_bytes/1024)/1024,2) AS [Log Size (MB)],
 ROUND(CONVERT(FLOAT,used_log_space_in_bytes/1024)/1024,2) AS [Log Used size (MB)],
 ROUND(used_log_space_in_percent,2) AS [Log Used %]
FROM
sys.dm_db_log_space_usage

--OUTPUT

SQL Server log size.1.1

