fn_dblog is one of my favorite undocumented functions; I use it when designing recovery solutions. Today, I will discuss its purpose, syntax, and parameters, along with a problem I faced with its parameters.
Purpose:
It gives you a detailed view of the transaction log.
Syntax:
fn_dblog (@start,@end)
Parameters:
It accepts two parameters; the details are given below.
@start : The start LSN, which must be passed in decimal (not hexadecimal) format.
@end : The end LSN, which must also be passed in decimal format.
Let me explain it with simple examples:
Example 1: (With default parameters)
Select * from fn_dblog(NULL,NULL) --OUTPUT
Example 2: (Passed LSN)
Let's take a Current LSN from the above example and pass it as a parameter. The purpose of passing a single LSN is to pick up only one transaction log record.
use test
Go
Select * from fn_dblog('000005c9:0000133d:0022','000005c9:0000133d:0022') --OUTPUT
Msg 9005, Level 16, State 3, Line 1
Either start LSN or end LSN specified in OpenRowset(DBLog, …) is invalid.
Oops… I am unable to execute it.
The reason behind the above error message is that I picked the LSN directly from the above result set, still in hexadecimal form, and passed it to fn_dblog.
The correct way is to first convert the LSN to decimal and then pass it to fn_dblog.
Given below is the script that will convert your hexadecimal LSN into a decimal LSN.
Declare @Xml Xml
Declare @String Varchar(Max)
Declare @delimiter Varchar(5)
Set @delimiter = ':'
Set @String = '000005c9:0000133d:0022'
SET @Xml = cast(('<a>0x' + replace(@String, @delimiter, '</a><a>0x') + '</a>') AS XML)
;With CTE as
(
SELECT A.value('.', 'varchar(max)') as [Column]
from @Xml.nodes('a') AS FN(A)
)
Select Stuff((Select ':' + Convert(varchar(max), Convert(INT,
cast('' AS XML).value('xs:hexBinary(substring(sql:column("[Column]"),3))', 'varbinary(max)')))
from CTE
for xml path('')), 1, 1, '') as [Current LSN] --OUTPUT
1481:4925:34
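The script above is doing nothing more than base-16 to base-10 conversion on each colon-separated part of the LSN. As an illustration only (outside SQL Server; the function name is mine, not part of any SQL Server tooling), the same conversion can be sketched in Python:

```python
def lsn_hex_to_decimal(lsn):
    """Convert a hexadecimal LSN such as '000005c9:0000133d:0022'
    into the decimal form that fn_dblog expects, e.g. '1481:4925:34'."""
    # Each colon-separated part is a hex number; render it in base 10.
    return ':'.join(str(int(part, 16)) for part in lsn.split(':'))

print(lsn_hex_to_decimal('000005c9:0000133d:0022'))  # 1481:4925:34
```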
Let's pass the above output (1481:4925:34) to fn_dblog and view the result set.
use test
Go
Select * from fn_dblog('1481:4925:34','1481:4925:34') --OUTPUT
Now you can see that the fn_dblog filter worked and picked up only one log record.
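To go the other way, i.e. to match a decimal LSN back against the Current LSN column in fn_dblog's output, each decimal part can be rendered as zero-padded hex (widths of 8, 8 and 4 digits, as seen in the output above). Again, an illustrative Python sketch, not SQL Server functionality:

```python
def lsn_decimal_to_hex(lsn):
    """Convert a decimal LSN such as '1481:4925:34' back to the
    hexadecimal 'Current LSN' format, e.g. '000005c9:0000133d:0022'."""
    parts = [int(p) for p in lsn.split(':')]
    # fn_dblog prints the three parts as 8, 8 and 4 hex digits respectively.
    return '{:08x}:{:08x}:{:04x}'.format(*parts)

print(lsn_decimal_to_hex('1481:4925:34'))  # 000005c9:0000133d:0022
```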
Conclusion:
No doubt fn_dblog is a helpful undocumented function, but do not use it on a production server unless it is really required. Also, remember to convert the hexadecimal LSN to decimal before passing it to fn_dblog as a parameter.
Hi, nice article.
However, I tried your code snippet to convert the LSN, and it gives the following error message.
Declare @Xml Xml
Declare @String Varchar(Max)
Declare @delimiter Varchar(5)
Set @delimiter=':'
Set @String ='000005c9:0000133d:0022'
SET @Xml = cast(('<a>0x'+replace(@String,@delimiter,'</a><a>0x')+'</a>')
AS XML)
;With CTE as (SELECT A.value('.', 'varchar(max)') as [Column]
from @Xml.nodes('a') AS FN(A) )
Select Stuff((Select ':' +
Convert (varchar(max),Convert(INT,cast('' AS XML).value
('xs:hexBinary(substring(sql:column("[Column]"),3) )', 'varbinary(max)')))
from CTE for xml path('') ),1,1,'') as [Current LSN]
--OUTPUT
Msg 2209, Level 16, State 1, Line 13
XQuery [value()]: Syntax error near ‘&’
Hi Frank,
Can you please give me your SQL Server version details? I tested this code in SQL Server 2005 and it executed successfully.
Thank you
Imran
Hi Muhammad,
I am very interested in the possibility of using fn_dblog to create a readable mirror database. So, imagine that we take an existing database called DatabaseA and do a full backup and restore it to DatabaseB. Then insert, update and delete a few records in DatabaseA. Then use fn_dblog to read the log and apply the same inserts, updates and deletes (in the correct order of course) to DatabaseB. If we restrict access to DatabaseB so that we are sure no application is performing inserts, updates or deletes on it except for this proposed function that is applying the changes from DatabaseA, we will have a very useful read-only mirror. If that logic is performed every 1 minute (or so) then it could be an interesting option for various recovery needs. What do you think?
Hi,
You can do it this way, but the problem is that you would need to write a lot of scripts. I think it is better to try log shipping or replication.
Thanks
Imran
Yeah, I understand. I should have said that my question sprang out of frustration that database mirroring is being deprecated and AlwaysOn requires Enterprise Edition. Also, it looks like configuring AlwaysOn is far more complicated than most of us would like, and if the reason for doing it is DR, it is probable that one would have more problems getting it working than actual downtime. So a solution that simply pulls data using fn_dblog and then keeps a remote database in sync would be a wonderful thing to have. How about this: can you point me at scripts or code that makes sense of the output from fn_dblog by generating DML statements from it? If so, that's all I'm looking for. Our existing infrastructure would make it very easy to schedule, secure and monitor a process that uses that to keep a remote database in sync. So we are just missing the part that translates the output from fn_dblog into DML statements. Any help?
Thank you for your updates. I generally do not recommend using the transaction log for this activity, as you may lose log records due to a few other activities. Given below are the details.
1) If your database log size is insufficient.
2) If your database server is restarted.
3) If your recovery model is changed.
4) If someone shrinks the log size.
I developed a solution long ago to handle this issue. It was a manual replication process: I added a flag column and creation and update datetime columns to each table, and on the basis of those fields I manually replicated data between the two servers.
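The flag/datetime scheme described above boils down to: on each pass, select the rows whose update timestamp is newer than the last successful sync and apply them to the target. A minimal sketch in Python using in-memory rows (all names here are hypothetical; a real implementation would query the two SQL Server instances):

```python
from datetime import datetime

def sync_changes(source_rows, target_rows, last_sync):
    """Apply rows changed since last_sync from source to target.

    source_rows / target_rows: dicts keyed by primary key; each row is a
    dict carrying an 'updated_at' datetime (the datetime column described
    above). Returns the number of rows applied to the target.
    """
    applied = 0
    for key, row in source_rows.items():
        if row['updated_at'] > last_sync:
            target_rows[key] = dict(row)  # insert or overwrite on the target
            applied += 1
    return applied

# Example: only the row updated after the last sync is copied over.
source = {
    1: {'name': 'alpha', 'updated_at': datetime(2013, 1, 1, 9, 0)},
    2: {'name': 'beta',  'updated_at': datetime(2013, 1, 2, 9, 0)},
}
target = {1: dict(source[1])}
print(sync_changes(source, target, datetime(2013, 1, 1, 12, 0)))  # 1
```

Note that deletes are why the extra flag column is needed: a physically deleted row leaves no timestamp behind to read, so rows would instead be soft-deleted (flagged) on the source and the flag replicated.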
Still, if you would like to proceed with the transaction log and need help translating it, I can help you. Kindly let me know your SQL Server version.
Thanks
Imran
Ah, great points about the situations in which fn_dblog can be unreliable, I did not realize all that. I wound up tweaking our transaction log shipping down to every 10 minutes and that is pretty darn good for zero effort, so I think I will leave fn_dblog alone for now. But thanks so much for your insights and help!
I am still considering writing some .NET code that will automatically generate trigger code that logs all primary keys and timestamps whenever a record is inserted, updated or deleted. If I can make that very lightweight on the database server in terms of the additional load it creates, then I could easily write more .NET code that queries for records in that log table that it has not seen, then queries the tables for the current data and keeps another database in sync.

There are some interesting logical things to consider, such as the fact that intermediate states will be missed: if a record is modified twice before the second process comes along and queries, it will only get the final state and not the intermediate state. But I think that is okay, as long as the state it obtains is atomic.

Also, this design would assume that all DDL statements have to be applied to both source and mirror databases manually. I think the system could inspect syscolumns, sysobjects, sysindexes, etc., using a stored procedure and quickly determine whether the schemas of both databases are equal. If so, proceed; if not, exit. That way schema changes could be rolled out slowly, and whenever the schema changes are in both places the synchronization would automatically resume. Neat!
If someone were to write this nicely and test it well, it could become a very valuable tool, seeing as Microsoft is moving to very complicated and expensive models for AlwaysOn, requiring Enterprise Edition, Windows failover clusters, etc., etc. I really like to keep SQL Server as simple as possible so that the uptime and reliability of it does not get hindered by, in this case, trying to increase the uptime and reliability of it!
I’d love to hear any thoughts/comments from you or anyone else. Thanks again Imran!
AN
Hi AN
I was doing research regarding your query and found a very handy tool. I think this tool can resolve your problem.
http://www.red-gate.com/products/sql-development/sql-comparison-sdk/learn-more/synchronizing-databases
I will test this app in detail and update you.
Thanks
Imran
I don’t think the Red Gate tools are built for performance, but rather for utility. They probably work great, but if I were to build something like this I would design it so that it performs fast and can run every 60 seconds. The only way to do that is to make design decisions that a third party tool could not make on my behalf. But if you try it out let me know what you find.
A