Automatically Fixing High VLF Count on SQL Server 2012+

The Problem

With thousands of databases, we rely on autogrow, and before we know it we have databases with high VLF counts in their transaction logs.  If you don’t know why that is bad, refer to the Resources section of this article.

Update – November 5, 2017

I have added this code to my GitHub repository to make it easier to download.  Be sure to read the steps at the bottom on how to set up the SQL Agent job, and the details between here and there, to understand what it does.

The Solution

Fix the high VLF count without any manual intervention by the DBA.  This requires a few pieces of the puzzle to work across the various versions of SQL Server.

Step 1

The first part of the process is to capture the output of DBCC LOGINFO (or, if you are ready for SQL Server 2017, the new DMV sys.dm_db_log_stats) into a table you can read later to see how many VLFs currently exist in each database. We are going to create a table called VLFInfo and use the procedure VLF_UpdateInfo to populate it.  The procedure is called in step one of a SQL Agent job to automate fixing the VLFs during appropriate maintenance windows on your server.  As you will see in Step 2, the solution also tries to avoid fixing a log too soon after the file has just grown and acquired those lovely extra VLFs.

CREATE TABLE dbo.VLFInfo(
	[DBName] [sysname] COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
	[NumOfVLFs] [int] NOT NULL,
	[ActiveVLFs] [int] NOT NULL,
	[RecoveryMode] [varchar](7) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
	[LogSizeMB] [int] NOT NULL,
	[LastUpdateTime] [datetime] NOT NULL CONSTRAINT [DF_TLogVLFInfo_LastUpdateTime] DEFAULT (getdate()),
	CONSTRAINT [PK_DBName] PRIMARY KEY CLUSTERED 
(
	[DBName] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) 
) 
GO

CREATE NONCLUSTERED INDEX [IX_LogSizeMB] ON dbo.[VLFInfo]
(
	[LogSizeMB] ASC
)
INCLUDE ([NumOfVLFs]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) 
GO
CREATE PROCEDURE dbo.VLF_UpdateInfo
AS
SET NOCOUNT ON

DECLARE 
	@DBName SYSNAME,
	@vlfcount INT,
	@activevlfcount INT,
	@dbccquery varchar(1000),
	@currentlogsizeMB INT

CREATE TABLE #VLFSummary
    (
      DBName SYSNAME ,
      NumOfVLFs INT ,
      ActiveVLfs INT ,
      RecoveryMode VARCHAR(99) ,
      LogSizeMB INT
    )

CREATE TABLE #VLFInfo
    (
      RecoveryUnitId TINYINT ,
      FileId TINYINT ,
      FileSize BIGINT ,
      StartOffset BIGINT ,
      FSeqNo INT ,
      [Status] TINYINT ,
      Parity TINYINT ,
      CreateLSN NUMERIC(25, 0)
    )

DECLARE csr CURSOR FAST_FORWARD READ_ONLY
FOR
SELECT name
FROM master.sys.databases 
WHERE database_id <> 2  -- skip tempdb
AND [state] = 0         -- online databases only
AND is_read_only = 0 

OPEN csr

FETCH NEXT FROM csr INTO @dbname

WHILE (@@fetch_status <> -1)
BEGIN
	SET @dbccquery = REPLACE(REPLACE(
		'DBCC loginfo ("{{DatabaseName}}") WITH NO_INFOMSGS, TABLERESULTS'
		,'"','''')
		,'{{DatabaseName}}', @dbname)

	TRUNCATE TABLE #VLFInfo
		
	INSERT INTO #VLFInfo
	EXEC (@dbccquery)

	SET @vlfcount = @@rowcount

	SELECT @activevlfcount = COUNT(*) 
	FROM #VLFInfo WHERE [Status] = 2

	SELECT @currentlogsizeMB = (size/128) 
	FROM master.sys.master_files 
	WHERE type_desc = 'log' 
		AND DB_NAME(database_id)=@dbname

	INSERT INTO #VLFSummary
	VALUES(@dbname, @vlfcount, @activevlfcount, CONVERT(VARCHAR(7),DATABASEPROPERTYEX(@dbname, 'Recovery')), @currentlogsizeMB)

	FETCH NEXT FROM csr INTO @dbname
END

CLOSE csr
DEALLOCATE csr

TRUNCATE TABLE dbo.VLFInfo

INSERT INTO dbo.VLFInfo (DBName, NumOfVLFs, ActiveVLFs, RecoveryMode, LogSizeMB)
SELECT DBName, NumOfVLFs, ActiveVLFs, RecoveryMode, LogSizeMB 
FROM #VLFSummary

DROP TABLE #VLFSummary  
GO
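As an aside, on SQL Server 2017 and up the same counts the procedure collects via DBCC LOGINFO can be read straight from sys.dm_db_log_stats, with no loop or temp tables.  A minimal sketch (column names as exposed by the 2017 DMV; filters match the cursor above):

```sql
-- Sketch: per-database VLF counts on SQL Server 2017+ using sys.dm_db_log_stats.
SELECT  d.name                  AS DBName ,
        ls.total_vlf_count      AS NumOfVLFs ,
        ls.active_vlf_count     AS ActiveVLFs ,
        d.recovery_model_desc   AS RecoveryMode ,
        ls.total_log_size_mb    AS LogSizeMB
FROM    sys.databases AS d
CROSS APPLY sys.dm_db_log_stats(d.database_id) AS ls
WHERE   d.database_id <> 2      -- skip tempdb, same as the cursor in VLF_UpdateInfo
        AND d.[state] = 0       -- online only
        AND d.is_read_only = 0;
```

On 2017+ you could replace the cursor body of VLF_UpdateInfo with a single INSERT from this query.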

Step 2

Now that we have that information captured, we can execute a stored procedure to fix the high VLF count.  As part of our solution, we have an Extended Events session that tracks file growth and shrink activity.  This allows us to run the job all day long and check the last time the files grew.  Our stored procedure VLF_AutoFix has a parameter @LookBackTime that defaults to 60 minutes, so if the transaction log grew in the last 60 minutes it will not try to fix the high VLF count.

CREATE EVENT SESSION XE_DatabaseSizeChangeEvents
    ON SERVER
    ADD EVENT sqlserver.database_file_size_change
        ( SET collect_database_name = ( 1 ))
    ADD TARGET package0.ring_buffer
    WITH (   MAX_MEMORY = 4096KB ,
             EVENT_RETENTION_MODE = ALLOW_MULTIPLE_EVENT_LOSS ,
             MAX_DISPATCH_LATENCY = 30 SECONDS ,
             MAX_EVENT_SIZE = 0KB ,
             MEMORY_PARTITION_MODE = NONE ,
             TRACK_CAUSALITY = OFF ,
             STARTUP_STATE = ON
         );
GO

ALTER EVENT SESSION XE_DatabaseSizeChangeEvents ON SERVER STATE = START; 
GO
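Before wiring the session into the job, it is worth confirming it is actually buffering events.  A quick sketch that counts what the ring buffer target has captured (a simpler version of the XML query VLF_AutoFix runs later):

```sql
-- Sketch: verify the XE session is running and has buffered file-size-change events.
SELECT  ds.name AS SessionName ,
        CAST(dt.target_data AS XML)
            .value('count(RingBufferTarget/event)', 'int') AS EventsBuffered
FROM    sys.dm_xe_sessions AS ds
        INNER JOIN sys.dm_xe_session_targets AS dt
            ON ds.[address] = dt.event_session_address
WHERE   ds.name = N'XE_DatabaseSizeChangeEvents'
        AND dt.target_name = N'ring_buffer';
```

If this returns no rows, the session is not started; grow a test log file and the EventsBuffered count should tick up within the 30-second dispatch latency.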

Step 3

Now that we have a way to check whether the size of the log has changed in the time we specified, let’s look at the procedure we can run to automatically shrink and regrow your log file in the recommended chunk sizes. First, we create a table to track when we attempt to auto-fix high VLF counts. Our procedure uses this table to avoid retrying for another @HoursSinceLastFix hours (6 by default) if it was unable to fix the VLFs the first time due to an active VLF at the end of the transaction log. We don’t want the job to constantly be trying to fix those VLFs.

CREATE TABLE dbo.[VLFAutoFix](
	[ID] [int] IDENTITY(1,1) NOT NULL,
	[DBName] [nvarchar](128) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
	[CurrentVLFCount] [int] NOT NULL,
	[LogFileSizeMBs] [int] NOT NULL,
	[LogDate] [datetime] NOT NULL,
	CONSTRAINT [PK_TLogVLFAutoFix] PRIMARY KEY CLUSTERED 
(
	[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) 
) 
GO

ALTER TABLE dbo.[VLFAutoFix] ADD CONSTRAINT [DF_VLFAutoFix_LogDate] DEFAULT (getdate()) FOR [LogDate]
GO

NOTE: The algorithm for how VLFs are created changed in SQL Server 2014, so you most likely will not need this procedure as much in those environments; this procedure still uses the pre-2014 formula. For example, on 2014+ a 512 MB growth of an 8 GB log (less than 1/8 of the current size) creates a single 512 MB VLF, where 2012 and below would create 8 VLFs.

Step 4

In the last part, we have two procedures.  One serves as a wrapper that calls the procedure that actually fixes the high VLF count and logs the attempt to the table above.  Before attempting the fix, it checks the Extended Events session to see if the log file size changed in the last @LookBackTime minutes, and checks the VLFAutoFix table to see if a fix was attempted in the last @HoursSinceLastFix hours. If either condition is met, it will not attempt to fix the VLFs and will wait to check again the next time the job runs.  The second procedure actually runs your log backup job and attempts to shrink the file and grow it back out.

----------------------------------------------------------------------------------
-- Procedure Name: VLF_AutoFix
--
-- Desc: Runs VLF_Fix if Ideal VLFCount is over 100 and VLFCount over IdealCount + @VLFIdealOver 
--
--
-- Notes:  Recommendations are to have each VLF be no more than 512 MBs 
--	http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/
--	chunks less than 64MB and up to 64MB = 4 VLFs
--	chunks larger than 64MB and up to 1GB = 8 VLFs
--	chunks larger than 1GB = 16 VLFs
--	ideal size for a VLF 512 MBs, 20 to 30 VLfs , 50 high mark
--  https://www.sqlskills.com/blogs/paul/important-change-vlf-creation-algorithm-sql-server-2014/
--  Algorithms update for 2014 and up
--  Is the growth size less than 1/8 the size of the current log size?
--  Yes: create 1 new VLF equal to the growth size
--  No: use the formula above
----------------------------------------------------------------------------------
CREATE PROCEDURE dbo.VLF_AutoFix
(
	@LookBackTime INT = 60 , -- Number of minutes to look back in the XE to see if things have changed
	@VLFIdealOver INT = 20 , -- Number of over the Ideal number of VLFs is OK
	@VLFCountMin INT = 100 , -- The minimum of VLFs the log has to have to try to fix it
	@VLFIdealSize INT = 512 , -- Ideal size of each VLF
	@HoursSinceLastFix INT = 6 , -- Number of hours since the last time it tried to fix it
	@MaxIncrementSizeMB INT = 8192, -- Maximum size to grow by per step; on 2012 and below an 8192 MB growth creates 16 VLFs of 512 MB each
	@LogBackJobName sysname --Name of log backup job on server
)
AS
SET NOCOUNT ON

CREATE TABLE #VLFInfo
(
	RecoveryUnitID INT ,
	FileID INT ,
	FileSize BIGINT ,
	StartOffset BIGINT ,
	FSeqNo BIGINT ,
	[Status] BIGINT ,
	Parity BIGINT ,
	CreateLSN NUMERIC(38)
);
	 
CREATE TABLE #VLFCountResults
	(
		DatabaseName SYSNAME ,
		VLFCount INT ,
		LogFileSize BIGINT
	);

CREATE TABLE #Events ( DatabaseName SYSNAME );
CREATE TABLE #LogFileSize ( LogFileSizeMB INT );

DECLARE @DBName SYSNAME ,
	@LogFileSize INT ,
	@IncrementSizeMB INT ,
	@VLFCount INT ,
	@SQL NVARCHAR(MAX),
	@return_status int,
	@CurrentLogFileSize INT;

DECLARE vlfcursor CURSOR READ_ONLY
FOR
	SELECT  DBName ,
			NumOfVLFs ,
			LogSizeMB 
	FROM    dbo.VLFInfo 
	WHERE   NumOfVLFs > @VLFCountMin
			AND NumOfVLFs - ( LogSizeMB / @VLFIdealSize ) >= @VLFIdealOver  ;
OPEN vlfcursor;

FETCH NEXT FROM vlfcursor INTO @DBName, @VLFCount, @LogFileSize ;
WHILE ( @@fetch_status <> -1 )
	BEGIN
		IF ( @@fetch_status <> -2 )
			BEGIN
				--Query  to see if log files has been grown in the last @LookBackTime Minutes
				WITH    Data
							AS ( SELECT   CAST(target_data AS XML) AS TargetData
								FROM     sys.dm_xe_session_targets dt
										INNER JOIN sys.dm_xe_sessions ds ON ds.address = dt.event_session_address
								WHERE    dt.target_name = N'ring_buffer'
										AND ds.Name = N'XE_DatabaseSizeChangeEvents'
								)
				INSERT INTO #Events
				SELECT  XEventData.XEvent.value('(action[@name="database_name"]/value)[1]', 'SYSNAME') AS DatabaseName
				FROM    Data d
						CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name=''database_file_size_change'']')
						AS XEventData ( XEvent )
				WHERE   XEventData.XEvent.value('(@timestamp)[1]', 'datetime2') > CONVERT(DATETIME2, DATEADD(MINUTE, -1 * @LookBackTime, GETDATE()))
						AND XEventData.XEvent.value('(data[@name="file_type"]/text)[1]', 'NVARCHAR(120)') = N'Log file'
						AND XEventData.XEvent.value('(action[@name="database_name"]/value)[1]', 'SYSNAME') = @DBName;

				--If no growth in the last @LookBackTime minutes and no fix attempt on this DB in the last @HoursSinceLastFix hours
				IF @@ROWCOUNT = 0 AND NOT EXISTS (SELECT 1 FROM dbo.VLFAutoFix WHERE DBName = @DBName AND LogDate>= DATEADD(HOUR, @HoursSinceLastFix * -1, GETDATE()))
					BEGIN
						IF @LogFileSize >= @MaxIncrementSizeMB -- 512 MB limit on VLF size, creates 16 VLFs per growth
							SET @IncrementSizeMB = @MaxIncrementSizeMB;
						ELSE
							SET @IncrementSizeMB = @LogFileSize;  -- Else grow back to original size using size as increment value
						
						--Attempt to shrink and regrow log file
						EXEC @return_status = dbo.VLF_Fix @DBName = @DBName,  
							@IncrementSizeMB = @IncrementSizeMB,
							@TargetLogSizeMB = @LogFileSize,
							@LogBackJobName = @LogBackJobName;

						--If previous shrink and regrow was unsuccessful regrow to original size without shrinking
                        SELECT  @CurrentLogFileSize = ( size / 128 )
                        FROM    master.sys.master_files
                        WHERE   type_desc = 'log'
                                AND DB_NAME(database_id) = @DBName

                        IF @LogFileSize > @CurrentLogFileSize
						BEGIN 
                            EXEC dbo.VLF_Fix
                                @DBName = @DBName ,
                                @IncrementSizeMB = @IncrementSizeMB ,
                                @TargetLogSizeMB = @LogFileSize ,
								@LogBackJobName= @LogBackJobName;
						END
							
						--Record the Auto Fix info to a table
						INSERT INTO dbo.VLFAutoFix (DBName, CurrentVLFCount, LogFileSizeMBs)
						VALUES (@DBName, @VLFCount, @LogFileSize);
					END

				TRUNCATE TABLE #Events;
			END
		FETCH NEXT FROM vlfcursor INTO @DBName, @VLFCount, @LogFileSize;
	END

CLOSE vlfcursor;
DEALLOCATE vlfcursor;

DROP TABLE #VLFInfo;
DROP TABLE #VLFCountResults;
DROP TABLE #Events;
GO
CREATE PROCEDURE dbo.VLF_Fix
(
	@DBName sysname,
	@StopTimeSecs INT = 600,
	@DelayIncrementSecs INT = 1,
	@TargetLogSizeMB INT,
	@IncrementSizeMB FLOAT = 8192,
	@LogBackJobName sysname
) 
AS 
SET NOCOUNT ON

DECLARE @Delay INT
DECLARE @DelayTime DATETIME
DECLARE @CutOffTime DATETIME
DECLARE @sqlcmd NVARCHAR(MAX)
DECLARE @DBCCQuery VARCHAR(99)
DECLARE @LoopCtr INT
DECLARE @StepMB INT
DECLARE @LogName sysname
DECLARE @CurrentLogFileSizeMB INT
DECLARE @VLFCount INT
DECLARE @DBRecoveryModel CHAR(1)
DECLARE @SQLExceptionMsg VARCHAR(MAX)

CREATE TABLE #VLFInfo
    (
      RecoveryUnitId TINYINT ,
      FileId TINYINT ,
      FileSize BIGINT ,
      StartOffset BIGINT ,
      FSeqNo INT ,
      [Status] TINYINT ,
      Parity TINYINT ,
      CreateLSN NUMERIC(25, 0)
    )
    
-- Set the stop time for the loop
SET @CutOffTime = DATEADD(s,@StopTimeSecs,GETDATE())
SET @Delay = 0
	
-- Get the recovery model for the database
SET @DBRecoveryModel = ( SELECT ( CASE WHEN d.recovery_model = 3 THEN 'S' ELSE 'F' END  ) FROM sys.databases AS d WHERE d.name = @DBName )

-- Build SQL command for shrink loop
SET @sqlcmd = 
(
    SELECT REPLACE('
        USE [{{@dbname}}];
        CHECKPOINT;
        '
        ,'{{@dbname}}',@DBName)
) +
	REPLACE(REPLACE(
	'
	SELECT 1
	WHILE @@RowCount > 0
	BEGIN
		SELECT	1
		FROM	msdb.dbo.sysjobs_view job
				INNER JOIN msdb.dbo.sysjobactivity activity
					ON job.job_id = activity.job_id
		WHERE	job.name = "{{JobName}}"
				AND start_execution_date IS NOT NULL
				AND stop_execution_date IS NULL
		ORDER BY start_execution_date DESC
	END '
	,'"', '''')
	,'{{JobName}}', @LogBackJobName)
	 + 
	REPLACE(REPLACE('
	EXEC msdb.dbo.sp_start_job "{{JobName}}"'
	,'"', '''')
	,'{{JobName}}', @LogBackJobName)
	+
	REPLACE(REPLACE(
	'
	SELECT 1
	WHILE @@RowCount > 0
	BEGIN
		SELECT	1
		FROM	msdb.dbo.sysjobs_view job
				INNER JOIN msdb.dbo.sysjobactivity activity
					ON job.job_id = activity.job_id
		WHERE	job.name = "{{JobName}}"
				AND start_execution_date IS NOT NULL
				AND stop_execution_date IS NULL
		ORDER BY start_execution_date DESC
	END'
	,'"', '''')
	,'{{JobName}}', @LogBackJobName)
	 +
(
    SELECT REPLACE('
        DBCC SHRINKFILE ({{file_id}} , 0, TRUNCATEONLY) WITH NO_INFOMSGS;
        CHECKPOINT;
		DBCC SHRINKFILE ({{file_id}} , 0) WITH NO_INFOMSGS;
        '
        ,'{{file_id}}', CONVERT(VARCHAR(99), [file_id]))
    FROM master.sys.master_files
    WHERE type_desc = 'log'
		AND DB_NAME(database_id) = @DBName
) + '
CHECKPOINT;
'

-- Set log name and target size to value of parameter supplied, or existing size if no parameter value supplied
SELECT TOP 1 @LogName = name, @TargetLogSizeMB = ROUND(ISNULL(@TargetLogSizeMB,[size]/128.0),0) FROM master.sys.master_files WHERE database_id = DB_ID(@DBName) AND type = 1 ORDER BY size DESC

-- Get VLF info and store in temporary table
SET @DBCCQuery = REPLACE(REPLACE(
		'DBCC loginfo ("{{DatabaseName}}") WITH NO_INFOMSGS, TABLERESULTS'
		,'"','''')
		,'{{DatabaseName}}', @DBName)

INSERT INTO #VLFInfo
EXEC (@DBCCQuery)

SELECT @VLFCount = COUNT(*) 
FROM #VLFInfo

SELECT TOP 1 @CurrentLogFileSizeMB = ROUND([size]/128.0,0) 
FROM master.sys.master_files 
WHERE database_id = DB_ID(@DBName) 
	AND type = 1 
ORDER BY size DESC

-- Run the shrinking loop
WHILE ( (GETDATE()<@CutOffTime) AND ((@CurrentLogFileSizeMB > 100) OR (@VLFCount > 8)) AND (@VLFCount > 2))
BEGIN
	-- Run the shrink command only if the most recent log VLF is not active      
	IF ( (SELECT TOP 1 Status FROM #VLFInfo ORDER BY StartOffset DESC)<>2 )
		BEGIN
			EXEC sys.sp_executesql @sqlcmd
		END
	-- Reset values          
	TRUNCATE TABLE #VLFInfo 
	     
	INSERT INTO #VLFInfo
	EXEC (@DBCCQuery)

	SELECT @VLFCount = COUNT(*) 
	FROM #VLFInfo

	SELECT TOP 1 @CurrentLogFileSizeMB = ROUND([size]/128.0,0) 
	FROM master.sys.master_files 
	WHERE database_id = DB_ID(@DBName) 
		AND type = 1 
	ORDER BY size DESC
	
	SET @Delay = @Delay + @DelayIncrementSecs
	SET @DelayTime = DATEADD(s,@Delay,GETDATE())
	
	PRINT 'Waiting for ' + CONVERT(VARCHAR(99),@Delay) + ' seconds ...'
	PRINT 'Current log file size is ' + CONVERT(VARCHAR(99),@CurrentLogFileSizeMB) + 'MB'
	PRINT 'Current VLF count is ' + CONVERT(VARCHAR(99),@VLFCount)

	WAITFOR TIME @DelayTime			     
END

SET @sqlcmd = '-- Target size in MB is ' + ISNULL(CONVERT(VARCHAR(99),@TargetLogSizeMB),'Unknown') + CHAR(13) + CHAR(10)
SET @sqlcmd = @sqlcmd + '-- LogFile name is ' + @LogName + CHAR(13) + CHAR(10)
SET @sqlcmd = @sqlcmd + '-- Ideal increment size is ' + CONVERT(VARCHAR(99),@IncrementSizeMB) + 'MB' + CHAR(13) + CHAR(10)

-- Set increment size as close to ideal size as possible (this works better if a target size is supplied that is a multiple of the increment size obviously)
SELECT @StepMB = ROUND(@TargetLogSizeMB / CEILING(@TargetLogSizeMB / @IncrementSizeMB),0) 
		,@LoopCtr = CEILING(@TargetLogSizeMB / @IncrementSizeMB)     
		,@TargetLogSizeMB = @StepMB

WHILE (@LoopCtr > 0)
BEGIN
	SELECT @sqlcmd = @sqlcmd + 'ALTER DATABASE [' + @DBName + '] MODIFY FILE (NAME = N''' + @LogName + ''', SIZE = ' + CONVERT(VARCHAR(9),@TargetLogSizeMB) + 'MB);' + CHAR(13) + CHAR(10)
	SELECT @TargetLogSizeMB = @TargetLogSizeMB + @StepMB,@LoopCtr = @LoopCtr - 1
END

IF ( ((GETDATE()<@CutOffTime) AND ((@CurrentLogFileSizeMB <= 100) OR (@VLFCount = 2))))
BEGIN  
	EXEC sys.sp_executesql @sqlcmd
END  
ELSE
BEGIN
	PRINT 'Unable to reduce VLFs sufficiently within the specified time period ...' 
        
	PRINT 'Please try again'
	SET @SQLExceptionMsg = 'Current log file size is ' + CONVERT(VARCHAR(99),@CurrentLogFileSizeMB) + 'MB'
		+ '; current VLF count is ' + CONVERT(VARCHAR(99),@VLFCount)
	RAISERROR(@SQLExceptionMsg, 16, 1)
END  

DROP TABLE #VLFInfo
GO

Bringing It All Together

Well, by now you have seen a whole bunch of code, so it’s a good time to summarize how to implement the solution in your environment.  First, create the table VLFInfo.  Then create a job to run the first stored procedure, VLF_UpdateInfo, to populate that table.  Before you set up the next steps to do any automatic fixes, check VLFInfo across all your servers and see where you can test this.

Next, create the Extended Events session XE_DatabaseSizeChangeEvents and let it start collecting data for you.  Then create the tracking table VLFAutoFix so we will know when the process attempts to fix high VLF counts, and create the stored procedures VLF_AutoFix and VLF_Fix.

Now create a SQL Agent job.  Step one will call procedure VLF_UpdateInfo.  Step two will call VLF_AutoFix, with the @LogBackJobName parameter specifying your log backup job so backups can help mark the VLFs at the end of the log file as inactive.
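The job setup can be sketched with the msdb stored procedures; the job name, schedule name, and the 'DBA' database holding the procedures are assumptions for illustration, so adjust them for your environment:

```sql
-- Sketch: hourly SQL Agent job with the two steps described above.
USE msdb;
GO
EXEC dbo.sp_add_job
    @job_name = N'DBA - VLF AutoFix';                 -- hypothetical job name

EXEC dbo.sp_add_jobstep
    @job_name = N'DBA - VLF AutoFix',
    @step_name = N'Capture VLF info',
    @subsystem = N'TSQL',
    @database_name = N'DBA',                          -- assumed DBA utility database
    @command = N'EXEC dbo.VLF_UpdateInfo;',
    @on_success_action = 3;                           -- go to the next step

EXEC dbo.sp_add_jobstep
    @job_name = N'DBA - VLF AutoFix',
    @step_name = N'Auto fix high VLF counts',
    @subsystem = N'TSQL',
    @database_name = N'DBA',
    @command = N'EXEC dbo.VLF_AutoFix @LogBackJobName = N''Log Backups'';';  -- your log backup job name

EXEC dbo.sp_add_schedule
    @schedule_name = N'Hourly',
    @freq_type = 4,                                   -- daily
    @freq_interval = 1,
    @freq_subday_type = 8,                            -- repeat every N hours
    @freq_subday_interval = 1;

EXEC dbo.sp_attach_schedule
    @job_name = N'DBA - VLF AutoFix',
    @schedule_name = N'Hourly';

EXEC dbo.sp_add_jobserver
    @job_name = N'DBA - VLF AutoFix';
```

Note the @on_success_action = 3 on step one; without it, the Agent default quits the job after the first step and VLF_AutoFix never runs.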

Currently, this system runs hourly across 400 servers and thousands of databases handling 30K+ transactions per second without causing any issues, because it takes the precautions of not running if the file has grown in the last hour and not attempting another fix within 6 hours of the previous attempt.

Resources

Transaction Log VLFs – too many or too few?

8 Steps to better Transaction Log throughput

Performance impact of lots of VLFs in the transaction log

13 thoughts on “Automatically Fixing High VLF Count on SQL Server 2012+”

  1. Nice article Tracy, I might roll this one out too! Tiny typo in title, replace “Sever” with “Server” ^_^

  2. Tracy,

    Are you sure this works on 2008 SQL Srvr? Particularly the XE session in step 2. I just tried to run Step 2 on a SQL Server 2008 R2 database, and it fails with the message “The event name, “sqlserver.database_file_size_change”, is invalid, or the object could not be found”.

    Perhaps SS 2012 or greater… I haven’t tried this yet on a 2012 instance.

  3. I implemented something similar in my shop, only it warns of VLF issues instead of fixing them. The option is there to auto fix, but it is turned off. We have a lot of log reader sensitive activity – cdc, log shipping, etc. and shrinking a log at the wrong time can be problematic.

    1. We started off that way with warnings, but with the extra logic of skipping a log that grew recently we found we didn’t have any issues with it. We run it against AGs and replication with no issues.

  4. Tracy,

    Nice job! Could you explain more about why “not to run it if the file has grown in the last hour and to not attempt to fix it within 6 hours of the previous attempt.”?

    Thanks,
    Tom

    1. Yes, we don’t run if it has grown in the last hour in case the activity that caused it to grow is still happening; we want the activity to settle down first. We wait the 6 hours because we don’t want to be constantly running shrink commands every hour while we wait for the active VLF to wrap around to the beginning of the file. It’s just a safety measure to avoid too much maintenance activity occurring on the system.

  5. very nicely done – thank you for sharing. question I have is that we’ve got an external backup provider (Microsoft’s Data Protection Manager) – and if we used your script, we’d break the recovery chain pretty quickly because we’d have tlog backups going on in two places. Do you know how to kick off a DPM backup from a sql job? could be through powershell…thoughts?

    1. I think in this scenario, where you have a third-party tool or you’re taking your backups outside of SQL Agent jobs, the best workaround is to set up a SQL Agent job that issues a WAITFOR command. Set the wait time to the interval between backups, assuming your log backups are taken fairly close together, say every 5 to 15 minutes.

      The other option would be to comment out the lines (54-93) that take the backups and reclaim as many VLFs as you can at that time. I would then probably increase @HoursSinceLastFix to a higher number to account for the fact that you can’t maximize the benefit of the log backups. This gives the log file a longer amount of time to wrap around to the beginning of the file.
