I ask this question at every client that hires me to enhance their Microsoft SQL Server Business Intelligence solution. It seems like a pretty easy and straightforward question, right? You would be surprised how many companies, or BI teams within companies, struggle to answer it easily or keep the information up to date. The answer usually requires a DBA to find, copy, or manually create an Excel sheet containing the data file information, then use formulas to sum it all up. There is nothing wrong with this approach, but it is time-consuming.
There are usually several data marts, or a warehouse that is physically split across many different databases, and it can take a few days to work this out when there are many data files. Another question I ask is: how full are your data marts or warehouse? You would be surprised how many companies or BI teams struggle with this information, or simply don't know that it is important. It helps with capacity planning and performance as well: if your data files have to autogrow a few times a week to keep up with increasing data needs, you take a performance hit each time a file grows.
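As a quick way to spot risky autogrowth settings, the sys.master_files catalog view exposes each file's current size and growth increment. This is a small sketch of my own (not part of the script below); the column names come straight from the catalog view, but the unit conversions assume the standard 8 KB page size:

```sql
-- Sketch: list each file's size and autogrowth increment.
-- sys.master_files reports size and growth in 8 KB pages
-- (unless is_percent_growth = 1, in which case growth is a percentage).
select db_name(database_id)  as DatabaseName,
       name                  as LogicalFileName,
       type_desc             as FileType,   -- ROWS or LOG
       size * 8 / 1024       as CurrentSize_MB,
       case when is_percent_growth = 1
            then cast(growth as varchar(10)) + ' %'
            else cast(growth * 8 / 1024 as varchar(10)) + ' MB'
       end                   as GrowthIncrement
from sys.master_files
order by DatabaseName, FileType;
```

Files that grow in tiny increments (the old 1 MB default) or by a percentage of an already large file are the usual suspects when growth events start hurting performance.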
To make these questions easy to answer, I wrote a script a few years ago. Disclaimer: I gathered some of the guts of the #datainfo T-SQL from a SQL Server forum a few years ago. Credit goes to the author, as his or her thought process was very cool and got me thinking to create this script.
I hope this script helps those who struggle with these questions. It can obviously be used to answer any database space question, or modified to suit your needs rather easily.
--This script gathers the data file and log file sizes for every database on a SQL Server instance, rolled up per database.
set nocount on

create table #datainfo (
    [DatabaseName]      varchar(250),
    [DataFile]          varchar(250),
    [TotalDBSpace(MB)]  numeric(10,4),
    [UsedDBSpace(MB)]   numeric(10,4),
    [FreeDBSpace(MB)]   numeric(10,4))

create table #loginfo (
    [DatabaseName]      varchar(150),
    [TotalLogSpace(MB)] numeric(10,4),
    [LogSpaceUsed(%)]   numeric(10,4),
    [Status]            int)

insert into #loginfo
exec ('dbcc sqlperf(logspace) with no_infomsgs')

select database_id, name
into #dbs
from sys.databases

declare @dbname varchar(250)
declare @dbid int
declare @sql varchar(5000)

set @dbid = 1
while @dbid <= (select max(database_id) from #dbs)
begin
    --database_ids can have gaps after a database is dropped, so skip any missing id
    if exists (select 1 from #dbs where database_id = @dbid)
    begin
        select @dbname = name from #dbs where database_id = @dbid
        set @sql = '
            use ['+@dbname+']
            create table #showfilestats (
                fileid int, filegroup int, totalextents float,
                usedextents float, name varchar(150), filename varchar(150))
            create table #helpfile (
                name varchar(150), fileid int, filename varchar(500),
                filegroup varchar(100), size varchar(150), maxsize varchar(150),
                growth varchar(150), usage varchar(100))
            insert into #helpfile exec (''sp_helpfile'')
            insert into #showfilestats exec (''dbcc showfilestats with no_infomsgs'')
            insert into #datainfo
            select '''+@dbname+''',
                   h.name,
                   (totalextents * 64 / 1024),                  --extents are 64 KB
                   (usedextents * 64 / 1024),
                   ((totalextents - usedextents) * 64 / 1024)
            from #showfilestats s
            join #helpfile h on h.fileid = s.fileid
            drop table #showfilestats
            drop table #helpfile'
        exec (@sql)
    end
    set @dbid = @dbid + 1
end

--This returns the raw data if you want to see individual files instead of the rolled-up totals.
--select *
--from #datainfo d
--join #loginfo l on l.[DatabaseName] = d.[DatabaseName]

--The join repeats the single #loginfo row once per data file, so the log columns
--use MAX instead of SUM to avoid multiplying log space by the number of data files.
select d.DatabaseName as DatabaseName,
       SUM(d.[TotalDBSpace(MB)]) as [TotalDBSpace(MB)],
       SUM(d.[UsedDBSpace(MB)]) as [UsedDBSpace(MB)],
       SUM(d.[FreeDBSpace(MB)]) as [FreeDBSpace(MB)],
       MAX(l.[TotalLogSpace(MB)]) as [TotalLogSpace(MB)],
       --LogSpaceUsed(%) is a percentage, so divide by 100 to convert to MB
       MAX(cast(l.[LogSpaceUsed(%)] * (l.[TotalLogSpace(MB)] / 100) as numeric(10,4))) as [UsedLogSpace(MB)],
       MAX(cast(l.[TotalLogSpace(MB)] - (l.[LogSpaceUsed(%)] * (l.[TotalLogSpace(MB)] / 100)) as numeric(10,4))) as [FreeLogSpace(MB)],
       SUM(d.[TotalDBSpace(MB)]) + MAX(l.[TotalLogSpace(MB)]) as [TotalSpaceLogAndDB(MB)]
from #datainfo d
join #loginfo l on l.[DatabaseName] = d.[DatabaseName]
group by d.DatabaseName

drop table #loginfo
drop table #datainfo
drop table #dbs
GO
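If you only need allocated sizes and not used/free space, a lighter-weight alternative (my own sketch, not part of the script above) is to roll up sys.master_files directly, with no loop or temp tables. The trade-off is that it reports file size on disk only; it cannot tell you how full each file is:

```sql
-- Sketch: allocated data and log space per database from sys.master_files.
-- size is in 8 KB pages; this is allocated size on disk, not used space.
select db_name(database_id) as DatabaseName,
       sum(case when type_desc = 'ROWS' then size * 8 / 1024 else 0 end) as DataSpace_MB,
       sum(case when type_desc = 'LOG'  then size * 8 / 1024 else 0 end) as LogSpace_MB,
       sum(size * 8 / 1024) as TotalSpace_MB
from sys.master_files
group by db_name(database_id)
order by TotalSpace_MB desc;
```

This runs from any database context and also covers databases that are offline, which the loop-based script would skip over or fail on.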
The results look like: