
Dropping All Tables, SPs, FNs, and Views

USE {DATABASE_NAME}
GO

--------------------------------------------------------
-- Drop Procedures
-- (NOTE: these scripts build unqualified DROP statements,
-- so they assume all objects belong to the default schema)
--------------------------------------------------------
DECLARE @objname sysname
DECLARE @sql nvarchar(max)

DECLARE obj_cur CURSOR FAST_FORWARD
FOR
SELECT name FROM sysobjects WHERE type = 'P'

OPEN obj_cur

FETCH obj_cur INTO @objname

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'DROP PROCEDURE ' + @objname
    EXEC sp_executesql @sql
    FETCH obj_cur INTO @objname
END

CLOSE obj_cur
DEALLOCATE obj_cur

GO

--------------------------------------------------------
-- Drop Functions
--------------------------------------------------------
DECLARE @objname sysname
DECLARE @sql nvarchar(max)
DECLARE obj_cur CURSOR FAST_FORWARD
FOR
SELECT name FROM sysobjects WHERE type IN ('FN','IF','TF')

OPEN obj_cur

FETCH obj_cur INTO @objname

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'DROP FUNCTION ' + @objname
    EXEC sp_executesql @sql
    FETCH obj_cur INTO @objname
END

CLOSE obj_cur
DEALLOCATE obj_cur

GO

--------------------------------------------------------
-- Drop Views
--------------------------------------------------------
DECLARE @objname sysname
DECLARE @sql nvarchar(max)

DECLARE obj_cur CURSOR FAST_FORWARD
FOR
SELECT name FROM sysobjects WHERE type = 'V'

OPEN obj_cur

FETCH obj_cur INTO @objname

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'DROP VIEW ' + @objname
    EXEC sp_executesql @sql
    FETCH obj_cur INTO @objname
END

CLOSE obj_cur
DEALLOCATE obj_cur

GO

--------------------------------------------------------
-- Drop Constraints
--------------------------------------------------------
DECLARE @objname sysname
DECLARE @sql nvarchar(max)
DECLARE @tabname sysname

DECLARE obj_cur CURSOR FAST_FORWARD
FOR
SELECT A.name [const_name]
, B.name [table_name]
FROM sysobjects A
, sysobjects B
, sysconstraints C
WHERE A.id = C.constid
AND B.id = C.id
AND B.name NOT LIKE 'sys%'
AND A.type IN ( 'F', 'K' )
ORDER BY A.type, 2 DESC, 1

-- first pass: disable constraints so that drop order does not matter
-- (NOCHECK applies only to FOREIGN KEY and CHECK constraints; key
-- constraints will raise an error here, but the batch continues)
OPEN obj_cur
FETCH obj_cur INTO @objname, @tabname

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'ALTER TABLE ' + @tabname + ' NOCHECK CONSTRAINT ' + @objname
    PRINT @sql
    EXEC sp_executesql @sql
    FETCH obj_cur INTO @objname, @tabname
END

CLOSE obj_cur

-- second pass: reopen the same cursor and drop the constraints
OPEN obj_cur
FETCH obj_cur INTO @objname, @tabname

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'ALTER TABLE ' + @tabname + ' DROP CONSTRAINT ' + @objname
    PRINT @sql
    EXEC sp_executesql @sql
    FETCH obj_cur INTO @objname, @tabname
END

CLOSE obj_cur
DEALLOCATE obj_cur
GO

--------------------------------------------------------
-- Drop Tables
--------------------------------------------------------
DECLARE @objname sysname
DECLARE @sql nvarchar(max)
DECLARE obj_cur CURSOR FAST_FORWARD
FOR
SELECT name FROM sysobjects WHERE type = 'U'

OPEN obj_cur

FETCH obj_cur INTO @objname

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'DROP TABLE ' + @objname
    EXEC sp_executesql @sql
    FETCH obj_cur INTO @objname
END

CLOSE obj_cur
DEALLOCATE obj_cur
GO
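
For reference, on SQL Server 2005 and later the same drops can be generated without a cursor by concatenating a single batch from the sys.objects catalog view. A minimal sketch, shown for procedures only and schema-qualified with QUOTENAME:

DECLARE @sql nvarchar(max)
SET @sql = N''

-- concatenate one DROP statement per procedure into a single batch
SELECT @sql = @sql + N'DROP PROCEDURE '
    + QUOTENAME(SCHEMA_NAME(schema_id)) + N'.' + QUOTENAME(name) + N'; '
FROM sys.objects
WHERE type = 'P'

EXEC sp_executesql @sql
GO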


Dropping Erroneous Tables from Master database

USE master
GO

DECLARE @tabname sysname
, @sql nvarchar(256)
DECLARE tab_cur CURSOR FAST_FORWARD FOR
SELECT name
FROM sysobjects WITH (NOLOCK)
WHERE type = 'U'
-- [_] escapes the underscore, which is otherwise a single-character wildcard in LIKE
AND ( name LIKE 'L[_]%' OR name LIKE 'T[_]%' )

OPEN tab_cur
FETCH tab_cur INTO @tabname

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'DROP TABLE ' + @tabname
    EXEC sp_executesql @sql
    FETCH tab_cur INTO @tabname
END

CLOSE tab_cur
DEALLOCATE tab_cur
GO


Rebuilding all Indexes

USE [DATABASE_NAME]
GO

DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FAST_FORWARD FOR
SELECT table_name
FROM information_schema.tables WITH (NOLOCK)
WHERE table_type = 'BASE TABLE'

OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @TableName

WHILE @@FETCH_STATUS = 0
BEGIN
    -- second argument '' means all indexes on the table; 90 is the fill factor
    DBCC DBREINDEX(@TableName, '', 90)
    FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor
DEALLOCATE TableCursor
GO
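
DBCC DBREINDEX is deprecated in newer versions of SQL Server. On SQL Server 2005 and later the same per-table rebuild can be done with ALTER INDEX; a minimal sketch, assuming a table named dbo.MyTable:

ALTER INDEX ALL ON dbo.MyTable REBUILD WITH (FILLFACTOR = 90)
GO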


Counts from Each Table in DB

USE [dbname]
GO

SET NOCOUNT ON
DECLARE @tabname sysname
DECLARE @sql nvarchar(max)
DECLARE table_cur CURSOR FAST_FORWARD FOR
SELECT name
FROM sysobjects
WHERE type = 'U'

CREATE TABLE #results_tmp ( tabname sysname, cnt int )

OPEN table_cur
FETCH table_cur INTO @tabname

WHILE @@FETCH_STATUS = 0
BEGIN
    -- note the doubled quotes: the table name is embedded as a string literal
    SET @sql = 'INSERT INTO #results_tmp SELECT ''' + @tabname + ''', count(1) FROM ' + @tabname
    EXEC sp_executesql @sql
    FETCH table_cur INTO @tabname
END

CLOSE table_cur
DEALLOCATE table_cur

SELECT cnt, tabname FROM #results_tmp ORDER BY cnt DESC
DROP TABLE #results_tmp
GO
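
A cursor-free alternative is to read the row counts SQL Server already tracks in sys.partitions (SQL 2005+); the counts can lag slightly for tables under active modification, but nothing gets scanned. A sketch:

SELECT o.name AS tabname
, SUM(p.rows) AS cnt
FROM sys.objects o
JOIN sys.partitions p ON p.object_id = o.object_id
WHERE o.type = 'U'
AND p.index_id IN (0, 1)  -- heap or clustered index only, to avoid double counting
GROUP BY o.name
ORDER BY cnt DESC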


Reordering Subgroup

USE [db]
GO

-- declare variables
DECLARE @i int
DECLARE @lastrow int
DECLARE @curPoc int
DECLARE @curDir int
DECLARE @lastDir int
DECLARE @order int

DECLARE @pocs AS TABLE
( rownum int IDENTITY(1, 1) PRIMARY KEY NOT NULL
, pocID int NOT NULL
, fk_directorateID int NOT NULL
)

-- this builds a var table that is pre-sorted according to criteria
INSERT INTO @pocs ( pocID, fk_directorateID )
SELECT A.pocId
, A.fk_directorateID
FROM t_poc A (NOLOCK)
WHERE A.isArchived = 0
ORDER BY A.fk_directorateID
, A.orderBy
, A.pocID

SELECT @lastrow = max(A.rownum)
FROM @pocs A

SET @i = 1
SET @lastDir = 0
SET @order = 0

-- loop through all POCs in POC table
-- (the original post was cut off below; this loop body is a reconstruction
-- of the apparent intent: renumber orderBy sequentially, restarting at 1
-- whenever the directorate changes)
WHILE @i <= @lastrow
BEGIN
    SELECT @curPoc = A.pocID
    , @curDir = A.fk_directorateID
    FROM @pocs A
    WHERE A.rownum = @i

    IF @curDir <> @lastDir
        SET @order = 1
    ELSE
        SET @order = @order + 1

    UPDATE t_poc
    SET orderBy = @order
    WHERE pocID = @curPoc

    SET @lastDir = @curDir
    SET @i = @i + 1
END


Default data file location

To prevent the C: drive from filling up, it is a good idea to set the database default location. To do this:

1. Open SQL Server Management Studio
2. Right click the server instance
3. Select “Properties”
4. In the Server Properties window, select “Database Settings”
5. Under “Database default locations”, specify the paths for “Data:” and “Log:”, for example: “D:\SQLDATA” (a scripted equivalent is sketched below)
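
SSMS stores these defaults in the instance’s registry keys, so the same change can be scripted with the undocumented xp_instance_regwrite procedure. A sketch, assuming the D:\SQLDATA path above:

EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE'
, N'Software\Microsoft\MSSQLServer\MSSQLServer'
, N'DefaultData', REG_SZ, N'D:\SQLDATA'

EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE'
, N'Software\Microsoft\MSSQLServer\MSSQLServer'
, N'DefaultLog', REG_SZ, N'D:\SQLDATA'
GO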

Additionally, if space on the C: drive is limited, check the properties of tempdb.

This can be found under the “Databases” -> “System Databases” branches in the server’s tree-view.

First, since tempdb does not autoshrink, you can manually shrink it by right-clicking tempdb and selecting “Tasks” -> “Shrink” -> “Database”.

Next, right-click the tempdb database and select “Properties”. Then select “Files”. You can set the “tempdev.mdf” file to restricted growth and add an additional database file that is unrestricted on another drive; a scripted sketch follows.
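
Those file changes can also be scripted. A sketch, assuming the default logical file name tempdev and an E: drive for the new file:

ALTER DATABASE tempdb MODIFY FILE ( NAME = tempdev, MAXSIZE = 4096MB )
GO

ALTER DATABASE tempdb ADD FILE
( NAME = tempdev2
, FILENAME = 'E:\SQLDATA\tempdev2.ndf'
, SIZE = 1024MB
, FILEGROWTH = 256MB
, MAXSIZE = UNLIMITED
)
GO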


Truncate Transaction Log

If the transaction log is not being backed up, it will eventually grow to its maximum size or run out of disk space. The first thing to try when the transaction log is full is to perform a backup of it; the script below takes a full and a log backup, then switches the database to the SIMPLE recovery model so the log is truncated automatically from then on.

USE MASTER
GO

ALTER DATABASE myproj SET RECOVERY FULL;
GO

BACKUP DATABASE myproj TO DISK = '\\server01\data$\myprojData.bak';
GO

BACKUP LOG myproj TO DISK = '\\server01\data$\myprojLog.bak';
GO

ALTER DATABASE myproj SET RECOVERY SIMPLE;
GO
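
Truncation only frees space inside the log file; it does not shrink the file on disk. If the drive needs to be reclaimed, follow with DBCC SHRINKFILE. A sketch, assuming the default logical log name myproj_log:

USE myproj
GO

DBCC SHRINKFILE (myproj_log, 256)  -- target size in MB
GO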

Transaction log
A record of modifications performed to a database. The amount of information logged in the transaction log depends on the recovery model for a database.

Transaction Log Backup
Backs up the active portion and truncates the inactive portion of the transaction log.

http://technet.microsoft.com/en-us/library/cc966495.aspx

Recovery Models
Simple
The simple recovery model does what it implies: it gives you a simple backup that can be used to replace your entire database in the event of a failure, or to restore your database to another server. With this recovery model you can take complete backups (an entire copy) or differential backups (any changes since the last complete backup). You are, however, exposed to losing any changes made since the last backup completed. Here are some reasons why you may choose this recovery model:

Your data is not critical and can easily be recreated
The database is only used for test or development
Data is static and does not change
Losing any or all transactions since the last backup is not a problem
Data is derived and can easily be recreated
Type of backups you can run:

Complete backups
Differential backups
File and/or Filegroup backups
Partial backups
Copy-Only backups
Under the simple recovery model, the transaction log is automatically truncated to remove any inactive virtual log files. Truncation usually occurs after each CHECKPOINT but can be delayed under some conditions, such as a long-running transaction, which can keep large portions of the transaction log in an “active” state.

http://msdn.microsoft.com/en-us/library/ms345414.aspx
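
To see what is currently preventing log truncation, sys.databases reports the blocker directly (SQL 2005+):

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'myproj'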

Bulk_Logged
The bulk-logged recovery model mostly does what it implies. Under this model, certain bulk operations such as BULK INSERT, CREATE INDEX, and SELECT INTO are minimally logged and therefore take far less space in the transaction log. The advantage is that your transaction log does not grow as much during bulk operations, and you keep the ability to do point-in-time recovery as long as the last transaction log backup does not include one of these bulk operations. If no bulk operations are run, this recovery model works the same as the full recovery model. Note that under this model you still need to take transaction log backups, otherwise your transaction log will continue to grow. Here are some reasons why you may choose this recovery model (a usage sketch follows the backup list below):

Data is critical, but you do not want to log large bulk operations
Bulk operations are done at different times versus normal processing.
You still want to be able to recover to a point in time
Type of backups you can run:

Complete backups
Differential backups
File and/or Filegroup backups
Partial backups
Copy-Only backups
Transaction log backups
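
A typical usage pattern, sketched against the myproj database from the script above (dbo.stage is a hypothetical table):

ALTER DATABASE myproj SET RECOVERY BULK_LOGGED;
GO

-- minimally logged bulk operation
SELECT * INTO dbo.stage_copy FROM dbo.stage;
GO

ALTER DATABASE myproj SET RECOVERY FULL;
GO

-- back up the log right away to restore the point-in-time chain
BACKUP LOG myproj TO DISK = '\\server01\data$\myprojLog.bak';
GO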
Full
The full recovery model is the most complete recovery model and allows you to recover all of your data to any point in time, as long as all backup files are usable. With this model all operations are fully logged, which means you can restore your database to any point. As with bulk-logged, you must also take transaction log backups, otherwise the transaction log will continue to grow forever. Here are some reasons why you may choose this recovery model (a point-in-time restore is sketched after the backup list below):

Data is critical and data can not be lost.
You always need the ability to do a point-in-time recovery.
You are using database mirroring
Type of backups you can run:

Complete backups
Differential backups
File and/or Filegroup backups
Partial backups
Copy-Only backups
Transaction log backups
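
Point-in-time recovery under the full model restores the last full backup WITH NORECOVERY and then replays log backups up to the chosen moment. A sketch using the backup files from the script above (the STOPAT timestamp is illustrative):

RESTORE DATABASE myproj
FROM DISK = '\\server01\data$\myprojData.bak'
WITH NORECOVERY;
GO

RESTORE LOG myproj
FROM DISK = '\\server01\data$\myprojLog.bak'
WITH STOPAT = '2011-06-01 12:00:00', RECOVERY;
GO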


Recover from a corrupted Master DB

To rebuild a SQL 2005 master database:

start /wait setup.exe /qn
INSTANCENAME=<instance_name> REINSTALL=SQL_Engine
REBUILDDATABASE=1 SAPWD=<new_strong_password>

In SQL 2008 use:

setup.exe
/QUIET
/ACTION=REBUILDDATABASE
/INSTANCENAME=instance_name
/SQLSYSADMINACCOUNTS=accounts
[/SAPWD=password]
[/SQLCOLLATION=collation_name]

Then start the instance in single-user mode (the -s switch is needed only for a named instance):

sqlservr.exe -m -s <instance_name>

Then restore the latest backup of master and user databases, bringing the instance to the latest point possible.
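
The master restore itself has to run from that single-user session. A minimal sketch (the backup path is illustrative):

RESTORE DATABASE master
FROM DISK = 'D:\backups\master.bak'
WITH REPLACE;

SQL Server shuts the instance down automatically once master is restored; restart the service normally, then restore msdb and the user databases.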

More Info:

http://www.sqlservercentral.com/articles/Administration/73207/

http://www.networkworld.com/community/node/39292


Ten SQL Server Stats to Monitor

Top 10 SQL Server Counters for Monitoring SQL Server Performance

1. SQLServer: Buffer Manager: Buffer cache hit ratio
The buffer cache hit ratio counter represents how often SQL Server is able to find data pages in its buffer cache when a query needs a data page. The higher this number the better, because it means SQL Server was able to get data for queries out of memory instead of reading from disk. You want this number to be as close to 100 as possible. Having this counter at 100 means that 100% of the time SQL Server has found the needed data pages in memory. A low buffer cache hit ratio could indicate a memory problem.

2. SQLServer: Buffer Manager: Page life expectancy
The page life expectancy counter measures how long pages stay in the buffer cache in seconds. The longer a page stays in memory, the more likely SQL Server will not need to read from disk to resolve a query. You should watch this counter over time to determine a baseline for what is normal in your database environment. Some say anything below 300 (or 5 minutes) means you might need additional memory.

3. SQLServer: SQL Statistics: Batch Requests/Sec
Batch Requests/Sec measures the number of batches SQL Server is receiving per second. This counter is a good indicator of how much activity is being processed by your SQL Server box. The higher the number, the more queries are being executed on your box. Like many counters, there is no single number that can be used universally to indicate your machine is too busy. Today’s machines are getting more and more powerful all the time and therefore can process more batch requests per second. You should review this counter over time to determine a baseline number for your environment.

4. SQLServer: SQL Statistics: SQL Compilations/Sec
SQL Compilations/Sec measures the number of times SQL Server compiles an execution plan per second. Compiling an execution plan is a resource-intensive operation. Compilations/Sec should be compared with the number of Batch Requests/Sec to get an indication of whether or not compilations might be hurting your performance. To do that, divide the number of batch requests by the number of compiles per second to give you a ratio of the number of batches executed per compile. Ideally you want to have one compile per every 10 batch requests.

5. SQLServer: SQL Statistics: SQL Re-Compilations/Sec
When an execution plan is invalidated by some significant event, SQL Server will re-compile it. The Re-Compilations/Sec counter measures the number of times a re-compile event was triggered per second. Re-compiles, like compiles, are expensive operations, so you want to minimize them. Ideally you want to keep this counter at less than 10% of the number of Compilations/Sec.

6. SQLServer: General Statistics: User Connections
The user connections counter identifies the number of different users that are connected to SQL Server at the time the sample was taken. You need to watch this counter over time to understand your baseline user connection numbers. Once you have some idea of your high and low water marks during normal usage of your system, you can then look for times when this counter exceeds those marks. If the value of this counter goes down while the load on the system stays the same, you might have a bottleneck that is not allowing your server to handle the normal load. Keep in mind, though, that this counter value might go down simply because fewer people are using your SQL Server instance.

7. SQLServer: Locks: Lock Waits / Sec: _Total
In order to manage concurrent users on the system, SQL Server needs to lock resources from time to time. The lock waits per second counter tracks the number of times per second that SQL Server is unable to grant a lock on a resource right away, forcing the request to wait. Ideally you don’t want any request to wait for a lock, so you want to keep this counter at zero, or as close to zero as possible, at all times.

8. SQLServer: Access Methods: Page Splits / Sec
This counter measures the number of times SQL Server had to split a page when updating or inserting data per second. Page splits are expensive, and cause your table to perform more poorly due to fragmentation. Therefore, the fewer page splits you have the better your system will perform. Ideally this counter should be less than 20% of the batch requests per second.

9. SQLServer: General Statistics: Processes blocked
The processes blocked counter identifies the number of blocked processes. When one process is blocking another process, the blocked process cannot move forward with its execution plan until the resource that is causing it to wait is freed up. Ideally you don’t want to see any blocked processes. When processes are being blocked you should investigate.

10. SQLServer: Buffer Manager: Checkpoint Pages / Sec
The checkpoint pages per second counter measures the number of pages written to disk by a checkpoint operation. You should watch this counter over time to establish a baseline for your systems. Once a baseline value has been established you can watch this value to see if it is climbing. If this counter is climbing, it might mean you are running into memory pressures that are causing dirty pages to be flushed to disk more frequently than normal.

http://www.databasejournal.com/features/mssql/article.php/3932406/Top-10-SQL-Server-Counters-for-Monitoring-SQL-Server-Performance.htm
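
The same counters can be sampled from inside SQL Server through the sys.dm_os_performance_counters DMV instead of Performance Monitor. A minimal sketch (note that ratio counters such as Buffer cache hit ratio are only meaningful when divided by their companion “base” counter):

SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN
( 'Buffer cache hit ratio'
, 'Page life expectancy'
, 'Batch Requests/sec'
, 'SQL Compilations/sec'
, 'SQL Re-Compilations/sec'
, 'User Connections'
, 'Lock Waits/sec'
, 'Page Splits/sec'
, 'Processes blocked'
, 'Checkpoint pages/sec'
)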


To INDEX or Not to INDEX

0. The exception to everything below: skip an index when its maintenance cost outweighs its benefit, or when the index would be roughly 80% or more of the size of the data in the table.

1. Primary keys should almost always be indexed.

2. Foreign keys should almost always be indexed.

3. Look through the most common executed queries and add indexes for any columns in the “where” clauses that are not covered by steps 1 & 2.

4. Look through long running queries and add indexes for any columns in the “where” clauses that are not covered by steps 1, 2 & 3.

NOTE: The exception in #3 & #4 is when the comparison operator is “LIKE” with a leading wildcard, which prevents an index seek. Also, whether you do #3 then #4, or #4 then #3, depends mostly on how the application is being used and where the bottlenecks are.
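
As a concrete illustration of steps 1 through 3, with hypothetical table and column names:

-- step 1: creating the primary key constraint indexes it automatically
ALTER TABLE dbo.t_order ADD CONSTRAINT PK_t_order PRIMARY KEY (orderID)

-- step 2: index the foreign key
CREATE NONCLUSTERED INDEX IX_t_order_customerID ON dbo.t_order (customerID)

-- step 3: index a column used in the WHERE clause of common queries
CREATE NONCLUSTERED INDEX IX_t_order_orderDate ON dbo.t_order (orderDate)
GO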

Other things to watch out for:

– Try not to put “too many” indexes on a table, as this would in turn slow down inserts/updates.

– I usually start with step “1” above and implement a step at a time until I stop seeing benefit, then back off a step.

– Compound indexes should be used sparingly.

– Clustered indexes can actually degrade performance depending on how the column is used. Most design tools automatically create a clustered index on the primary key, but this should not be taken to mean a clustered index is “always” a good thing. There are actually very few cases where a clustered index results in a net benefit; more often than not it creates more overhead than it returns.