Truncate Transaction Log

If the transaction log is not being backed up, it will eventually grow to its maximum size or run out of disk space. The first thing to try when the transaction log is full is to perform a backup of it.

USE MASTER
GO

ALTER DATABASE myproj SET RECOVERY FULL;
GO

BACKUP DATABASE myproj TO DISK = '\\server01\data$\myprojData.bak';
GO

BACKUP LOG myproj TO DISK = '\\server01\data$\myprojLog.bak';
GO

ALTER DATABASE myproj SET RECOVERY SIMPLE;
GO

Transaction log
A record of modifications performed to a database. The amount of information logged in the transaction log depends on the recovery model for a database.

Transaction Log Backup
Backs up the active portion and truncates the inactive portion of the transaction log.

http://technet.microsoft.com/en-us/library/cc966495.aspx

Recovery Models
Simple
The simple recovery model does what it implies: it gives you a simple backup that can be used to replace your entire database in the event of a failure, or if you need to restore your database to another server. With this recovery model you have the ability to do complete backups (an entire copy) or differential backups (any changes since the last complete backup). With this recovery model you are exposed to the loss of any changes made since the last backup completed. Here are some reasons why you may choose this recovery model:

Your data is not critical and can easily be recreated
The database is only used for test or development
Data is static and does not change
Losing any or all transactions since the last backup is not a problem
Data is derived and can easily be recreated
Types of backups you can run:

Complete backups
Differential backups
File and/or Filegroup backups
Partial backups
Copy-Only backups
Under the simple recovery model, the transaction log is automatically truncated to remove any inactive virtual log files. Truncation usually occurs after each CHECKPOINT but can be delayed under some conditions, such as a long-running transaction, which can keep large portions of the transaction log in an “active” state.

http://msdn.microsoft.com/en-us/library/ms345414.aspx
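
If the log refuses to truncate even after a backup, SQL Server can report what is holding it. A minimal sketch, assuming SQL Server 2005 or later (where sys.databases exposes the log_reuse_wait_desc column):

SELECT name, log_reuse_wait_desc  -- e.g. LOG_BACKUP, ACTIVE_TRANSACTION
FROM sys.databases
WHERE name = 'myproj';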

Bulk_Logged
The bulk-logged recovery model sort of does what it implies. With this model, certain bulk operations such as BULK INSERT, CREATE INDEX, and SELECT INTO are not fully logged in the transaction log and therefore do not take as much space there. The advantage of using this recovery model is that your transaction logs will not get as large if you are doing bulk operations, and you retain the ability to do point-in-time recovery as long as your last transaction log backup does not include a bulk operation as mentioned above. If no bulk operations are run, this recovery model works the same as the full recovery model. One thing to note: if you use this recovery model you also need to issue transaction log backups, otherwise your database transaction log will continue to grow. Here are some reasons why you may choose this recovery model:

Data is critical, but you do not want to log large bulk operations
Bulk operations are done at different times versus normal processing.
You still want to be able to recover to a point in time
Types of backups you can run:

Complete backups
Differential backups
File and/or Filegroup backups
Partial backups
Copy-Only backups
Transaction log backups
Full
The full recovery model is the most complete recovery model and allows you to recover all of your data to any point in time, as long as all backup files are usable. With this model all operations are fully logged, which means that you can recover your database to any point. In addition, if the database is set to the full recovery model you also need to issue transaction log backups, otherwise your database transaction log will continue to grow forever. Here are some reasons why you may choose this recovery model:

Data is critical and cannot be lost.
You always need the ability to do a point-in-time recovery.
You are using database mirroring
Types of backups you can run:

Complete backups
Differential backups
File and/or Filegroup backups
Partial backups
Copy-Only backups
Transaction log backups
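
Point-in-time recovery under the full model works by restoring the last full backup without recovery, then replaying log backups up to the desired moment. A minimal sketch, reusing the backup files from the first example (the STOPAT value is a placeholder):

RESTORE DATABASE myproj FROM DISK = '\\server01\data$\myprojData.bak' WITH NORECOVERY;
GO

RESTORE LOG myproj FROM DISK = '\\server01\data$\myprojLog.bak'
WITH STOPAT = '2011-06-01 12:00:00', RECOVERY;
GO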

Ten SQL Server Stats to Monitor

Top 10 SQL Server Counters for Monitoring SQL Server Performance

1. SQLServer: Buffer Manager: Buffer cache hit ratio
The buffer cache hit ratio counter represents how often SQL Server is able to find data pages in its buffer cache when a query needs a data page. The higher this number the better, because it means SQL Server was able to get data for queries out of memory instead of reading from disk. You want this number to be as close to 100 as possible. Having this counter at 100 means that 100% of the time SQL Server has found the needed data pages in memory. A low buffer cache hit ratio could indicate a memory problem.
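
These counters can be read from T-SQL as well as PerfMon, which makes it easy to capture baselines on a schedule. A sketch, assuming SQL Server 2005 or later; note the hit ratio must be divided by its base counter to yield a percentage:

SELECT a.cntr_value * 100.0 / b.cntr_value AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters a
JOIN sys.dm_os_performance_counters b
  ON b.counter_name = 'Buffer cache hit ratio base'
WHERE a.counter_name = 'Buffer cache hit ratio';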

2. SQLServer: Buffer Manager: Page life expectancy
The page life expectancy counter measures how long pages stay in the buffer cache in seconds. The longer a page stays in memory, the more likely SQL Server will not need to read from disk to resolve a query. You should watch this counter over time to determine a baseline for what is normal in your database environment. Some say anything below 300 (or 5 minutes) means you might need additional memory.

3. SQLServer: SQL Statistics: Batch Requests/Sec
Batch Requests/Sec measures the number of batches SQL Server is receiving per second. This counter is a good indicator of how much activity is being processed by your SQL Server box. The higher the number, the more queries are being executed on your box. Like many counters, there is no single number that can be used universally to indicate your machine is too busy. Today’s machines are getting more and more powerful all the time and therefore can process more batch requests per second. You should review this counter over time to determine a baseline number for your environment.

4. SQLServer: SQL Statistics: SQL Compilations/Sec
The SQL Compilations/Sec counter measures the number of times SQL Server compiles an execution plan per second. Compiling an execution plan is a resource-intensive operation. Compilations/Sec should be compared with the number of Batch Requests/Sec to get an indication of whether or not compilations might be hurting your performance. To do that, divide the number of batch requests by the number of compiles per second to give you a ratio of the number of batches executed per compile. Ideally you want to have one compile per every 10 batch requests.
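
A rough way to compute that ratio from T-SQL (a sketch; these counter values are cumulative since instance startup, so this gives a lifetime average rather than a current rate):

SELECT MAX(CASE WHEN counter_name = 'Batch Requests/sec' THEN cntr_value END) * 1.0
     / MAX(CASE WHEN counter_name = 'SQL Compilations/sec' THEN cntr_value END)
       AS batches_per_compile
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Batch Requests/sec', 'SQL Compilations/sec');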

5. SQLServer: SQL Statistics: SQL Re-Compilations/Sec
When the execution plan is invalidated due to some significant event, SQL Server will re-compile it. The Re-Compilations/Sec counter measures the number of times a re-compile event was triggered per second. Re-compiles, like compiles, are expensive operations, so you want to minimize the number of re-compiles. Ideally you want to keep this counter at less than 10% of the number of Compilations/Sec.

6. SQLServer: General Statistics: User Connections
The user connections counter identifies the number of different users that are connected to SQL Server at the time the sample was taken. You need to watch this counter over time to understand your baseline user connection numbers. Once you have some idea of your high and low water marks during normal usage of your system, you can then look for times when this counter exceeds the high and low marks. If the value of this counter goes down and the load on the system is the same, then you might have a bottleneck that is not allowing your server to handle the normal load. Keep in mind, though, that this counter value might go down simply because fewer people are using your SQL Server instance.

7. SQLServer: Locks: Lock Waits / Sec: _Total
In order to manage concurrent users on the system, SQL Server needs to lock resources from time to time. The lock waits per second counter tracks the number of times per second that SQL Server was not able to grant a lock on a resource right away. Ideally you don't want any request to wait for a lock. Therefore you want to keep this counter at zero, or close to zero, at all times.

8. SQLServer: Access Methods: Page Splits / Sec
This counter measures the number of times SQL Server had to split a page when updating or inserting data per second. Page splits are expensive, and cause your table to perform more poorly due to fragmentation. Therefore, the fewer page splits you have the better your system will perform. Ideally this counter should be less than 20% of the batch requests per second.

9. SQLServer: General Statistics: Processes Blocked
The processes blocked counter identifies the number of blocked processes. When one process is blocking another process, the blocked process cannot move forward with its execution plan until the resource that is causing it to wait is freed up. Ideally you don’t want to see any blocked processes. When processes are being blocked you should investigate.
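
To see at a glance who is blocking whom, one option on SQL Server 2005 and later is the request DMV (a sketch):

SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;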

10. SQLServer: Buffer Manager: Checkpoint Pages / Sec
The checkpoint pages per second counter measures the number of pages written to disk by a checkpoint operation. You should watch this counter over time to establish a baseline for your systems. Once a baseline value has been established you can watch this value to see if it is climbing. If this counter is climbing, it might mean you are running into memory pressures that are causing dirty pages to be flushed to disk more frequently than normal.

http://www.databasejournal.com/features/mssql/article.php/3932406/Top-10-SQL-Server-Counters-for-Monitoring-SQL-Server-Performance.htm

To INDEX or Not to INDEX

0. The exception to all of the below: skip an index when its maintenance cost outweighs its benefit, or when the index would be roughly 80% of the size of the table's data or larger.

1. Primary keys should almost always be indexed.

2. Foreign keys should almost always be indexed.

3. Look through the most common executed queries and add indexes for any columns in the “where” clauses that are not covered by steps 1 & 2.

4. Look through long running queries and add indexes for any columns in the “where” clauses that are not covered by steps 1, 2 & 3.

NOTE: The exception in #3 & #4 is when the comparison operator is “LIKE” (LIKE with a leading wildcard cannot seek on an index). Also, whether you do #3 then #4, or #4 then #3, depends mostly on how the application is being used and where the bottlenecks are. A sketch of the kind of index these steps produce follows below.
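
As a sketch of steps 3 and 4, suppose the slow queries filter a table on a non-key column (the table and column names here are hypothetical):

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders ( CustomerID );

-- Queries like this can now seek on the index instead of scanning the table:
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;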

Other things to watch out for:

– Try not to put “too many” indexes on a table, as this would in turn slow down inserts/updates.

– I usually start with step “1” above and implement a step at a time until I stop seeing benefit, then back off a step.

– Compound (multi-column) indexes should be used sparingly

– Clustered indexes can actually degrade performance depending on how the column is used. Most design tools automatically create a clustered index on the primary key, but this should not be taken to mean a clustered index is “always” a good thing. There are actually very few cases where a clustered index results in a benefit; more often than not it creates more overhead than it produces in benefit.
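
To weigh an index's maintenance cost against its benefit (point 0 above), the usage-stats DMV in SQL Server 2005 and later shows reads versus writes per index; a sketch:

SELECT i.name AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates AS writes
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();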

Starting SQL Server in Maintenance Mode

Open a command prompt and run the commands below (note: your path to the Binn folder may vary) to start SQL Server in minimal-configuration, single-user mode:

cd "C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn"

sqlservr.exe -f

This can be useful in cases where SQL Server will not start in normal mode, for example because the partition tempdb is on has become unavailable. If something like this happens, you can start in single-user mode and execute the following to move tempdb:

ALTER DATABASE tempdb MODIFY FILE ( NAME = tempdev, FILENAME = 'C:\DATA\tempdb.mdf' );
ALTER DATABASE tempdb MODIFY FILE ( NAME = templog, FILENAME = 'C:\DATA\templog.ldf' );
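
After restarting the service normally, a quick sketch to confirm the files moved:

SELECT name, physical_name
FROM tempdb.sys.database_files;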

Reducing SQL Server Locks

By Brad McGehee, Apr 13, 2006
http://www.sql-server-performance.com/tips/reducing_locks_p1.aspx

If your users are complaining that they have to wait for their transactions to complete, you may want to find out if object locking on the server is contributing to this problem. To do this, use the SQL Server Locks Object: Average Wait Time (ms) counter. You can use this counter to measure the average wait time of a variety of locks, including database, extent, key, page, RID, and table.

If you identify one or more types of locks causing transaction delays, then you will want to investigate further to see if you can identify what specific transactions are causing the locking. Profiler is a good tool for this detailed analysis. [7.0, 2000, 2005] Updated 9-18-2006

*****

Use sp_who and sp_who2 (the sp_who2 stored procedure is not documented in the SQL Server Books Online, but offers more detail than sp_who) to identify which processes may be blocking other processes. While blocking can also be identified using Enterprise Manager or Management Studio, these two commands work much faster and more efficiently than those GUI-based front-ends. [6.5, 7.0, 2000, 2005] Updated 9-18-2006
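
A quick usage sketch; in sp_who2's output, the BlkBy column shows the SPID of the blocking process:

EXEC sp_who2;

-- or restrict the output to active processes:
EXEC sp_who2 'active';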

*****

On tables that change little, if at all, such as lookup tables, consider altering the default lock level. By default, SQL Server uses row-level locking for all tables, unless the SQL Query Optimizer determines that a different locking level, such as page or table locks, is more appropriate. For most lookup tables that aren’t huge, SQL Server will automatically use row-level locking. Because row locking has to be done at the row level, SQL Server needs to work harder to maintain row locks than it does for either page or table locks. Since lookup tables aren’t being changed by users, it would be more efficient to use a table lock instead of many individual row locks. How do you accomplish this?

You can override how SQL Server performs locking on a table by using the SP_INDEXOPTION command. Below is an example of code you can run to tell SQL Server to use table locking, not row or page locks, for a specific table:

SP_INDEXOPTION 'table_name', 'AllowRowLocks', FALSE
GO
SP_INDEXOPTION 'table_name', 'AllowPageLocks', FALSE
GO

This code turns off both row and page locking for the table, thus only table locking is available. [7.0, 2000, 2005] Updated 9-18-2006

*****

Keep all Transact-SQL transactions as short as possible. This helps to reduce the number of locks (of all types), helping to speed up the overall performance of your SQL Server applications. If practical, you may want to break down long transactions into groups of smaller transactions. In addition, only include those Transact-SQL commands within a transaction that are necessary for the transaction. Leave all other code outside of the transaction. [6.5, 7.0, 2000, 2005] Updated 8-21-2005

*****

An often overlooked cause of locking is an I/O bottleneck. Whenever your server experiences an I/O bottleneck, users’ transactions take longer to complete. And the longer they take to complete, the longer locks must be held, which can lead to other transactions having to wait for previous locks to be released.

If your server is experiencing excessive locking problems, be sure to check if you are also running into an I/O bottleneck. If you do find that you have an I/O bottleneck, then resolving it will help to resolve your locking problem, speeding up the performance of your server. [6.5, 7.0, 2000, 2005] Updated 8-21-2005

*****

To help reduce the amount of time tables are locked, which hurts concurrency and performance, avoid interleaving reads and database changes within the same transaction. Instead, try to do all your reads first, then perform all of the database changes (UPDATES, INSERTS, DELETES) near the end of the transaction. This helps to minimize the amount of time that exclusive locks are held. [6.5, 7.0, 2000, 2005] Updated 8-21-2005

*****

Any conditional logic, variable assignment, and other related preliminary setup should be done outside of transactions, not inside them. Don’t ever pause a transaction to wait for user input. User input should always be done outside of a transaction. Otherwise, you will be contributing to locking, hurting SQL Server’s overall performance. [6.5, 7.0, 2000, 2005] Updated 8-21-2005
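
A sketch of that shape, with the preliminary work done before the transaction opens (the table and column names are hypothetical):

DECLARE @newStatus int;
SET @newStatus = 2;  -- setup and variable assignment happen outside the transaction

BEGIN TRANSACTION;
UPDATE dbo.Orders
SET Status = @newStatus
WHERE OrderID = 42;
COMMIT TRANSACTION;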

*****

Encapsulate all transactions within stored procedures, including both the BEGIN TRANSACTION and COMMIT TRANSACTION statements in the procedure. This provides two benefits that help to reduce blocking locks.

First, it limits the client application and SQL Server to communications before and after the transaction, thus forcing any messages between the client and the server to occur at a time other than when the transaction is running (reducing transaction time).

Second, it prevents the user from leaving an open transaction (holding locks open) because the stored procedure forces any transactions that it starts to complete or abort. [6.5, 7.0, 2000, 2005] Updated 8-21-2005
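
A minimal sketch of the pattern (the procedure and table are hypothetical):

CREATE PROCEDURE dbo.SetOrderStatus ( @orderID int, @status int )
AS
BEGIN
    BEGIN TRANSACTION;
    UPDATE dbo.Orders
    SET Status = @status
    WHERE OrderID = @orderID;
    COMMIT TRANSACTION;
END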

*****

If you have a client application that needs to “check out” data for a while and perhaps update it later (or maybe not), you don’t want the records locked during the entire time the record is being viewed. Assuming “viewing” the data is much more common than “updating” the data, one way to handle this particular circumstance is to have the application select the record (not using UPDATE, which will put a share lock on the record) and send it to the client.

If the user just “views” the record and never updates it, then nothing has to be done. But if the user decides to update the record, then the application can perform an UPDATE by adding a WHERE clause that checks to see whether the values in the current data are the same as those that were retrieved.

Similarly, you can check a timestamp column in the record, if it exists. If the data is the same, then the UPDATE can be made. If the record has changed, then the application must include code to notify the user so he or she can decide how to proceed. While this requires extra coding, it reduces locking and can increase overall application performance. [6.5, 7.0, 2000, 2005] Updated 10-16-2005
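
A sketch of the optimistic check using a timestamp (rowversion) column; the names are hypothetical, and @originalRowVer holds the value read when the record was first fetched:

UPDATE dbo.Customers
SET Phone = @newPhone
WHERE CustomerID = @id
  AND RowVer = @originalRowVer;

IF @@ROWCOUNT = 0
    PRINT 'Record was changed by another user; notify the user to decide how to proceed.';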

Recursive Org List

CREATE FUNCTION [f_GetParentOrgs] ( @org_id int )
RETURNS @parentOrgs TABLE ( ID int IDENTITY PRIMARY KEY, org_id int )
AS
BEGIN
    DECLARE @row int
    DECLARE @org int

    INSERT INTO @parentOrgs ( org_id )
    SELECT organizationParentID
    FROM t_organization WITH (NOLOCK)
    WHERE organizationID = @org_id
      AND organizationParentID NOT IN ( SELECT org_id FROM @parentOrgs )

    SELECT @row = count(1) FROM @parentOrgs

    WHILE @row > 0
    BEGIN
        SELECT @org = org_id
        FROM @parentOrgs
        WHERE ID = @row

        INSERT INTO @parentOrgs ( org_id )
        SELECT org_id
        FROM dbo.f_GetParentOrgs( @org )
        WHERE org_id NOT IN ( SELECT org_id FROM @parentOrgs )

        SET @row = @row - 1
    END

    -- this is to include the original org parameter in the list if needed
    INSERT INTO @parentOrgs ( org_id )
    VALUES ( @org_id )

    RETURN
END
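
Usage sketch (42 is a hypothetical organization id):

SELECT org_id
FROM dbo.f_GetParentOrgs( 42 );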

Setting up a New Instance

To create an additional instance on a box that already has SQL Server 2005 installed, you will need to:

1. Run the “setup” on disk 1 of SQL Server 2005 install disks

2. One page of the install wizard prompts to use “default” instance, or create a named instance. Select create a named instance and enter the name you want to call the new instance.

3. After installation, you will need to apply any service packs (ie: SQL Server 2005 Service Pack 3). Even if you have applied the service pack to existing instances, you will still need to apply it for the new instance.

4. After installing the service packs, you will need to run the SQL Server Configuration Manager. This can be found in “Start”->”Microsoft SQL Server 2005″->”Configuration Tools”->”SQL Server Configuration Manager”. In this tool’s UI, expand “SQL Server 2005 Network Configuration”, then select “Protocols for xxxxx”, where xxxxx is the instance name you created in step 2. You will then enable the TCP/IP protocol.

5. You can then use SQL Server Management Studio to connect to the instance. For the server name, use SERVERNAME\INSTANCENAME (ie: vejxsefw3db1510\DEVTEST).

6. During this process, you may need to stop/restart the instance several times. When the instance is restarted, the SQL Server Agent service will need to be manually started.

Queries Run Slower as Procedures

Symptom: A stored procedure runs slow, but the same query pasted into Query Analyzer runs fast.

You may be a victim of SQL Server’s so-called “parameter sniffing” (also sometimes called parameter spoofing).

The short way to get around this is to adjust your stored procedure to use local variables and assign the parameter values to the local variables. It is silly, but it does consistently work.

CREATE PROCEDURE dbo.MyProcedure( @parm1 int )
AS
    DECLARE @myvar1 int
    SET @myvar1 = @parm1

    SELECT *
    FROM mytable
    WHERE colA = @myvar1

Apparently, it has to do with how query plans are optimized for stored procedures.

If the procedure is complex enough and contains parameters that the optimizer ‘decides’ will impact how the query plan is generated, then the procedure takes longer to compile and run.

The method described above is one way to work around this.

The above example fools the optimization engine into compiling the procedure and query plan once.

There may be cases where you will need to use the RECOMPILE directive in the stored procedure definition when the parameters actually DO impact query optimization (ie: dynamic where clauses).
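
A sketch of that directive applied to the same hypothetical procedure — the plan is rebuilt on every execution, trading compile time for a plan that fits the actual parameters:

CREATE PROCEDURE dbo.MyProcedure( @parm1 int )
WITH RECOMPILE
AS
    SELECT *
    FROM mytable
    WHERE colA = @parm1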

More info and other ways to deal with ‘Parameter Sniffing’ can be found in links below:

http://www.eggheadcafe.com/tutorials/aspnet/353cb16c-3cde-44dd-a264-750c1ce4e423/sql-server-query-analyzer.aspx

http://blogs.msdn.com/queryoptteam/archive/2006/03/31/565991.aspx

http://www.sqlmag.com/Article/ArticleID/94369/sql_server_94369.html

http://stackoverflow.com/questions/211355/parameter-sniffing-or-spoofing-in-sql-server/215861

Security

Use Windows Only authentication mode for security if possible

By using integrated security, you can greatly simplify administration by relying on the OS security and saving yourself from maintaining two separate security models. This also keeps passwords out of connection strings.
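
A quick sketch to check which authentication mode an instance is currently running (returns 1 for Windows-only authentication, 0 for mixed mode):

SELECT SERVERPROPERTY('IsIntegratedSecurityOnly');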

Take the time to audit SQL logins for null or weak passwords
Use the following code to check for null passwords:

USE master
SELECT name, password
FROM syslogins
WHERE password IS NULL
ORDER BY name

There are a multitude of free and commercial tools to check for weak passwords. SQLPing2 is free and can be used to check for weak and null passwords.

Frequently check group and role memberships
While the SQL Server security model has many enhancements, it also adds an extra layer of permissions that we must monitor to make sure no one has been given more access than they need, or has circumvented security to elevate themselves. There’s also the spectre of users who have changed roles within the company but whose SQL Server permissions have not been adjusted. This goes back to assigning object access to groups and not individuals.

Physically secure the SQL Server
Lock it behind a door and lock away the key while you’re at it. Someone sitting in front of the server will always find a way.

Rewrite applications to use more user-defined stored procedures and views
This minimizes the need to give direct access to tables. It gives the developer more control over how data can be accessed.

Enable logging of all user login events
You can also do this via script by using the following code:
xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer', N'AuditLevel', REG_DWORD, 3

Check master..Sp_password for trojan code
Compare your production scripts to the default script on a fresh installation and keep that code handy.

Check master..Sp_helpstartup for trojan procedures
Make sure no one has placed a backdoor here. Use Sp_unmakestartup to remove any rogue procedures.

Disable SQL Mail capability unless absolutely necessary
Leaving it open gives a potential attacker another means of delivering potential trojans, viruses, or simply launching a particularly nasty denial of service attack. By itself it is fairly harmless but it can be used to help an attacker.

Remove the Guest user from databases to keep unauthorized users out
This is the default, but be vigilant in case some dbo gets loose with the access controls. The exception to this is the master and tempdb databases, as the guest account is required there.

Make sure all SQL Server data and system files are installed on NTFS partitions
If someone should gain access to the OS, make sure that the necessary permissions are in place to prevent a catastrophe.

Use a low-privilege user account for SQL Server service rather than LocalSystem or Administrator
This account should only have minimal privileges (a local user is best) and should help contain an attack to the server in case of compromise. Notice that when using Enterprise Manager or SQL Server Configuration Manager (SQL 2005) to make this change, the ACLs on files, the registry, and user rights are done for you automatically.

Secure the “sa” account with a strong password
This assumes you are using the SQL Server and Windows security mode. If possible, use the Windows Only mode and don’t worry about people brute-forcing your ‘sa’ accounts. Of course, even so you’ll want to set a strong password in case someone changes modes on you.

Choose only the network libraries you absolutely require
Better yet, if the SQL Server is local-only then why not disable all network libraries and use shared memory to access the SQL Server? Just use the name ‘(local)’ as the server name. If your SQL Server requires connectivity from other hosts, use the TCP/IP netlib and then determine if SSL is needed.

Make sure the latest OS and SQL Server Service Packs/Hot-Fixes are applied

This goes without saying but I’ll say it anyway for completeness. Simply perform a “select @@version” on your SQL Server and compare the results to the linked page.

Restrict to sysadmins-only access to dangerous stored procedures and extended stored procedures
There are quite a few of them, and this could take some time. Be careful not to do this on a production server first. Test on a development machine so you don’t break any functionality. Below is a list of the ones we recommend you assess; a sketch of restricting one of them follows the list:

sp_sdidebug
xp_availablemedia
xp_cmdshell
xp_deletemail
xp_dirtree
xp_dropwebtask
xp_dsninfo
xp_enumdsn
xp_enumerrorlogs
xp_enumgroups
xp_enumqueuedtasks
xp_eventlog
xp_findnextmsg
xp_fixeddrives
xp_getfiledetails
xp_getnetname
xp_grantlogin
xp_logevent
xp_loginconfig
xp_logininfo
xp_makewebtask
xp_msver
xp_regread
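
As a sketch of the technique, revoking the PUBLIC execute right on one of these (members of sysadmin bypass permission checks, so they keep access):

USE master
GO
REVOKE EXECUTE ON xp_cmdshell FROM PUBLIC
GO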

http://www.sqlsecurity.com/