Transactions: Rolling back part of a transaction.

In my previous post I mentioned that the ROLLBACK command rolls back the entire transaction, all the way to the top level. If that is the case, can we roll back just an inner portion of a transaction and still commit the rest? Yes, as it happens, we can, by using the SAVE TRANSACTION command. Despite the similar syntax, SAVE TRANSACTION does not act like BEGIN TRANSACTION: it does not begin a new transaction and does not add one to @@TRANCOUNT. What it does is create a savepoint within the current transaction that can be rolled back to with ROLLBACK TRANSACTION savepoint_name, leaving the rest of the transaction intact to be committed. As always it’s easier with an example.

-- Create a table to use during the tests
CREATE TABLE tb_TransactionTest (value int)
GO
-- Test using a transaction with a savepoint to allow 
-- us to roll back only part of the transaction.
BEGIN TRANSACTION -- outer transaction
	PRINT @@TRANCOUNT -- 1
	INSERT INTO tb_TransactionTest VALUES (1)
	SAVE TRANSACTION TestTrans -- create a savepoint
		PRINT @@TRANCOUNT -- still 1; SAVE TRANSACTION does not change it
		INSERT INTO tb_TransactionTest VALUES (2)
	ROLLBACK TRANSACTION TestTrans -- roll back to the savepoint
	PRINT @@TRANCOUNT -- still 1
	INSERT INTO tb_TransactionTest VALUES (3)
IF @@TRANCOUNT > 0
	COMMIT -- commit the outer transaction
PRINT @@TRANCOUNT -- 0
SELECT * FROM tb_TransactionTest
GO
-- Clean up the table used during the tests
DROP TABLE tb_TransactionTest
GO

Unlike the previous tests, where the ROLLBACK rolled back all the way to the beginning of the top-level transaction, this time it rolled back just to the savepoint, giving a result of 1, 3. Also, even though I kept in the code that checks @@TRANCOUNT before the commit, the transaction level is in fact still 1 at that point, so the COMMIT statement wouldn’t have errored out either way.

Transactions are a big subject which I’m going to explore over the next few weeks. If you have any subjects you would like me to cover or think I’ve missed something feel free to comment or email me.

SSMS Shortcuts and more

I went and voted for #tribalawards, and when I was finished I was offered links to 6 different free PDFs. I’m not sure if they are the same for everyone, but for me they included Grant Fritchey’s SQL Server Execution Plans, 2nd Edition, which I highly recommend. They also included a PDF wall chart of the SSMS query window keyboard shortcuts. Now, for those of you who know me, I LOVE keyboard shortcuts, so I immediately downloaded and printed it out. It’s one 8.5 x 11 sheet, well laid out and easy to read. It’s shortcut awesomeness!

Transactions: Rolling back a transaction inside a stored procedure.

Over the last couple of posts I’ve talked about the fact that the ROLLBACK command will roll back an entire transaction no matter how many layers down the ROLLBACK is executed. This has an interesting implication for stored procedures: if a ROLLBACK command is issued inside of a stored procedure, then any transactions begun outside of the stored procedure will be rolled back as well, and @@TRANCOUNT will be set to 0.

CREATE TABLE tb_TransactionTest (value int)
GO
-- This stored procedure will roll back a transaction if the 
-- @ROLLBACK parameter is a 1.
CREATE PROCEDURE usp_TransactionTest @Value int, @RollBack bit
AS 
BEGIN
	BEGIN TRANSACTION
	INSERT INTO tb_TransactionTest VALUES (@Value)
	IF @Rollback = 1 
		-- If the procedure is called from within a transaction
		-- this is going to cause us to have a different 
		-- @@TRANCOUNT when we exit the procedure than when we
		-- started it.
		ROLLBACK TRANSACTION
	ELSE
		COMMIT
END
GO

-- Begin a new transaction
BEGIN TRANSACTION
INSERT INTO tb_TransactionTest VALUES (1)
-- Run the sp with the param to roll back a transaction
-- This will return an error because tran count has changed
EXEC usp_TransactionTest 2,1
-- Run the commit to close the initial transaction
-- This will return an error because there is no valid transaction
-- to commit.
COMMIT
-- No rows are in the table because the initial insert was
-- rolled back.
SELECT * FROM tb_TransactionTest
GO

The obvious problem here is that any code that uses that stored procedure is going to have to check @@TRANCOUNT before issuing a ROLLBACK or a COMMIT, or risk an error because there is no transaction to close. The less obvious problem is that SQL Server raises an error (Msg 266) when @@TRANCOUNT is different at the end of a stored procedure than it was when the procedure started. So in the example above we actually get two errors and no data in the tb_TransactionTest table. The solution to both problems is to use the SAVE TRANSACTION command inside the stored procedure.

-- This stored procedure will roll back to a savepoint if 
-- the @ROLLBACK parameter is a 1. Note that this version
-- assumes it is called from inside a transaction.
ALTER PROCEDURE usp_TransactionTest @Value int, @RollBack bit
AS 
BEGIN
	-- Create a savepoint inside the caller's transaction.
	SAVE TRANSACTION TranTest
	INSERT INTO tb_TransactionTest VALUES (@Value)
	IF @Rollback = 1 
		-- Rolling back to a savepoint does not change
		-- @@TRANCOUNT, so we exit the procedure at the same
		-- transaction level we entered it.
		ROLLBACK TRANSACTION TranTest
	-- No COMMIT here: committing would close the caller's
	-- transaction and cause the same mismatch error.
END
GO

TRUNCATE TABLE tb_TransactionTest
-- Begin a new transaction
BEGIN TRANSACTION
INSERT INTO tb_TransactionTest VALUES (1)
-- Run the sp with the param to roll back a transaction
EXEC usp_TransactionTest 2,1
-- Run the commit to close the initial transaction
COMMIT
SELECT * FROM tb_TransactionTest
GO

This time at the end of the batch we have no errors and a row with a 1 in tb_TransactionTest. Now, this was a very simple example, and there is a much better one in BOL under SAVE TRANSACTION that I highly recommend reviewing before dealing with a transaction inside a stored procedure.
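For reference, here is a minimal sketch of what that more defensive shape looks like (the procedure, variable and savepoint names here are mine, not from BOL): begin a transaction only when none exists, otherwise create a savepoint, and on error roll back only what the procedure owns.

CREATE PROCEDURE usp_SafeTransactionTest @Value int
AS
BEGIN
	DECLARE @StartedTran bit = 0;

	IF @@TRANCOUNT = 0
	BEGIN
		BEGIN TRANSACTION; -- no outer transaction: start (and own) one
		SET @StartedTran = 1;
	END
	ELSE
		SAVE TRANSACTION ProcSave; -- outer transaction exists: just mark a savepoint

	BEGIN TRY
		INSERT INTO tb_TransactionTest VALUES (@Value);
		IF @StartedTran = 1
			COMMIT; -- commit only the transaction we began
	END TRY
	BEGIN CATCH
		IF @StartedTran = 1
			ROLLBACK; -- entirely ours to roll back
		ELSE IF XACT_STATE() = 1
			ROLLBACK TRANSACTION ProcSave; -- undo just our work
		THROW; -- SQL 2012+; use RAISERROR on older versions
	END CATCH
END
GO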

Transactions are a big subject which I’m going to explore over several posts. I am by no means going to cover the subject exhaustively but if you have any subjects you would like me to cover or think I’ve missed something feel free to comment or email me.

Transactions: What are they?

I’ve done a couple of posts now talking about how rolling back a transaction works. I thought this time I would back up a bit and talk about what exactly a transaction is and why we have them. A transaction is simply a unit of work, and a unit of work is a series of inserts/updates/deletes that go together. So why do we care? Well, one of my favorite examples is paying a bill.

Bob is paying $50 to his internet provider “ImaPain”. This is going to require two commands.

UPDATE Balances SET CurrentBalance = CurrentBalance-50 
	WHERE name = 'Bob'
UPDATE Balances SET CurrentBalance = CurrentBalance+50 
	WHERE name = 'ImaPain'

So what happens if we cancel the transfer in the middle and only the first command has occurred? Bob now has $50 less and his provider still hasn’t been paid. No one is happy at this point. But what if instead we wrap this “unit of work” in a transaction?

BEGIN TRANSACTION
UPDATE Balances SET CurrentBalance = CurrentBalance-50 
	WHERE name = 'Bob'
UPDATE Balances SET CurrentBalance = CurrentBalance+50 
	WHERE name = 'ImaPain'
COMMIT

Now if we cancel the transfer in the middle (deliberately or through a crash) the whole process rolls back at once and Bob isn’t out any money. His provider hasn’t been paid yet but at least Bob still has the money to do so.

That was an explicit transaction. An explicit transaction is defined by BOL as “one in which you explicitly define both the start and end of the transaction.” There are also autocommit transactions, SQL Server’s default mode, where each individual statement is its own transaction that SQL starts and ends on its own. (Don’t confuse this with SQL Server’s “implicit transactions” mode, SET IMPLICIT_TRANSACTIONS ON, where SQL begins the transaction for you but you still have to end it yourself; see the sketch below.) Again a unit of work, but this time we don’t have to deliberately start and commit the transaction. Here we are giving everyone a raise!

UPDATE PayTable SET HourlyPay = HourlyPay + 1

Oh no! Our connection was lost about half way through the command! We had updated 20 employees of our total roster of 50. SQL uses an autocommit transaction to make sure that any changes made before the failure are rolled back. It wouldn’t do for a random half of the employees to get a raise and not the other half.
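If you want to see the implicit transaction mode mentioned above in action, here is a quick sketch (reusing the Balances table from the earlier example):

SET IMPLICIT_TRANSACTIONS ON;

UPDATE Balances SET CurrentBalance = CurrentBalance - 50 
	WHERE name = 'Bob'; -- SQL silently began a transaction here

PRINT @@TRANCOUNT; -- 1: the transaction is still open
COMMIT;            -- we still have to end it ourselves

SET IMPLICIT_TRANSACTIONS OFF;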

If you stop there, it sounds like the best thing to do is to wrap everything in transactions to prevent possible problems. Unfortunately this has its own set of problems. As a rule, transactions should be as small as possible. Among other things, this is to avoid blocking. Discussing transactions and blocking in detail is way beyond the scope of this post, as you have to get into the various transaction isolation levels and how each handles blocking. In general, though, locks taken by a statement in a transaction are held until the end of the transaction. If one of those locks happens to block a statement in another transaction, then that block will last until the end of the first transaction. Another good reason to keep your transactions as small as possible is to avoid losing work during a crash. Using the example above, let’s say both Bob and James are paying their internet provider and it’s all put into a single transaction.

BEGIN TRANSACTION
UPDATE Balances SET CurrentBalance = CurrentBalance-50 
	WHERE name = 'Bob'
UPDATE Balances SET CurrentBalance = CurrentBalance+50 
	WHERE name = 'ImaPain'
UPDATE Balances SET CurrentBalance = CurrentBalance-50 
	WHERE name = 'James'
UPDATE Balances SET CurrentBalance = CurrentBalance+50 
	WHERE name = 'ImaPain'
COMMIT

What happens if the connection fails or the server goes down in the middle of the transaction, say after the 3rd statement? Even though both statements required for Bob’s payment have completed, his payment is rolled back along with James’s. If however we had used 2 transactions, then we would only have lost the one pair of updates instead of both.

BEGIN TRANSACTION
UPDATE Balances SET CurrentBalance = CurrentBalance-50 
	WHERE name = 'Bob'
UPDATE Balances SET CurrentBalance = CurrentBalance+50 
	WHERE name = 'ImaPain'
COMMIT
BEGIN TRANSACTION
UPDATE Balances SET CurrentBalance = CurrentBalance-50 
	WHERE name = 'James'
UPDATE Balances SET CurrentBalance = CurrentBalance+50 
	WHERE name = 'ImaPain'
COMMIT

Now when the same crash happens only the incomplete payment fails and is rolled back. Bob’s payment has completed successfully.

To sum it up transactions are an extremely important tool used to make sure that a unit of work is either completed together or rolled back together. For additional reading you should look at ACID (Atomic, Consistent, Isolated, Durable) compliance.

Transactions are a big subject which I’m going to explore over several posts. I am by no means going to cover the subject exhaustively but if you have any subjects you would like me to cover or think I’ve missed something feel free to comment or email me.

Guid vs Identity columns (Ints)

I came across an interesting question on SE last week: Guid vs INT – which is better as a primary key? In addition to the quite good accepted answer, I thought I would throw in my own take.

  • Size
    • GUIDs are 16 bytes and hold more values than you could ever use.
    • With an identity column you can choose a data type dependent on your need.
      • tinyint: 1 byte, 0 to 255
      • smallint: 2 bytes, -2^15 (-32,768) to 2^15-1 (32,767)
      • int: 4 bytes, -2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647)
      • bigint: 8 bytes, -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807)

    Remember that the size of your column affects not just how much space the table takes up but how many pages (both index and data) need to be read to perform a given operation. The bigger the column, the fewer rows fit on a page, the more pages need to be read, and the slower your queries, even if only by a very small amount.

  • Uniqueness
    • GUIDs are considered universally unique. This isn’t exactly true, but if you look here and here you will see that it’s close enough to true to work with.
    • Identity columns are only as unique as you make them. If you put a unique constraint on the column (or make it the PRIMARY KEY) then you are at least guaranteed a unique value within that table. But only within the table itself; when you compare to other tables, databases, etc. there is no uniqueness.

    You have to decide here how unique you need your column to be and if it’s worth the space it’s going to take up.

  • Portability
    • Because they are universally unique, GUIDs are completely portable. You can move the values from place to place with no difficulty.
    • Identity columns are not really portable. Anyone who has tried to merge two tables with identity columns, between prod and test for example, knows what a pain this is.

    If you are never going to merge data with another table/location then you are probably OK with an identity column. If on the other hand you expect to need to merge data from multiple tables/locations then you should probably think about GUIDs.

  • Ease of use
    • GUIDs require the use of NEWSEQUENTIALID() or NEWID(), either as a default, as part of the insert, in a trigger, etc.
    • Identity columns populate themselves; once created, you mostly just leave them alone (in fact you have to go out of your way, with IDENTITY_INSERT, to put a value in yourself). See the sketch after this list.

    Personally I find identity columns MUCH easier, but on the other hand I use them far more often than GUIDs, so I have a lot more experience there. They do say you tend to go with what you know.
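For illustration, here is a minimal sketch of the two approaches side by side (the table and column names are mine, not from the SE question):

-- The identity approach: small, only locally unique
CREATE TABLE OrderInt (
	Id int IDENTITY(1,1) PRIMARY KEY, -- 4 bytes
	OrderDate datetime
);
-- The GUID approach: 4x the size, but globally unique and portable
CREATE TABLE OrderGuid (
	Id uniqueidentifier NOT NULL 
		DEFAULT NEWSEQUENTIALID() PRIMARY KEY, -- 16 bytes
	OrderDate datetime
);

INSERT INTO OrderInt (OrderDate) VALUES (GETDATE());  -- Id generated by the identity
INSERT INTO OrderGuid (OrderDate) VALUES (GETDATE()); -- Id comes from the default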

And last but not least, here is BOL’s take on each of them.

As with many design considerations this is an important decision. When deciding between a GUID and an integer identity column, you should balance the portability of a GUID against the additional space it requires, and the smaller size of the identity column against the major pain that moving a row with an identity column from one table to another can be. Integers being easier to work with when debugging is true but somewhat insignificant if you truly need a GUID; to be honest, I’ll bet you would get used to it. Integers “looking” better when displayed to the end user is a factor, but somewhat less important when compared to other considerations.

You, your co-workers and your replacement (there will always be one) will have to live with your decision, so at least think about it before you decide. One of the things I dislike most is the “always do it this way” mentality.

sp_SrvPermissions & sp_DBPermissions V4.0

These are a couple of stored procedures I wrote to help me with security research. Each sp returns three data sets:

  1. A list of principals and some basic properties about them.
  2. Role membership
  3. Object/Database/Server level permissions

Each row of each dataset has not only the appropriate properties but a set of do/undo scripts: for example, a script to add someone to a role or remove them from a role, and to grant them a permission or revoke the permission from them.

Last but not least each sp has a number of parameters for restricting the result sets. For example principal name, role name, principal type, object name etc.

These sp’s can be stored and run anywhere and work just fine, but if you put them in master then you can call them from any database on the instance.

Examples of times I’ve found them handy:

  • I need to know every database a user has access to and what access they have.
  • I need to know all permissions for a given user.
  • I need to copy a login from one server to another (with SID and password).
  • I need to know everyone who has permissions to a specific object in the database.
  • I need to know everyone who is a member of sysadmin.

Latest update: Below are the latest additions. My personal favorite is the new ALL option for the DBName parameter. And of course if you happen to notice a problem or have a suggestion, please post it here and I’ll be glad to fix/add as appropriate.

sp_SrvPermissions
– 11/18/2013 – Corrected a bug in the order of the parameters for sp_addsrvrolemember
and sp_dropsrvrolemember; also added parameter names to both.
– 01/09/2014 – Added an ORDER BY to each of the result sets. See above for details.

sp_DBPermissions
– 11/18/2013 – Added parameter names to sp_addrolemember and sp_droprolemember.
– 11/19/2013 – Added an ORDER BY to each of the result sets. See above for details.
– 01/04/2014 – Added an ALL option to the DBName parameter.
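As a quick, hedged usage example, and assuming the parameter is named @DBName as the update notes above suggest, pulling database permissions across every database on the instance would look something like:

EXEC sp_DBPermissions @DBName = 'ALL';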

Transactions: Who, What and Where

Recently we had a scenario where a handful of queries were being blocked. Nothing unusual there, but when I looked into sys.dm_exec_requests I could see all of the blocked requests yet could not find a request with a session_id matching the blocking_session_id. The session showed up in sys.dm_exec_sessions, but it was “sleeping” and hadn’t performed a request in hours. So what was going on?

Well, unfortunately sys.dm_exec_requests only shows currently executing requests. A session that has an open transaction but isn’t actively doing anything isn’t considered an “executing” request and won’t show up in sys.dm_exec_requests. In order to get a list of sessions with an open transaction you can run a query off sys.dm_tran_session_transactions, or, if you are running SQL 2012 or higher, sys.dm_exec_sessions has a new column, open_transaction_count. However, we need to do something about our blocking transaction, and as I see it we have a couple of options.
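For example, on SQL 2012 or higher, a minimal sketch using the new column looks like this (it also pulls the identifying columns discussed next):

SELECT session_id, login_name, host_name, program_name,
	open_transaction_count
FROM sys.dm_exec_sessions
WHERE open_transaction_count > 0;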

Easiest is to find and talk to the user. sys.dm_exec_sessions has the login_name, nt_domain and nt_user_name columns to help you identify the user, but if they are using a generic SQL login then that won’t help much. Next we can look at host_name to find the user’s machine and program_name to tell what program they are connecting from. (Frequently it helps to be able to tell the user which program is the problem when they are using Excel, Access and SSMS to connect to the instance.) And if you want to, you can look at sys.dm_exec_connections and get the IP (client_net_address) of the connecting machine.

But let’s say that it’s the middle of the night and we need to decide if we are going to kill the process, wake up the user (if we have their number), or just let it run till morning. In order to make that decision it would help to know what exactly they are doing. If we had a row in sys.dm_exec_requests we could use sys.dm_exec_sql_text to get the actual query they are running, but as we said before, there is no row in sys.dm_exec_requests. So we again have a couple of options. Simplest is to look at sys.dm_exec_connections and use the most_recent_sql_handle column with sys.dm_exec_sql_text to get the last query run by the connection. Unfortunately that does not give us all of the SQL statements within the transaction; it only returns the last batch executed within the transaction (I’ll post a proof soon). So that may not give us enough information. If not, we can go the somewhat more complicated route and take a look at the locks held by the session.

We can tie sys.dm_exec_sessions to sys.dm_tran_session_transactions to get a list of the transactions tied to the session.

SELECT *
FROM sys.dm_exec_sessions sessions
JOIN sys.dm_tran_session_transactions trans
	ON sessions.session_id = trans.session_id

On the other hand sys.dm_tran_locks has session_id also and we can get a lot of additional information that will be very helpful.

SELECT request_session_id AS session_id,
	request_owner_id AS transaction_id,
	DB_NAME(resource_database_id) AS DatabaseName,
	OBJECT_SCHEMA_NAME(resource_associated_entity_id,
			resource_database_id) AS SchemaName,
	OBJECT_NAME(resource_associated_entity_id,
			resource_database_id) AS ObjectName,
	request_mode, request_type, request_status,
	COUNT_BIG(1) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
GROUP BY request_session_id, request_owner_id,
	resource_database_id, resource_associated_entity_id,
	request_mode, request_type, request_status

I did run into a bit of a problem here. Sometimes (and I’m not sure when or why) I got blocked when trying to use OBJECT_NAME and OBJECT_SCHEMA_NAME in this query. You could join to sys.objects and sys.schemas instead, but only for one database at a time. If you want to use this query as it stands you can try running it, and if it gets blocked, kill it. Then query for just the resource_database_id to figure out which database you need, go there, and join to sys.objects and sys.schemas. Cumbersome, but I don’t know a better way I’m afraid.

Now, I’m only looking at object locks in this query, but there are a number of other types that you may want to look at (DATABASE, FILE, PAGE, etc.). In general I’ve found the OBJECT locks to be the most useful, though. By combining object_name, request_mode, request_type and request_status we can get a fairly good idea of what the individual is doing. If they are only doing a series of selects, there is no problem and you should be fairly safe in killing the connection. If they are doing updates, inserts, deletes, etc., you can take the number of locks and your knowledge of the tables and their use, and make an informed decision.

Transactions are a big subject which I’m going to explore over several posts. I am by no means going to cover the subject exhaustively but if you have any subjects you would like me to cover or think I’ve missed something feel free to comment or email me.

The “most_recent_sql_handle” column

While researching my last post I ran across an interesting column I hadn’t noticed before: sys.dm_exec_connections.most_recent_sql_handle. I mentioned it in my previous post, but I felt it was interesting enough to point out specifically. Here is the BOL definition:

The SQL handle of the last request executed on this connection. The most_recent_sql_handle column is always in sync with the most_recent_session_id column. Is nullable.

So why is this so interesting? Primarily for the specific use I mentioned last time: there are requests that are blocked, but the session_id doing the blocking doesn’t have an entry in sys.dm_exec_requests. To find out what the user has been doing in that session, you can at least retrieve the last batch they ran by using most_recent_sql_handle. Remember this is only the last batch, not everything in the transaction, but it can still be fairly useful since frequently a transaction only contains one batch.
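A minimal sketch of that use (the session_id here is a placeholder for the blocking session you found):

SELECT c.session_id, t.text
FROM sys.dm_exec_connections c
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
WHERE c.session_id = 52;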

Negative session_ids

I probably had the most fun all week when a query I was running came up blocked. Sounds strange, right? Well, the blocking_session_id was a negative 2 (-2)! I’ve never seen anything like it before. Once I had resolved my problem (see below), I started doing some research on negative session_ids in general. To start with, there are the BOL entries for sys.dm_exec_requests:

blocking_session_id
ID of the session that is blocking the request. If this column is NULL, the request is not blocked, or the session information of the blocking session is not available (or cannot be identified).
-2 = The blocking resource is owned by an orphaned distributed transaction.
-3 = The blocking resource is owned by a deferred recovery transaction.
-4 = Session ID of the blocking latch owner could not be determined at this time because of internal latch state transitions.

And sys.dm_tran_locks

request_session_id
Session ID that currently owns this request. The owning session ID can change for distributed and bound transactions. A value of -2 indicates that the request belongs to an orphaned distributed transaction. A value of -3 indicates that the request belongs to a deferred recovery transaction, such as, a transaction for which a rollback has been deferred at recovery because the rollback could not be completed successfully.

Note that between the two of them we see possible negative values of -2, -3 and -4.

I’m going to go in reverse order since that matches how difficult it was to find information on each (-4 was by far the hardest).

-4 : Session ID of the blocking latch owner could not be determined at this time because of internal latch state transitions.
I found a couple of links on this, here and here. Both had some interesting information, and in both cases it appeared to be an issue with tempdb. The second in particular had a block of information from Microsoft.

This is directly from Microsoft:
Troubleshooting contention in DDL operations
Evaluate your application and query plans and see if you can minimize the creation of temporary tables. To do this, monitor the perfmon counters Temp Tables Creation Rate and Temp Tables For Destruction. You can also run SQL Profiler to correlate the values of these counters with the currently running queries. This will help you identify the queries that are causing the contention in system catalog. This might occur, for example, if a temporary object is being created inside a loop or a stored procedure.
Verify if temp objects (temp tables and variables) are being cached. SQL2005 caches Temp objects only when the following conditions are satisfied:

  • Named constraints are not created.
  • Data Definition Language (DDL) statements that affect the table are not run after the temp table has been created, such as the CREATE INDEX or CREATE STATISTICS statements.
  • Temp object is not created by using dynamic SQL, such as: sp_executesql N'create table #t(a int)'.
  • Temp object is created inside another object, such as a stored procedure, trigger, and user-defined function; or is the return table of a user-defined, table-valued function.

-3 : The blocking resource is owned by a deferred recovery transaction.
Really the only good information I could find was a post by Paul Randal and a BOL entry. Of course, given the sources, I consider both of these pretty definitive. Both state that this is a problem with an uncommitted transaction that was being rolled back when the database was brought online. From what I understood of Paul’s post, the only way to recover from this is to restore the database from backup.

-2 : The blocking resource is owned by an orphaned distributed transaction.
This is the particular problem I ran into. I did some research and didn’t come up with anything quickly, so I posted on dba.stackexchange.com, and between Martin Smith and Thomas Stringer they pointed me in the right direction. I found this link on www.sqlservercentral.com, and it had the solution I ended up using: getting the req_transactionUOW from syslockinfo and using that value with KILL. The problem here is again a transaction that is trying to be rolled back. In this case it’s a distributed transaction that got lost by MSDTC (the Microsoft Distributed Transaction Coordinator). A distributed transaction is one that is taking place in multiple databases, frequently on multiple servers. If a problem occurs and the correct state of the transaction can’t be determined, you run into this problem.
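Here is a minimal sketch of that solution (the GUID shown is made up for illustration; use the value your query returns):

-- Find the unit-of-work GUID of the orphaned distributed transaction
SELECT DISTINCT req_transactionUOW
FROM master.dbo.syslockinfo
WHERE req_spid = -2;

-- Then pass that GUID to KILL
KILL 'D5499C66-E398-45CA-BF7B-DBF8C34B7B10';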

There you go. Negative session_ids. Hope you had as much fun with them as I did.

Using sys.dm_exec_sql_text() to figure out blocking is sometimes flawed.

I frequently rely on joining sys.dm_exec_requests and sys.dm_exec_sql_text() to see what queries are running on a system, and when I have a blocking situation I like to look at which query is blocking everything else. I’ve mentioned recently that you can also use sys.dm_exec_connections.most_recent_sql_handle to see the last batch that was run by a connection. I recently realized that this can be somewhat misleading at times.

-- Setup
CREATE TABLE TranTest (id int not null identity(1,1), 
	Numb varchar(30));
CREATE TABLE TranTest2 (id int not null identity(1,1), 
	Numb varchar(30));
GO
INSERT INTO TranTest VALUES ('One');
INSERT INTO TranTest2 VALUES ('One');
GO
-- Connection 1
BEGIN TRANSACTION

INSERT INTO TranTest VALUES ('Two');
GO
INSERT INTO TranTest2 VALUES ('Two');
GO

INSERT INTO TranTest VALUES ('Three');
INSERT INTO TranTest VALUES ('Four');
GO
-- Connection 2
SELECT * FROM TranTest2;

Connection 2 is now blocked. Let’s take a look at the output of sys.dm_exec_requests and sys.dm_exec_sql_text.

SELECT session_id, blocking_session_id, text 
FROM sys.dm_exec_requests
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
WHERE session_id > 50

[Screenshot: results showing session 53 blocked by session 51]

First we can exclude session_id 52, since that is the session I’m running the requests query from. That leaves session 53, which is being blocked by session 51. And session 51 isn’t in the list at all. So in order to get the sql_handle we have to go to sys.dm_exec_connections.most_recent_sql_handle.

SELECT session_id, text
FROM sys.dm_exec_connections
CROSS APPLY sys.dm_exec_sql_text(most_recent_sql_handle)
WHERE session_id = 51

[Screenshot: the most recent batch run by session 51]

As a reminder the query being blocked is this:

SELECT * FROM TranTest2

And based on the results from sql_text the blocking query is:

INSERT INTO TranTest VALUES ('Three')
INSERT INTO TranTest VALUES ('Four')

Those two inserts have nothing to do with the blocked query. The real reason for the blocking is this earlier command:

INSERT INTO TranTest2 VALUES ('Two')
GO

That statement didn’t show up because it’s in a previous batch within the same transaction. In order to see more information we would either need a piece of monitoring software (Idera’s Diagnostic Manager or Red Gate’s SQL Monitor, for example) or we would need to look into sys.dm_tran_locks. I have more detail on looking into sys.dm_tran_locks in this post.

Transactions: Creating a single restore point across multiple databases.

This is a disaster recovery trick I’ve found to be useful for developers with batch processes that hit multiple databases. If you have read up much on either the BEGIN TRANSACTION or RESTORE statements you will probably have noticed the MARK option. If you mark a transaction in the log file of a database then you have the option of restoring that database to either just before or exactly at the mark. This does require that the database be in either the full or bulk-logged recovery model, because you have to be able to take transaction log backups. The particular use I’m discussing involves creating marked transactions in multiple databases; then, if you hit a situation that requires you to recover one or more of the databases, you can recover all of them to the exact same point.

So for example, you have databases A, B and C, and a batch process at night that updates data in all three. Right at the beginning of your batch process, or perhaps at various checkpoints in the process, you create a marked transaction in each of the databases (or one or more distributed transactions) that inserts a row into a table in each database (for example a table that records “Batch 123 has started”) and then commits. Then during your batch process your instance crashes. Assuming you didn’t have just one big transaction for the entire process (not something I would typically recommend), you now have several databases that are potentially out of sync with each other, and at the very least you don’t know where your process was at the precise moment of the crash. So how does the mark help? With marked transactions in place you can restore all three databases back to the beginning of the batch, or to one of your checkpoints, using RESTORE with STOPBEFOREMARK.

-- Set up for the test
USE master;
GO
CREATE DATABASE DatabaseA;
CREATE DATABASE DatabaseB;
CREATE DATABASE DatabaseC;
GO

USE DatabaseA;
GO
CREATE TABLE BatchList (Id Int NOT NULL IDENTITY(1,1), BatchDate DateTime);
GO
CREATE TABLE TableA (Col1 varchar(10), Col2 varchar(10));
GO

USE DatabaseB;
GO
CREATE TABLE BatchList (Id Int NOT NULL IDENTITY(1,1), BatchDate DateTime);
GO
CREATE TABLE TableB (Col1 varchar(10), Col2 varchar(10));
GO

USE DatabaseC;
GO
CREATE TABLE BatchList (Id Int NOT NULL IDENTITY(1,1), BatchDate DateTime);
GO
CREATE TABLE TableC (Col1 varchar(10), Col2 varchar(10));
GO
-- Initial backups for all of the databases
BACKUP DATABASE DatabaseA TO DISK = 'C:\Backups\DatabaseA.bak';
BACKUP DATABASE DatabaseB TO DISK = 'C:\Backups\DatabaseB.bak';
BACKUP DATABASE DatabaseC TO DISK = 'C:\Backups\DatabaseC.bak';
GO
-- Create a marked transaction in each database.
-- The transaction name (BatchStart) is the mark name used
-- by RESTORE; the @MarkName string is stored as its description.
DECLARE @MarkName varchar(100);
SET @MarkName = 'Batch started at ' + CAST(getdate() AS varchar(20));

BEGIN TRANSACTION BatchStart
   WITH MARK @MarkName;

INSERT INTO DatabaseA.dbo.BatchList (BatchDate) VALUES (GetDate());
INSERT INTO DatabaseB.dbo.BatchList (BatchDate) VALUES (GetDate());
INSERT INTO DatabaseC.dbo.BatchList (BatchDate) VALUES (GetDate());

COMMIT;
GO

Because all three databases in my example are on a single instance creating a distributed transaction that touched all three of the databases at once was easy. If one or more of the databases were on different instances then I would have had to deal with it slightly differently. In the first “additional reading” link below there is a good example of using a stored procedure to push the transaction to other instances.

Once a marked transaction is committed, an entry for each of the databases it touched is recorded in msdb.dbo.logmarkhistory.

SELECT * FROM msdb.dbo.logmarkhistory;
GO

[Screenshot: the msdb.dbo.logmarkhistory entries for the three databases]

Since we are running a multi-hour batch process and we take regular log backups, we can expect that at least one set of log backups will occur during the process.

-- Log backups
BACKUP LOG DatabaseA TO DISK = 'C:\Backups\DatabaseA_Log1.bak';
BACKUP LOG DatabaseB TO DISK = 'C:\Backups\DatabaseB_Log1.bak';
BACKUP LOG DatabaseC TO DISK = 'C:\Backups\DatabaseC_Log1.bak';
GO
-- Part of the batch process
INSERT INTO DatabaseA.dbo.TableA VALUES ('A','B');
INSERT INTO DatabaseB.dbo.TableB VALUES ('C','D');
INSERT INTO DatabaseA.dbo.TableA VALUES ('E','F');
INSERT INTO DatabaseC.dbo.TableC VALUES ('G','H');
INSERT INTO DatabaseA.dbo.TableA VALUES ('I','J');
INSERT INTO DatabaseB.dbo.TableB VALUES ('K','L');
INSERT INTO DatabaseB.dbo.TableB VALUES ('M','N');
INSERT INTO DatabaseC.dbo.TableC VALUES ('O','P');
GO

About an hour into the batch process, DatabaseB goes suspect. At this point we could do a point-in-time recovery. However, if there are several batch processes running one after another, if we have several checkpoints in the process, or if for whatever other reason we aren’t 100% certain of the time the batch started, then a point-in-time recovery isn’t the best option. Which is of course why we set up the marked transaction in the first place. We now do a RESTORE with STOPBEFOREMARK on each database and we are right back at the beginning of the batch, or checkpoint, ready to try again. Note that STOPBEFOREMARK takes the mark name, which is the transaction name (BatchStart here), not the description string.

USE master;
GO
RESTORE DATABASE DatabaseA FROM DISK = 'C:\Backups\DatabaseA.bak' WITH REPLACE, NORECOVERY;
RESTORE LOG DatabaseA FROM DISK = 'C:\Backups\DatabaseA_Log1.bak' WITH  
	STOPBEFOREMARK = 'BatchStart';
RESTORE DATABASE DatabaseA WITH RECOVERY;

RESTORE DATABASE DatabaseB FROM DISK = 'C:\Backups\DatabaseB.bak' WITH REPLACE, NORECOVERY;
RESTORE LOG DatabaseB FROM DISK = 'C:\Backups\DatabaseB_Log1.bak' WITH 
	STOPBEFOREMARK = 'BatchStart';
RESTORE DATABASE DatabaseB WITH RECOVERY;

RESTORE DATABASE DatabaseC FROM DISK = 'C:\Backups\DatabaseC.bak' WITH REPLACE, NORECOVERY;
RESTORE LOG DatabaseC FROM DISK = 'C:\Backups\DatabaseC_Log1.bak' WITH  
	STOPBEFOREMARK = 'BatchStart';
RESTORE DATABASE DatabaseC WITH RECOVERY;
GO


For additional reading here are some BOL links I found on the same subject.


Transactions are a big subject which I’m going to explore over several posts. I am by no means going to cover the subject exhaustively but if you have any subjects you would like me to cover or think I’ve missed something feel free to comment or email me.

Transactions: What commands aren’t allowed?

Transactions are great tools that every DBA and developer should learn how to use. Unfortunately, not everything can be put inside a transaction. There are a handful of commands that won’t work inside one: CREATE, ALTER and DROP DATABASE, for example. The full list of commands can be found here.

When you try to run one of these commands inside a transaction you will get an error. In the example below the CREATE DATABASE fails and the transaction is rolled back, so the COMMIT that follows reports:

BEGIN TRANSACTION
CREATE DATABASE NoTransaction
COMMIT

Msg 3902, Level 16, State 1, Line 1
The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.

I suspect that anything that affects the file system is not going to work in a transaction. xp_cmdshell, for example, is not on the list and doesn’t give an error, but its results aren’t rolled back either.

BEGIN TRANSACTION
EXEC xp_cmdshell 'dir c:\ > c:\temp\dir.txt'
ROLLBACK
EXEC xp_cmdshell 'dir c:\temp\dir.txt'

You will note that dir.txt exists even though the transaction was rolled back. It’s fairly obvious, but still something that should be kept in mind. According to the link above, UPDATE STATISTICS is another one of those commands that won’t throw an error but still doesn’t get rolled back.

Transactions are a big subject which I’m going to explore over several posts. I am by no means going to cover the subject exhaustively but if you have any subjects you would like me to cover or think I’ve missed something feel free to comment or email me.

Check your SQL Agent history settings before it’s too late!

A little while back I was doing some research into a failed job and ran into a slight problem. The Agent history settings were such that I was only seeing the last 2-3 runs of the job. This job is run “on demand” and I really wanted to see the last 10 runs or so. No help for it though; the history no longer existed unless I was willing to start restoring old copies of msdb. It did, however, get me looking at the history settings for SQL Agent.

[Screenshot: the SQL Server Agent history settings dialog]

The history settings were set to the default of 1000 lines for the log as a whole and 100 lines for the individual jobs. It’s important to remember here that a job that runs 4 steps produces 5 lines of history: one line for the job and one line for each of the steps that runs. My system had more than 30 jobs, which frequently had 10-20 steps each. If you multiply that out you can see how you could get over 1000 lines total fairly quickly. For the future I changed the lines per job to 200 and the max total to 10,000 lines. That’s a larger total than I really need, but it will allow me to add additional jobs without worrying about losing history information.

Here are a couple of general possibilities for you to consider.

There is an option for keeping only information younger than a certain date. This is a great option if all of your jobs have a fairly uniform schedule. However, if you have different schedules, say a daily schedule and a weekly schedule, then this option is going to have problems. If you keep 2 weeks’ worth of history then you have 14 entries for the daily jobs and 2 for the weekly jobs. If you keep 5 weeks of history your weekly jobs have 5 entries but your daily jobs have 35. It gets worse, of course, when you add in monthly, quarterly or yearly jobs.

Frequently there are a wide variety of schedules, and the “Remove agent history” option really isn’t an option in those cases. So how should we set the “Limit size of job history log” settings? Start with three important values: the number of jobs, the minimum number of runs you want to see in history, and the largest number of steps that actually run in your biggest job (this one can generally be fudged down a bit). Then use the following formulas.

Max job history rows per job: Runs * (Max Steps + 1)
Max job history log size: Jobs * Runs * (Max Steps + 1) * 1.5
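
For example, with 30 jobs, a minimum of 10 runs of history per job, and a maximum of 20 steps in the biggest job: rows per job = 10 * (20 + 1) = 210, and log size = 30 * 10 * (20 + 1) * 1.5 = 9,450, which rounds up nicely to the 10,000 line total I used above.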

The “rows per job” setting is now big enough to cover your largest job. You will have extra runs in your smaller jobs, but that isn’t generally a problem. The overall log size is now big enough to cover all of your jobs while leaving plenty of extra space for future ones. Personally I would tend to round up a bit as well: 4331, for example, is an odd number and I would make it 4500 or so. Yes, I realize I’m suggesting keeping more history than you really need. However, each row in the sysjobhistory table is at most right around 4.5KB, so keeping 10k rows is only around 45MB. That’s pretty small, really. The worst thing likely to happen is that it takes a bit longer to bring up the job history viewer.

Of course, when you have some jobs with one or two steps and some with 20 or more, you end up with the same type of discrepancy you had with the daily and weekly jobs, where some jobs have 5 or 6 runs’ worth of data and others have 50 or 60. At that point you just have to pick what sounds best to you.

Either way, check your settings and make sure you have plenty of history now, rather than waiting until you start researching one of your jobs.

DBA Myths: An index on a bit column will never be used.

Not true. (Or I guess I probably wouldn’t be posting about it, would I?)

Probably the first thing I should point out is that just because you can doesn’t mean you should. I can only think of a few very edge cases where an index on just a bit column would be appropriate, even if you add a few included columns. Primarily, if a bit column is going to be part of an index it should be just that: part of a bigger index.

On top of that, I realize that some people really dislike bits. Personally I disagree. I think they are a datatype like any other. I’m not going to waste space on a tinyint or char(1) when a bit will do. Now don’t get me wrong, I’m also not going to use them when they aren’t appropriate.

And on to a quick proof:

-- Set up code
CREATE TABLE BitIndexTest (Id int NOT NULL identity(1,1), myBit bit)

CREATE INDEX ix_BitIndexTest ON BitIndexTest(myBit)
-- Load data
INSERT INTO BitIndexTest VALUES (0)
GO 100

-- Warning, this can take a while to run
INSERT INTO BitIndexTest VALUES (1)
GO 2000000

UPDATE STATISTICS BitIndexTest
-- Run query
SELECT * FROM BitIndexTest WHERE myBit = 0

And here is the execution plan. Notice that there is an index seek using the index on the bit column.

[Screenshot: execution plan showing an index seek on ix_BitIndexTest]

-- Cleanup code
DROP TABLE BitIndexTest

Now, this is a bit of an edge case: a small number of rows with one value and a large number with the other. If the queried value had been a much higher percentage of the table, you would get a table scan instead of an index seek. In fact, if you up the number of 0’s to 1000 it switches over to a table scan. If you add INCLUDE (id) to the index then it will use the index longer, but I couldn’t say for certain how much longer.

Again, this is an edge case. And because it bears repeating: just because you can create an index on just a bit column doesn’t mean you should. If you are using SQL 2008 or higher then a better solution to the same problem would be a filtered index. I still wouldn’t put a filtered index on the 90% side of a 10/90 split, and I probably wouldn’t put one on a 50/50 split for that matter. But if you are going to pick one or the other, a filtered index is the better solution.
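For reference, a minimal sketch of that filtered index on the test table above (created before the cleanup, of course) might look like:

-- Only index the rare 0 rows (SQL 2008 and higher)
CREATE INDEX ix_BitIndexTest_Zero ON BitIndexTest (myBit)
WHERE myBit = 0;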

Getting a query plan

Query plans are an essential tool when doing performance tuning. When looking at a query plan you should be aware that there are two different types: Estimated and Actual query plans (also called execution plans). They have the following differences:

An Estimated Query (Execution) Plan comes from a batch that has not actually been executed and an Actual Query (Execution) Plan comes from a batch that has been executed.

An Estimated Query (Execution) Plan contains only estimated counts (from the statistics) and an Actual Query (Execution) Plan contains both the estimated counts and the actual counts (from the execution itself).

Estimated Execution Plan

There are of course several ways to retrieve an estimated query plan. In SSMS we can use the “Display Estimated Execution Plan” option to display the estimated execution plan of our currently selected query or batch by doing one of the following.

  • Selecting it in the toolbar
  • The menu option Query-> Display Estimated Execution Plan
  • Ctrl+L

Any of these options will immediately cause the estimated plan to be generated and displayed. Some other methods of getting an estimated plan worth looking at include:

SET SHOWPLAN_ALL
SET SHOWPLAN_XML
SET SHOWPLAN_TEXT

Once one of these options is turned on, any T-SQL run on the connection will not actually be executed; instead the query plan is displayed in the corresponding format. An interesting effect of this is that you can only turn one on at a time: when you try to execute a second SET SHOWPLAN command, it just gives you the execution plan for that command. SHOWPLAN_XML causes the XML for the graphical plan to be displayed. SHOWPLAN_TEXT and SHOWPLAN_ALL cause a text version of the plan to be displayed. The text version can be more useful than the XML format when using a text-only interface such as SQLCMD. In fact, some of my co-workers with a lot of DB2 for z/OS experience find the output of SHOWPLAN_TEXT and SHOWPLAN_ALL easier to read than the graphical output.
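For example (note that the SET statement must be the only statement in its batch):

SET SHOWPLAN_XML ON;
GO
SELECT * FROM sys.objects; -- not executed; returns the XML plan instead
GO
SET SHOWPLAN_XML OFF;
GO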

Actual Execution Plan

Of course all of the above options only display the estimated execution plan and frequently we want the actual execution plan. If we are using SSMS we can turn on the “Include Actual Execution Plan” option by doing one of the following:

  • Selecting it in the toolbar
  • The menu option Query-> Include Actual Execution Plan
  • Ctrl+M

Once the “Include Actual Execution Plan” option is turned on, the query will have to be executed in order to get the plan. Now, if we want to see the execution plan for a batch that has already been executed, or one that is currently being executed (estimated only, since the actual numbers aren’t available yet), we can turn to the DMOs. The DMO sys.dm_exec_query_plan takes the plan_handle from one of the following DMOs and returns the XML plan:

sys.dm_exec_cached_plans
sys.dm_exec_query_stats
sys.dm_exec_requests
sys.dm_exec_procedure_stats

This does, of course, require that the plan still be in the cache.

Everything so far displays the query plan for an entire batch. With a large batch (say a particularly large stored procedure) sometimes it’s handy to get the plan for an individual query from the batch. This brings us to one of my favorite DMOs, sys.dm_exec_text_query_plan. This particular DMO has several differences from sys.dm_exec_query_plan (listed in BOL), but the one I want to discuss here is the fact that it has 2 extra parameters. When statement_start_offset and statement_end_offset are passed in along with the plan_handle, it returns just the portion of the plan for that section of the batch. This is particularly helpful if you are using sys.dm_exec_query_stats when performance tuning. The combination will let you look at the individual plans for each of the queries listed, along with a number of helpful performance statistics (CPU time, execution time, reads, writes and CLR time). One important note is that you should convert the query_plan column to XML so that you can click on it in the results pane to open the graphical view of the plan.

SELECT CAST(query_plan AS XML) AS XML_Plan, *
FROM sys.dm_exec_query_stats
CROSS APPLY sys.dm_exec_text_query_plan(plan_handle, statement_start_offset, statement_end_offset)

Not all entries display a plan and you can see the possible reasons in BOL under the remarks section for sys.dm_exec_text_query_plan.

Once you have found the query plan it helps to understand what you are looking at. That is a big study and one I’m only really beginning. I highly recommend getting a copy of the book SQL Server Execution Plans, Second Edition by Grant Fritchey. It even has a free PDF download!

What’s the difference between a temp table and a table variable?

I recently saw an answer to this question on dba.stackexchange.com written by Martin Smith. It was probably one of the most complete answers to this question I have ever seen; in fact it’s probably one of the most complete answers possible. I highly recommend that you read it. In the meantime, here is a summary:

  • Table variables are actually stored in tempdb just like temporary tables.
  • With respect to default collation, user-defined data types and XML collections, table variables act like they are part of the local database; temporary tables act like they are part of tempdb.
  • Temp tables have a much wider scope. If created at the outer scope (@@NESTLEVEL = 0) they can span batches.
  • Rollback will affect temp tables but not table variables (see the sketch below).
  • Table variables do not support TRUNCATE.
  • Column statistics are maintained for temp tables but not for table variables.
  • Indexes: Prior to SQL 2014, table variables only support indexes created implicitly by adding a unique constraint or primary key. From SQL 2014 on, table variables also support non-clustered indexes declared inline. Table variables do not support INCLUDE columns, filtered indexes or partitioning. Temp tables seem to fully support indexing (although the partitioning scheme would have to be created in tempdb, of course).
  • Queries that modify table variables don’t get parallel plans; the same is not true of temp tables.
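
The rollback difference in particular is easy to demonstrate with a quick sketch:

-- Table variables ignore a rollback; temp tables honor it
DECLARE @tv TABLE (id int);
CREATE TABLE #tt (id int);

BEGIN TRANSACTION
	INSERT INTO @tv VALUES (1);
	INSERT INTO #tt VALUES (1);
ROLLBACK

SELECT COUNT(*) AS tv_rows FROM @tv; -- 1: the table variable kept its row
SELECT COUNT(*) AS tt_rows FROM #tt; -- 0: the temp table insert was rolled back
DROP TABLE #tt;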

Impersonating a server level permissions

Warnings up front: this has some serious security implications. The method I’m going to use minimizes that somewhat, but it’s really easy to shoot yourself in the foot here, so be careful!

Impersonation allows you to grant a user the ability to mimic another user and gain access to all of the permissions that the impersonated user has. However, if you have worked with this much, you will know that you can only impersonate database level permissions. Or can you?

To start with, database level impersonation. There are 2 users, UserA and UserB. UserA is dbo and UserB can impersonate UserA, so UserB can do anything dbo can by impersonating UserA. We want to apply the principle of least privilege wherever possible, so we start by only granting UserA those privileges that UserB needs to impersonate. However, we will probably soon have a UserC that needs a subset of those permissions. We don’t want to have to create yet another user to be impersonated, and we don’t want to grant UserC the extra privileges that UserA has. The solution is to create stored procedures that do the work. These stored procedures use the EXECUTE AS clause so that the stored procedure runs as if another user were actually running it. Then we grant EXECUTE on the stored procedure.

An excellent example of this is creating a stored procedure that truncates a table.

-- Create table to truncate
CREATE TABLE TruncateMe (Id int NOT NULL IDENTITY(1,1))
GO
-- User that has permission to truncate the table
CREATE USER Imp_TruncateMe WITHOUT LOGIN
-- Grant user ALTER permission so it can truncate the table
GRANT ALTER ON TruncateMe TO Imp_TruncateMe
GO
-- Create procedure to do the truncate impersonating Imp_TruncateMe
CREATE PROCEDURE dbo.Truncate_TruncateMe
WITH EXECUTE AS 'Imp_TruncateMe'
AS 
TRUNCATE TABLE TruncateMe
GO

Now, in order to give someone permission to truncate our table, we don’t have to grant the IMPERSONATE permission, or even the ALTER TABLE permission; we can just grant EXECUTE on the stored procedure.

That’s great, but we want to impersonate a server level permission. To start with we need the TRUSTWORTHY setting of databases. So what does TRUSTWORTHY do? If the TRUSTWORTHY database setting is set to ON then the instance trusts EVERYTHING in that database. This means that any impersonated user in the database will have the ability to use the permissions of the associated login. This can have some pretty serious security implications. Personally I don’t know every implication of using TRUSTWORTHY, but I think this one is pretty significant on its own.

Here is the scenario: you want to grant a junior DBA the ability to run DBCC HELP. Unfortunately this DBCC command requires membership in the sysadmin server role. You aren’t quite ready to give your junior DBA sysadmin permissions, so you need a workaround.

FYI if this seems contrived, it is. I couldn’t come up with a good server level permission on the fly. This should be good enough to get the point across though.

As a method of minimizing risk, I put my “impersonation” stored procedures in a separate database when I’m setting TRUSTWORTHY ON, and even more particularly when I’m using a permission that requires sysadmin. Why? Because no matter how careful you are, mistakes happen. Application databases tend to have fairly complicated permissions, and eventually someone is granted db_owner and you have forgotten that they can now create stored procedures that impersonate a login with sysadmin permissions. I want a database where the users only have CONNECT to the database and EXECUTE on specific SPs, and those are the only users in the DB. That means the SPs have to be created by a sysadmin, but I’m OK with that restriction.

In general the process runs like this:

  1. We create a new database and set the TRUSTWORTHY flag on
  2. We create a login with the permissions we want
  3. Set the login as the owner of the new database
  4. We create a stored procedure that does the work we want within the new database.
  5. We add the EXECUTE AS OWNER clause to the SP

I ran several tests here, and the only way I could get it to work was by using EXECUTE AS OWNER. EXECUTE AS 'UserName' would not work even with TRUSTWORTHY ON. I did not try anything other than setting up the “OWNER” (the schema of the SP) as dbo. It might work with a different schema, but I suspect not.

-- Create a login to be the owner of our impersonation database
USE master
GO

-- Make the password as obnoxious as possible because 
-- no one ever needs to or should log in as this login.
CREATE LOGIN Imp_DBO WITH PASSWORD = 'VeryStrongPassword', 
	CHECK_EXPIRATION = OFF, CHECK_POLICY = OFF;
GO
-- Create a database to contain our impersonation SP.
-- TRUSTWORTHY can only be set with ALTER DATABASE,
-- not as part of CREATE DATABASE.
CREATE DATABASE ImpTest;
GO
ALTER DATABASE ImpTest SET TRUSTWORTHY ON;
GO
-- Change owner to login created for the purpose
ALTER AUTHORIZATION ON DATABASE::ImpTest TO Imp_DBO;
GO
-- Grant the login the permissions needed
USE master;
GO

-- In this case sysadmin is required but only use it 
-- if it is REQUIRED!
ALTER SERVER ROLE sysadmin ADD MEMBER Imp_DBO;
GO
-- Create stored procedure to mimic DBCC HELP
USE ImpTest
GO

-- SP must be in the dbo schema for this to work.
CREATE PROCEDURE dbo.MyDBCCHelp (@dbcc_param varchar(50))
WITH EXECUTE AS OWNER
AS 
DBCC HELP (@dbcc_param)
GO
-- Create a login to test with
USE master
GO

CREATE LOGIN Imp_User WITH PASSWORD = 'VeryStrongPassword', 
	CHECK_EXPIRATION = OFF, CHECK_POLICY = OFF;
GO
-- Create user in the ImpTest database and 
-- grant it execute to dbo.MyDBCCHelp
USE ImpTest
GO

CREATE USER Imp_User FROM LOGIN Imp_User
GO

GRANT EXECUTE ON dbo.MyDBCCHelp TO Imp_User
GO

Open a connection using the new login and test.

EXEC ImpTest.dbo.MyDBCCHelp 'CHECKDB'

Now our new login can run a sysadmin-only DBCC command, and the only permissions it has are to connect to the new database and to execute a stored procedure. I could even put additional controls into the new stored procedure if I wanted to, logging for example. I should note that you want to use ORIGINAL_LOGIN() if you are logging user information when using impersonation, in order to get the original login name.

Obviously this won’t work with views (or inline table-valued functions), as they have no EXECUTE AS clause.

One last time: when using this method of impersonation, I create a separate database that has the absolute minimal security inside of it. The only users are those that need to execute the stored procedure(s), and they only have EXECUTE on the specific stored procedure(s) that they need. This avoids granting any access I don’t intend. You can create a world of problems if you aren’t careful.

Using SQLFiddle

SQL Fiddle is a free website that you can use to demonstrate and save a query example in any one of 13 different DBMSs (as of this posting), including two different versions of SQL Server (2008 & 2012).

www.sqlfiddle.com

[Screenshot: the SQL Fiddle home page]

First you set up your work environment and “Build Schema”.

[Screenshot: setting up the schema in SQL Fiddle]

Then you can put in your query, run it, and see the output and even the execution plan.

[Screenshot: query results and execution plan in SQL Fiddle]

At this point you can copy the link (for example, http://www.sqlfiddle.com/#!6/dbd09/1 for the fiddle above) and share it with others, who can then see it. You will see a lot of this on dba.stackexchange.com, as it’s a great way to display a problem.
It’s also a good way to share demonstrations. I’m not sure I would want to use it to demo a stress test, though ;).

Characters you should never ever use in an object or schema name!

You can put pretty much any character you want into an object or schema name by enclosing the name in []'s. This does not, however, mean that you should. There are two specific cases I’ve seen that are in general a bad idea.

I’ve seen names that actually have []s around them.

CREATE SCHEMA [[bracket]]]
	CREATE TABLE [[bracket]]] (id int)
GO
SELECT * FROM [[bracket]]].[[bracket]]]
GO

And others that have periods.

CREATE SCHEMA [do.t]
	CREATE TABLE [do.t].[do.t] (id int)
GO
SELECT * FROM [do.t].[do.t]
GO

Why is this a problem? Well, you can see from above that you have to do extra work even to do a select. Also, there are a number of commands that just won’t work, sp_help for example.

EXEC sp_help [[bracket]].[bracket]]]
GO
EXEC sp_help [do.t.do.t]
GO

And if nothing else they are really, really confusing! So please do not put periods or brackets in object or schema names. Do it for me if not for yourself. I mean, there is always that outside chance that I’ll have to work on your systems one day!

:CONNECT in SSMS

Those who are familiar with SQLCMD will recognize this command. It is used to connect to an instance from within a SQLCMD script. What they may not realize is that this command (and other SQLCMD commands) can be used in a query window by turning on SQLCMD mode. There is a great overview of using SQLCMD commands in SSMS here.

I am going to highlight an interesting aspect of the CONNECT command in SSMS.

First, a basic example. Note: all of the connections start on the instance (local)\SQL2012.

SELECT @@SERVERNAME
GO
:CONNECT (local)\SQL2008R2
SELECT @@SERVERNAME
GO
:CONNECT (local)\SQL2012
SELECT @@SERVERNAME
GO

With an output of

[Screenshot: results showing SQL2012, SQL2008R2, then SQL2012]

So using :CONNECT you can change connections inside a script. I’ll frequently use this technique to get a piece of information from a number of different servers at once.

For example, how many databases do I have on each server?

:CONNECT (local)\SQL2012
SELECT @@SERVERNAME, COUNT(1) FROM sys.databases
GO
:CONNECT (local)\SQL2008R2
SELECT @@SERVERNAME, COUNT(1) FROM sys.databases
GO

[Screenshot: the database counts returned from each instance]

There are a couple of odd aspects to the CONNECT command. The connection change happens at the beginning of the batch regardless of where in the batch it is, and if you have more than one :CONNECT in a single batch, only the last one counts. I did have a couple of runs where I got different results, but I couldn’t reproduce them and they only happened once or twice, so it may have been a PEBCAK issue. Again, remember that my connections all initially start on (local)\SQL2012.

SELECT @@SERVERNAME
:CONNECT (local)\SQL2008R2

[Screenshot: the SELECT ran against SQL2008R2 even though :CONNECT came after it]

SELECT @@SERVERNAME
:CONNECT (local)\SQL2008R2
SELECT @@SERVERNAME
:CONNECT (local)\SQL2012
SELECT @@SERVERNAME
GO
:CONNECT (local)\SQL2012
SELECT @@SERVERNAME
:CONNECT (local)\SQL2008R2
SELECT @@SERVERNAME
GO

[Screenshot: results demonstrating that only the last :CONNECT in each batch takes effect]
