Channel: microsoft sql server – SQL Studies

I don’t want to grant permission to all the tables in the database at once.


A couple of weeks ago I did a post on granting or denying permissions to all the tables within a database. However, sometimes you don’t want to grant permissions to the whole database at once. This is still pretty easy, but there are no built-in roles to do it. There are two options: granting permissions on the table(s) themselves or granting permissions on one or more schemas.

Granting permission to a schema

This is similar to granting permissions to the database. Every table/view owned by the schema is affected, and if you only use dbo then it’s exactly the same as granting permission to the database. If, however, you use multiple schemas you can grant permissions at a somewhat more granular level; for example, granting the accounting department read (SELECT) / write (INSERT, UPDATE, DELETE) access to the Account schema and the HR department read/write access to the Employee schema. The other benefit is that as a “global” grant (affecting everything under the schema), every time you add a table or view the permissions are extended to the new object. I particularly like using this with the EXECUTE permission. Once I have granted EXECUTE on a schema, that group/role can execute any new stored procedures or functions as well.
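Here is a minimal sketch of those schema-level grants (the role names are hypothetical; the Account and Employee schemas are from the example above):

-- Read/write for the accounting department on the Account schema
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::Account TO AccountingRole;
-- Read/write for HR on the Employee schema
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::Employee TO HRRole;
-- EXECUTE at the schema level covers current AND future procedures/functions
GRANT EXECUTE ON SCHEMA::dbo TO AppRole;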

Granting permission to an individual object

You can grant any appropriate permission to an individual object. For example, SELECT on a table or view and EXECUTE on a stored procedure or function. Generally this is the most granular permission I bother granting. It’s possible to grant permissions at the column level but I’ve never found a reason to do it. Typically I grant permissions at the object level when I have a particular table that needs special handling; a table containing employee payroll information for example. You can move these into their own database or schema but sometimes that isn’t the most efficient option.

If you decide to use this method I HIGHLY recommend creating a role and granting the permissions to the role, then granting that role to the individual or group who needs it. This has two major advantages. If you name the role something obvious (EmployeePayroll for example) then it’s fairly obvious what needs to be granted for someone to be able to perform a specific job. Also, and even more important in my opinion, if you need to grant permissions to more than one or two objects it’s far easier to add them to a role than to remember what permissions needed to be granted. For example, say you determine that a job requires SELECT permission on 5 tables, INSERT and UPDATE on 3, and DELETE on 1, and you grant those permissions to an individual. Six months later he decides to leave and you need to grant permissions to his replacement. How hard would it be to forget one of those grants? How easy would it have been to remove the old person from a role and add the new one? Of course I’m ignoring the fact that generally you want to use AD groups, which avoids the problem altogether, but I still feel it’s a valid point.
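A minimal sketch of that pattern, using the EmployeePayroll example (the table and user names are hypothetical):

-- Create the role and grant it exactly what the job requires.
CREATE ROLE EmployeePayroll;
GRANT SELECT, INSERT, UPDATE ON dbo.Payroll TO EmployeePayroll;
-- Swapping people in and out of the job is now a one-liner each way.
EXEC sp_addrolemember 'EmployeePayroll', 'OldEmployee';
EXEC sp_droprolemember 'EmployeePayroll', 'OldEmployee';
EXEC sp_addrolemember 'EmployeePayroll', 'NewEmployee';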

Remember, in general, that granting permissions at the right level (database, schema, object) is important. This ties directly into the rule of least privilege and balancing it against convenience and ease of use. It’s very easy to grant read/write to the entire database and sometimes it’s even appropriate. Keep in mind however, that it isn’t always. Sometimes you need to break your permissions down to a more granular level.


Filed under: Microsoft SQL Server, Security, SQLServerPedia Syndication Tagged: microsoft sql server, object permissions, schema permissions, security

Aliasing a SQL Server: When it works, when it doesn’t and when it may be your problem.


Creating an alias for a SQL Server is fairly easy and there are several ways to do it. Configuration Manager is my personal favorite. Open up configuration manager and select the SQL Native Client xx Configuration. Under that you will find Aliases.

Alias1

From here you can add, update or delete aliases.

So at this point some of the more polite members of the audience are probably thinking “Unfortunately I have no idea what you are talking about. Would you please explain what an alias is?” And I appreciate that from both of you. The rest are probably thinking something along the lines of “You idiot, if you’re going to talk about aliases it would be nice if you explained what they are first!”

Per BOL:

An alias is an alternate name that can be used to make a connection. The alias encapsulates the required elements of a connection string, and exposes them with a name chosen by the user.

In other words once you have created an alias you can connect to the aliased machine using the new name in the connection string. For example I create an alias GEORGE and point it to my (local)\sql2012 instance. I can now connect to my instance using either its correct name (local)\sql2012 or GEORGE.
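For example, assuming the GEORGE alias above exists on this client, both of these commands connect to the same instance (a sketch using sqlcmd):

sqlcmd -S "(local)\sql2012" -Q "SELECT @@SERVERNAME;"
sqlcmd -S GEORGE -Q "SELECT @@SERVERNAME;"

Both return the real server name; the alias only changes how the client finds the server.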

When it’s working

Let’s say you have a series of servers: LARRY, MOE and CURLY. You decide to do a side by side upgrade of CURLY. You are going to move all of the databases on CURLY (a 2008 server) to SHEMP (a 2012 server) and then shut CURLY down. There is a bit of a problem however. The developers have told you in no uncertain terms they do not have time to find all the dozens of places that CURLY was hard coded into the various applications. We can solve this easily enough by creating an alias on the application server pointing the alias CURLY to the new server SHEMP. Now when the connection strings try to go to CURLY the alias says to go to SHEMP and everything continues to work.

When it doesn’t

Over the weekend you’ve moved the databases and the applications were all tested. Everything went smoothly so CURLY was permanently shut down. Monday morning rolls around and the developers start calling. They can’t connect to the database. Want to guess why not? An alias only works locally; each individual client (the machine you want to connect from) must have its own alias created.

When it might be your problem

Over time aliases get created (sometimes by accident, believe it or not) and get forgotten. Over the years I have seen a number of situations where the answer to “Why can’t I connect to server XYZ?” is an alias that the user either didn’t know about or had forgotten. That’s why this has become one of those Start with Stupid steps that I take when someone can’t connect but everything else looks OK.


Filed under: Microsoft SQL Server, SQLServerPedia Syndication Tagged: alias, microsoft sql server

sp_SrvPermissions & sp_DBPermissions V5.0


These are a couple of stored procedures I wrote to help me with security research. Each of the stored procedures returns three data sets.

  1. A list of principals and some basic properties about them.
  2. Role membership
  3. Object/Database/Server level permissions

 
Each of the datasets has a set of do/undo scripts as well as various useful columns. For example the second data set contains information about which logins/users belong to which roles, and scripts to either add or remove the login/user from that role.

The stored procedures also have a number of parameters for restricting the result sets. For example principal name, role name, principal type, object name etc.

These sp’s can be run/stored anywhere and work just fine but if you run them in master then you can more easily call them from any database on the instance.
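A quick usage sketch (the parameter names come from the change list below; the login name is hypothetical):

-- All permissions for one login across every database on the instance
EXEC sp_DBPermissions @DBName = 'All', @LoginName = 'Domain\SomeLogin';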

A few examples of times I’ve found them particularly handy:

  • I need to know every database a user has access to and what access they have.
  • I need to know all permissions for a given login across all databases.
  • I need to copy a login from one server to another (with SID and password).
  • I need to know everyone who has permissions to a specific object in the database.
  • I need to know everyone who is a member of sysadmin.

 
Standard disclaimers: This code is as-is. You should be careful running code you haven’t reviewed on your systems. Feel free to use it and place it on your systems. Please do not post my code without giving proper credit, preferably with a link back to the original page. And of course if you happen to notice a problem, or have a suggestion, please post it here or email me and I’ll be glad to fix/add as appropriate.

Latest update: Below are the latest additions.

sp_SrvPermissions

  • 04/27/2014 – Add @DBName parameter.

sp_DBPermissions

  • 4/29/2014 – Fix: Removed extra print statements
  • 4/29/2014 – Fix: Added SET NOCOUNT ON
  • 4/29/2014 – Added a USE statement to the scripts when using the @DBName = ‘All’ option
  • 5/01/2014 – Added @Permission parameter
  • 5/14/2014 – Added additional permissions based on information from Kendal Van Dyke’s post

    http://www.kendalvandyke.com/2014/02/using-sysobjects-when-scripting.html

  • 6/02/2014 – Added @LoginName parameter

Filed under: Dynamic SQL, Microsoft SQL Server, Security, SQLServerPedia Syndication, System Functions and Stored Procedures, T-SQL Tagged: code language, database permissions, dynamic sql, language sql, microsoft sql server, security, server permissions

Dealing with a long string


Every now and again you have to deal with a really long string (more than 8000 characters). Dynamic SQL is the most frequent place I see this, but I do see it elsewhere as well, and it’s very easy to make a simple mistake. This is caused by the fact that a string literal is treated as a varchar, at least based on all of the evidence I can find. It would probably take a real internals expert to say for sure.

-- This is the best evidence I could find of the 
-- data type of a string
SELECT SQL_VARIANT_PROPERTY('A string','BaseType');
--Returns: varchar

Note that it is varchar and not varchar(max). Varchar and varchar(max) have very different size limits. A varchar(max) has a limit of 2GB and a varchar has a limit of 8000 characters. So what is this mistake I’m talking about? Watch.

DECLARE @str varchar(max);

SET @str = REPLICATE('1',950) +
	REPLICATE('2',950) +
	REPLICATE('3',950) +
	REPLICATE('4',950) +
	REPLICATE('5',950) +
	REPLICATE('6',950) +
	REPLICATE('7',950) +
	REPLICATE('8',950) +
	REPLICATE('9',950) +
	REPLICATE('0',950); 

SELECT LEN(@str);
GO
-- Output 8000

And of course 10 * 950 characters is 9500, not 8000; the concatenation was done as plain varchar and silently truncated at 8000 characters before being assigned to the varchar(max) variable. This is a rather contrived example but again, if you are dealing with long pieces of dynamic SQL it can and does come up occasionally. So what’s the fix? Add smaller strings multiple times like this.

DECLARE @str varchar(max);

SET @str = REPLICATE('1',950) +
	REPLICATE('2',950) +
	REPLICATE('3',950) +
	REPLICATE('4',950) +
	REPLICATE('5',950); 
SET @str = @str + 
	REPLICATE('6',950) +
	REPLICATE('7',950) +
	REPLICATE('8',950) +
	REPLICATE('9',950) +
	REPLICATE('0',950); 

SELECT LEN(@str);
-- Output 9500
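Another option (a sketch of the same idea) is to force the very first operand to varchar(max); everything concatenated after it is then evaluated as varchar(max) and the 8000 character cap never kicks in.

DECLARE @str varchar(max);

-- The CAST makes the first operand varchar(max), so each
-- subsequent + is evaluated as varchar(max) as well.
SET @str = CAST('' AS varchar(max)) +
	REPLICATE('1',950) +
	REPLICATE('2',950) +
	REPLICATE('3',950) +
	REPLICATE('4',950) +
	REPLICATE('5',950) +
	REPLICATE('6',950) +
	REPLICATE('7',950) +
	REPLICATE('8',950) +
	REPLICATE('9',950) +
	REPLICATE('0',950); 

SELECT LEN(@str);
-- Output 9500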

Personally I try to break up strings long before I run into issues; it’s safer that way. I still mess up occasionally though, and when I get a weird error that looks like my string has been truncated this is one of the first things I check for.


Filed under: Microsoft SQL Server, SQLServerPedia Syndication, T-SQL Tagged: code language, language sql, microsoft sql server, strings, T-SQL

Make BOL your friend


One of the most powerful tools we have as users of SQL Server is Books Online (BOL).  Whether you work mainly as an admin, a developer or in BI, Microsoft has provided a HUGE amount of information for you to use.  But BOL is by no means your only resource especially when problem solving. When tackling problems, studying for your next certification exam, or reading for amusement (yes I’m weird) the community at large is a huge resource. This resource comes in the form of blogs, articles, forums and twitter.  As helpful as the community is, even the best of us will resort to searching BOL for answers – at least occasionally.

So why do I say make BOL your friend? Well the obvious answer is because it’s a helpful tool. The less obvious answer is that it can frequently be a royal pain in the neck to find the information you are looking for. This isn’t because it’s poorly organized or poorly indexed. It’s because BOL is enormous. Frequently, for any given subject, there will be multiple entries. Something complex, such as the SELECT statement, has BOL entries for each of the major clauses and a page of examples. There are entries letting you know what’s new, what the breaking changes are and what behaviors have changed in any given version. There are whole groups of articles on CLR, XML, XQuery and more. Between the contents tree and searching, not to mention the index on the local copy of BOL, the only way to make full use of this amazing tool is to practice.

Now to be fair, BOL and I are pretty good work acquaintances.  We go out for lunch every now and again but have never been to each other’s houses.  I’ve known people however that could find some of the most amazing information.  These people not only have dinner at BOL’s house on a regular basis but are godparents to their kids.  I’m not saying you need know BOL that well but the more you practice, the easier it will be to find the information you need.


Filed under: Documentation, Microsoft SQL Server, SQLServerPedia Syndication Tagged: BOL, Documentation, microsoft sql server

db_ddladmin and the SSMS table designer


If you want to grant a user the ability to create/alter/delete any table, SP, function etc in a database you have several options. For example:

  • You can grant all of the CREATE permissions either to the database itself or to all of the schemas.
  • You can add the user to the db_ddladmin role
  • You can add the user to the db_owner role

 
I’m sure if you tried you could come up with several other options but these are the ones that come immediately to mind. The first one sounds really complicated to me. The third one is way more power than you want to grant unless it’s absolutely needed. That leaves the second, which in my opinion is really the way you want to go. In fact db_ddladmin was specifically designed for this type of security role.
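Adding someone to the role is a one-liner (the user name here is hypothetical):

EXEC sp_addrolemember 'db_ddladmin', 'SomeUser';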

Interestingly, if you add someone to the db_ddladmin role and they try to go into the table designer in SSMS they are going to see the following warning:

DDLAdminSSMSOE1

No big deal really. This is just a warning and doesn’t actually say you won’t be able to make changes, just that you might not have sufficient permissions.

However, even more interestingly if you are using SSMS 2008R2 then you are also going to see the following error:

DDLAdminSSMSOE2

At this point you really do have a problem. There is a bug in the table designer that will not allow a user that is not at least a member of the db_owner role to modify tables. I’m currently on 2008R2 SP2 and I have no idea if this will be fixed in SP3 but I’m not holding my breath. There is a closed connect item on this that indicated it would be fixed in a future version (and it is fixed in SQL 2012) but didn’t mention a fix being released in a service pack. Also after some superficial testing I believe this error will occur at ANY level of permissions lower than membership in the db_owner role.

If you happen to be the type that prefers to use code over the GUI like I am, you may never notice this. Or for that matter if you are not living in the past like I am you won’t notice it either. Unfortunately it was a real shock to me when one of my users ran into it. So be warned!


Filed under: Microsoft SQL Server, Problem Resolution, Security, SQLServerPedia Syndication, SSMS Tagged: database permissions, microsoft sql server, problem resolution, security, SSMS

What does it mean that a value is NULL?


Let’s start by assuming that ANSI_NULLS is ON. If you aren’t sure what ANSI_NULLS is exactly, don’t worry, I’ll be going over that in some detail in a future post. However, Microsoft tells us that ANSI_NULLS will always be ON in a future version. So we are not going to worry about that here.

So what does it mean to say that a value is NULL? Basically it means that the value is unknown. As an example, consider a binary variable. Its value can either be 1 or 0; but it can also be unknown (i.e. NULL). Pretty simple right? Well… on the surface maybe, but when you start thinking through the implications, it gets more and more complicated. For example:

DECLARE @Bin Binary
DECLARE @Bin2 Binary

SET @Bin = 1
SET @Bin2 = NULL

Here are the possible comparisons

  • (@Bin = 1) returns True
  • (@Bin = 0) returns False
  • (@Bin = NULL) returns NULL
  • (@Bin2 = 1) returns NULL
  • (@Bin2 = 0) returns NULL
  • (@Bin2 = NULL) returns NULL

 
Any value when compared to a NULL is NULL. Why? Well it’s kind of like Schrödinger’s Cat. Until a value is placed in it you don’t know what the value is. In fact, I like to mentally replace NULL with “unknown”:

  • (@Bin = unknown) returns unknown
  • (@Bin2 = 1) returns unknown
  • (@Bin2 = 0) returns unknown
  • (@Bin2 = unknown) returns unknown

 
Obviously, if a value is unknown, the result of any comparison is unknown. In other words NULL = NULL returns NULL.

Next example:

CREATE TABLE #NullTest (MyVal binary)
INSERT INTO #NullTest VALUES (1),(1),(0),(0),(NULL) 

SELECT COUNT(1) FROM #NullTest
WHERE MyVal = 1 -- Returns 2

SELECT COUNT(1) FROM #NullTest
WHERE MyVal = 0 -- Returns 2

SELECT COUNT(1) FROM #NullTest
WHERE MyVal = NULL -- Returns 0

So we get two rows where MyVal = 1, two rows where MyVal = 0 and no rows where MyVal = NULL for a grand total of four. But wait, we inserted five rows!! As we go farther down the rabbit hole we start to realize that aggregate queries can be heavily affected by NULLs, and the implications only multiply the more you think about it. So how do we work around this? Well there are a couple of options.

ISNULL(MyVal,1) will return a 1 if MyVal is NULL and return the value of MyVal otherwise. This leads to queries that look like this:

SELECT COUNT(1) FROM #NullTest
WHERE ISNULL(MyVal,1) = 1

I don’t recommend this for several reasons but the fact that it isn’t SARGable is probably sufficient.

Next we have IS NULL, which is used like this:

SELECT COUNT(1) FROM #NullTest
WHERE MyVal = 1
  OR MyVal IS NULL

This is generally the method I would use although you do have to watch out with your OR operator. I see a lot of logical mistakes when people use OR carelessly. Here is a simple example:

DROP TABLE #NullTest; -- drop the version created earlier
GO
CREATE TABLE #NullTest (MyVal binary, DateVal datetime)
INSERT INTO #NullTest VALUES 
			(1, '1/1/1900'),
			(1, '1/1/1901'),
			(0, NULL),
			(0, '1/1/2001'),
			(NULL, '1/1/2000') 

-- What you meant
SELECT *
FROM #NullTest
WHERE 
	(MyVal = 1
	  OR MyVal IS NULL)
 AND 
	(DateVal > '1/1/1950'
	  OR DateVal IS NULL)

-- What you entered
SELECT *
FROM #NullTest
WHERE MyVal = 1
  OR MyVal IS NULL
 AND DateVal > '1/1/1950'
  OR DateVal IS NULL

-- How the computer saw it.
SELECT *
FROM #NullTest
WHERE 
	MyVal = 1
  OR 
	(MyVal IS NULL
	AND DateVal > '1/1/1950')
  OR 
	DateVal IS NULL

I’ll let you run it yourself if you want to see the different results, but just looking at the code you should be able to tell how easy it is to mess yourself up.

There are other methods of course, and I’m sure more will show up over time. The important thing to remember is that if you are going to allow NULLs in your columns then you need to understand what a NULL is and plan for it accordingly. If you don’t Thomas LaRock is liable to come after you (not as bad as Grant Fritchey coming after you if you aren’t taking your backups but still.)


Filed under: Microsoft SQL Server, SQLServerPedia Syndication, T-SQL Tagged: code language, Grant Fritchey, language sql, microsoft sql server, NULL, Thomas LaRock

Add them to ALL the roles!


I seem to get a lot of permissions questions these days and one of the more frequent ones goes along these lines: “I still don’t have the right permissions on database xyq.” So of course the first thing I do is use my handy dandy sp_dbpermissions stored procedure to check out all of their current permissions. Every now and again I’ll see a specific pattern of permissions that always leaves me stunned. All I can assume is that a user requested “Add me to all of the roles” and a DBA not paying enough attention got click happy and did just that.

The list of standard database roles looks like this:

  • db_accessadmin
  • db_backupoperator
  • db_datareader
  • db_datawriter
  • db_ddladmin
  • db_denydatareader
  • db_denydatawriter
  • db_owner
  • db_securityadmin
  • public

 
So does anyone see a problem with adding a user to all of these roles at once? I mean other than the fact that if someone is a member of db_owner they really don’t need to be a member of any other role. Did you figure it out? Remember that DENY overrides GRANT. A member of db_denydatareader and db_denydatawriter is not going to be able to read or write from the database even if they are a member of the db_owner role. Now this is not the case if you are the actual database owner (or a sysadmin) but those are exceptions to the rule. So the moral of the story is to only add users to the roles they actually need. Not just blindly add them to ALL the roles.
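If you want to see it for yourself, here is a minimal sketch (the user name is hypothetical):

-- db_owner on its own can do anything in the database...
EXEC sp_addrolemember 'db_owner', 'SomeUser';
-- ...but add the deny roles and every read and write is now blocked.
EXEC sp_addrolemember 'db_denydatareader', 'SomeUser';
EXEC sp_addrolemember 'db_denydatawriter', 'SomeUser';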


Filed under: Microsoft SQL Server, Security, SQLServerPedia Syndication Tagged: microsoft sql server, role, security

Why am I getting a primary/unique key violation?


This may seem like a question with a simple answer but there is a bit more to it than you might think. In fact I know of 3 possible reasons (and there may be more I don’t know) for seeing a primary key error. Technically they occur for any unique key, of which the primary key is one of possibly many, and they all boil down to trying to end up with two rows in the table that “match” based on the unique key.

For my examples I’m going to use the AdventureWorks2012.Person.Address table and I’ll actually be hitting a unique key not the primary key. This is because I’m lazy and it was the first one I found with a unique key I could work with easily. You can take my word for it that a primary key will react exactly the same way.

Here are the top 3 rows of Person.Address with just the columns we care about.

SELECT TOP 3 AddressLine1, AddressLine2, 
	City, StateProvinceID, PostalCode
FROM Person.Address
ORDER BY AddressID

PK_Error


Inserting a row that already exists in the table.

This is by far the most common cause of a unique/primary key error that I see. A row exists in the table and you try to insert another one with the same key data.

INSERT INTO Person.Address (AddressLine1, AddressLine2, 
		City, StateProvinceID, PostalCode)
	VALUES ('1970 Napa Ct.',NULL, 'Bothell',79,'98011')

With a result of

Msg 2601, Level 14, State 1, Line 1
Cannot insert duplicate key row in object ‘Person.Address’ with unique index ‘IX_Address_AddressLine1_AddressLine2_City_StateProvinceID_PostalCode’. The duplicate key value is (1970 Napa Ct., , Bothell, 79, 98011).
The statement has been terminated.

Note that the name of the unique index that is violated is listed along with the duplicate key data. If you are inserting multiple rows causing a unique/primary key violation then only one key value set is listed. In my testing it was always the first duplicate found but I couldn’t guarantee it.


Updating a row that causes a duplicate.

This one is also fairly common. In this you are updating a row that causes a duplicate to occur.

UPDATE Person.Address 
	SET AddressLine1 = '1970 Napa Ct.' 
WHERE AddressID = 2

With the same result.

Msg 2601, Level 14, State 1, Line 1
Cannot insert duplicate key row in object ‘Person.Address’ with unique index ‘IX_Address_AddressLine1_AddressLine2_City_StateProvinceID_PostalCode’. The duplicate key value is (1970 Napa Ct., , Bothell, 79, 98011).
The statement has been terminated.


Inserting two (or more) identical rows

This one really throws people. It’s by far the least common cause of the error and the first one I think of when I hear “I checked but there are no duplicates.” or “The table is empty how can I be getting a primary key error?”

IF EXISTS (SELECT 1 FROM sys.tables WHERE name = 'DupTest')
	DROP TABLE DupTest;
GO
SELECT TOP 0 AddressLine1, AddressLine2, 
	City, StateProvinceID, PostalCode INTO DupTest 
FROM Person.Address;

CREATE UNIQUE INDEX ixu_DupTest ON DupTest(AddressLine1, 
	AddressLine2, City, StateProvinceID, PostalCode);

INSERT INTO DupTest
SELECT AddressLine1, AddressLine2, City, 
	StateProvinceID, PostalCode 
FROM Person.Address
UNION ALL
SELECT AddressLine1, AddressLine2, City, 
	StateProvinceID, PostalCode 
FROM Person.Address;
GO

I am creating a brand new table so we can be certain I’m not inserting a row that already exists in the table. I’m not updating anything. So why am I getting an error? If you look at the insert statement you will see that I’m inserting every row from Person.Address twice. If SQL actually allowed this to run I would end up with duplicates in the table. The simple method I use to check for this particular problem is to wrap the problem query in an “outer” query to check for the duplicate. Like so:

SELECT -- List of columns from the unique/primary key
	AddressLine1, AddressLine2, City, 
	StateProvinceID, PostalCode 
FROM (
	-- Query we need to find the duplicates from
	SELECT AddressLine1, AddressLine2, City, 
		StateProvinceID, PostalCode 
	FROM Person.Address
	UNION ALL
	SELECT AddressLine1, AddressLine2, City, 
		StateProvinceID, PostalCode 
	FROM Person.Address
) x
GROUP BY -- List of columns from the unique/primary key
	AddressLine1, AddressLine2, City, 
	StateProvinceID, PostalCode 
HAVING COUNT(1) > 1

Any rows that turn up are duplicates in the query and will cause a unique/primary key error.


The important point to remember here is that a unique/primary key error is not always caused by inserting a row with unique column data that already exists. Once you have checked the destination table it’s time to check your source data as well.


Filed under: Microsoft SQL Server, Problem Resolution, SQLServerPedia Syndication, T-SQL Tagged: code language, language sql, microsoft sql server, problem resolution, sql statements, T-SQL

The clustered index columns are in all of the non clustered indexes.


Did you know that whatever columns you pick as your clustered index will be included in any non clustered indexes on the same table? But don’t take my word for it. Let’s take a look!

First things first I’m going to use some AdventureWorks2012 tables to make a test table.

-- Create a convenient composite table 
SELECT Pers.BusinessEntityID, Pers.Title, Pers.FirstName, Pers.MiddleName, 
	Pers.LastName,Addr.AddressLine1, Addr.AddressLine2, Addr.City, 
	Addr.StateProvinceID, Addr.PostalCode, Addr.SpatialLocation
	INTO People1
FROM AdventureWorks2012.Person.Address Addr
JOIN AdventureWorks2012.Person.BusinessEntityAddress BEA
	ON Addr.AddressID = BEA.AddressID
JOIN AdventureWorks2012.Person.Person Pers
	ON BEA.BusinessEntityID = Pers.BusinessEntityID
GO
-- Add indexes including a non-unique (important later) clustered index
CREATE CLUSTERED INDEX ix1_People1 ON People1(AddressLine1, AddressLine2, City, StateProvinceID, PostalCode)
CREATE INDEX ix2_People1 ON People1(BusinessEntityID)
CREATE INDEX ix3_People1 ON People1(LastName, FirstName, MiddleName)
GO

The clustered index (CI) is on the 5 address columns and there are non-clustered indexes (NCI) on the BusinessEntityID and the 3 name columns. We can look at the structure of a page from one of the indexes by using sys.dm_db_database_page_allocations and DBCC PAGE (links are in the code below).

-- Get the page id for NCI ix2_People1
-- Info on sys.dm_db_database_page_allocations: 
--		http://www.jasonstrate.com/2013/04/a-replacement-for-dbcc-ind-in-sql-server-2012/
SELECT indexes.name, indexes.index_id, indexes.type_desc, 
	pages.allocated_page_file_id, pages.allocated_page_page_id, pages.is_iam_page
FROM sys.indexes
JOIN sys.dm_db_database_page_allocations(DB_ID(), OBJECT_ID('People1'), NULL, NULL, NULL) pages
	ON indexes.object_id = pages.object_id
	AND indexes.index_id = pages.index_id
WHERE indexes.name = 'ix2_People1'

CI_NCI_1

In order to use DBCC PAGE I need the file id and the page id. I’m using page 288 instead of 638 because page 638 is the IAM page. All of my pages are in file 1 (I only have the one data file).

-- View index page
-- Info on DBCC PAGE:
--		http://www.sqlskills.com/blogs/paul/inside-the-storage-engine-using-dbcc-page-and-dbcc-ind-to-find-out-if-page-splits-ever-roll-back/
-- Turn traceflag 3604 on so we can see the results
DBCC TRACEON (3604);
-- Take a look at the contents of one of the index pages
DECLARE @DBID int
SET @DBID = DB_ID()
DBCC PAGE(@DBID, 1, 288, 3)

CI_NCI_2

You can see the first 9 (of 217) rows in the page in the image above. You can see that while the BusinessEntityID is the only column I indexed on (and the only one that will show up if you look at sp_helpindex or anything similar) there are actually 6 additional columns in the index. The 5 columns from the CI and the UNIQUIFIER column. In case you are interested the UNIQUIFIER is added any time you have a CI that is non-unique (which is why I deliberately made this one non-unique).

I’m going to stop here and point out that I created a clustered index, not a primary key. A primary key is by default a unique clustered index, but it doesn’t have to be clustered; it does have to be unique. Because it has to be unique, if you use a primary key for your CI then you won’t see the uniquifier column when you look at the page information.
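For example, this is perfectly legal (a sketch; the table name is hypothetical):

-- The primary key is enforced by a nonclustered index, leaving the
-- clustered index free for a different set of columns.
CREATE TABLE People3 (
	BusinessEntityID int NOT NULL 
		CONSTRAINT pk_People3 PRIMARY KEY NONCLUSTERED,
	LastName nvarchar(50) NULL);
CREATE CLUSTERED INDEX ix1_People3 ON People3(LastName);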

I deliberately created a long clustered index on columns that probably shouldn’t be used as the clustered index to demonstrate a couple of points. First your CI choice is going to affect the size of your indexes.

To start I’m going to create another table (exactly the same) with a different, smaller CI.

-- Create a convenient composite table 
SELECT Pers.BusinessEntityID, Pers.Title, Pers.FirstName, Pers.MiddleName, 
	Pers.LastName,Addr.AddressLine1, Addr.AddressLine2, Addr.City, 
	Addr.StateProvinceID, Addr.PostalCode, Addr.SpatialLocation
	INTO People2
FROM AdventureWorks2012.Person.Address Addr
JOIN AdventureWorks2012.Person.BusinessEntityAddress BEA
	ON Addr.AddressID = BEA.AddressID
JOIN AdventureWorks2012.Person.Person Pers
	ON BEA.BusinessEntityID = Pers.BusinessEntityID
GO
-- Add indexes including a non-unique (important later) clustered index
CREATE CLUSTERED INDEX ix1_People2 ON People2(BusinessEntityID)
CREATE INDEX ix2_People2 ON People2(AddressLine1, AddressLine2, City, StateProvinceID, PostalCode)
CREATE INDEX ix3_People2 ON People2(LastName, FirstName, MiddleName)
GO

I’m now going to use a modified version of a query I got off of Basit’s(b/t) blog.

Note that ix1 in both cases should be about the same. The CI IS the table. It contains all of the data for the table so there shouldn’t be any significant change in size. The second index (ix2) is also going to be about the same size. I just swapped the two sets of columns so both ix2s are going to contain BusinessEntityID, AddressLine1, AddressLine2, City, StateProvinceID, and PostalCode. The third index (ix3) on the other hand should show a fairly significant difference. The table People1 will have the columns LastName, FirstName, and MiddleName & AddressLine1, AddressLine2, City, StateProvinceID, and PostalCode while the table People2 will have columns LastName, FirstName, and MiddleName & BusinessEntityID.

SELECT OBJECT_NAME(i.object_id) AS TableName, i.[name] AS IndexName
    ,SUM(s.[used_page_count]) * 8 AS IndexSizeKB
FROM sys.dm_db_partition_stats AS s
INNER JOIN sys.indexes AS i ON s.[object_id] = i.[object_id]
    AND s.[index_id] = i.[index_id]
WHERE OBJECT_NAME(i.object_id) IN ('People1','People2')
GROUP BY OBJECT_NAME(i.object_id), i.[name]
ORDER BY OBJECT_NAME(i.object_id), i.[name]
GO

CI_NCI_3

So exactly what I expected. ix1 & ix2 are about the same size in both tables. However ix3 for table People1 is about three times the size of ix3 on People2. Not a big deal with a small table and only 3 indexes, but when you get to a multi-million row table with 5 or 6 NCIs it could get rather significant.

Now on an up note with a larger clustered index you do get increased coverage.

SELECT FirstName, LastName, AddressLine1, AddressLine2,
	City, StateProvinceID, PostalCode
FROM People1
WHERE LastName LIKE 'A%'
  AND AddressLine2 IS NOT NULL

SELECT FirstName, LastName, AddressLine1, AddressLine2,
	City, StateProvinceID, PostalCode
FROM People2
WHERE LastName LIKE 'A%'
  AND AddressLine2 IS NOT NULL

CI_NCI_4

I realize it’s a bit of a goofy query but it does demonstrate the point. In People1 where the CI contains the address information the optimizer was able to use ix3 as a covering index. In People2 where the CI is the BusinessEntityID the optimizer had to use both ix2 and ix3 and ended up taking 95% of the combined time of the two queries. Since the columns in the CI are in all indexes they can always be used when determining if the index covers a query.

Now in my opinion these are not primary reasons for picking out a clustered index. They are more consequences of a CI choice. Important consequences admittedly. Hopefully though, this does point out some of the reasons why picking out the CI for a table is at once very important and very tricky.


Filed under: Index, Microsoft SQL Server, SQLServerPedia Syndication Tagged: index, microsoft sql server

Deny vs Revoke


Quick quiz. Which of these two commands is the opposite of GRANT?

  1. DENY
  2. REVOKE

 
Well, let’s start with some definitions:

  • GRANT – Grants permissions on a securable to a principal.
  • DENY – Denies a permission to a principal.
  • REVOKE – Removes a previously granted or denied permission.

 
While I can really see some arguments either way in the end I would have to go with REVOKE as the opposite of both GRANT and DENY. If you look at the definitions both GRANT and DENY generate a permission rule while REVOKE removes that rule.

These two commands are fairly basic but you would be surprised how often people get them confused. As we see above, DENY stops a principal from using a permission. Except in a very few specific cases (sysadmin & dbo) a DENY will override a GRANT. This means that if a user is denied a permission they can not inherit a GRANT from another source.

-- Set up a login and user
CREATE LOGIN DenyTest WITH PASSWORD = 'DenyTest', 
     CHECK_POLICY = OFF;
GO
USE Test2;
GO
CREATE USER DenyTest FROM LOGIN DenyTest;

-- Set up a role that grants SELECT permissions to the database
CREATE ROLE GrantSelectRole;
GRANT SELECT TO GrantSelectRole;
EXEC sp_addrolemember 'GrantSelectRole','DenyTest';

-- Create a table with some values
CREATE TABLE Test (abc varchar(10));
INSERT INTO Test VALUES ('abcd');
INSERT INTO Test VALUES ('efgh');

Then in a window logged in as DenyTest

USE Test2;
GO
SELECT * FROM Test;

DenyTest_1

Next we DENY SELECT to the user

DENY SELECT TO DenyTest;

Run our test again

USE Test2;
GO
SELECT * FROM Test;

But this time we get an error

DenyTest_2

And in fact we can do the reverse (grant to the user & deny the role).

REVOKE SELECT TO DenyTest;
REVOKE SELECT TO GrantSelectRole;
DENY SELECT TO GrantSelectRole;
GRANT SELECT TO DenyTest;

And get exactly the same error.

DenyTest_3

But if I don’t include a DENY I can put the GRANT on the role or the user and the user will have the permissions needed.

REVOKE SELECT TO DenyTest;
REVOKE SELECT TO GrantSelectRole;
GRANT SELECT TO GrantSelectRole;

OR

GRANT SELECT TO DenyTest;

And we now have access again

DenyTest_4

So remember.

  • GRANT and DENY create a permission rule
  • REVOKE removes a permission rule
  • DENY always overrides a GRANT no matter at what level the GRANT and DENY rules are placed.

 

BONUS: If you issue a GRANT that directly overrides a DENY (or vice versa) the DENY is actually removed from the principal.

DENY SELECT TO DenyTest;
GRANT SELECT TO DenyTest;

The above code actually ends up with a single permission. SELECT is GRANTed to DenyTest.
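You can verify this yourself with a quick look at sys.database_permissions (a sketch):

-- One row per permission rule; state_desc shows GRANT or DENY.
SELECT pr.name, pe.permission_name, pe.state_desc
FROM sys.database_permissions pe
JOIN sys.database_principals pr
	ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name = 'DenyTest';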

If you run the opposite

GRANT SELECT TO DenyTest;
DENY SELECT TO DenyTest;

There is still a single permission rule but this time SELECT is DENYed to DenyTest


Filed under: Microsoft SQL Server, Security, SQLServerPedia Syndication, T-SQL Tagged: code language, database permissions, language sql, microsoft sql server, security, T-SQL

SchemaBinding – What & Why

What

When you use the SchemaBinding keyword while creating a view or function you bind the structure of any underlying tables or views. So what does that mean? It means that as long as that schemabound object exists as a schemabound object (i.e. you don’t remove schemabinding) you are limited in the changes that can be made to the tables or views that it refers to.

That still sounds a bit confusing. This may be easier with an example (I like examples).

CREATE SCHEMA Bound 
CREATE TABLE Table1 (Id Int, Col1 varchar(50), Col2 varchar(50))
CREATE TABLE Table2 (Id Int, Col1 varchar(50), Col2 varchar(50))
CREATE TABLE Table3 (Id Int, Col1 varchar(50), Col2 varchar(50))
CREATE VIEW UnBoundView AS
SELECT Id, Col1, Col2 FROM Bound.Table1
CREATE VIEW BoundView WITH SCHEMABINDING AS
SELECT Table2.Id, Table2.Col1, Table2.Col2
FROM Bound.Table2
JOIN Bound.Table3
	ON Table2.Id = Table3.Id;
GO

So I’ve created three tables and two views under the schema Bound. The first view is unbound and references Table1; the second view references Table2 and Table3 and is schemabound. I do want to point out a couple of things: a schemabound view has to reference its tables with two-part names (Bound.Table2, not just Table2) and it can’t use SELECT *.

 
Next I’m going to try to drop a column referenced by each of the views.

ALTER TABLE Bound.Table1 DROP COLUMN Col2;
GO

This one works fine. Bound.Table1.Col2 is not referenced by any schemabound objects.

ALTER TABLE Bound.Table2 DROP COLUMN Col2;
GO

This one gets an error.

Msg 5074, Level 16, State 1, Line 1
The object ‘BoundView’ is dependent on column ‘Col2′.
Msg 4922, Level 16, State 9, Line 1
ALTER TABLE DROP COLUMN Col2 failed because one or more objects access this column.

One of my favorite parts of writing a blog is when I learn something new. In this case I had made the incorrect assumption that if a table was referenced by a schemabound function or view you could not make any changes to its structure. Turns out that only the columns referenced by the function or view are bound.

So for example these work:

ALTER TABLE Bound.Table3 ALTER COLUMN Col1 varchar(51);
ALTER TABLE Bound.Table2 ADD Col3 varchar(50);
ALTER TABLE Bound.Table2 ALTER COLUMN Col3 varchar(51);
ALTER TABLE Bound.Table2 DROP COLUMN Col3;
ALTER TABLE Bound.Table2 ADD CONSTRAINT df_Tb2_Col1 DEFAULT 'A' FOR Col1

And these don’t:

ALTER TABLE Bound.Table2 ALTER COLUMN Col1 varchar(51);
ALTER TABLE Bound.Table2 DROP COLUMN Col2;

And here are a couple of other restrictions & factoids.

  • You can not change the collation of a database with schemabound objects.
  • You can not use SELECT * in a schemabound view.
  • You can not run sp_refreshview on a schemabound view. You do get a rather unhelpful error though.
  • You can make any change to the table that does not affect the structure of the bound columns.
  • You can find out if an object is schemabound by looking at the column is_schema_bound in sys.sql_modules or the system function OBJECTPROPERTY(object_id, ‘IsSchemaBound’) (see the query after this list).
  • If you reference a view or function in a schemabound view or function then that view or function must also be schemabound.
  • Objects that are bound (tables/views) can not be dropped while a schemabound object references them
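For example (a sketch; BoundView is the view created above):

-- All schemabound modules in the current database
SELECT OBJECT_SCHEMA_NAME(object_id) AS SchemaName,
	OBJECT_NAME(object_id) AS ObjectName
FROM sys.sql_modules
WHERE is_schema_bound = 1;

SELECT OBJECTPROPERTY(OBJECT_ID('Bound.BoundView'), 'IsSchemaBound');
-- Returns 1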

 

Why

Schemabinding isn’t a commonly used tool unless you are setting up an indexed view, and then it can get lost in the crowd of other required restrictions. It does have uses outside of indexed views however. I could see using it if there is a mission critical view/function that just CAN’T break. By including the SCHEMABINDING clause you protect the view/function from unexpected changes to the tables underneath them. In fact if all of the data access in an application is through views and TVFs I might consider schemabinding all of them. It might be less important in a small shop with only a couple of developers and/or DBAs where everyone knows what changes are being made and what effect they will have. However if you are in a big shop with dozens of applications, many of which use the same databases, you can easily make a change to a table that breaks code in another application that you were completely unaware of.

So in the end SCHEMABINDING isn’t a world changing clause but still one that you should be aware of.


Filed under: Index, Microsoft SQL Server, SQLServerPedia Syndication, T-SQL Tagged: code language, index, language sql, microsoft sql server, T-SQL

More than sp_help


If you have worked with SQL Server for very long you have probably run across the extremely useful system stored procedure called sp_help. This handy little procedure will return a list of the objects in the database if you don’t pass in a parameter. If you do pass in a parameter (and it’s a valid object name) then it returns different types of detailed information about the object named in the parameter, depending on the type of object.

If for example you run:

sp_help sp_help -- parameter name can be in quotes or not if 
         -- it's a single part name

You get the date the SP was created and a list of any parameters with their names and data types.

sp_help_1

If on the other hand you run:

sp_help [sys.objects] -- If the object name is a two part name 
         -- then it must be quoted or put in []s

Now you get the create date, the list of columns and their data types, the identity column if there is one, the RowGuidCol if there is one, and lists of indexes, constraints and foreign keys if they exist.

sp_help_2

Again, if you have worked with SQL Server for a while you probably know most if not all of this. What you may not know is that there are a number of sp_help procedures. sp_helpindex for example returns the list of indexes on a table or view. Likewise sp_helpconstraint returns any CHECK or DEFAULT constraints, sp_helptrigger any triggers, and sp_helptext the definition of any code-based object. At a higher level you have sp_helpdb, sp_helpfile and sp_helpfilegroups that display information on databases, files and filegroups respectively. It’s well worth taking at least a brief look at the list of sp_help procedures because, while all of this information and more is available in the system views and DMOs, sometimes an sp_help procedure has just the information you need and can be much quicker than writing a query.
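For example (a sketch, run in AdventureWorks):

EXEC sp_helpindex 'Person.Address';           -- indexes on a table/view
EXEC sp_helpconstraint 'Person.Address';      -- constraints on a table
EXEC sp_helptext 'Sales.vStoreWithContacts';  -- definition of a code based object
EXEC sp_helpdb 'AdventureWorks2012';          -- database level information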


Filed under: Microsoft SQL Server, SQLServerPedia Syndication, System Functions and Stored Procedures, T-SQL Tagged: code language, language sql, microsoft sql server, sql statements, system functions, T-SQL

Generating a restore script


In order to speed up our backups on a large database our team decided to stripe the backup files. In case you weren’t aware of this particular backup feature, this means that a single backup is written to multiple files, which can dramatically speed up your backups and restores. Unfortunately in this particular case it also meant that our script that automatically generated restore commands broke. And of course I was asked to correct it. First thing I did was to tweet to #sqlhelp and I received a number of great scripts. Unfortunately none were exactly what I needed so I started merging and modifying and building my own. I was about a third of the way done when I happened to be reading dba.stackexchange.com and ran across a link to a restore command generator called sp_RestoreScriptGenie by Paul Brewer, based on a script by Robert Davis (b/t).
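If you haven’t seen a striped backup before, it’s just a list of files in the backup command, and the restore has to name every file in the set (a sketch; the database name and paths are hypothetical):

BACKUP DATABASE BigDB
	TO DISK = 'G:\Backup\BigDB_1.bak',
	   DISK = 'H:\Backup\BigDB_2.bak',
	   DISK = 'I:\Backup\BigDB_3.bak';

-- The restore must list every file in the stripe set.
RESTORE DATABASE BigDB
	FROM DISK = 'G:\Backup\BigDB_1.bak',
	     DISK = 'H:\Backup\BigDB_2.bak',
	     DISK = 'I:\Backup\BigDB_3.bak'
	WITH NORECOVERY;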

Among other things it has the following features:

  • It will generate the most recent restore script for all user databases if you don’t pass in a parameter.
  • Multi-file backup sets are supported for FULL, DIFF and LOG backups.
  • Flag to include scripts for the system databases.
  • Option to pass in a single database name and generate the restore for just that database.
  • Generate the scripts to restore to a specific time.
  • Flag to modify the script to leave the database in standby mode.
  • Parameters to modify the data, log and backup directories.

 

Now it isn’t perfect:

  • In the discussion they mention changing the “Device_Type” from = 2 to 7.
  • I couldn’t find a version more recent than early 2013 so I’m not sure if it is still supported.
  • The scripts are generated from msdb so it has limited usefulness in a DR situation.
  • It can only handle 10 files per backup (if you are using more than 10 files for the backup you may have additional problems).
  • It automatically includes a CHECKDB at the end of each restore. Really a good thing but I would rather be able to turn it off if I need to.

 

As you can see I consider it a very good script generator given the items on the problem list are fairly minor and the positives are pretty cool. I believe we will be using it in our office and I have added it to my Free Scripts page in case you want to use it too.


Filed under: Backups, Dynamic SQL, Microsoft SQL Server, SQLServerPedia Syndication Tagged: backups, dynamic sql, language sql, microsoft sql server

Two years!


Two years ago today I began my blog with a post about the DEFAULT keyword. I set out with the goal of building a blog I could be truly proud of in three years. One that was well liked and provided value to the community.

Little did I imagine how much fun I would have in the process and how much satisfaction I would get from the result. In order to get where I wanted to go I started out with a goal of writing at least one post a week and then later expanded that to two posts a week. I wanted the majority of my posts to be technical and to provide value, either by answering a question or discussing a topic I felt was important. Obviously I’ve done a few fluff pieces (Who’s on call is my favorite) but for the most part I feel like I’ve stuck to technical posts pretty well. Along the way I planned on writing a few articles and working on my certifications.

As of today (my two year anniversary) I have done the following:

  • I have posted 190 pieces on my blog.
  • Written sp_dbpermissions & sp_srvpermissions
  • Written 4 articles for SSC (bringing my total to 6)
  • Earned my MCITP in both Administration & Development for SQL 2008

 

And because it’s my two year anniversary I’m going to brag a bit. At this point I’ve achieved the following amazing (at least to me) statistics:

  • 145,000 views to date
  • A daily high of 2015 views
  • A monthly high of ~16,000 views
  • ~600 comments, the vast majority of which have been positive.
  • 140 followers of my blog and 180 followers on twitter
  • Earned the very first SQL Pro of the month from Toadworld (and boy do I stand in some awesome company)
  • I’ve been listed four times on the Brent Ozar Unlimited’s Weekly links (Thanks Kendra!) (FYI this was one of those “wouldn’t it be cool if” goals when I started.)

 
I’ll admit it, I’m a geek. I love this stuff. Once I got over some “stage fright” I found that I really really enjoy writing about SQL Server and my experiences with it. Some of the best feelings in the world are when someone you know says “I love your blog” completely out of the blue. In fact in the space of one day I had a guy I know tell me he loves my blog and a co-worker tell me that he really likes the fact that when he reads my blog I give him answers to specific problems.

 
So what are my plans for the next year?

  • Continue to try to reach a goal of 2 posts a week for the year of 2014 (probably 2015 too).
  • Write one or two more articles for SSC
  • Write an article on ways to use sp_dbpermissions and sp_srvpermissions
  • Continue to improve sp_dbpermissions and sp_srvpermissions
  • Finally get my MCSA and maybe my MCSE

 
After that, well hopefully we will both be around to see. Thanks for reading my blog and helping to make it all worthwhile!


Filed under: Blogging, Microsoft SQL Server, SQLServerPedia Syndication Tagged: blogging, microsoft sql server

The amazing never shrinking heap


This is a quick demo of a little “trick” with heaps I’ve known about for a couple of years. However until recently I could never duplicate it on purpose. (You can read that as: I’ve had a production problem bite me in the …. repeatedly.) At least I couldn’t duplicate it until I watched Kendra Little’s (b/t) video on heaps. Kendra goes into a great deal more detail on heaps than I will here. Fair warning though: if you are a beginner or dabbler it may be a bit tough in spots. She is using a lot of DMOs and some undocumented commands as well. If you feel comfortable with the skill level however, I highly recommend watching it.

On to the demo:

-- Create a test table.
CREATE TABLE HeapSpace (Id int NOT NULL identity(1,1), 
	Code char(1), Col1 varchar(1000), Col2 varchar(1000));
GO
-- Load the test table with some values and check the table size.
INSERT INTO HeapSpace (Code, Col1) VALUES ('A', REPLICATE('A',50));
GO 10000 -- GO n runs the preceding batch n times
EXEC sp_spaceused 'HeapSpace'; 
GO

The system stored procedure sp_spaceused will return to us (among other things) the amount of space reserved by the table and how much of it is free.

name rows reserved data index_size unused
HeapSpace 10000 712 KB 704 KB 8 KB 0 KB

-- Add some more values to the table and check the table size again.
INSERT INTO HeapSpace (Code, Col1) VALUES ('B', REPLICATE('A',50));
GO 10000
EXEC sp_spaceused 'HeapSpace';
GO
name rows reserved data index_size unused
HeapSpace 20000 1416 KB 1408 KB 8 KB 0 KB

Note that the amount of space used has about doubled and the unused space is still 0.
 

-- Do some processing on one of our data sets.
UPDATE HeapSpace SET Col2 = REPLICATE('B',50);
GO
EXEC sp_spaceused 'HeapSpace';
GO
name rows reserved data index_size unused
HeapSpace 20000 2696 KB 2648 KB 8 KB 40 KB

Processing the data has added almost another 50% to the size of the table.
 

-- Get rid of the first block of data.
DELETE FROM HeapSpace WHERE Code = 'A';
GO
EXEC sp_spaceused 'HeapSpace';
GO
name rows reserved data index_size unused
HeapSpace 10000 1928 KB 1872 KB 8 KB 48 KB

Here you can see we have deleted half the data and only reduced the reserved space of the table by 700KB (say a quarter of the total space).
 

-- Get rid of the rest of the data.
DELETE FROM HeapSpace WHERE Code = 'B';
GO
EXEC sp_spaceused 'HeapSpace';
GO
name rows reserved data index_size unused
HeapSpace 0 1224 KB 1096 KB 8 KB 120 KB

Now we have 1224KB reserved for the table and only 120KB of that is “unused”. That means there should be 1104 KB of data right? But wait just a minute. I have zero rows so no data!

Now to be fair, if I truncate the table it will all go back to 0. However in my case only part of the data is cleared out at any point in time. Over and over data is loaded, processed, and cleared out. Millions of rows at a time. Every couple of months or so I hear “I’ve run out of space again.” I clear it out, tell them they really need a clustered index, and in another month I’m clearing space up again.
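For what it’s worth, on SQL Server 2008 and up you can also release the empty pages without adding the clustered index by rebuilding the heap (a sketch):

-- Rebuilds the heap and releases the empty pages.
ALTER TABLE HeapSpace REBUILD;
EXEC sp_spaceused 'HeapSpace';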


Filed under: Microsoft SQL Server, Problem Resolution, SQLServerPedia Syndication Tagged: heap, language sql, microsoft sql server, problem resolution

Two simple commands that can be a big help in performance tuning.


The first thing that always comes to mind when discussing performance tuning is query plans, and rightly so. They are the best source of information about what a query is doing and how to improve it. However there are a couple of little commands that can be a big help too. SET STATISTICS TIME ON and SET STATISTICS IO ON can give you some quick information about the performance of a query that can, in its own way, be a huge help.
 

SET STATISTICS TIME ON

When trying to tune a query it’s frequently helpful to know precisely how long the query took. When you SET STATISTICS TIME ON SQL will return the CPU and elapsed time spent on parsing, compiling and executing the query. You might think “Oh big deal, I can see how long my query took in the bottom right hand corner of the window.” Well yes, but that’s in seconds and this is in milliseconds. Not a big deal if you are tuning down from 1 hr but if you are starting at 2 seconds it’s pretty helpful. And one last benefit, because the time is printed to the message pane, if you are running time trials you can easily copy and paste these results to a text file.
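Turning it on is simple (a sketch; the query here is arbitrary, borrowing the AdventureWorks view from the next section):

SET STATISTICS TIME ON;
SELECT TOP 5 * FROM Sales.vStoreWithContacts;
SET STATISTICS TIME OFF;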

SQL Server parse and compile time: 
   CPU time = 0 ms, elapsed time = 0 ms.

(5 row(s) affected)

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 55 ms.

 

SET STATISTICS IO ON

The output of this simple command is possibly one of the most useful tuning tools I’ve used (other than query plans of course). This command simply displays the IO used. This is particularly useful since IO tends to be one of the biggest bottlenecks in any query. Take a look at the output of the view AdventureWorks2012.Sales.vStoreWithContacts.

SET STATISTICS IO ON
SELECT * FROM Sales.vStoreWithContacts
(753 row(s) affected)
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PersonPhone'. Scan count 753, logical reads 1630, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'EmailAddress'. Scan count 753, logical reads 1635, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Person'. Scan count 0, logical reads 2315, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'BusinessEntityContact'. Scan count 20, logical reads 43, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'ContactType'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Store'. Scan count 1, logical reads 103, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PhoneNumberType'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

If you look you can see the 7 tables in the view and a worktable. For each of these there is the following:

  • Scan Count – Number of seeks/scans started after reaching the leaf level
  • logical reads – Number of pages read from the cache
  • physical reads – Number of pages read from the disk
  • read-ahead reads – Number of pages placed into cache for this query
  • lob logical reads – Same as above but for lob pages
  • lob physical reads – Same as above but for lob pages
  • lob read-ahead reads – Same as above but for lob pages

 
So how does this help? Typically I’ll look for the table with the highest scans and/or logical + physical reads. You never want to rely on just logical or physical reads since that is totally dependent on what is and is not in cache. Once I’ve identified a table with a large number of reads/scans I can target that table for a closer look. Typically I check these tables to see if additional indexes will help or if the way they are used in the query could be changed. If I were tuning this particular view I would start with Person, PersonPhone, and EmailAddress. Note that the schema is not listed as part of the output so you will need to look that up from the query. In this particular case the view & table structures are well designed (it is AdventureWorks after all) so there isn’t really much to do.
 

But Ken the output is rather confusing.

If you find this output a bit daunting/confusing Richie Rump(b/t) has created http://statisticsparser.com/ to help us out. Simply paste the output into the box provided and hit the parse button.

StatisticsParser1

And you get an easy to read table that looks like this:

StatisticsParser2

Both TIME & IO outputs are placed into nice easy to read tables. For the IO output Richie even includes the % of the total IO that each table took up.
 

Summary

Remember that the query plan will give you a great deal more information so you don’t want to neglect it. But these two short commands will give you some very helpful information and are quite a bit easier to read. I find them a good place to start and if nothing obvious turns up then I turn to the query plan for additional help. You will also want to keep a copy of the STATISTICS output from before and after any tuning efforts so you can get an accurate view of any improvements you’ve made.


Filed under: Microsoft SQL Server, Performance, SQLServerPedia Syndication, System Functions and Stored Procedures, T-SQL Tagged: code language, language sql, microsoft sql server, Performance, T-SQL

Pausing an MSSQL Instance


I’m sure most of you have looked at the control options of the SQL services, right? Start an instance, stop an instance, pause an instance. Start, stop, pause. Wait just a minute! Is this SQL Server or an MP3 player? (I just want to point out I started with a record, thought about a tape, then went to a CD. I actually had to think for a minute before coming up with MP3s. Talk about showing your age.) Most DBAs have had the opportunity to start and/or stop an instance. What most DBAs haven’t done is pause or resume an instance.

Actually, pausing or resuming is pretty easy. There are several ways to do it, but probably the easiest is to right-click on the instance name in the Configuration Manager (or SSMS) and select Pause (or Resume, depending). Note: In SSCM the list of instance services is under SQL Server Services, and in SSMS it’s the connection in the Object Explorer.

[Image: Pause4]

You can also use the Windows net commands.

net pause MSSQL$instancename
net continue MSSQL$instancename
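
For a default instance the service name is just MSSQLSERVER rather than MSSQL$instancename, so the commands become:

net pause MSSQLSERVER
net continue MSSQLSERVER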

Now that you know how, you still probably shouldn’t pause or resume an SQL Server instance if you don’t know what it actually does. From BOL:

When you pause an instance of Microsoft SQL Server, users that are connected to the server can finish tasks, but new connections are not allowed.

Interesting. So existing connections are unaffected but new connections are not allowed. Sounds useful. Useful how, you might ask? Well, let’s say I need to run maintenance on a server. I let everyone who is currently connected know that they need to get out. One of the devs comes to me and lets me know he has a batch process that is almost finished, and asks if I could please give him 10 more minutes. The problem is that if I wait I’m going to have a whole new group of people logged in. So what do I do?

Pause the instance!

Open 3 separate query windows. Run the following code in 2 of them.

-- Simulate a long-running query
WAITFOR DELAY '00:01:30'

Next pause your instance.

[Image: Pause2]

Then run this code in the 3rd window.

-- Session IDs 50 and below are generally reserved for system sessions
SELECT * FROM sys.dm_exec_requests WHERE session_id > 50

[Image: Pause5]

You can see that not only do the running queries continue to run, but new queries executed within an existing connection also run.

Now try to open a new connection.

[Image: Pause1]

Now we see an error letting us know that the server is in fact paused and we can’t create new connections.

At this point we can shut down connections as they complete their tasks and, once everyone is cleared out, finish our maintenance.

Once we are done, if we didn’t actually stop and restart the instance, we should go ahead and resume it.

[Image: Pause3]

As always make sure you have tried this out and are comfortable with how it works before trying it in production.


Filed under: Microsoft SQL Server, Settings, SQL Services, SQLServerPedia Syndication Tagged: microsoft sql server, SQL Services

T-SQL Tuesday #58: Passwords


[Image: T-SQL Tuesday]

It’s that time again. On the second Tuesday of each month we have a blog party called T-SQL Tuesday. The host picks a subject and we all blog about it. It was originally started by Adam Machanic (b/t) almost 5 years ago. This month Sebastian Meine (b/t) is hosting and he’s picked passwords as our subject.

Let’s start by saying that P@ssw0rd1 is not a good password.
For anything.
Ever.

That has absolutely nothing to do with what I wanted to talk about but it seemed worth pointing out.

Over the last couple of years I’ve written about transferring SQL Server passwords using the password hash a couple of different times (here and here), and of course I use it in my sp_SrvPermissions script. So SQL Server stores the passwords for SQL Logins as a hash. But what exactly is a password hash?

Hashing is the transformation of a string of characters into a usually shorter fixed-length value or key that represents the original string.

Hash algorithms are also one-way. That means that even if you have the hash value and the hashing algorithm it would be all but impossible to reconstruct the original value. And because SQL Server only stores the passwords as hashes, it’s all but impossible to retrieve someone’s password. I can think of one very specific way to get at a clear text password (cc Argenis Fernandez (b/t)), but it only works under certain circumstances and I’m not going to describe it here.

So if SQL Server only saves the hash, how does it know you’ve typed in the correct password? Well, every time you type in your password it gets hashed. That hash value is then compared to the stored hash value, and if there is a match then you’re in.
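
You can see the general idea with the HASHBYTES function. To be clear, this is just an illustration of hashing, not the actual mechanism SQL Server uses for login passwords (those hashes are salted):

-- Identical input always produces an identical hash;
-- changing one character produces a completely different one
SELECT HASHBYTES('SHA2_512', 'P@ssw0rd1') AS Hash1,
       HASHBYTES('SHA2_512', 'P@ssw0rd1') AS Hash2,
       HASHBYTES('SHA2_512', 'P@ssw0rd2') AS Hash3;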

SQL Server uses one of the SHA hash algorithms. You can tell which one by using the system function LOGINPROPERTY('login_name', 'PasswordHashAlgorithm'). I’m not sure how it determines which one to use, but I would guess it’s a matter of SQL Server version.
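
And if you want to test a clear text password against a login’s stored hash yourself, the documented PWDCOMPARE function will do it. The login name and password below are placeholders:

-- Returns 1 if the password matches the stored hash, 0 if not
SELECT PWDCOMPARE('P@ssw0rd1',
           CAST(LOGINPROPERTY('SomeLogin', 'PasswordHash') AS varbinary(128))) AS IsMatch;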

Here is an example of one of the hashes on my SQL 2012 instance using SHA-2:

0x0200029F2AA1A1242AF60B3EE3432C6D6E1343E0D96180430DAC2B5FD25C79A106F56B7476D197228C503CCDA3C72574FEA48D24382F007AB5399C2C19324CA39DC77986FB01

As you can see it’s fairly long, and given that the output of a hash algorithm is fixed-length all SHA-2 hashes are going to be this long. Of course there are several SHA-2 algorithms with different lengths but I’m fairly sure SQL Server only uses one of them.

There is of course always the possibility of a collision, that is, two strings generating the same hash value. But as I understand it the odds of two strings generating the same hash are amazingly small. In fact, here is a really good answer on Stack Overflow that discusses it.

Hashing is one of the simplest cryptographic methods because there is no need to store an encryption or decryption key. The value is transformed by the hash algorithm and there is no decrypting it. That, however, makes it perfect for storing passwords. Once the password is stored as a hash it’s all but impossible to recover it. At that point it’s all on you to pick a good password.

123456 is another really bad idea for a password. Just saying.


Filed under: Microsoft SQL Server, Security, SQLServerPedia Syndication, T-SQL Tuesday Tagged: Argenis Fernandez, hashing, microsoft sql server, security, T-SQL Tuesday

What are trace flags?


Trace flags are one of those things that I’ve heard about more and more over the last five or six years. But only in the past year or so have I started to understand what they are and how to use them. I want to start out by saying that they are a fairly advanced tool and they should only be used with great care, after much testing and only if you are sure you need them.

Per BOL

Trace flags are used to temporarily set specific server characteristics or to switch off a particular behavior

For example, trace flag 7806 enables the DAC in SQL Server Express, while trace flags 1211 and 1224 disable lock escalation in different ways. You can even turn the new SQL 2014 cardinality estimator on or off with trace flags 2312 and 9481.

In the BOL entry for trace flags there are currently 20 different flags listed. This is by no means a complete list. Kendra Little, for example, has recently posted about two more that she feels should be added to the list, and Paul Randal uses trace flag 3604 when working with the undocumented DBCC IND and DBCC PAGE commands.

I want to say again that before you use a trace flag you should be 100% certain you know what it does and carefully test it before implementing it on a production box.

That being said:
 

Using a trace flag

There are 3 different ways you can use a trace flag.

  • As a query hint

    From lowest scope to highest, we start by using a trace flag for just a single query. Some trace flags can be enabled at the query level using the QUERYTRACEON query hint. The trace flags available at the single-query level are listed at the previous link.

    -- Trace flag 9481 turns off the new
    -- cardinality estimator in 2014
    
    SELECT *
    FROM TableName
    OPTION(QUERYTRACEON 9481);
  • Turn it on for this session

    A trace flag can also be turned on for any query in the current session using DBCC TRACEON.

    -- Same Trace Flag as before. Trace flag 9481 turns 
    -- off the new cardinality estimator in 2014
    
    DBCC TRACEON(9481)
  • Turn it on for the instance

    Trace flags can also be turned on for the entire instance. This can be done using the DBCC TRACEON command with the optional -1 parameter, or by using the -T trace# command-line startup option. BOL recommends the -T option. If you enable trace flags using the startup option they will remain enabled after any instance restart.

    -- Trace flag 7806 will turn on the DAC in SQL Express
    
    DBCC TRACEON(7806, -1)

 

Turning off a trace flag

Having turned on a trace flag you will probably want to know how to turn it back off again. DBCC TRACEOFF will turn off a trace flag that has been previously turned on. It also takes the optional -1 parameter to turn off the flag at the instance level rather than the session level. Remember that parameter. If you have a trace flag turned on at the instance level and forget to include -1, the trace flag will remain on. The same is true if you include the -1 parameter when trying to turn off a trace flag that was enabled at the session level.

-- Turn the trace flag 7806 back off
-- to disable the DAC in SQL Express
DBCC TRACEOFF(7806, -1)
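
And for a flag that was only enabled in the current session, leave the -1 off. A quick sketch using the session-level example from earlier:

-- Turn trace flag 9481 back off for the current session only
DBCC TRACEOFF(9481)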

 

Check the status of a trace flag

Once you have started turning trace flags on and off it’s probably a good idea to be able to check their status. You can do this with DBCC TRACESTATUS. DBCC TRACESTATUS is a little different in that it doesn’t have to be passed a trace flag number. If you do pass one in, you get back the status of that flag only. If you don’t, you get all active trace flags. There is a -1 parameter on this one also, but it doesn’t seem to do anything.

DBCC TRACESTATUS

[Image: TraceFlag]
 

A couple of final points:
  • All of the above commands can be passed multiple trace flags at once (see the sketch below).
  • If you plan on getting either the MCSA or MCSE for SQL Server you should at least be familiar with trace flags. They will probably show up at some level.
  • Because this can never be said too much, one more reminder: these things are dangerous. Don’t use them unless you are on a test box or know what you are doing.
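
As a sketch of that first point, here’s what passing a comma-separated list looks like. The flag numbers are just the ones used earlier in this post:

-- Enable, check, then disable two trace flags at once, instance-wide
DBCC TRACEON(1211, 3604, -1)
DBCC TRACESTATUS(1211, 3604)
DBCC TRACEOFF(1211, 3604, -1)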

Filed under: Microsoft SQL Server, Settings, SQLServerPedia Syndication, System Functions and Stored Procedures, T-SQL Tagged: code language, language sql, microsoft sql server, system functions, T-SQL, trace flags