Balavardhan Reddy Narani

Posts with #sql server2005 tag

Interview Questions on SQL Server

March 17 2013 , Written by Balavardhan Reddy Published on #SQL Server2005

What are user-defined functions, and what types exist?

There are three types: Scalar, Inline Table-Valued, and Multi-Statement Table-Valued.

A Scalar user-defined function returns one of the scalar data types. Text, ntext, image and timestamp data types are not supported. These are the type of user-defined functions that most developers are used to in other programming languages. You pass in 0 to many parameters and you get a return value.

An Inline Table-Value user-defined function returns a table data type and is an exceptional alternative to a view as the user-defined function can pass parameters into a T-SQL select command and in essence provide us with a parameterized, non-updateable view of the underlying tables.

A Multi-Statement Table-Value user-defined function returns a table and is also an exceptional alternative to a view as the function can support multiple T-SQL statements to build the final result where the view is limited to a single SELECT statement.
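As a rough sketch (the function, table, and column names here are made up for illustration), a scalar function and a multi-statement table-valued function look like this:

-- Scalar function: takes parameters and returns a single value
CREATE FUNCTION dbo.fn_FullName (@First varchar(50), @Last varchar(50))
RETURNS varchar(101)
AS
BEGIN
    RETURN @First + ' ' + @Last
END
GO

-- Multi-statement table-valued function: several statements build the returned table
CREATE FUNCTION dbo.fn_CustomerOrderCounts (@MinOrders int)
RETURNS @Result TABLE (CustomerID nchar(5), OrderCount int)
AS
BEGIN
    INSERT INTO @Result (CustomerID, OrderCount)
    SELECT CustomerID, COUNT(*) FROM dbo.Orders GROUP BY CustomerID

    DELETE FROM @Result WHERE OrderCount < @MinOrders
    RETURN
END
GO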
 

Can I return a table from a user defined function?
Yes

If I return a table from user defined function, can I join that table with another table?
Yes
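For example (a sketch using a hypothetical Customers/Orders schema), an inline table-valued function can be joined to another table with CROSS APPLY:

CREATE FUNCTION dbo.fn_OrdersByCustomer (@CustomerID nchar(5))
RETURNS TABLE
AS
RETURN (SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID)
GO

-- The table returned by the function is joined to Customers for every row
SELECT c.CompanyName, o.OrderID, o.OrderDate
FROM dbo.Customers c
CROSS APPLY dbo.fn_OrdersByCustomer(c.CustomerID) o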

What are the differences between a function and a stored procedure?
A function must return a value and cannot be run with EXEC; it is called from inside a query.
A stored procedure is not required to return a value and is executed with EXEC.
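A minimal illustration of that difference (the object names are hypothetical; fn_FullName is the sketch function shown earlier):

-- Function: must return a value, used inside a SELECT
SELECT dbo.fn_FullName(FirstName, LastName) AS FullName FROM dbo.Employees

-- Stored procedure: no return value required, invoked with EXEC
EXEC dbo.usp_ArchiveOrders @CutOffDate = '2000-01-01'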

 

What are views
Virtual Tables

What are the advantages of a view?
Better performance, and the query is defined once and reused wherever it is needed.

Why would I need to go for a view?
Instead of writing the full query inside a stored procedure, you create a view once and reference that object in the procedure, which keeps the code simple and gives good performance.

Can I create a view with an ORDER BY clause?
No. A view is a virtual table, so ORDER BY is not allowed in the view definition; you order the result set when you select from the view.
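A small sketch (the table and view names are made up): the ORDER BY goes on the query that reads the view, not in the view definition.

CREATE VIEW dbo.vw_GermanCustomers
AS
SELECT CustomerID, CompanyName, City
FROM dbo.Customers
WHERE Country = 'Germany'
GO

-- Order the result set when selecting from the view
SELECT CustomerID, CompanyName
FROM dbo.vw_GermanCustomers
ORDER BY CompanyName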

 

Difference between DELETE and TRUNCATE TABLE
TRUNCATE is a DDL command, whereas DELETE is a DML command.

TRUNCATE is much faster than DELETE.
Reason: a DELETE logs every affected row in the transaction log before the rows are removed, which is why you can ROLLBACK a DELETE and get the data back; all of that logging takes time. TRUNCATE removes the data by deallocating whole pages without logging individual rows, which is why it is faster, but once it completes you cannot get the data back.
You cannot roll back a TRUNCATE the way you can a DELETE; TRUNCATE removes the records permanently.
TRUNCATE does not fire triggers, whereas DML commands such as DELETE do.
You cannot use conditions (a WHERE clause) with TRUNCATE, but with DELETE you can.
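For example (hypothetical table names):

-- DELETE: logged row by row, can take a WHERE clause, fires DELETE triggers
DELETE FROM dbo.Orders WHERE OrderDate < '2000-01-01'

-- TRUNCATE: deallocates all pages, no WHERE clause allowed
TRUNCATE TABLE dbo.OrderStaging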

 

What is a temporary table?
A table whose name starts with a pound sign (#) is a local temporary table. It is visible only to the session that created it and is automatically dropped when that session closes. You can treat it like any other table with a few exceptions; the main one is that you cannot have foreign key constraints on a temporary table.
If stored procedure A creates a temporary table and then calls stored procedure B, B will be able to use the temporary table that A created.
When a temporary table is created, the storage is allocated in tempdb.
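A quick sketch of that visibility rule (the procedure and table names are hypothetical):

CREATE PROCEDURE dbo.ProcB
AS
    -- Uses the temp table created by the calling procedure
    SELECT Id, Amount FROM #Work
GO

CREATE PROCEDURE dbo.ProcA
AS
    CREATE TABLE #Work (Id int PRIMARY KEY, Amount money)
    INSERT INTO #Work VALUES (1, 10.00)
    EXEC dbo.ProcB          -- #Work is visible inside ProcB
GO

EXEC dbo.ProcA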

What is the difference between a temporary table and a table variable?
    Table variables do not need to be dropped when you are done with them.
If I want to create a WHERE condition dynamically, attach it to a SELECT statement, and execute it, how do I do that?
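One way to do it (a sketch; the table and column names are made up) is to build the statement as a string and run it with sp_executesql, passing the search value as a parameter:

DECLARE @sql   nvarchar(max)
DECLARE @where nvarchar(max)

-- Build the WHERE clause dynamically
SET @where = N' WHERE City = @city'
SET @sql   = N'SELECT CustomerID, CompanyName FROM dbo.Customers' + @where

-- Execute it with a parameter definition and the parameter value
EXEC sp_executesql @sql, N'@city nvarchar(50)', @city = N'Berlin'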

If you have less than 100 rows generally use a table variable.  Otherwise use  a temporary table.  This is because SQL Server won't create statistics on table variables.
If you need to create indexes on it then you must use a temporary table.
When using temporary tables always create them and create any indexes and then use them.  This will help reduce recompilations.  The impact of this is reduced starting in SQL Server 2005 but it's still a good idea.
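In practice (sketch only):

-- Table variable: no statistics, fine for small row counts, no explicit DROP needed
DECLARE @Small TABLE (Id int PRIMARY KEY, Name varchar(50))

-- Temporary table: supports statistics and additional indexes
CREATE TABLE #Large (Id int PRIMARY KEY, Name varchar(50))
CREATE NONCLUSTERED INDEX IX_Large_Name ON #Large (Name)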
 

What are the different constraints?

Domain integrity constraints:
Check: limits the range of values (the domain) that a column can have.
Referential integrity constraints:
Foreign key: enforces referential integrity between two tables; the values in one table's column must exist in the other table's column.
Entity integrity constraints:
Default constraint: defines the default value for a column.
Null: allows null values.
NOT NULL: does not allow null values.
Primary key: ensures no duplicate data and does not allow nulls.
Unique key: does not allow duplicate values but allows one null per column; a table can have multiple unique constraints.
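A single CREATE TABLE sketch showing each constraint type (the table and columns are hypothetical):

CREATE TABLE dbo.EmployeeDemo
(
    EmployeeID   int          NOT NULL PRIMARY KEY,                 -- entity integrity
    Email        varchar(100) NULL UNIQUE,                          -- no duplicates, one NULL allowed
    Salary       money        NOT NULL CHECK (Salary > 0),          -- domain integrity (check)
    HireDate     datetime     NOT NULL DEFAULT GETDATE(),           -- default constraint
    DepartmentID int          NULL
        FOREIGN KEY REFERENCES dbo.Departments (DepartmentID)       -- referential integrity
)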

 

What is the RANK keyword in SQL Server?
RANK() assigns a monotonically increasing number to each value in the set.
RANK() and ROW_NUMBER() differ only if there are ties: tied rows get the same rank, and the rank then jumps for the following row.
DENSE_RANK() also gives tied rows the same rank, but the rank increases by one for each distinct value, with no gaps.
NTILE(n) splits the records into n groups based on the ordering in the OVER() clause and assigns a group number to each row.

 

What is OLAP?
Online Analytical Processing (OLAP) allows us to access aggregated and organized data from business data sources, such as data warehouses, in a multidimensional structure called a cube. The arrangement of data into cubes avoids a limitation of relational databases which are not well suited for near instantaneous analysis of large amounts of data.

What is OLTP?
Online Transaction Processing (OLTP) relational databases are optimal for managing changing data. When several users are performing transactions at the same time, OLTP databases are designed to let transactional applications write only the data needed to handle a single transaction as quickly as possible.

 

What is SQL Server Agent?
SQL Server Agent is a Microsoft Windows service that executes scheduled administrative tasks called jobs.
The following roles are provided for working with SQL Server Agent:
·         SQLAgentUserRole
·         SQLAgentReaderRole
·         SQLAgentOperatorRole
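As a rough sketch of creating a job (the job name, database, table, and command here are assumptions for illustration), the msdb system procedures can be used:

-- Create the job, add one T-SQL step, and attach the job to the local server
EXEC msdb.dbo.sp_add_job       @job_name = N'NightlyPurge'

EXEC msdb.dbo.sp_add_jobstep   @job_name = N'NightlyPurge',
                               @step_name = N'Delete old audit rows',
                               @subsystem = N'TSQL',
                               @database_name = N'MyDatabase',
                               @command = N'DELETE FROM dbo.AuditLog WHERE LoggedOn < DATEADD(day, -90, GETDATE())'

EXEC msdb.dbo.sp_add_jobserver @job_name = N'NightlyPurge', @server_name = N'(LOCAL)'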

 

Replication is a set of technologies for copying and distributing data and database objects from one database to another and then synchronizing between databases to maintain consistency.

Database Mirroring – Database Mirroring is moving the transactions of database from one SQL Server database to another SQL server database on a different SQL Server.

 

Checkpoint
Work done in SQL Server is not committed directly to the database files. For example, if you are deleting a large number of records and the power goes off part way through, the transaction is not committed; recovery restarts from the last checkpoint, so work already hardened to the data files does not have to be redone from scratch.
All operations are first recorded in the transaction log and only then applied to the main database. A checkpoint is the point at which SQL Server writes the modified pages to the main database; without checkpoints the log files would simply keep filling up. You can use the CHECKPOINT command to force this flush yourself.
Stopping SQL Server can take a long time, partly because a checkpoint is also fired during shutdown.
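Issuing a checkpoint manually is a one-line command:

-- Flush all dirty pages of the current database to disk
CHECKPOINT

-- Optionally ask SQL Server to finish the checkpoint within roughly 10 seconds
CHECKPOINT 10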


SQL Server - Query to Get the Size of Data

March 17 2013 , Written by Balavardhan Reddy Published on #SQL Server2005

Query to get the size of a column data for each row.

Select DATALENGTH(<<ColumnName>>) from <<TableName>>

Query to get the total data size of a column for each group.

Select <<ColumnName>>, SUM(DATALENGTH(<<ColumnName>>)) from <<TableName>> Group by <<ColumnName>>
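For example, against a hypothetical Customers table:

-- Bytes stored in CompanyName for each row
Select CompanyName, DATALENGTH(CompanyName) AS SizeInBytes
from dbo.Customers

-- Total bytes stored in CompanyName per country
Select Country, SUM(DATALENGTH(CompanyName)) AS TotalBytes
from dbo.Customers
Group by Country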


Query to remove the matching expression SQLServer

October 12 2012 , Written by Balavardhan Reddy Published on #SQL Server2005

A simple query to remove a particular pattern from an entire string, much like a regular expression in ASP.NET.

Query to strip the characters matching the expression from a string:

Declare @myExpression varchar(50)
Declare @queryString varchar(max)

-- Example input string and the set of characters to strip
set @queryString = '2fsfse54646vdf00_*gf,,dg45$'
set @myExpression = 'a-z*$,'

-- Wrap the character set in a wildcard pattern: '%[a-z*$,]%'
SET @myExpression = '%[' + @myExpression + ']%'

-- Remove matching characters one at a time until none remain
WHILE PatIndex(@myExpression, @queryString) > 0
    SET @queryString = Stuff(@queryString, PatIndex(@myExpression, @queryString), 1, '')

Select @queryString

Result
25464600_45

FOR XML - SQL Server

June 9 2010 , Written by Balavardhan Reddy Published on #SQL Server2005

Introduction

In Microsoft SQL Server 2000, we introduced the FOR XML clause to the SELECT statement. This clause provides the ability to aggregate the relational rowset returned by the SELECT statement into XML. FOR XML on the server supports three modes—RAW, AUTO, and EXPLICIT—that provide different transformation semantics.


RAW
mode generates single elements, which are named row, for each row returned.


The AUTO mode employs a heuristic to infer a simple, one element name-per-level hierarchy based on the lineage information and the order of the data in a SELECT statement.


Finally, the EXPLICIT mode requires a specific rowset format that can be mapped into almost any XML shape, while still being formulated by a single SQL query.

The EXPLICIT mode format is highly successful in achieving its goals. There are only a few things it cannot do (arbitrary recursive part-list trees are one of them). However, the SQL expression required to generate the rowset format is a formidable "query from hell."

Unfortunately, FOR XML results in SQL Server 2000 can only be consumed on the client side, and authoring complex XML structures using FOR XML EXPLICIT is a complex exercise.

 

In SQL Server 2005, we now have alternatives to the "query from hell." In the following, we will take a look at the main features added to FOR XML in SQL Server 2005: Integration with the XML data type, assignability and nesting of expressions, and the new PATH mode.

Integration with the XML Data Type

With the introduction of the XML data type, we wanted to also give FOR XML the ability to generate an instance of XML directly (more precisely, it generates a single row, single column rowset where the cell contains the XML data type instance).

 
SELECT * FROM Employee FOR XML AUTO, TYPE


Returns the Employee elements as an XML data type instance, instead of the nvarchar(max) instance that would have been the case without the TYPE directive.

This result is guaranteed to conform to the well-formedness constraints provided by the XML data type. Since the result is an XML data type instance, you can also use XQuery expressions to query and reshape the result. For example, the following expression retrieves the Employee Name into a new Person element.

 
SELECT (SELECT * FROM Employee FOR XML AUTO, TYPE).query(
 '<doc>{ 
  for $c in /Employee 
  return  
  <Person name="{data($c/@employeename)}"/> 
  }</doc>') 

returns (only first elements shown),

 <doc> 
  <Person name="Maria Anders" /> 
  <Person name="Ana Trujillo" /> 
  <Person name="Antonio Moreno" /> 
  ... 
 </doc> 

Assigning FOR XML Results

The result of a FOR XML query can be assigned to a variable, or inserted into a column.

 
DECLARE @cust XML;
 SET @cust = (SELECT * FROM Employee FOR XML AUTO, TYPE) 
 CREATE TABLE T(i int, x XML) 
 INSERT INTO T SELECT 1, (SELECT * FROM Employee FOR XML AUTO, TYPE) 
   

Nesting of FOR XML Expressions

FOR XML, in SQL Server 2005, recognizes XML data type columns, and will inline them as sub-elements. Thus, we can nest FOR XML queries to generate hierarchies, instead of having to rely on the AUTO mode heuristic, or writing an EXPLICIT mode query.

Let's look at the example.

 
SELECT 1 as TAG,
  NULL as Parent, 
  CustomerID as "Customer!1!CustomerID", 
  NULL as "Order!2!OrderID", 
  NULL as "Employee!3!LastName" 
 FROM Customers 
 UNION ALL 
 SELECT 2, 
  1, 
  Customers.CustomerID, 
  Orders.OrderID, 
  NULL 
 FROM Orders  
 JOIN Customers ON Orders.CustomerID = Customers.CustomerID 
 UNION ALL 
 SELECT DISTINCT 3, 
  1, 
  Customers.CustomerID, 
  NULL, 
  Employees.LastName 
 FROM Customers  
 JOIN Orders ON Customers.CustomerID = Orders.CustomerID 
 JOIN Employees ON Orders.EmployeeID = Employees.EmployeeID 
 ORDER BY "Customer!1!CustomerID","Employee!3!LastName","Order!2!OrderID" 
 FOR XML EXPLICIT 
   

returns (only first customer shown),

 
<Customer CustomerID="ALFKI">
  <Order OrderID="10643" /> 
  <Order OrderID="10692" /> 
  <Order OrderID="10702" /> 
  <Order OrderID="10835" /> 
  <Order OrderID="10952" /> 
  <Order OrderID="11011" /> 
  <Employee LastName="Davolio" /> 
  <Employee LastName="Leverling" /> 
  <Employee LastName="Peacock" /> 
  <Employee LastName="Suyama" /> 
 </Customer> 
 ... 

As you can see, we need a select statement for each element. We also repeat the parent's identification with every child, so that the order by groups the children with their parents. The streaming serialization from the rowset to XML depends on this grouping to get the correct nesting.

Now let's look at how we can rewrite this by using nesting FOR XML expressions. We can leverage the new TYPE directive to generate XML data type instances (otherwise, you will get a textual result that will be entitized if it is embedded in another FOR XML query) and nest sub selections to define the hierarchy. We now rewrite the previous EXPLICIT mode query using the AUTO mode and nesting:

 
SELECT CustomerID as "CustomerID",
  (SELECT OrderID as "OrderID" 
  FROM Orders "Order" 
  WHERE "Order".CustomerID = Customer.CustomerID 
  FOR XML AUTO, TYPE), 
  (SELECT DISTINCT LastName as "LastName" 
  FROM Employees Employee 
  JOIN Orders "Order" ON "Order".EmployeeID = Employee.EmployeeID 
  WHERE Customer.CustomerID = "Order".CustomerID 
  FOR XML AUTO, TYPE) 
 FROM Customers Customer 
 FOR XML AUTO, TYPE 


This returns the same result as the EXPLICIT mode query. Obviously, this query is easier to write, understand, and maintain, even though it contains basically the same number of select statements and join conditions.

The New PATH Mode

The PATH mode allows you to use an XPath-like syntax as a column name, which then is mapped into an attribute (e.g., "@a"), element (e.g., "e"), sub element structure ("e1/e2"), element content ("*"), text node ("text()"), or data value ("data()"). As with the RAW mode, the default name for the row element is row and can be overwritten with an NCName (a name without a prefix).

Let us give the PATH mode formulation of the EXPLICIT mode query above; by now we know why the EXPLICIT mode is sometimes called the "query from hell."

 
SELECT CustomerID as "@CustomerID",
  (SELECT OrderID as "@OrderID" 
  FROM Orders 
  WHERE Orders.CustomerID = Customers.CustomerID 
  FOR XML PATH('Order'), TYPE), 
  (SELECT DISTINCT LastName as "@LastName" 
  FROM Employees 
  JOIN Orders ON Orders.EmployeeID = Employees.EmployeeID 
  WHERE Customers.CustomerID = Orders.CustomerID 
  FOR XML PATH('Employee'), TYPE) 
 FROM Customers 
 FOR XML PATH('Customer') 


This example is similar to the AUTO mode version and returns the same result. PATH mode also makes it easy to mix attributes with elements and to nest values into sub-element structures, as the following query shows.

 
SELECT CustomerID as "@CustomerID",
  CompanyName, 
  Address as "address/street", 
  City as "address/city", 
  Region as "address/region", 
  PostalCode as "address/zip", 
  Country as "address/country", 
  ContactName as "contact/name", 
  ContactTitle as "contact/title", 
  Phone as "contact/phone",  
  Fax as "contact/fax" 
 FROM Customers 
 FOR XML PATH('Customer'), ROOT('doc') 

This query results in the following document

 <doc> 
  <Customer CustomerID="ALFKI"> 
  <CompanyName>Alfreds Futterkiste</CompanyName> 
  <address> 
  <street>Obere Str. 57</street> 
  <city>Berlin</city> 
  <zip>12209</zip> 
  <country>Germany</country> 
  </address> 
  <contact> 
  <name>Maria Anders</name> 
  <title>Sales Representative</title> 
  <phone>030-0074321</phone> 
  <fax>030-0076545</fax> 
  </contact> 
  </Customer> 
  ... 
 </doc> 


What would this query look like if it used the EXPLICIT mode? We would need four select clauses, one for each non-leaf element, instead of only one.

Last but not least, the following gives us an example of generating a value list, and shows the use of a text node.

 
SELECT CustomerID as "@ID",
  (SELECT OrderID as "data()" 
  FROM Orders 
  WHERE Customers.CustomerID=Orders.CustomerID 
  FOR XML PATH('') 
  ) as "@OrderIDs", 
  CompanyName, 
  ContactTitle as "ContactName/@ContactTitle", 
  ContactName as "ContactName/text()", 
  PostalCode as "Address/@ZIP", 
  Address as "Address/Street", 
  City as "Address/City" 
 FROM Customers 
 FOR XML PATH('Customer') 
   

This creates a result of the form (showing one example customer),

 <Customer ID="HUNGC" OrderIDs="10375 10394 10415 10600 10660"> 
  <CompanyName>Hungry Coyote Import Store</CompanyName> 
  <ContactName  
  ContactTitle="Sales Representative">Yoshi Latimer</ContactName> 
  <Address ZIP="97827"> 
  <Street>City Center Plaza 516 Main St.</Street> 
  <City>Elgin</City> 
  </Address> 
 </Customer> 


The sub query, which generates the OrderIDs attribute list, maps the OrderID column values as atomic values (using the path data()). These values will then be serialized as a text node by adding a space between sibling atomic values, which are provided in the next cell in the rowset. We then avoid generating a name for the row by using the zero-length string as the PATH mode argument, so that we get a single string (note, there is no TYPE directive!) as the result of the FOR XML PATH expression. That string gets mapped to the OrderIDs attribute by the containing FOR XML expression.

The CompanyName gets mapped to a sub element of the same name.

The ContactTitle produces the ContactTitle attribute of the ContactName element, while the ContactName column value is mapped into the text node of the same element. Note that in this case, the same result would have been achieved by mapping ContactName to the ContactName element directly.

Finally, the properties for the Address element parts are put together.

More FOR XML Extensions

Besides the new features listed above, SQL Server 2005 provides the following new features:

RAW mode can be combined with the ELEMENTS directive, and take a parameter to overwrite the row element name. For example,

 
SELECT * FROM Customers FOR XML RAW('Customer'), ELEMENTS

returns (only showing the first customer),

 
<Customer>
  <CustomerID>ALFKI</CustomerID> 
  <CompanyName>Alfreds Futterkiste</CompanyName> 
  <ContactName>Maria Anders</ContactName> 
  <ContactTitle>Sales Representative</ContactTitle> 
  <Address>Obere Str. 57</Address> 
  <City>Berlin</City> 
  <PostalCode>12209</PostalCode> 
  <Country>Germany</Country> 
  <Phone>030-0074321</Phone> 
  <Fax>030-0076545</Fax> 
 </Customer> 
   

The ELEMENTS directive provides an XSINIL option to map NULL values to an element with an attribute xsi:nil="true". For example,

 
SELECT * FROM Customers WHERE Region is null
 FOR XML PATH('Customer'), ELEMENTS XSINIL 

returns (only showing the first customer),

 
<Customer xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <CustomerID>ALFKI</CustomerID> 
  <CompanyName>Alfreds Futterkiste</CompanyName> 
  <ContactName>Maria Anders</ContactName> 
  <ContactTitle>Sales Representative</ContactTitle> 
  <Address>Obere Str. 57</Address> 
  <City>Berlin</City> 
  <Region xsi:nil="true" /> 
  <PostalCode>12209</PostalCode> 
  <Country>Germany</Country> 
  <Phone>030-0074321</Phone> 
  <Fax>030-0076545</Fax> 
 </Customer> 
   

A new in-line schema inference directive XMLSCHEMA that takes a target namespace URI as an optional argument has been added for the RAW and AUTO modes. For example,

 
SELECT * FROM Customers FOR XML RAW('Customer'), XMLSCHEMA('urn:example.com')

returns (only showing parts of the schema and data),

 
<xsd:schema targetNamespace="urn:example.com"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema"  
  xmlns:sqltypes= 
  "http://schemas.microsoft.com/sqlserver/2004/sqltypes"  
  elementFormDefault="qualified"> 
  <xsd:import namespace= 
  "http://schemas.microsoft.com/sqlserver/2004/sqltypes" /> 
  <xsd:element name="Customer"> 
  ... 
  </xsd:element> 
 </xsd:schema> 
 <Customer xmlns="urn:example.com" CustomerID="ALFKI" CompanyName="Alfreds Futterkiste" ContactName="Maria Anders" ContactTitle="Sales Representative" Address="Obere Str. 57" City="Berlin" PostalCode="12209" Country="Germany" Phone="030-0074321" Fax="030-0076545" /> … 

SQL Server Rank functions, RANK, NTILE, DENSE_RANK, ROW_NUMBER

April 20 2010 , Written by Balavardhan Reddy Published on #SQL Server2005

Transact-SQL provides the following ranking functions:

-RANK
-NTILE
-DENSE_RANK
-ROW_NUMBER

Rank() - assigns a monotonically increasing number to each value in the set.
Row_Number() - simply assigns sequential numbering to the records of a result-set, or to the records within groups of a result-set.
Rank() and Row_Number() differ only if there are ties.
Dense_Rank() - like Rank(), but the rank increases by one for each distinct value, with no gaps.
NTILE(n) - splits the records into n groups based on the ordering in the Over() clause and assigns a group number to each row.

OVER clause is required in all the ranking functions and with that you specify the partitioning and ordering of records before the ranking functions are evaluated. If you don't specify it, you will get an error similar to "Incorrect syntax near 'ROW_NUMBER', expected 'OVER'."
ORDER BY clause is mandatory and if you don't specify it you will get an error similar to "The ranking function "ROW_NUMBER" must have an ORDER BY clause.". With this clause you specify a column or a set of columns which will be used to order the records within a result-set or within record groups/partitions of a result-set. Please note you can specify only those columns which are being made available by the FROM clause of the query.

[Image: sqlrank - example output of the SQL Server ranking functions]
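As a rough sketch (the Orders table, its columns, and the partitioning choice are only illustrative assumptions), the four functions can be compared side by side in one query:

SELECT CustomerID,
       OrderID,
       ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY OrderID) AS SeqPerCustomer,
       RANK()       OVER (ORDER BY CustomerID)                      AS RankNo,
       DENSE_RANK() OVER (ORDER BY CustomerID)                      AS DenseRankNo,
       NTILE(4)     OVER (ORDER BY OrderID)                         AS Quartile
FROM dbo.Orders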


Time Delay to Execute a T-SQL Query

March 17 2010 , Written by Balavardhan Reddy Published on #SQL Server2005




A handy feature in SQL Server is the time delay for executing a T-SQL query:
SQL Server will wait for a particular time before executing the query.

Ex :
i) SELECT GETDATE() CurrentTime
WAITFOR DELAY '00:00:20' ---- 20 Second Delay
SELECT GETDATE() CurrentTime


Ex:
ii)
DECLARE @delaytime DATETIME
SET @delaytime = DATEADD(s,20,GETDATE())
SELECT GETDATE() CurrentTime
WAITFOR TIME @delaytime
SELECT GETDATE() CurrentTime




Defragmenting Indexes in SQL Server

February 5 2010 , Written by Balavardhan Reddy Published on #SQL Server2005

When data is inserted, deleted, or updated in SQL Server tables, the indexes defined on those tables are automatically updated to reflect the changes. As the indexes are modified, the information stored in them becomes fragmented, with the data scattered across the data files. When this occurs, the logical ordering and the physical ordering of the data no longer match, which can lead to a deterioration in query performance.

To fix this problem, you should reorganize or rebuild (i.e. defragment) the indexes periodically, so that the physical order of the leaf-level pages matches the logical order of the leaf nodes. That means you should first analyze your indexes.

To analyze indexes in SQL Server 2005 or 2008, use the system function sys.dm_db_index_physical_stats to determine which indexes are fragmented and the extent of that fragmentation. You can use the function to analyze all indexes in an instance, all indexes in a database, all indexes defined on a table, or a specific index.

The sys.dm_db_index_physical_stats function takes the following parameters:
  • Database ID: The ID number of a database (smallint value). If null is specified, the function retrieves index-related data for all databases on that SQL Server instance. If you specify null, you must also specify null for the object ID, index ID, and partition number.
  • Object ID: The ID number of a table or view (int value). If null is specified, the function retrieves index-related data for all the tables and views of the specified database or SQL Server instance. If you specify null, you must also specify null for the index ID and partition number.
  • Index ID: The ID number of an index (int value). If null is specified, the function retrieves index-related data for all indexes defined on the specified table or view. If you specify null, you must also specify null for the partition number. Also, if the object ID refers to a heap, use 0 as the index ID.
  • Partition number: The partition number of an index or heap (int value). If null is specified, the function retrieves index-related information for all partitions related to a specific object.
  • Mode: The scan level used to obtain index-related information. Valid inputs are NULL, DEFAULT, or one of the three modes LIMITED, SAMPLED, and DETAILED.
You must specify all five parameters, even if their values are null.

The sys.dm_db_index_physical_stats function returns a number of values that provide details about the indexes you specify:
  • avg_fragmentation_in_percent: Percentage of the logical index that is fragmented.
  • fragment_count: Number of fragments in the leaf level.
  • avg_fragment_size_in_pages: Average number of pages in a leaf-level fragment.
  • page_count: Number of index or data pages, etc.
Based on this index analysis, we can determine what action to take. Microsoft recommends reorganizing an index when its avg_fragmentation_in_percent value is less than or equal to 30%, and rebuilding the index when the value is greater than 30%.

We should still analyze the indexes regularly, and the sys.dm_db_index_physical_stats function is the best tool to use. So let’s take a look at an example of how to use the function to retrieve index-related statistics

SELECT
    object_id AS ObjectID, index_id AS IndexID, avg_fragmentation_in_percent AS PercentFragment,
    fragment_count AS TotalFrags, avg_fragment_size_in_pages AS PagesPerFrag,  page_count AS NumPages
    FROM
    sys.dm_db_index_physical_stats(DB_ID('CSCD'), NULL, NULL, NULL , 'DETAILED')
    WHERE
    avg_fragmentation_in_percent > 0


The result looks like the following:

ObjectID      IndexID   PercentFragment     TotalFrags   PagesPerFrag         NumPages
76579361      1         46.8271621091195    3594         2.12659988870339     7643
76579361      1         39.2857142857143    28           1                    28
76579361      2         60.071359065845     1865         1.65308310991957     3083
76579361      2         44.4444444444444    31           1.16129032258065     36
252579988     1         44                  13           1.92307692307692     25

etc....


Here I used the sys.dm_db_index_physical_stats function to retrieve the four values described above, plus the object_id and index_id values. If the database has many indexes, the result set can grow quite large, so you might want to narrow it down by supplying the Object ID and other parameter values:

SELECT
    b.name AS IndexName, a.avg_fragmentation_in_percent AS PercentFragment,
    a.fragment_count AS TotalFrags,a.avg_fragment_size_in_pages AS PagesPerFrag,
    a.page_count AS NumPages
    FROM
    sys.dm_db_index_physical_stats(DB_ID('CSCT'),OBJECT_ID('Data_collection'), NULL, NULL , 'DETAILED') AS a
    JOIN sys.indexes AS b
    ON a.object_id = b.object_id
    AND a.index_id = b.index_id
    WHERE a.avg_fragmentation_in_percent > 0



The result looks like the following:


IndexName                PercentFragment     TotalFrags       PagesPerFrag            NumPages
IX_01                           60.071359065845        1865               1.65308310991957        3083
IX_01                           44.4444444444444       31                  1.16129032258065        36
PK_DataEvaluation       46.8271621091195       3594               2.12659988870339        7643
PK_DataEvaluation       39.2857142857143       28                  1                                  28



Reorganizing Indexes

When we reorganize an index, SQL Server physically reorders the leaf-level pages to match the logical order of the leaf nodes. The process uses the existing pages only and does not allocate new ones, but it does compact the index pages. In addition, reorganization uses minimal resources and is automatically performed online, without blocking queries or updates. You should reorganize indexes only if they're lightly fragmented; otherwise, you should rebuild them.

To reorganize an index, run an ALTER INDEX statement and include the keyword REORGANIZE

ALTER INDEX PK_DataEvaluation
  ON CSCT.dbo.Data_evaluation
REORGANIZE




Rebuilding Indexes

Rebuilding an index is generally more effective than reorganizing it because it drops the original index and builds a new one. As a result, the index starts clean with minimal fragmentation and the pages are compacted, and new pages are allocated as needed. In addition, you can choose to rebuild an index offline (the default) or online.

Rebuilding an index can be as simple as reorganizing one. Once again, use the ALTER INDEX statement, but specify the REBUILD keyword, rather than REORGANIZE,



ALTER INDEX IX_01
  ON CSCT.dbo.Data_evaluation
REBUILD


When you rebuild an index, you can include a WITH clause after the REBUILD keyword. In the WITH clause, you can specify one or more options that define how the index will be rebuilt

ALTER INDEX IX_01
  ON CSCT.dbo.Data_evaluation
REBUILD WITH (
  FILLFACTOR = 70,
  ONLINE = ON
)


In this case, the FILLFACTOR option is set to 70, which determines the percentage to which each leaf-level index page will be filled during index creation or alteration. As a result, the leaf level will be filled to only 70%. The ONLINE option is set to ON, which means that the Data_evaluation table and the IX_01 index will remain available for queries and data modification during the index operation.
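Putting the 30% rule together, the following is a rough sketch (not a production script; the 5% floor and LIMITED scan mode are assumptions) that walks the fragmented indexes of the current database and reorganizes or rebuilds each one accordingly:

-- Apply the 30% rule to every fragmented index in the current database
DECLARE @schema sysname, @table sysname, @index sysname, @frag float, @sql nvarchar(max)

DECLARE frag_cur CURSOR FOR
    SELECT s.name, t.name, i.name, ps.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i ON ps.object_id = i.object_id AND ps.index_id = i.index_id
    JOIN sys.tables  AS t ON t.object_id = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE ps.avg_fragmentation_in_percent > 5 AND i.name IS NOT NULL   -- skip heaps and barely fragmented indexes

OPEN frag_cur
FETCH NEXT FROM frag_cur INTO @schema, @table, @index, @frag
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Rebuild above 30% fragmentation, otherwise reorganize
    SET @sql = N'ALTER INDEX ' + QUOTENAME(@index) + N' ON '
             + QUOTENAME(@schema) + N'.' + QUOTENAME(@table)
             + CASE WHEN @frag > 30 THEN N' REBUILD' ELSE N' REORGANIZE' END
    EXEC sp_executesql @sql
    FETCH NEXT FROM frag_cur INTO @schema, @table, @index, @frag
END
CLOSE frag_cur
DEALLOCATE frag_cur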

Another method you can use to rebuild an index is the CREATE INDEX statement. To use this statement, you must create an index with the same name and key columns as the original and include the DROP_EXISTING option in the WITH clause, as shown in the following example:

CREATE UNIQUE NONCLUSTERED INDEX IX_01
ON CSCT.dbo.Data_evaluation (Name)   -- the key column list must match the index being rebuilt (column shown is a placeholder)
WITH (
DROP_EXISTING = ON,
FILLFACTOR = 70,
ONLINE = ON
) ON [PRIMARY]

In this case, I'm rebuilding the IX_01 index, which is a unique, nonclustered index. Notice that the DROP_EXISTING option is set to ON, which means that the original index will be dropped and a new one created.







Should we make SQL queries case sensitive?

December 8 2009 , Written by Balavardhan Reddy Published on #SQL Server2005

Should we make SQL queries case sensitive? Yes, we should.

There are several ways to get data from SQL Server with case-sensitive comparisons.

Ex :
Create table mytab
(id int,
 Name varchar(50))


Insert into mytab values (1,'aaaa')
Insert into mytab values (1,'AAaa')
Insert into mytab values (1,'aAaa')
Insert into mytab values (1,'aaAA')
Insert into mytab values (1,'aaaA')


If you installed SQL Server with the default collation options you will get the same result for the below queries


Select * from mytab where Name = 'aaaa'    Output : 5 Rows
Select * from mytab where Name = 'AAaa'    Output : 5 Rows
Select * from mytab where Name = 'aAaa'    Output : 5 Rows


Alter the above queries by forcing a collation at the column level to get case-sensitive results:

Select * from myTab where Name COLLATE Latin1_General_CS_AS = 'aaaa'

This returns only one row, because the comparison is case sensitive.


If you want to set the collation on the table itself, you need to alter the table as follows.


i) First, find the current collation identifier of the column
    Ex : Exec sp_help 'myTab'
   
The Output will be

    Column Name   Type           Collation
    Id            Int               NULL
    Name         Varchar    SQL_Latin1_General_CP1_CI_AS


Alter the column with a case-sensitive collation identifier

ALTER TABLE myTab
    ALTER COLUMN Name VARCHAR(50)
    COLLATE SQL_Latin1_General_CP1_CS_AS

After altering the table, you will also get case-sensitive results for the queries below.
   

Select * from mytab where Name = 'aaaa'    Output : 1 Row
Select * from mytab where Name = 'AAaa'    Output : 1 Row
Select * from mytab where Name = 'aAaa'    Output : 1 Row
   








CASE Statement in Update Queries

September 7 2009 , Written by Balavardhan Reddy Published on #SQL Server2005

We can use a CASE expression in UPDATE queries. Instead of multiple UPDATE queries, you can implement multiple conditions with a CASE expression to update a table or get a customized result.

Ex : We have a table called Tmp_Tab with columns (PRIORITY, IsUpdated, IsNeedtoDelete)

Current Data In Tmp_Tab

PRIORITY       IsUpdated       IsNeedtoDelete
HIGH               YES               NO
LOW                NO                1
HIGH               0                 YES
0                  NO                1


Execute the Below Update  statement :

UPDATE Tmp_Tab SET PRIORITY =
                        (CASE
                            WHEN PRIORITY = 'HIGH' THEN '0'
                            WHEN PRIORITY = 'MEDIUM' THEN '1'
                            WHEN PRIORITY = 'LOW' THEN '2'
                            ELSE PRIORITY  END
                             ),
                      IsUpdated =
                        (CASE
                            WHEN IsUpdated   = 'YES' THEN '1'
                            WHEN IsUpdated   = 'NO' THEN '0'
                            ELSE IsUpdated   END
                             ),
                      IsNeedtoDelete =
                        (CASE
                            WHEN IsNeedtoDelete = 'YES' THEN '1'
                            WHEN IsNeedtoDelete = 'NO' THEN '0'
                            ELSE IsNeedtoDelete END
                             )



After Execution of the Query

PRIORITY               IsUpdated               IsNeedtoDelete
0                        1                          0
2                        0                          1
0                        0                          1
0                        0                          1

Best Tips to write a Stored Procedure

August 11 2009 , Written by Balavardhan Reddy Published on #SQL Server2005


Good tips for writing stored procedures and improving their performance. Below are general rules for writing a good stored procedure.
 

  • The sp_ prefix :  Don't use the "sp_" prefix in a user-created stored procedure name, as the "sp_" prefix is reserved for system stored procedures. Any stored procedure that has the "sp_" prefix is looked up in the MASTER database first; if a stored procedure with the same name exists in both the user database and a system database, only the one in the system database gets executed.

  • Use as few variables as possible; using more variables can require more procedure cache space.

  • SET NOCOUNT ON : By default SQL Server returns a message showing the number of rows affected by each SQL statement. This causes extra network traffic and can have some impact on performance when the procedure is called frequently, so suppress it with SET NOCOUNT ON at the top of the procedure.

  • Provide fully qualified names when calling objects, like <DbName>.<SchemaName>.<ObjectName>,
    instead of providing only the object name, because an unqualified name causes an extra trip to the procedure cache to resolve the execution plan. Also try to use the schema name while creating a procedure.

  • SELECT statements - Use only the required columns in the SELECT clause instead of *, which returns all columns, including unnecessary ones.

  • SELECT vs SET - A single SELECT statement can assign values to different variables and is much faster than multiple SET statements assigning values to multiple different variables.

    Declare @a int
    Declare @b int

    Select @a=emp_id,@b=Job_id from employee where email_id = '<search string>'

    instead of

    Set @a = (select emp_id from employee where email_id = '<search string>')
    set @b = (select job_id from employee where email_id = '<search string>')

  • SQL-92 - Always try to use ANSI-92 syntax. The old join syntax still works for now, but it will be deprecated in a future release of MS SQL Server. As an example, for joining, use

    ex : select * from employee E
           inner join Jobs J
           on E.Job_id = J.Job_id

          instead of
          
          select * from employee E, Jobs J Where E.Job_id = J.Job_id

  • Try to avoid the IN clause. When checking for the existence of values, use EXISTS instead of IN, because IN considers NULL values as well and is slower than EXISTS: EXISTS returns only a Boolean (yes/no) result, whereas IN has to process the whole result set. For example:
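    Using the employee/Jobs tables from the example above, the EXISTS form looks like this (a sketch only):

    -- EXISTS: stops at the first matching row
    select e.emp_id
    from employee e
    where exists (select 1 from Jobs j where j.Job_id = e.Job_id)

    -- Equivalent IN form, which has to build the full list of Job_id values
    select e.emp_id
    from employee e
    where e.Job_id in (select j.Job_id from Jobs j)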
  • Avoid DISTINCT/ORDER BY clauses : If you don't need DISTINCT or ORDER BY, leave them out. Unnecessary DISTINCT or ORDER BY clauses cause extra work for the database and make performance slower.
  • WHERE clauses - In a WHERE clause, the comparison operators you use directly affect how fast a query can run. Here are the operators used in the WHERE clause, ordered roughly from best to worst performance:
        =, >, <, >=, <=, <>, !=, !>, !<
  • Avoid unnecessary conditions in the WHERE Clause.
  • Avoid using cursors - Put the required result set into a temporary table or table variable, then iterate over it with a WHILE loop, for example:
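    A rough sketch of that pattern (the table and key names are hypothetical; the per-row work is left as a comment):

    -- Hold the keys to process in a table variable with a sequential row number
    declare @Work table (RowNum int identity(1,1) primary key, emp_id int)
    insert into @Work (emp_id) select emp_id from employee

    declare @i int, @max int, @emp int
    set @i = 1
    select @max = max(RowNum) from @Work

    while @i <= @max
    begin
        select @emp = emp_id from @Work where RowNum = @i
        -- ... per-row processing using @emp ...
        set @i = @i + 1
    end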
  • Subqueries vs JOINs - Most subqueries can be expressed as an equivalent JOIN. A subquery can be easier to write when data has to be retrieved from a large number of tables, because joining many tables becomes tedious, while a JOIN is generally faster when fewer tables are involved. But try to avoid correlated subqueries, because they make the query much slower.
  • Try to use table variables instead of Temporary Tables - Temp tables can cause stored procedures to recompile. But table variables were designed specifically to guard against stored procedure recompiles during execution.
  • Try to minimize the Dynamic Queries.
  • Last but not least, use proper indexes - Indexes are the most important factor in retrieving data as fast as possible. Correct indexes give good results: index scans are much faster than table scans, so identify the table scans in your execution plans. But when a table returns only a few rows, a table scan can be better.

Read more