Learn MS SQL Development from the Best Tutors
Lesson Posted on 03/08/2018 Learn MS SQL Development
What's the best way to learn SQL Server?
Manoj Kumar Vishwakarma
I hold an MCA (Master of Computer Applications), completed as a regular course, and have 20 years of IT teaching experience in technical...
Just read the book, type and run the code, and work the chapter examples. If you work the book, you will be ahead of many SQL Server programmers.
You’ll understand declarative set-based interaction with SQL Server and know many critical language elements and built-in functions.
The first couple of chapters address SQL Server internals, DDL, and basic normalisation of relational data for applications. It doesn't cover other relational data structures such as star schemas.
After that, it dives into solid querying patterns and eventually gets into DML.
Some object-oriented programmers have difficulty writing good declarative, set-based T-SQL code. In my experience, programmers who are fluent in object-oriented programming and scripting, and also fluent in T-SQL's declarative set-based fundamentals, build apps with fewer performance bottlenecks and less rework.
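To make the contrast concrete, here is a minimal sketch (the table and column names are hypothetical) of the same price update written procedurally, row by row, and then as a single set-based statement:

```sql
-- Procedural, row-by-row: the habit OO programmers often carry over.
DECLARE @id INT
DECLARE c CURSOR FOR SELECT ID FROM dbo.Product WHERE CategoryID = 3
OPEN c
FETCH NEXT FROM c INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Product SET Price = Price * 1.10 WHERE ID = @id
    FETCH NEXT FROM c INTO @id
END
CLOSE c
DEALLOCATE c

-- Declarative, set-based: one statement over the whole set,
-- which SQL Server can optimise as a single operation.
UPDATE dbo.Product SET Price = Price * 1.10 WHERE CategoryID = 3
```

The set-based form is both shorter and gives the optimiser room to choose an efficient plan, which is the point being made above.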
SQL Server administrator
The Microsoft SQL Server administration topic page provides resources for the enterprise SQL Server community, including the latest topics on SQL Server performance and tuning, backup and recovery, security, installation and more. Browse the SQL Server administration subtopics below to find the latest news and expert technical advice on getting started with SQL Server and database administration, with essential tutorials and tips on database modelling and design. I suggest SVR Technologies as a good place to learn SQL Server.
Installation
Performance Tuning
Tools and Utilities
Backup and Recovery
Availability and Scalability
Interoperability
Replication
Security
Stored Procedures
T-SQL
SQL Server developer
The SQL Server database development topic page provides resources for the enterprise DB development community, including the latest topics on stored procedures, SQL and T-SQL, .NET and more. You'll also find expert advice on database modelling and design for Microsoft SQL Server, as well as tips on business intelligence (BI) strategies and data warehousing. Browse the Microsoft SQL Server development subtopics below to find the latest news, expert technical advice, and more.
.NET Development
Database Design and Modeling
XML
SQL Server management issues
The Microsoft SQL Server management issues topic page provides resources for enterprise database managers, including topics on SQL Server migration strategies and licensing considerations. You'll also find details on SQL Server virtualisation with Microsoft Hyper-V and database consolidation.
Business intelligence and data warehousing
The need for organisations to have sound business intelligence and data warehousing solutions is becoming more critical by the day. Microsoft has recognised this by including several SQL Server Business Intelligence (BI) improvements in the latest releases of its database management system, SQL Server.
Data Visualization
Analysis Services
Integration Services
Reporting Services
BI Strategies
Data Warehousing
Answered on 21/04/2018 Learn MS SQL Development
Abdul
Trainer
Lesson Posted on 19/12/2017 Learn MS SQL Development
What Is WorkTable In SQL Server?
Amitava Majumder
I am an experienced Trainer and IT professional with over 14 years of experience in IT Sector and more...
USE AdventureWorks2014
GO
SET STATISTICS IO ON
GO
SELECT *
FROM Production.Product p
CROSS JOIN Production.Product p1
GO
When this runs with STATISTICS IO ON, the messages output typically includes a "Worktable" entry: a worktable is an internal table that SQL Server creates in tempdb to hold intermediate results, such as those produced while processing this large cross join.
Lesson Posted on 26/10/2017 Learn MS SQL Development
Vivek Grover
I am proficient in MS SQL Development. With over 6 years of experience in SQL Development, I have trained...
A derived table is a subquery used in the FROM clause of a SELECT statement to retrieve a set of records. It is a virtual table that exists only within the scope of the query that creates it.
Case Scenario:
Create a table for trainers:
CREATE TABLE CodePicksTrainer(
ID INT IDENTITY(1,1) PRIMARY KEY,
FirstName VARCHAR(50),
LastName VARCHAR(50),
DateOfBirth DATE,
JoiningDate DATE
)
Insert trainer details:
INSERT INTO CodePicksTrainer(FirstName, LastName, DateOfBirth, JoiningDate)
SELECT 'Vivek', 'Grover', '1989-09-01', '2017-10-01'
UNION ALL
SELECT 'Ujjwal', 'Grover', '1994-11-27', '2017-09-10'
UNION ALL
SELECT 'Sahil', 'Arora', '1988-09-28', '2016-05-01'
Create a table for batches taken by trainers:
CREATE TABLE CodePickBatches(
ID INT IDENTITY(1,1),
Name VARCHAR(50),
StartDate DATE,
TrainerID INT FOREIGN KEY REFERENCES CodePicksTrainer(ID)
)
Insert Batches Detail:
INSERT INTO CodePickBatches (Name, StartDate, TrainerID)
SELECT 'SQL Programming', '2017-01-01', (SELECT ID FROM CodePicksTrainer WHERE FirstName = 'Vivek' AND LastName = 'Grover')
UNION ALL
SELECT 'Java Programming', '2017-01-01', (SELECT ID FROM CodePicksTrainer WHERE FirstName = 'Vivek' AND LastName = 'Grover')
UNION ALL
SELECT 'Corel Draw', '2017-01-01', (SELECT ID FROM CodePicksTrainer WHERE FirstName = 'Ujjwal' AND LastName = 'Grover')
UNION ALL
SELECT 'ASP.Net', '2016-06-01', (SELECT ID FROM CodePicksTrainer WHERE FirstName = 'Sahil' AND LastName = 'Arora')
UNION ALL
SELECT 'SQL Programming', '2016-07-01', (SELECT ID FROM CodePicksTrainer WHERE FirstName = 'Sahil' AND LastName = 'Arora')
UNION ALL
SELECT 'R Programming', '2017-04-01', (SELECT ID FROM CodePicksTrainer WHERE FirstName = 'Sahil' AND LastName = 'Arora')
Requirement – One of your clients demands a report in the following format, containing Trainer Name, Joining Date, Date of Birth, and the total number of batches run by each trainer.
Code Build: Our result relies first on the CodePickBatches table, where we aggregate to find the number of batches run by each trainer. Once we have that count, we join with the CodePicksTrainer table to pick up each trainer's details as shown in the Requirement section. So it requires the use of a derived table, an on-the-fly table holding the aggregated data.
SELECT
T.LastName + ', '+ T.FirstName AS TrainerName,
T.DateOfBirth,
T.JoiningDate,
TS.TotalRunningBatches
FROM (
SELECT
COUNT(1) AS TotalRunningBatches, B.TrainerID
FROM CodePickBatches B
GROUP BY B.TrainerID
) AS TS
INNER JOIN CodePicksTrainer T ON T.ID = TS.TrainerID
ORDER BY TotalRunningBatches DESC
In the above code, a derived table named TS is created and joined with the CodePicksTrainer table to produce the expected data. The total running batches are calculated in the derived table (TS) itself. (The derived table is enclosed in parentheses and followed by an AS clause giving its alias.)
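The same result can equivalently be written with a common table expression (CTE), which many find easier to read than a nested derived table. A sketch using the tables above:

```sql
;WITH TS AS (
    SELECT COUNT(1) AS TotalRunningBatches, B.TrainerID
    FROM CodePickBatches B
    GROUP BY B.TrainerID
)
SELECT
    T.LastName + ', ' + T.FirstName AS TrainerName,
    T.DateOfBirth,
    T.JoiningDate,
    TS.TotalRunningBatches
FROM TS
INNER JOIN CodePicksTrainer T ON T.ID = TS.TrainerID
ORDER BY TS.TotalRunningBatches DESC
```

The CTE and the derived table produce identical plans here; the choice is purely one of readability.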
Lesson Posted on 16/09/2017 Learn MS SQL Development
Redbush Technologies Pvt.Ltd
At RedBush technologies,we specialize in providing training on Hadoop, Big Data, SQL DBA, SQL Developer,...
Lesson Posted on 26/08/2017 Learn MS SQL Development
Understanding Indexes In SQL Server
Redbush Technologies Pvt.Ltd
At RedBush technologies,we specialize in providing training on Hadoop, Big Data, SQL DBA, SQL Developer,...
Lesson Posted on 31/07/2017 Learn MS SQL Development
Rename The Logical Name Of SQL Server Database Files Using T-SQL
Amitava Majumder
I am an experienced Trainer and IT professional with over 14 years of experience in IT Sector and more...
Rename the logical name of SQL Server database files using T-SQL:
1. Problem:
SQL Server database files have two names: one is the logical file name and the other is the physical file name. The logical file name is used with some T-SQL commands, such as DBCC SHRINKFILE. If you want to change the logical file names of your database files then this tip will be very useful. This way you can keep your logical database file names unique on your SQL Server instance.
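For example, DBCC SHRINKFILE identifies a file by its logical name, not its physical path. A sketch, using the Test_DB files created later in this tip (target size in MB):

```sql
-- Shrink the file whose LOGICAL name is 'Test_DB_log1' to roughly 100 MB.
-- The physical path (E:\Data\Test_DB_log1.ldf) is never mentioned.
USE Test_DB
GO
DBCC SHRINKFILE ('Test_DB_log1', 100)
GO
```

This is why keeping logical names meaningful and unique pays off: commands like this read unambiguously.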
Since you may want to move this database to a production server, or change the database name for some reason, you may also want to change the names of the database files. You may want the logical file names consistent with the database name, or with any other rules you follow for standardisation.
2. Solution:
It is not mandatory to keep logical file names unique on a SQL Server instance, but ideally we should keep them unique to avoid any confusion.
When we restore a database, the logical database file names don't change and are the same as the source database. So if you are restoring to multiple databases with the same backup file, the logical file names will be the same for all of the restored databases.
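Logical names also surface when restoring: RESTORE ... WITH MOVE identifies each file by its source logical name, and the restored database keeps those logical names. A sketch (the copy name and paths are illustrative, and in this tip's five-log-file example the remaining log files would need their own MOVE clauses):

```sql
RESTORE DATABASE Test_DB_Copy
FROM DISK = 'D:\Data\Backup\Test_DB.bak'
WITH MOVE 'Test_DB'      TO 'E:\Data\Test_DB_Copy.mdf',
     MOVE 'Test_DB_log1' TO 'E:\Data\Test_DB_Copy_log1.ldf'
-- Test_DB_Copy now has different physical files, but the SAME
-- logical names as Test_DB: exactly the duplication described above.
```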
Let's start by creating a database having one data file and five log files; then we will change the logical file names of these database files to show the method step by step.
Rename the logical SQL Server database file name using T-SQL
Step 1: First we will create a database named "Test_DB" with one data file and five log files by running the below T-SQL code. You can change the name of the database, the file paths, file names, sizes and file growth according to your needs.
USE MASTER;
GO
CREATE DATABASE Test_DB
ON
( NAME = Test_DB,
FILENAME = 'E:\Data\Test_DB.mdf',
SIZE = 204800KB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 102400KB )
LOG ON
( NAME = Test_DB_log1,
FILENAME = 'E:\Data\Test_DB_log1.ldf',
SIZE = 204800KB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 102400KB ),
( NAME = Test_DB_log2,
FILENAME = 'E:\Data\Test_DB_log2.ldf',
SIZE = 204800KB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 102400KB ),
( NAME = Test_DB_log3,
FILENAME = 'E:\Data\Test_DB_log3.ldf',
SIZE = 204800KB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 102400KB ),
( NAME = Test_DB_log4,
FILENAME = 'E:\Data\Test_DB_log4.ldf',
SIZE = 204800KB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 102400KB ),
( NAME = Test_DB_log5,
FILENAME = 'E:\Data\Test_DB_log5.ldf',
SIZE = 204800KB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 102400KB ) ;
GO
Step 2: Next I created a backup of the database.
BACKUP DATABASE Test_DB TO DISK = 'D:\Data\Backup\Test_DB.bak'
GO
I ran the below T-SQL command to get the logical file names of the database files for the database Test_DB.
USE Test_DB
GO
SELECT file_id, name as [logical_file_name],physical_name
from sys.database_files
file_id | logical_file_name | physical_name |
1 | Test_DB | E:\Data\Test_DB.mdf |
2 | Test_DB_log1 | E:\Data\Test_DB_log1.ldf |
3 | Test_DB_log2 | E:\Data\Test_DB_log2.ldf |
4 | Test_DB_log3 | E:\Data\Test_DB_log3.ldf |
5 | Test_DB_log4 | E:\Data\Test_DB_log4.ldf |
6 | Test_DB_log5 | E:\Data\Test_DB_log5.ldf |
Step 3: Now we will change the logical file name of each log file of database "Test_DB" from "Test_DB_logN" to "Test_DB_NewN_log.ldf" by running an ALTER DATABASE statement using MODIFY FILE, keyed on the file_id. To modify the logical name of a data file or log file, we specify the logical file name to be renamed in the NAME clause, and the new logical name in the NEWNAME clause. Run the command below to create a procedure to apply this change.
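For a single file, the statement the procedure generates looks like this (here for file_id 2, matching the result table in Step 4):

```sql
-- Rename one log file's logical name in place; no downtime required.
ALTER DATABASE Test_DB
MODIFY FILE (NAME = 'Test_DB_log1', NEWNAME = 'Test_DB_New2_log.ldf')
```

The procedure below simply loops over all log files (type = 1 in sys.database_files) and issues one such statement per file.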
CREATE PROCEDURE Rename_Logical_Name
(
@Database sysname, @New_name sysname
)
AS
BEGIN
DECLARE @cmd nvarchar(200)
DECLARE @ldffile sysname
DECLARE @id sysname
DECLARE theCursor CURSOR
FOR (SELECT name, file_id FROM sys.database_files WHERE type = 1)
OPEN theCursor
FETCH NEXT FROM theCursor INTO @ldffile, @id
WHILE @@FETCH_STATUS = 0
BEGIN
-- Build and run one ALTER DATABASE ... MODIFY FILE per log file
SET @cmd = 'ALTER DATABASE ' + @Database + ' MODIFY FILE (NAME = ''' + @ldffile + ''', NEWNAME = ''' + @New_name + @id + '_log.ldf'')'
EXEC (@cmd)
FETCH NEXT FROM theCursor INTO @ldffile, @id
END
CLOSE theCursor
DEALLOCATE theCursor
END
EXEC Rename_Logical_Name @Database = 'Test_DB',@New_name = 'Test_DB_New'
Step 4: Now we will check the logical file names to verify the change we made in the last step. Run the command below to check the logical names.
SELECT file_id, name as [logical_file_name],physical_name
from sys.database_files WHERE type = 1
We can see the logical file names have been changed without any downtime for the database.
file_id | logical_file_name | physical_name |
2 | Test_DB_New2_log.ldf | E:\Data\Test_DB_log1.ldf |
3 | Test_DB_New3_log.ldf | E:\Data\Test_DB_log2.ldf |
4 | Test_DB_New4_log.ldf | E:\Data\Test_DB_log3.ldf |
5 | Test_DB_New5_log.ldf | E:\Data\Test_DB_log4.ldf |
6 | Test_DB_New6_log.ldf | E:\Data\Test_DB_log5.ldf |
NOTE: Do not change the logical names of any data file of your production database unless it is actually required.
Lesson Posted on 18/07/2017 Learn MS SQL Development
SQL SERVER: How To Pass Parameters To The Stored Procedure?
Amitava Majumder
I am an experienced Trainer and IT professional with over 14 years of experience in IT Sector and more...
SQL SERVER: How To Pass Parameters To The Stored Procedure?
Even after so many years of stored procedures existing, I still see developers struggling to execute them.
The most common mistake developers make is to execute a stored procedure as follows, which generates an error.
EXEC NameofSP (FirstParam,SecondParam)
I can clearly understand why developers prefer the above script. Most programming languages use that syntax for calling a function, so developers often assume it is also the way to call a stored procedure. However, if you try to call an SP with the above syntax, it will give you an error.
There are two different methods to call stored procedures. Let us see them here, however before we do that, let us first create a sample SP which accepts two parameters.
Creating Stored Procedure
-- Create Stored Procedure
create PROCEDURE [dbo].[ShippedOrder] @salesid VARCHAR(50),@custid VARCHAR(50)
AS
SELECT a.[SalesOrderID], a.CustomerID, c.[Name],b.[OrderQty],a.[ShipDate]
FROM [SalesLT].[SalesOrderHeader] a INNER JOIN [SalesLT].[SalesOrderDetail] b
ON a.[SalesOrderID] = b.[SalesOrderID]
INNER JOIN [SalesLT].[Product] c ON b.ProductID = c.ProductID and a.[SalesOrderID]=@salesid and a.CustomerID=@custid
GO
Now let us see two different methods to call SP.
Method 1:
Let us run the following statements, where the parameters are passed in two different orders.
-- Execute SP
EXEC [AdventureWorksLT2012].[dbo].ShippedOrder 71780,30113
GO
EXEC [AdventureWorksLT2012].[dbo].ShippedOrder 30113,71780
GO
When you run the above script, SQL Server assigns the passed values to the procedure's parameters in positional order. So the first statement returns the expected result, but the second returns no output, because the values land on the wrong parameters. If you want to assign a passed value to a specific parameter, you have to name that parameter, as shown in the second method.
Method 2:
In this method, when we execute the stored procedure, we also name the parameter to which we want to assign each value. This is the reliable way to assign a value to a specific parameter of an SP.
-- Execute SP
EXEC AdventureWorksLT2012.dbo.ShippedOrder @salesid = 71780, @custid = 30113
GO
EXEC AdventureWorksLT2012.dbo.ShippedOrder @custid = 30113,@salesid = 71780
GO
Now when you execute this SP, you get the same results in both cases, because we named the parameters when passing the values.
Lesson Posted on 16/06/2017 Learn MS SQL Development
How To Minimize Page Splits In SQL Server To Improve Database Performance?
Amitava Majumder
I am an experienced Trainer and IT professional with over 14 years of experience in IT Sector and more...
How to minimize page splits in SQL Server to improve database performance?
Page Splits:
A page is 8 KB of data, which can be index-related, data-related, large object binary (LOB), and so on.
When you insert rows into a table they go onto a page, into 'slots'. Each row has a row length, and only so many rows fit on an 8 KB page. What happens when a row's length increases, for instance because you entered a bigger product name in a varchar column? SQL Server needs to move the other rows along to make room for your modification. If the combined new length of all the rows no longer fits on that page, SQL Server grabs a new page and moves the rows to the right or left of your modification onto it. That is called a 'page split'.
Page splits arise when records from one page are moved to another page during changes to your table. Suppose a new record (Martin) is inserted, in sequence, between Adam and Rony. Since there is no room on that page, some records need to shift around, and the page split occurs when records move to the second page.
This creates page fragmentation, which is very bad for performance, and it is reported as a page split.
Page splits are considered very bad for performance, and there are a number of techniques to reduce, or even eliminate, the risk of page splits.
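Before setting up Extended Events, a quick (if coarse) way to gauge split activity is the index operational stats DMV: its leaf_allocation_count column is a cumulative count of leaf-level page allocations, which includes allocations caused by leaf page splits. A sketch:

```sql
-- Rough split-activity overview for the current database.
-- Note: this cannot distinguish "good" end-of-index splits from the
-- harmful mid-page splits, which is why the Extended Events session
-- below filters on LOP_DELETE_SPLIT instead.
SELECT OBJECT_NAME(ios.object_id) AS table_name,
       i.name AS index_name,
       ios.leaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
JOIN sys.indexes AS i
    ON ios.object_id = i.object_id AND ios.index_id = i.index_id
ORDER BY ios.leaf_allocation_count DESC
```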
Example code for tracking Page Splits :
We can find the bad page splits using the event sqlserver.transaction_log. This event monitors all activity in the transaction log, so we need to use it with caution. We can filter the 'operation' field looking for the value 11, which means LOP_DELETE_SPLIT. This is the deletion of rows that happens when SQL Server moves rows from one page to another during a page split, i.e. a bad mid-page split.
Extended Events for SQL Server provides a generic tracing and troubleshooting framework which allows deeper and more granular level control of tracing which was not possible using earlier methods like DBCC, SQL Trace, Profiler, etc... These earlier methods still exist and Extended Events is not a replacement.
For this, we need to create the session with T-SQL. The code to create the session is:
IF EXISTS (SELECT 1
FROM sys.server_event_sessions
WHERE name = 'PageSplits_Tracker')
DROP EVENT SESSION [PageSplits_Tracker] ON SERVER
CREATE EVENT SESSION PageSplits_Tracker
ON SERVER
ADD EVENT sqlserver.transaction_log(
WHERE operation = 11 -- LOP_DELETE_SPLIT
)
--Description for transaction_log event is: “Occurs when a record is added to the SQL Server transaction log.
--This is a very high volume event that will affect the performance of the server. Therefore, you should use
--appropriate filtering to reduce the number of events, and only use this event for targeted troubleshooting
--during a short time period.”
-- LOP_DELETE_SPLIT : A page split has occurred. Rows have moved physically.
ADD TARGET package0.histogram(
SET filtering_event_name = 'sqlserver.transaction_log',
source_type = 0,source = 'database_id');
GO
--package0.histogram : You can use the histogram target to troubleshoot performance issues.
--filtering_event_name : Any event present in the Extended Events session.
--source_type : The type of object that the bucket is based on.
--0 for an event
--1 for an action
--source : The event column or action name that is used as the data source.
-- Start the Event Session
ALTER EVENT SESSION PageSplits_Tracker
ON SERVER
STATE=START;
GO
-- Create the database
CREATE DATABASE Performance_Tracker
GO
USE [Performance_Tracker]
GO
-- Create a bad splitting clustered index table
CREATE TABLE PageSplits
( ROWID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
Data INT NOT NULL DEFAULT (RAND()*1000),
Change_Date DATETIME2 NOT NULL DEFAULT CURRENT_TIMESTAMP);
GO
-- This index should mid-split based on the DEFAULT column value
CREATE INDEX IX_PageSplitsPk_Data ON PageSplits (Data);
GO
-- This index should end-split based on the DEFAULT column value
CREATE INDEX IX_PageSplitsPk_ChangeDate ON PageSplits (Change_Date);
GO
-- Create a table with an increasing clustered index
CREATE TABLE PageSplits_Index
( ROWID INT IDENTITY NOT NULL PRIMARY KEY,
Data INT NOT NULL DEFAULT (RAND()*1000),
Change_Date DATETIME2 NOT NULL DEFAULT DATEADD(mi, RAND()*-1000, CURRENT_TIMESTAMP))
GO
-- This index should mid-split based on the DEFAULT column value
CREATE INDEX IX_PageSplits_Index_ChangeDate ON PageSplits_Index (Change_Date);
GO
-- Insert the default values repeatedly into the tables
WHILE 1=1
BEGIN
INSERT INTO PageSplits DEFAULT VALUES;
INSERT INTO PageSplits_Index DEFAULT VALUES;
WAITFOR DELAY '00:00:00.005';
END
GO
--If we startup this workload and allow it to run for a couple of minutes, we can then query the histogram target
--for our session to find the database that has the mid-page splits occurring.
-- Query the target data to identify the worst splitting database_id
with cte as
(
SELECT
n.value('(value)[1]', 'int') AS database_id,
DB_NAME(n.value('(value)[1]', 'int')) AS database_name,
n.value('(@count)[1]', 'bigint') AS split_count
FROM
(SELECT CAST(target_data as XML) target_data
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets t
ON s.address = t.event_session_address
WHERE s.name = 'PageSplits_Tracker'
AND t.target_name = 'histogram' ) as tab
CROSS APPLY target_data.nodes('HistogramTarget/Slot') as q(n)
)
select * from cte
database_id | database_name | split_count |
16 | Performance_Tracker | 123 |
--With the database_id of the worst splitting database, we can then change our event session configuration
--to only look at this database, and then change our histogram target configuration to bucket on the alloc_unit_id
--so that we can then track down the worst splitting indexes in the database experiencing the worst mid-page splits
-- Drop the Event Session so we can recreate it
-- to focus on the highest splitting database
DROP EVENT SESSION [PageSplits_Tracker]
ON SERVER
-- Create the Event Session to track LOP_DELETE_SPLIT transaction_log operations in the server
CREATE EVENT SESSION [PageSplits_Tracker]
ON SERVER
ADD EVENT sqlserver.transaction_log(
WHERE operation = 11 -- LOP_DELETE_SPLIT
AND database_id = 16 -- CHANGE THIS BASED ON TOP SPLITTING DATABASE!
)
ADD TARGET package0.histogram(
SET filtering_event_name = 'sqlserver.transaction_log',
source_type = 0, -- Event Column
source = 'alloc_unit_id');
GO
-- Start the Event Session Again
ALTER EVENT SESSION [PageSplits_Tracker]
ON SERVER
STATE=START;
GO
--With the new event session definition, we can now rerun our problematic workload for a period of more
--than 10 minutes and look at the worst splitting indexes based on the alloc_unit_ids in the histogram target:
WHILE 1=1
BEGIN
INSERT INTO PageSplits DEFAULT VALUES;
INSERT INTO PageSplits_Index DEFAULT VALUES;
WAITFOR DELAY '00:00:00.005';
END
GO
-- Query Target Data to get the top splitting objects in the database:
SELECT
o.name AS table_name,
i.name AS index_name,
tab.split_count,indexstats.index_type_desc AS IndexType,
indexstats.avg_fragmentation_in_percent,
i.fill_factor
FROM ( SELECT
n.value('(value)[1]', 'bigint') AS alloc_unit_id,
n.value('(@count)[1]', 'bigint') AS split_count
FROM
(SELECT CAST(target_data as XML) target_data
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets t
ON s.address = t.event_session_address
WHERE s.name = 'PageSplits_Tracker'
AND t.target_name = 'histogram' ) as tab
CROSS APPLY target_data.nodes('HistogramTarget/Slot') as q(n)
) AS tab
JOIN sys.allocation_units AS au
ON tab.alloc_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
ON au.container_id = p.partition_id
JOIN sys.indexes AS i
ON p.object_id = i.object_id
AND p.index_id = i.index_id
JOIN sys.objects AS o
ON p.object_id = o.object_id
JOIN sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats
ON i.object_id = indexstats.object_id
AND i.index_id = indexstats.index_id
WHERE o.is_ms_shipped = 0
ORDER BY indexstats.avg_fragmentation_in_percent DESC
table_name | index_name | split_count | IndexType | avg_fragmentation_in_percent | fill_factor |
PageSplits_Index | IX_PageSplits_Index_ChangeDate | 286 | NONCLUSTERED INDEX | 99.57894737 | 0 |
PageSplits | PK__PageSpli__97BD02EBEA21A6BC | 566 | CLUSTERED INDEX | 99.37238494 | 0 |
PageSplits | IX_PageSplitsPk_Data | 341 | NONCLUSTERED INDEX | 98.98989899 | 0 |
PageSplits | IX_PageSplitsPk_ChangeDate | 3 | NONCLUSTERED INDEX | 1.747572816 | 0 |
--With this information we can now go back and change our FillFactor specifications and retest/monitor the impact
-- to determine whether we’ve had the appropriate reduction in mid-page splits to accommodate the time between
-- our index rebuild operations:
-- Change FillFactor based on split occurrences to minimize page splits
Using fill factor, we can minimize page splits:
Fill factor: When an index is created with a fill factor percentage, a percentage of each index page is left free after the index is created, rebuilt or reorganized. This free space absorbs new rows as page splits occur, reducing the chance that a page split in a data page causes a page split in the index structure as well. But even with your fill factor set to leave 10% to 20% free, index pages eventually fill up and split the same way a data page does.
A page is the basic unit of data storage in SQL Server. Its size is 8 KB (8192 bytes). Data is stored in the leaf-level pages of an index, and the fill factor decides the percentage of space filled with data in a leaf-level page; the remaining space is kept for future growth of data in the page. Fill factor is a number from 1 to 100. Its default value is 0, which is the same as 100. So a fill factor of 70 means 70% of the space is filled with data and the remaining 30% is left vacant for future use. The higher the fill factor, the more data is stored per page. The fill factor setting is applied when we create or rebuild an index.
ALTER INDEX PK__PageSpli__97BD02EBEA21A6BC ON PageSplits REBUILD WITH (FILLFACTOR=70)
ALTER INDEX IX_PageSplitsPk_Data ON PageSplits REBUILD WITH (FILLFACTOR=70)
ALTER INDEX IX_PageSplits_Index_ChangeDate ON PageSplits_Index REBUILD WITH (FILLFACTOR=80)
GO
-- Stop the Event Session to clear the target
ALTER EVENT SESSION [PageSplits_Tracker]
ON SERVER
STATE=STOP;
GO
-- Start the Event Session Again
ALTER EVENT SESSION [PageSplits_Tracker]
ON SERVER
STATE=START;
GO
--Do the workload once again
WHILE 1=1
BEGIN
INSERT INTO PageSplits DEFAULT VALUES;
INSERT INTO PageSplits_Index DEFAULT VALUES;
WAITFOR DELAY '00:00:00.005';
END
GO
--With the reset performed we can again start up our workload generation and
--begin monitoring the effect of the FillFactor specifications on the indexes with our code.
--After another 2 minute period, the following splits were noted.
--Once again Query Target Data to get the top splitting objects in the database:
--At present, no page splits are found for the indexes IX_PageSplitsPk_ChangeDate, PK__PageSpli__97BD02EBEA21A6BC and IX_PageSplitsPk_Data.
table_name | index_name | split_count | IndexType |