Database Training

Trending Questions and Lessons
Answered on 08 Apr IT Courses/Database Training

Naresh Jangir

Tutor


Hi Varsha,

As far as skill development is concerned, I would say that you currently have rather limited opportunities at UrbanPro.

Please call me or reach me at nkr Dot jgr At gmail Dot com.

I can provide you with teaching opportunities on a new platform. Visit us at tutemark.com.

 

Thanks again,

Naresh Jangir


Lesson Posted on 15/12/2017 IT Courses/Database Training

Overview Of Database

Sandeep Tiwari

I am an Oracle Certified Java Professional & Google Certified Online Marketer with 10+ years of experience...


A database is a collection of related data organised in a way that the data can be easily accessed, managed and updated. Any piece of information can be data, for example the name of your school. A database is actually a place where related pieces of information are stored and where various operations can be performed on them.

1. DBMS:

A DBMS is software that allows the creation, definition and manipulation of a database. A DBMS is actually a tool used to perform any kind of operation on the data in a database. It also provides protection and security to the database, and it maintains data consistency when there are multiple users. Some examples of popular DBMSs are MySQL, Oracle, Sybase, Microsoft Access and IBM DB2.
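To make this concrete, here is a minimal SQL sketch of the kinds of operations a DBMS handles. The table and column names are invented for this illustration, and the exact syntax can vary slightly between products such as MySQL and Oracle.

-- Data definition: describe the structure of the data
CREATE TABLE Student (
    StudentID INT PRIMARY KEY,
    Name      VARCHAR(100),
    School    VARCHAR(100)
);

-- Data manipulation: add and change data
INSERT INTO Student (StudentID, Name, School) VALUES (1, 'Asha', 'City Public School');
UPDATE Student SET School = 'Greenwood High' WHERE StudentID = 1;

-- Data retrieval: access the data easily
SELECT Name, School FROM Student WHERE StudentID = 1;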

2. Components of Database System:

The database system can be divided into four components.

i. Users: Users may be of various types, such as the DB administrator, system developers and end users.

ii. Database application: A database application may be personal, departmental, enterprise or internal.

iii. DBMS: Software that allows users to define, create and manage database access, e.g. MySQL, Oracle.

iv. Database: Collection of logical data.

3. Functions of DBMS:

i. Provides data Independence.

ii. Concurrency Control.

iii. Provides Recovery services.

iv. Provides Utility services.

v. Provides a clear and logical view of the process that manipulates data.

4. Advantages of DBMS:

i. Segregation of application programs.

ii. Minimal data duplication.

iii. Easy retrieval of data.

iv. Reduced development time and maintenance needs.

5. Disadvantages of DBMS:

i. Complexity.

ii. Costly.

iii. Large in size.


Lesson Posted on 20/11/2017 IT Courses/Database Training

Mail Merge In Word

iTech Analytic Solutions

"iTech Analytic Solutions" (iTAS) is ranked as No. 1 Analytic Training Center in Bangalore by ThinkVidya.com "iTech...


Mail Merge is a useful tool that allows you to produce multiple letters, labels, envelopes, name tags and more, using information stored in a list, database, or spreadsheet.

Mail Merge is most often used to print or email form letters to multiple recipients. Using Mail Merge, you can easily customize form letters for individual recipients. Mail merge is also used to create envelopes or labels in bulk.

Mail merge is a feature within most word processing applications that enables users to send a similar letter or document to multiple recipients. It works by connecting a single form template with a data source that contains information such as the recipient's name, address and other pre-defined and supporting data.

Mail merge primarily enables automating the process of sending bulk mail to customers, subscribers or general individuals. Mail merge works when a data file is stored that includes the information of the recipients to whom the letter will be sent. This file can be a spreadsheet or database file containing separate fields for each different type of information to be merged within the letter.

The second file is the Word document, or letter template, in which the recipient's information is left empty. When the mail merge process is initiated, the recipient data from the spreadsheet or database is fetched and placed into the empty fields in the letter, one recipient at a time, until all the letters are created.



Lesson Posted on 20/11/2017 IT Courses/Database Training

Microsoft Outlook

iTech Analytic Solutions

"iTech Analytic Solutions" (iTAS) is ranked as No. 1 Analytic Training Center in Bangalore by ThinkVidya.com "iTech...


Microsoft Outlook is the preferred email client used to access Microsoft Exchange Server email. Not only does Microsoft Outlook provide access to Exchange Server email, but it also includes contact, calendaring and task management functionality. Companies can also integrate Outlook with Microsoft’s SharePoint platform to share documents, project notes, collaborate with colleagues, send reminders and much more. 

Microsoft Outlook may be used as a standalone application, but is also part of the Microsoft Office suite. Outlook’s current version is Microsoft Outlook 2010. Outlook is also available for the Apple Mac; its current version is Outlook 2011. 

Outlook 2013 includes a few new improvements:

An Unread button: Allows the end user to easily see only those messages marked as unread.

Message preview: Allows the end user to preview the first line of an email from the message list view.

A Zoom slider: Allows the end user to easily increase the font size for individual emails.

Attachment reminders: Reminds the end user when an attachment is referenced in the body of a message.

The Outlook 2013 weather bar: Displays weather reports for locations selected by the end user.

Outlook 2013 may be used in conjunction with Microsoft SharePoint as long as Exchange 2013 and SharePoint 2013 are properly configured. Additionally, administrators can now control OST file size via the Outlook 2013 sync slider and startup time is improved via the Exchange Fast Access feature.


Lesson Posted on 31/08/2017 IT Courses/Datastage IT Courses/Database Training

Datastage: DB2 Indirect Privileges For Stored Procedures

Kriti C.

I have 12+ years of experience as a working professional and trainer. I have expertise in Analytics domain...


1. I recently updated a Datastage job to use a direct UPDATE query instead of calling a stored procedure (SP) that contained the UPDATE query.

2. The result: The job failed with the error that the 'userid' did not have Update privilege on the table.

3. I was using the same userid to execute the SP as well.

4. It took me some time to understand the reason for this.

5. The reason is that a DB2 userid needs EXECUTE privilege to run an SP.

6. The EXECUTE privilege in turn grants an indirect privilege to the userid executing the SP, covering the SQL statements specified in the procedure.

7. So, since I had the UPDATE SQL statement in my procedure, and my userid had EXECUTE privilege on the procedure,

8. it was granted an indirect privilege to update the table for the purpose of executing the SP. These are temporary privileges and last only while the SP is executing.

9. Then I tried to update the table directly using the same userid, and I got an Update access error because that userid did not have an explicit UPDATE privilege on the table.

10. So, this is a point to keep in mind while deploying changes that replace stored procedures with direct SQL statements (see the sketch below).
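As a sketch of the two kinds of grants involved (the schema, table, procedure and user names here are hypothetical and DB2-style syntax is assumed; they are not taken from the original job):

-- EXECUTE on the procedure lets the userid run the SP; the UPDATE inside the
-- procedure then runs under an indirect, temporary privilege while the SP executes.
GRANT EXECUTE ON PROCEDURE APP.UPD_ORDERS TO USER ETLUSER;

-- A direct UPDATE statement issued from the job needs an explicit table-level
-- privilege, which the EXECUTE grant above does not provide.
GRANT UPDATE ON TABLE APP.ORDERS TO USER ETLUSER;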


Lesson Posted on 16/06/2017 IT Courses/Database Training

How To Minimize Page Splits In SQL Server To Improve Database Performance

Amitava Majumder

I am an experienced Trainer and IT professional with over 10 years of experience in IT Sector and more...


How do we minimize page splits in SQL Server to improve the performance of a database?

Page Splits:

A page is 8 KB of data, which can be index related, data related, large object binary (LOB) data, etc.

When you insert rows into a table, they go on a page, into 'slots'. Your row has a row length, and you can fit only so many rows on an 8 KB page. What happens when that row's length increases, for instance because you entered a bigger product name in your varchar column? SQL Server needs to move the other rows along in order to make room for your modification. If the combined new length of all the rows on the page no longer fits on that page, SQL Server grabs a new page and moves the rows to the right or left of your modification onto it; that is called a 'page split'.

Page splits arise when records from one page are moved to another page during changes to your table. Suppose a new record (Martin) is being inserted, in sequence, between Adam and Rony. Since there is no room on this page, some records will need to shift around; the page split occurs when some of the existing records are moved to a second page.

This movement creates page fragmentation, which is very bad for performance, and it is reported as a page split.

Page splits are considered very bad for performance, and there are a number of techniques to reduce, or even eliminate, the risk of page splits.
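Before building an Extended Events session, a quick approximation is available from the index operational stats DMV. This is a sketch and is not part of the original lesson; leaf_allocation_count counts leaf-level page allocations, which include page splits, but it does not distinguish 'good' end-of-index splits from 'bad' mid-page splits, which is why the Extended Events approach below is still useful.

-- Approximate page-split activity per index in the current database
SELECT OBJECT_NAME(ios.object_id)   AS table_name,
       i.name                       AS index_name,
       ios.leaf_allocation_count    AS leaf_page_allocations,    -- includes page splits
       ios.nonleaf_allocation_count AS nonleaf_page_allocations
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
JOIN sys.indexes AS i
    ON ios.object_id = i.object_id
   AND ios.index_id  = i.index_id
ORDER BY ios.leaf_allocation_count DESC;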

Example code for tracking page splits:

We can find the bad page splits using the extended event sqlserver.transaction_log. This event monitors all activity in the transaction log, so it must be used with caution. We filter the 'operation' field for the value 11, which means LOP_DELETE_SPLIT: the deletion of rows that happens when SQL Server moves rows from one page to another during a page split, i.e. a bad (mid-page) split.

Extended Events for SQL Server provides a generic tracing and troubleshooting framework which allows deeper and more granular control of tracing than was possible with earlier methods like DBCC, SQL Trace, Profiler, etc. Those earlier methods still exist; Extended Events is not a replacement for them.

For this we need to create the session using T-SQL. The code to create the session is:

IF EXISTS (SELECT 1
           FROM sys.server_event_sessions
           WHERE name = 'PageSplits_Tracker')
    DROP EVENT SESSION [PageSplits_Tracker] ON SERVER

CREATE EVENT SESSION PageSplits_Tracker
ON SERVER
ADD EVENT sqlserver.transaction_log(
    WHERE operation = 11  -- LOP_DELETE_SPLIT
)
-- Description for transaction_log event is: "Occurs when a record is added to the SQL Server transaction log.
-- This is a very high volume event that will affect the performance of the server. Therefore, you should use
-- appropriate filtering to reduce the number of events, and only use this event for targeted troubleshooting
-- during a short time period."
-- LOP_DELETE_SPLIT : A page split has occurred. Rows have moved physically.
ADD TARGET package0.histogram(
    SET filtering_event_name = 'sqlserver.transaction_log',
        source_type = 0, source = 'database_id');
GO
-- package0.histogram : You can use the histogram target to troubleshoot performance issues.
-- filtering_event_name : Any event present in the Extended Events session.
-- source_type : The type of object that the bucket is based on.
--   0 for an event
--   1 for an action
-- source : The event column or action name that is used as the data source.

-- Start the Event Session
ALTER EVENT SESSION PageSplits_Tracker
ON SERVER
STATE = START;
GO

-- Create the database
CREATE DATABASE Performance_Tracker
GO
USE [Performance_Tracker]
GO

-- Create a bad splitting clustered index table
CREATE TABLE PageSplits
( ROWID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
  Data INT NOT NULL DEFAULT (RAND()*1000),
  Change_Date DATETIME2 NOT NULL DEFAULT CURRENT_TIMESTAMP);
GO

-- This index should mid-split based on the DEFAULT column value
CREATE INDEX IX_PageSplitsPk_Data ON PageSplits (Data);
GO

-- This index should end-split based on the DEFAULT column value
CREATE INDEX IX_PageSplitsPk_ChangeDate ON PageSplits (Change_Date);
GO

-- Create a table with an increasing clustered index
CREATE TABLE PageSplits_Index
( ROWID INT IDENTITY NOT NULL PRIMARY KEY,
  Data INT NOT NULL DEFAULT (RAND()*1000),
  Change_Date DATETIME2 NOT NULL DEFAULT DATEADD(mi, RAND()*-1000, CURRENT_TIMESTAMP))
GO

-- This index should mid-split based on the DEFAULT column value
CREATE INDEX IX_PageSplits_Index_ChangeDate ON PageSplits_Index (Change_Date);
GO

-- Insert the default values repeatedly into the tables
WHILE 1=1
BEGIN
    INSERT INTO PageSplits DEFAULT VALUES;
    INSERT INTO PageSplits_Index DEFAULT VALUES;
    WAITFOR DELAY '00:00:00.005';
END
GO

-- If we start up this workload and allow it to run for a couple of minutes, we can then query the histogram target
-- for our session to find the database that has the mid-page splits occurring.

-- Query the target data to identify the worst splitting database_id
WITH cte AS
(
SELECT
    n.value('(value)[1]', 'int') AS database_id,
    DB_NAME(n.value('(value)[1]', 'int')) AS database_name,
    n.value('(@count)[1]', 'bigint') AS split_count
FROM
    (SELECT CAST(target_data AS XML) target_data
     FROM sys.dm_xe_sessions AS s
     JOIN sys.dm_xe_session_targets t
         ON s.address = t.event_session_address
     WHERE s.name = 'PageSplits_Tracker'
       AND t.target_name = 'histogram' ) AS tab
CROSS APPLY target_data.nodes('HistogramTarget/Slot') AS q(n)
)
SELECT * FROM cte

database_id    database_name          split_count
-----------    -------------------    -----------
16             Performance_Tracker    123

-- With the database_id of the worst splitting database, we can then change our event session configuration
-- to only look at this database, and then change our histogram target configuration to bucket on the alloc_unit_id,
-- so that we can track down the worst splitting indexes in the database experiencing the worst mid-page splits.

-- Drop the Event Session so we can recreate it
-- to focus on the highest splitting database
DROP EVENT SESSION [PageSplits_Tracker]
ON SERVER

-- Create the Event Session to track LOP_DELETE_SPLIT transaction_log operations in the server
CREATE EVENT SESSION [PageSplits_Tracker]
ON SERVER
ADD EVENT sqlserver.transaction_log(
    WHERE operation = 11   -- LOP_DELETE_SPLIT
      AND database_id = 16 -- CHANGE THIS BASED ON TOP SPLITTING DATABASE!
)
ADD TARGET package0.histogram(
    SET filtering_event_name = 'sqlserver.transaction_log',
        source_type = 0,  -- Event Column
        source = 'alloc_unit_id');
GO

-- Start the Event Session Again
ALTER EVENT SESSION [PageSplits_Tracker]
ON SERVER
STATE = START;
GO

-- With the new event session definition, we can now rerun our problematic workload for a period of 10 minutes or more
-- and look at the worst splitting indexes based on the alloc_unit_ids that are in the histogram target:
WHILE 1=1
BEGIN
    INSERT INTO PageSplits DEFAULT VALUES;
    INSERT INTO PageSplits_Index DEFAULT VALUES;
    WAITFOR DELAY '00:00:00.005';
END
GO

-- Query Target Data to get the top splitting objects in the database:
SELECT
    o.name AS table_name,
    i.name AS index_name,
    tab.split_count,
    indexstats.index_type_desc AS IndexType,
    indexstats.avg_fragmentation_in_percent,
    i.fill_factor
FROM (  SELECT
            n.value('(value)[1]', 'bigint') AS alloc_unit_id,
            n.value('(@count)[1]', 'bigint') AS split_count
        FROM
            (SELECT CAST(target_data AS XML) target_data
             FROM sys.dm_xe_sessions AS s
             JOIN sys.dm_xe_session_targets t
                 ON s.address = t.event_session_address
             WHERE s.name = 'PageSplits_Tracker'
               AND t.target_name = 'histogram' ) AS tab
        CROSS APPLY target_data.nodes('HistogramTarget/Slot') AS q(n)
) AS tab
JOIN sys.allocation_units AS au
    ON tab.alloc_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
    ON au.container_id = p.partition_id
JOIN sys.indexes AS i
    ON p.object_id = i.object_id
   AND p.index_id = i.index_id
JOIN sys.objects AS o
    ON p.object_id = o.object_id
JOIN sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats
    ON i.object_id = indexstats.object_id
   AND i.index_id = indexstats.index_id
WHERE o.is_ms_shipped = 0
ORDER BY indexstats.avg_fragmentation_in_percent DESC

table_name          index_name                        split_count    IndexType             avg_fragmentation_in_percent    fill_factor
PageSplits_Index    IX_PageSplits_Index_ChangeDate    286            NONCLUSTERED INDEX    99.57894737                     0
PageSplits          PK__PageSpli__97BD02EBEA21A6BC    566            CLUSTERED INDEX       99.37238494                     0
PageSplits          IX_PageSplitsPk_Data              341            NONCLUSTERED INDEX    98.98989899                     0
PageSplits          IX_PageSplitsPk_ChangeDate        3              NONCLUSTERED INDEX    1.747572816                     0

-- With this information we can now go back and change our FillFactor specifications and retest/monitor the impact
-- to determine whether we have had the appropriate reduction in mid-page splits to accommodate the time between
-- our index rebuild operations.

-- Change FillFactor based on split occurrences to minimize page splits

Using the fill factor we can minimize page splits:

Fill Factor: When an index is created with a fill factor percentage, a percentage of each index page is left free after the index is created, rebuilt or reorganized. This free space is used to hold additional rows as page splits occur, reducing the chance of a page split in a data page causing a page split in the index structure as well. But even with your fill factor set to 10% to 20%, index pages eventually fill up and are split the same way that a data page is split.

A page is the basic unit of data storage in SQL Server. Its size is 8 KB (8192 bytes). Data is stored in the leaf-level pages of an index. The percentage of space to be filled with data in a leaf-level page is decided by the fill factor, and the remaining space is left for future growth of data in the page. The fill factor is a number from 1 to 100; its default value is 0, which is the same as 100. So a fill factor of 70 means 70% of the space is filled with data and the remaining 30% is kept vacant for future use. The higher the fill factor, the more data is stored in the page. The fill factor setting is applied when we create or rebuild an index.

ALTER INDEX PK__PageSpli__97BD02EBEA21A6BC ON PageSplits REBUILD WITH (FILLFACTOR = 70)
ALTER INDEX IX_PageSplitsPk_Data ON PageSplits REBUILD WITH (FILLFACTOR = 70)
ALTER INDEX IX_PageSplits_Index_ChangeDate ON PageSplits_Index REBUILD WITH (FILLFACTOR = 80)
GO

-- Stop the Event Session to clear the target
ALTER EVENT SESSION [PageSplits_Tracker]
ON SERVER
STATE = STOP;
GO

-- Start the Event Session Again
ALTER EVENT SESSION [PageSplits_Tracker]
ON SERVER
STATE = START;
GO

-- Run the workload once again
WHILE 1=1
BEGIN
    INSERT INTO PageSplits DEFAULT VALUES;
    INSERT INTO PageSplits_Index DEFAULT VALUES;
    WAITFOR DELAY '00:00:00.005';
END
GO

-- With the reset performed we can again start up our workload generation and
-- begin monitoring the effect of the FillFactor specifications on the indexes with our code.
-- After another 2 minute period, we once again query the target data to get the top splitting objects in the database.
-- This time no page splits are found on the indexes IX_PageSplitsPk_ChangeDate, PK__PageSpli__97BD02EBEA21A6BC and IX_PageSplitsPk_Data.



Lesson Posted on 26/04/2017 IT Courses/Database Training

Defect Management

Learn Testing

Learn Testing is started by working professionals who have 14+ years of experience in Software Testing...

Defect Management
 
Terms:
 
Application Life Cycle (ALM)
 
Development Phase    Testing Phase    Production Phase
------------------------------------------------------
Error                Defect           Failure
Mistake              Bug
                     Fault
------------------------------------------------------
We have 3 phases in Software Application Life Cycle
 
a)  Development Phase  
 
In this phase, if developers find any mismatch, they call it an Error or a Mistake.
 
b) Testing Phase  
 
In this phase, if testers find any mismatch, they call it a Defect, Bug or Fault.
 
 
c) Production Phase
 
In this phase, if end users find any mismatch, they call it a Failure.
 
Note: The terminology varies from one phase to another.
------------------------------------------------------------------
 
Defect Management:
 
Defect reporting, defect tracking and status tracking are together called Defect Management.
 
Some companies use a manual process (an Excel workbook), and some companies use a tool-based process for defect management.
 
Tools Examples:
 
Bugzilla / Issue-Tracker / PR-Tracker etc...
 
Jira, QC
 
 
---------------------------------------------------------------------
 
Model Defect Report Template:
---------------------------------------------------------------------
 
i) Defect Id: Any unique name for identifying the defect (alphanumeric)
 
ii) Defect Description: Details about the Defect
 
iii) Test Case Id: Corresponding Test Case Id for tracking
 
iv) Tester: Tester's name (who found the Defect)
 
v) Product Version: Version of the Product on which defect was found
 
vi) Build Version: Version of the Build on which defect was found
 
vii) Priority: Importance of the Defect based on Business /Customer
 
viii) Severity: Importance of the Defect based on Functionality
 
ix) Status: Status of Defect
 
x) Reproducible or not: Yes / No
 
    If Reproducible:
      Steps:
 
    If not Reproducible:
      Attachments
 
xi) Reporting to: Corresponding Developer
 
xii) Remarks : Comments (Optional)
 
------------------------------------------------------
 
Status: Status of Defect
 
New: Tester provides new status while Reporting (for the first time)
 
 
Open: Developer / Dev lead /DTT opens the Defect
 
Rejected: Developer / Dev lead /DTT rejects if the defect is invalid or defect is duplicate.
 
Fixed: Developer provides fixed status after fixing the defect
 
Deferred: Developer provides this status when the fix is postponed, e.g. due to time constraints
 
Closed: Tester provides closed status after performing confirmation Testing
 
 
Re-open: Tester Re-opens the defect with valid reasons and proofs
 
------------------------------------------------------------------
Note: The defect reporting template varies from one company to another.
 
If we use a tool for defect management, each tool provides its own template.
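If a team stores defects in a database rather than a workbook, the template above maps naturally onto a table. The following is only an illustrative sketch; the table and column names are invented and do not correspond to any particular tool.

CREATE TABLE Defect_Report (
    Defect_Id       VARCHAR(20)  PRIMARY KEY,  -- e.g. FR_Usr_Df001
    Description     VARCHAR(500) NOT NULL,
    Test_Case_Id    VARCHAR(20),
    Tester          VARCHAR(100),
    Product_Version VARCHAR(10),
    Build_Version   VARCHAR(10),
    Priority        VARCHAR(10),               -- Low / Medium / High
    Severity        VARCHAR(10),
    Status          VARCHAR(10),               -- New / Open / Fixed / Deferred / Closed / Re-open / Rejected
    Reproducible    CHAR(1),                   -- Y / N
    Reporting_To    VARCHAR(100),              -- corresponding developer
    Remarks         VARCHAR(500)
);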
 
Defect Reporting Process
------------------------
The defect reporting process varies from one company to another.
 
a) Small scale Company
 
Tester -> Developer
 
b) Medium scale Company
 
Tester -> Test Lead -> Development Lead -> Developer
 
c) Large scale Company
 
Tester -> Test Lead -> DTT -> Development Lead -> Developer
 
 
A Sample Defect Report
---------------------
 
i) Defect Id: FR_Usr_Df001
 
ii) Defect Description: Agent Name accepting Numeric values
 
iii) Test Case Id: FR_Usr_Tc-004
 
iv) Tester: Kanaka Rao
 
v) Product Version: 1.0
 
vi) Build Version: 1.0
 
vii) Priority: Medium
 
viii) Severity: High
 
 
ix) Status: New
 
x) Reproducible or not: Yes
 
           Steps:
        1) Launch the Application
        2) Enter Numeric values into Agent Name field
        3) Enter valid Password
        4) Click on default(OK) button
       
xi) Reporting to: xyz
 
xii) Remarks : Comments (Optional)
 
-----------------------------------
Severity: 
 
Severity levels depend on company strategy.
 
a) 5 Level Severity 
   
     Urgent
 
    Very High
 
    High

Lesson Posted on 16/02/2017 IT Courses/Microsoft Training/Microsoft BI (Business Intelligence) Tools/SQL Server IT Courses/Database Training

New Features Worth Exploring in SQL Server 2016

Amitava Majumder

I am an experienced Trainer and IT professional with over 10 years of experience in IT Sector and more...


New Features Worth Exploring in SQL Server 2016

There is a lot of buzz around SQL Server 2016. Microsoft announced the release of SQL Server 2016 at the Microsoft Ignite Conference during the first week of May 2015.

In this article I will be exploring, at a very high level, 10 of those new features.

Always Encrypted- With the Always Encrypted feature enabled, your SQL Server data will always be encrypted within SQL Server. Access to encrypted data is only available to the applications calling SQL Server. Always Encrypted enables client application owners to control who gets to see their application's confidential data. It does this by letting the client application be the one that holds the encryption key; that encryption key is never passed to SQL Server. By doing this you can keep nosey database or Windows administrators from poking around sensitive client application data, whether in flight or at rest. This feature lets you sleep at night knowing that your confidential data stored in a cloud-managed database is always encrypted and out of the eyes of your cloud provider.

Dynamic Data Masking- If you are interested in securing your confidential data so that some people can see it while other people get an obscured version of it, then you might be interested in dynamic data masking. With dynamic data masking you can obscure confidential columns of data in a table for users who are not authorized to see all the data, and you can specify how the data will be obscured. For instance, say you accept credit card numbers and store them in a table, but you want to make sure your help desk staff is only able to see the last four digits of the credit card number. By setting up dynamic data masking you can define a masking rule so unauthorized logins can only read the last four digits of a credit card number, whereas authorized logins can see all of the credit card information.
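As a small sketch of the credit card scenario above (the table, column and user names are invented for illustration, and the users are assumed to already exist in the database):

CREATE TABLE dbo.Payments (
    PaymentID        INT IDENTITY PRIMARY KEY,
    CustomerName     VARCHAR(100),
    CreditCardNumber VARCHAR(19)
        MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)')  -- show only the last 4 digits
);

-- Help desk logins get the masked value when they SELECT
GRANT SELECT ON dbo.Payments TO HelpDeskUser;

-- Authorized logins can be granted the right to see the unmasked value
GRANT UNMASK TO SupervisorUser;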

JSON Support- JSON stands for JavaScript Object Notation. With SQL Server 2016 you can now interchange JSON data between applications and the SQL Server database engine. By adding this support Microsoft has given SQL Server the ability to parse JSON-formatted data so it can be stored in a relational format. Additionally, with JSON support you can take relational data and turn it into JSON-formatted data. Microsoft has also added some new functions to provide support for querying JSON data stored in SQL Server. Having these additional JSON features built into SQL Server should make it easier for applications to exchange JSON data with SQL Server.
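A minimal sketch of both directions (the sample names and values here are invented): FOR JSON turns rows into JSON text, and OPENJSON parses JSON text back into rows.

-- Relational rows to JSON
SELECT Name, School
FROM (VALUES ('Asha', 'Greenwood High')) AS Student(Name, School)
FOR JSON PATH;

-- JSON to relational rows
DECLARE @json NVARCHAR(MAX) = N'[{"Name":"Asha","School":"Greenwood High"}]';
SELECT *
FROM OPENJSON(@json)
WITH (Name   NVARCHAR(100) '$.Name',
      School NVARCHAR(100) '$.School');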

Multiple TempDB Database Files- It has been a best practice for a while to have more than one tempdb data file if you are running on a multi-core machine. In the past, up through SQL Server 2014, you always had to manually add the additional tempdb data files after you installed SQL Server. With SQL Server 2016 you can now configure the number of tempdb files you need while you are installing SQL Server. Having this new feature means you will no longer need to manually add additional tempdb files after installing SQL Server.

PolyBase- PolyBase allows you to query distributed data sets. With the introduction of PolyBase you will be able to use Transact-SQL statements to query Hadoop or SQL Azure blob storage. By using PolyBase you can now write ad hoc queries to join relational data from SQL Server with semi-structured data stored in Hadoop or SQL Azure blob storage. This allows you to get data from Hadoop without knowing the internals of Hadoop. Additionally, you can leverage SQL Server's on-the-fly columnstore indexing to optimize your queries against semi-structured data. As organizations spread data across many distributed locations, PolyBase will be a solution for them to leverage SQL Server technology to access their distributed semi-structured data.

Query Store- If you are into examining execution plans, then you will like the new Query Store feature. In versions of SQL Server prior to 2016 you can see existing execution plans by using dynamic management views (DMVs), but the DMVs only allow you to see the plans that are actively in the plan cache; you can't see any history for plans once they are rolled out of the plan cache. With the Query Store feature, SQL Server now saves historical execution plans, and it also saves the query statistics that go along with those historical plans. This is a great addition and will allow you to track execution plan performance for your queries over time.

Row Level Security- With Row Level Security the SQL database engine is able to restrict access to row data based on a SQL Server login. Rows are restricted by filter predicates defined in an inline table-valued function, and security policies ensure the filter predicates get executed for every SELECT or DELETE operation. Implementing row level security at the database layer means application developers will no longer need to maintain code to restrict data from some logins while allowing other logins to access all the data. With this new feature, when someone queries a table that has row level security applied, they will not even know whether any rows of data were filtered out.
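A condensed sketch of that pattern (the table dbo.Sales with a SalesRep column, the function and the policy names are all invented for illustration): the filter predicate is an inline table-valued function, and a security policy binds it to the table.

-- Each sales rep sees only rows where SalesRep matches the current user name
CREATE FUNCTION dbo.fn_SalesFilter (@SalesRep SYSNAME)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS allowed
       WHERE @SalesRep = USER_NAME();
GO

CREATE SECURITY POLICY dbo.SalesPolicy
ADD FILTER PREDICATE dbo.fn_SalesFilter(SalesRep) ON dbo.Sales
WITH (STATE = ON);
GO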

R Comes to SQL Server- With Microsoft's purchase of Revolution Analytics, they are now able to incorporate R to support advanced analytics against big data right inside of SQL Server. By incorporating R processing into SQL Server, data scientists will be able to take their existing R code and run it right inside the SQL Server database engine. This eliminates the need to export your SQL Server data in order to perform R processing against it. This new feature brings R processing closer to the data.

Stretch Database- The Stretch Database feature provides a method to stretch the storage of your on-premise database to an Azure SQL database. Having the stretch database feature allows you to keep your most frequently accessed data stored on-premise, while your less accessed data is kept off-site in an Azure SQL database. When you enable a database to "stretch", the older data starts moving over to the Azure SQL database behind the scenes. When you need to run a query that might access active and historical information in a "stretched" database, the database engine seamlessly queries both the on-premise database and the Azure SQL database and returns the results to you as if they had come from a single source. This feature will make it easy for DBAs to archive information to cheaper storage media without having to change any actual application code. By doing this you should be able to maximize the performance of those active on-premise queries.

Temporal Table- A temporal table is a table that holds old versions of rows from a base table. With temporal tables, SQL Server automatically manages moving old row versions to the temporal table every time a row in the base table is updated. The temporal table is physically a different table than the base table, but it is linked to the base table. If you have been building, or plan to build, your own method for managing row versioning, you might want to check out the new temporal table support in SQL Server 2016 before you build your own row versioning solution.
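A sketch of a system-versioned temporal table (table and column names invented): the PERIOD columns and the SYSTEM_VERSIONING option are what make SQL Server maintain the history table automatically.

CREATE TABLE dbo.Employee (
    EmployeeID   INT NOT NULL PRIMARY KEY CLUSTERED,
    Name         NVARCHAR(100) NOT NULL,
    Department   NVARCHAR(100) NOT NULL,
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- Read the row versions that were current at a past point in time
SELECT *
FROM dbo.Employee
FOR SYSTEM_TIME AS OF '2017-01-01T00:00:00';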



Answered on 25/07/2017 Tuition/BTech Tuition IT Courses/Database Training

Sagar V

A database is an organized way of storing a large amount of data.

About UrbanPro

UrbanPro.com helps you to connect with the best Database Training classes in India. Post Your Requirement today and get connected.


UrbanPro.com is India's largest network of most trusted tutors and institutes. Over 25 lakh students rely on UrbanPro.com to fulfill their learning requirements across 1,000+ categories. Using UrbanPro.com, parents and students can compare multiple Tutors and Institutes and choose the one that best suits their requirements. More than 6.5 lakh verified Tutors and Institutes are helping millions of students every day and growing their tutoring business on UrbanPro.com. Whether you are looking for a tutor to learn mathematics, a German language trainer to brush up your German language skills, or an institute to upgrade your IT skills, we have the best selection of Tutors and Training Institutes for you.