#sys.dm_exec_query_plan
Find top CPU consuming SQL Queries in SQL Server
Check the top CPU-consuming SQL queries in SQL Server:
;WITH eqs AS (
    SELECT [execution_count]
          ,[total_worker_time]/1000 AS [TotalCPUTime_ms]
          ,[total_elapsed_time]/1000 AS [TotalDuration_ms]
          ,query_hash
          ,plan_handle
          ,[sql_handle]
    FROM sys.dm_exec_query_stats
)
SELECT TOP 10 est.[text], eqp.query_plan AS SQLStatement
      ,eqs.*
FROM eqs
OUTER APPLY sys.dm_exec_query_plan(eqs.plan_handle) eqp
OUTER…

Diagnosing Slow-Running Stored Procedures
This week, I had a stored procedure taking 2 to 3 seconds to execute from a .NET application, where it had previously taken under 40ms. A few seconds doesn't seem that long, but the application was looping through a collection of items and calling the procedure around a hundred times, meaning that for the user, the time to complete the workflow had increased from around 4 seconds to over 4 minutes. Nothing had changed with the tables being used, and index fragmentation was not an issue, thanks to fairly aggressive maintenance plans.
What was more baffling was that the same process, looping through the same stored procedure in the same application installed in a test environment, took the expected four seconds or so. In SSMS, in both the production and test environments, everything was well-behaved. Firing up the Profiler, I could see that the sproc was reading around a thousand records when executed from SSMS (regardless of environment), but when the .NET application ran it, it was reading 250,000.
The culprit was the execution plan. Or rather, plans. Plural. SSMS was getting one plan and the .NET application was getting another. To figure out why SQL Server was using two separate plans, you first need the handles for the plans cached for the stored procedure:
SELECT sys.objects.object_id,
       sys.dm_exec_procedure_stats.plan_handle,
       [x].query_plan
FROM sys.objects
INNER JOIN sys.dm_exec_procedure_stats
    ON sys.objects.object_id = sys.dm_exec_procedure_stats.object_id
CROSS APPLY sys.dm_exec_query_plan(sys.dm_exec_procedure_stats.plan_handle) [x]
WHERE sys.objects.object_id = OBJECT_ID('StoredProcName')
From there, consulting the SET options for the plans revealed something interesting: one plan had ARITHABORT ON, the other ARITHABORT OFF. SSMS, by default, has it set to on; client applications have it set to off. The result was that SQL Server was using a horrifically bad execution plan for the .NET application and a reasonable one for SSMS. This is usually tied to issues with the parameter sniffing that SQL Server performs when compiling a plan. The solution I decided to go with was to queue the sproc for a plan recompile:
EXEC sp_recompile N'StoredProcName'
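For reference, the SET options that exposed the mismatch can be inspected directly in the plan cache with sys.dm_exec_plan_attributes. A sketch, using the same 'StoredProcName' placeholder; to my understanding, ARITHABORT corresponds to bit 4096 of the set_options bitmask:

```sql
-- Sketch: list each cached plan for the procedure with its SET options.
-- 'StoredProcName' is a placeholder; bit 4096 of set_options is ARITHABORT.
SELECT ps.plan_handle,
       pa.value AS set_options,
       CASE WHEN CAST(pa.value AS int) & 4096 = 4096
            THEN 'ON' ELSE 'OFF'
       END AS arithabort
FROM sys.dm_exec_procedure_stats ps
CROSS APPLY sys.dm_exec_plan_attributes(ps.plan_handle) pa
WHERE ps.object_id = OBJECT_ID('StoredProcName')
  AND pa.attribute = 'set_options';
```

Two rows with different set_options values for the same object confirm that two plans are being compiled under different session settings.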
The next time the sproc was executed, SQL generated a new plan for it. The result was a drop from 250k reads and 3 seconds per execution to around a thousand reads and under 40ms. Exactly what it should be.
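sp_recompile cleared the immediate problem, but for recurring parameter-sniffing trouble there are other common mitigations worth knowing. A sketch only; the table and parameter names here are hypothetical, not from the procedure in question:

```sql
-- Hypothetical statement inside a sproc: recompile just this statement on
-- each run, so the plan always fits the current parameter values:
SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);

-- Or pin the plan to a representative value rather than the sniffed one:
SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR (@CustomerId = 42));
```

OPTION (RECOMPILE) trades compile-time CPU on every execution for a plan tailored to each call; OPTIMIZE FOR keeps one plan but shapes it around a value you choose.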
Tame that query, part 2
In part one, I discussed the performance issues that we were having at KCF with one of our SQL databases. In this post, I’ll discuss how we identified the queries that were most problematic.
Finding problematic queries
The first thing we needed to do was determine whether there were 2 or 3 problematic queries causing the bulk of the load on the server. If we could identify a handful of bad queries, we could then either optimize them or possibly cache their results in Redis.
To find the problem queries, we ran the following SQL query.
SELECT TOP 10
    SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(qt.TEXT)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2)+1),
    qs.execution_count,
    qs.total_logical_reads, qs.last_logical_reads,
    qs.total_logical_writes, qs.last_logical_writes,
    qs.total_worker_time, qs.last_worker_time,
    qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
    qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
    qs.last_execution_time,
    qs.creation_time,
    DATEDIFF(MINUTE, qs.creation_time, GETDATE()) AS plan_age,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_worker_time DESC
This query brings back a table of the most expensive queries sorted by total_worker_time, which is the total CPU time spent running the query, in microseconds. Looking through the list, you'll quickly identify the most expensive queries to focus on.
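Since total_worker_time is reported in microseconds, converting it when ranking can make the numbers easier to read; a minimal sketch:

```sql
-- Sketch: CPU totals converted to milliseconds, plus a per-execution average.
SELECT TOP 10
    qs.execution_count,
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.total_worker_time / 1000 / qs.execution_count AS avg_cpu_ms
FROM sys.dm_exec_query_stats qs
ORDER BY qs.total_worker_time DESC;
```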
Analyzing the bad queries
If you run the above query in SQL Server Management Studio (SSMS), you'll see a link in the final column to the actual query plan.
By clicking on one of these links, you can see the actual query plan that SQL Server will use to execute the query. This is where specific performance issues can be identified. Perhaps I'll discuss reading query execution plans in another post, but below are a couple of things to look for.
SELECT TOP 1000 Hierarchy FROM Groups WHERE Hierarchy LIKE '/a%'
The query plan above shows a query that takes advantage of an index called IX_Hierarchy to return the requested data. Because it is an Index Seek, it uses the index efficiently, without excess disk I/O or CPU time. However, an innocent-enough change to the query, like the following, can have significant performance ramifications.
SELECT TOP 1000 Hierarchy FROM Groups WHERE LOWER(Hierarchy) LIKE '/a%'
By simply wrapping the predicate column in the LOWER() function, we remove SQL Server's ability to seek the IX_Hierarchy index; it must instead do an Index Scan, which in this case takes 10 times as long as the seek.
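If a case-normalized search is genuinely required, one way to keep the seek is to index a computed column. A sketch, assuming you are free to alter the table; the column and index names are made up:

```sql
-- Sketch: persist the lowercased value as a computed column and index it,
-- so the case-normalized search can seek instead of scan.
ALTER TABLE Groups ADD HierarchyLower AS LOWER(Hierarchy);
CREATE INDEX IX_HierarchyLower ON Groups (HierarchyLower);

SELECT TOP 1000 Hierarchy FROM Groups WHERE HierarchyLower LIKE '/a%';
```

On a case-insensitive collation (the SQL Server default), the LOWER() call is unnecessary in the first place, and the original seek-friendly predicate suffices.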
To compare the effects of changes to queries, place the original query and the changed query in a new query window in SSMS and click the Actual Execution Plan button.
After executing the queries, the plans used to execute them will be shown in a new tab in the Results window. This shows the query plans and, importantly, the query cost of each relative to the other; a lower cost is better.
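As a complement to the relative cost, the two variants can also be compared numerically; a sketch using the same Groups queries from above:

```sql
-- Sketch: emit logical reads and CPU/elapsed time for each statement to the
-- Messages tab, giving a numeric comparison alongside the plan costs.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT TOP 1000 Hierarchy FROM Groups WHERE Hierarchy LIKE '/a%';
SELECT TOP 1000 Hierarchy FROM Groups WHERE LOWER(Hierarchy) LIKE '/a%';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```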
Reading and optimizing query plans could be the topic of an entire series of blog posts and spending time understanding them can be quite beneficial.
In the next post, I’ll talk about some things we did at KCF to optimize some of our worst queries.
What's going on in SQL? Viewing the queries straining the CPU
Hello,
Your application or website has slowed down noticeably. You've run various checks, and the problem appears to be on the SQL side. The CPU is running at 100%, and there is something odd in Task Manager... One possible cause is that SQL queries are being fired at your site (an attack is underway).
When you run into problems like this, these are the queries to use for your checks on the SQL side. Start by checking the following.
To find the 10 queries that strain the CPU the most:
SELECT TOP 10
    qt.text AS statement_text,
    qp.query_plan,
    qs.total_worker_time AS cpu_time
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC
To find the top 10 queries by disk read/write (I/O) load:
SELECT TOP 10
    qs.total_logical_reads,
    qs.total_logical_writes,
    qs.execution_count,
    qs.total_logical_reads + qs.total_logical_writes AS [io_total],
    qt.text AS query_text,
    DB_NAME(qt.dbid) AS database_name,
    qt.objectid AS object_id
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
WHERE qs.total_logical_reads + qs.total_logical_writes > 0
ORDER BY [io_total] DESC
To find the most frequently executed queries straining the CPU:
SELECT qs.execution_count,
    qt.text AS query_text,
    qt.dbid,
    dbname = DB_NAME(qt.dbid),
    qt.objectid,
    qs.total_rows,
    qs.last_rows,
    qs.min_rows,
    qs.max_rows
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY qs.execution_count DESC
I hope this helps. Good luck.
from Aybar Dumlu - Blog, Friends, Familiar https://ift.tt/2qXGZls via IFTTT
Tweeted
#Data #SqlServer #Tips Query plan returns NULL when using SQL Server DMV sys.dm_exec_query_plan https://t.co/VyElwwgzkE via @mssqltips
— SQL Joker (@sql_joker) June 16, 2017
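The linked tip concerns cases where sys.dm_exec_query_plan returns NULL (commonly for large batches). To my knowledge, the statement-level sys.dm_exec_text_query_plan can often still retrieve the plan in those cases; a sketch:

```sql
-- Sketch: fetch the plan per statement via the offset-based DMF, which can
-- succeed where the batch-level sys.dm_exec_query_plan returns NULL.
-- query_plan here is nvarchar(max); TRY_CAST surfaces it as XML when valid.
SELECT qs.plan_handle,
       TRY_CAST(tqp.query_plan AS xml) AS query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle,
                                        qs.statement_start_offset,
                                        qs.statement_end_offset) tqp;
```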
How to identify the most costly SQL Server queries using DMV’s
See on Scoop.it - Digital Analytics
The query returns both the SQL Text from the sys.dm_exec_sql_text DMV and the XML Showplan data from the sys.dm_exec_query_plan DMV.
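A minimal sketch of the combination described, joining the stats DMV to both functions:

```sql
-- Sketch: SQL text plus XML showplan for each cached query.
SELECT st.text AS sql_text,
       qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp;
```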
SQL Server - Disk Usage Monitoring
http://social.technet.microsoft.com/wiki/contents/articles/3214.monitoring-disk-usage.aspx
Find Top Most Expensive Cached Queries (sys.dm_exec_query_stats)
The sys.dm_exec_query_stats DMV (Dynamic Management View) is described at http://msdn.microsoft.com/en-us/library/ms189741.aspx (via http://www.codeproject.com/Articles/579593/How-to-Find-the-Top-Most-Expens).
Top 10 Total CPU Consuming Queries
SELECT TOP 10
    QT.TEXT AS STATEMENT_TEXT,
    QP.QUERY_PLAN,
    QS.TOTAL_WORKER_TIME AS CPU_TIME
FROM SYS.DM_EXEC_QUERY_STATS QS
CROSS APPLY SYS.DM_EXEC_SQL_TEXT…
#DMV #Dynamic Management View #SQL Server #sys.dm_exec_query_plan #sys.dm_exec_query_stats #sys.dm_exec_sql_text
SQL Server - most costly queries in terms of Total CPU
http://www.johnsansom.com/how-to-identify-the-most-costly-sql-server-queries-using-dmvs/
SELECT TOP 20
    qs.sql_handle,
    qs.execution_count,
    qs.total_worker_time AS Total_CPU,
    total_CPU_inSeconds = --Converted from microseconds
        qs.total_worker_time/1000000,
    average_CPU_inSeconds = --Converted from microseconds
        (qs.total_worker_time/1000000) / qs.execution_count,
    qs.total_elapsed_time,…