
#1542 [HHQ-2088] Additional metrics for SQL Server 2005

Bug
open
None
5
2012-10-09
2008-05-14
No

http://jira.hyperic.com/browse/HHQ-2088
SQL Server Performance Metrics


Note: some metrics are listed twice because they are relevant to more than one type of analysis.
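Most of the counters below can also be read from inside SQL Server itself via the sys.dm_os_performance_counters DMV (available from SQL Server 2005 on), rather than through Windows Performance Monitor. A minimal sketch in Python; the query is real DMV syntax, but the sample rows and the helper function are illustrative assumptions, not part of this document:

```python
# Sketch: pull a few Buffer Manager counters from SQL Server.
# sys.dm_os_performance_counters is a real DMV (SQL Server 2005+);
# the helper below and the sample rows are illustrative assumptions.
COUNTER_QUERY = """
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy',
                       'Lazy writes/sec',
                       'Buffer cache hit ratio')
"""

def rows_to_dict(rows):
    """Turn (counter_name, cntr_value) rows into a {name: value} dict.
    The DMV pads counter_name with trailing spaces, hence the strip()."""
    return {name.strip(): value for name, value in rows}

# Connecting would normally use a driver such as pyodbc (not shown);
# here we feed in sample rows of the shape the DMV returns.
sample = [("Page life expectancy      ", 412),
          ("Lazy writes/sec           ", 3)]
counters = rows_to_dict(sample)
print(counters["Page life expectancy"])  # 412
```

The LIKE filter on object_name avoids hard-coding the instance prefix ("SQLServer:" vs. "MSSQL$<instance>:"), which differs between default and named instances.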



Memory Bottleneck Analysis


Object: - Memory


Counter: - Free System Page Table Entries


Preferred Value: - > 7000


Description: - Free System Page Table Entries is the number of page table entries not currently in use by the system. If < 7000, consider removing the /3GB switch from boot.ini.


Reference: - KB 311901



Object: - SQL Server:Buffer Manager


Counter: - Page Life Expectancy


Preferred Value: - > 300


Description: - This performance monitor counter tells you, on average, how long data pages are staying in the buffer. If this value gets below 300 seconds, this is a potential indication that your SQL Server could use more memory in order to boost performance.


Reference: -



Object: - SQL Server:Buffer Manager


Counter: - Lazy Writes/Sec


Preferred Value: - < 20


Description: - This counter tracks how many times a second the Lazy Writer process moves dirty pages from the buffer to disk to free up buffer space. Generally speaking, this should not be a high value - say, no more than 20 per second or so. Ideally, it should be close to zero; a value of zero indicates that the buffer cache is large enough and SQL Server can wait for regular checkpoints to write dirty pages instead. A consistently high value indicates a need for more memory.


Reference: -



Object: - SQL Server:Buffer Manager


Counter: - Checkpoint Pages/Sec


Preferred Value: - Relative; it varies from server to server. Compare the average against a baseline capture to tell whether the value is high or low.


Description: - When a checkpoint occurs, all dirty pages are written to disk. This is a normal procedure and will cause this counter to rise during the checkpoint process. What you don't want to see is a high value for this counter over time. This can indicate that the checkpoint process is running more often than it should, which can use up valuable server resources. If this has a high figure (and this will vary from server to server), consider adding more RAM to reduce how often the checkpoint occurs, or consider increasing the "recovery interval" SQL Server configuration setting.


Reference: -
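Since the preferred value for Checkpoint Pages/Sec is relative, the practical check is to compare a recent average against the baseline capture. A minimal sketch; the 50% tolerance is an illustrative assumption, not a documented rule:

```python
def exceeds_baseline(samples, baseline_avg, tolerance=0.5):
    """Return True if the average of recent Checkpoint Pages/sec samples
    is more than `tolerance` (50% by default, an assumed cutoff) above
    the baseline average captured for this server."""
    avg = sum(samples) / len(samples)
    return avg > baseline_avg * (1 + tolerance)

print(exceeds_baseline([120, 130, 125], baseline_avg=60))  # True
print(exceeds_baseline([55, 65, 60], baseline_avg=60))     # False
```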



Object: - SQL Server:Buffer Manager


Counter: - Page reads/sec


Preferred Value: - < 90


Description: - Number of physical database page reads issued. 80 - 90 per second is normal; anything above that indicates an indexing or memory constraint.


Reference: -



Object: - SQL Server:Buffer Manager


Counter: - Page writes/sec


Preferred Value: - < 90


Description: - Number of physical database page writes issued. 80 - 90 per second is normal; anything more, check the Lazy Writes/Sec and Checkpoint Pages/Sec counters. If those are also relatively high, it points to a memory constraint.


Reference: -



Object: - SQL Server:Buffer Manager


Counter: - Free pages


Preferred Value: - > 640


Description: - Total number of pages on all free lists.


Reference: -



Object: - SQL Server:Buffer Manager


Counter: - Stolen pages


Preferred Value: - Varies. Compare with baseline


Description: - Number of pages used for miscellaneous server purposes (including procedure cache).


Reference: -



Object: - SQL Server:Buffer Manager


Counter: - Buffer Cache hit ratio


Preferred Value: - > 90%


Description: - Percentage of pages that were found in the buffer pool without having to incur a read from disk.


Reference: -
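The Buffer Manager counters above all have fixed preferred values, so they lend themselves to a simple automated check. A sketch using the thresholds from this document; the helper function itself is an illustrative assumption, not part of any SQL Server tooling:

```python
# Check the Memory Bottleneck counters against the preferred values
# listed in this document. The threshold numbers come straight from
# the entries above; the helper is an illustrative assumption.
MEMORY_THRESHOLDS = {
    "Page Life Expectancy":   lambda v: v > 300,
    "Lazy Writes/Sec":        lambda v: v < 20,
    "Page reads/sec":         lambda v: v < 90,
    "Page writes/sec":        lambda v: v < 90,
    "Free pages":             lambda v: v > 640,
    "Buffer Cache hit ratio": lambda v: v > 90,   # percent
}

def memory_warnings(counters):
    """Return the names of any supplied counters outside their
    preferred range; unknown or missing counters are skipped."""
    return [name for name, ok in MEMORY_THRESHOLDS.items()
            if name in counters and not ok(counters[name])]

print(memory_warnings({"Page Life Expectancy": 120, "Lazy Writes/Sec": 5}))
# ['Page Life Expectancy']
```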



Disk Bottleneck Analysis


Object: - SQL Server:Buffer Manager


Counter: - Page reads/sec


Preferred Value: - < 90


Description: - Number of physical database page reads issued. 80 - 90 per second is normal; anything above that indicates an indexing or memory constraint.


Reference: -



Object: - SQL Server:Buffer Manager


Counter: - Page writes/sec


Preferred Value: - < 90


Description: - Number of physical database page writes issued. 80 - 90 per second is normal; anything more, check the Lazy Writes/Sec and Checkpoint Pages/Sec counters. If those are also relatively high, it points to a memory constraint.


Reference: -



Object: - SQL Server:Buffer Manager


Counter: - Free pages


Preferred Value: - > 640


Description: - Total number of pages on all free lists.


Reference: -



Object: - SQL Server:Buffer Manager


Counter: - Stolen pages


Preferred Value: - Varies. Compare with baseline


Description: - Number of pages used for miscellaneous server purposes (including procedure cache).


Reference: -



Processor Bottleneck Analysis


Object: - System


Counter: - Context Switches/sec


Preferred Value: - < 3000


Description: - 1500 - 3000 per processor Excellent - Fair


> 6000 per processor Poor


The upper limit is about 40,000 per CPU at 90% CPU utilization.


NOTE: Remember to divide by number of processors


Reference: -
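The NOTE above matters: Context Switches/sec is a system-wide number, so it must be divided by the processor count before applying the per-processor bands. A sketch; the label for the unspecified 3000-6000 range is an assumption, since the document only defines Excellent - Fair and Poor:

```python
def context_switch_rating(switches_per_sec, n_cpus):
    """Grade raw Context Switches/sec against this document's
    per-processor bands (<= 3000 Excellent - Fair, > 6000 Poor),
    dividing by CPU count first as the NOTE says. The 'borderline'
    label for the 3000-6000 gap is an assumption; the document does
    not name that range."""
    per_cpu = switches_per_sec / n_cpus
    if per_cpu <= 3000:
        return "excellent-fair"
    if per_cpu <= 6000:
        return "borderline"
    return "poor"

print(context_switch_rating(20000, 8))   # 2500 per CPU -> 'excellent-fair'
print(context_switch_rating(56000, 8))   # 7000 per CPU -> 'poor'
```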



Object: - System


Counter: - Processor Queue Length


Preferred Value: - < 4 per CPU


Description: - For standard servers with long quantums:


<= 4 per CPU Excellent


< 8 per CPU Good


< 12 per CPU Fair


Reference: -
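Like context switches, Processor Queue Length is most meaningful per CPU. A sketch mapping the bands above to labels; the 'poor' label for values beyond Fair is an assumption, since the document stops at Fair:

```python
def queue_length_rating(queue_length, n_cpus):
    """Map System\\Processor Queue Length to this document's per-CPU
    bands (<= 4 Excellent, < 8 Good, < 12 Fair). The 'poor' label for
    anything beyond Fair is an assumption; the document stops at Fair."""
    per_cpu = queue_length / n_cpus
    if per_cpu <= 4:
        return "excellent"
    if per_cpu < 8:
        return "good"
    if per_cpu < 12:
        return "fair"
    return "poor"

print(queue_length_rating(12, 4))  # 3 per CPU  -> 'excellent'
print(queue_length_rating(40, 4))  # 10 per CPU -> 'fair'
```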



Object: - SQLServer:Access Methods


Counter: - Full Scans / sec


Preferred Value: - < 1


Description: - If we see high CPU, we need to investigate this counter; if the full scans are on small tables, we can ignore it. Values greater than 1 or 2 indicate table/index page scans, and we need to analyze how they can be avoided.


Reference: -



Object: - SQLServer:Access Methods


Counter: - Worktables Created/Sec


Preferred Value: - < 20


Description: - Number of worktables created in tempdb per second. Worktables are used for queries that use various spools (table spool, index spool, etc).


Reference: -



Object: - SQLServer:Access Methods


Counter: - Workfiles Created/Sec


Preferred Value: - < 20


Description: - Number of work files created per second. Tempdb workfiles are used in processing hash operations when the amount of data being processed is too big to fit into the available memory. You may be able to reduce this number by making the queries more efficient (adding or changing indexes), adding more memory, etc.


Reference: -



Object: - SQLServer:Access Methods


Counter: - Page Splits/sec


Preferred Value: - < 20


Description: - An interesting counter that reflects table/index design. This value needs to be as low as possible. If the number of page splits is high, consider increasing the fill factor of your indexes. An increased fill factor helps reduce page splits because there is more room in data pages before they fill up and a page split has to occur.


Reference: -



Overall SQL Server Bottleneck Analysis


Object: - SQLServer:General Statistics


Counter: - Logins/sec


Preferred Value: - < 2


Description: - > 2 per second indicates that the application is not correctly using connection pooling.


Reference: -



Object: - SQLServer:General Statistics


Counter: - Logouts/sec


Preferred Value: - < 2


Description: - > 2 per second indicates that the application is not correctly using connection pooling.


Reference: -



Object: - SQLServer:SQL Statistics


Counter: - Batch Requests/Sec


Preferred Value: - < 1000


Description: - Over 1000 batch requests per second indicate a very busy SQL Server.


Reference: -



Object: - SQLServer:SQL Statistics


Counter: - SQL Compilations/sec


Preferred Value: - < 10% of the number of Batch Requests / sec


Description: - The number of times per second that SQL Server compilations have occurred. This value needs to be as low as possible. A high value, such as over 100, indicates that a lot of ad hoc queries are running, which can drive up CPU usage.


Reference: -
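The preferred value here is a ratio, not an absolute number, so it is worth spelling out the arithmetic: compilations should stay under 10% of batch requests. A sketch of that check; the zero-batch guard is an illustrative assumption:

```python
def compile_ratio_ok(compilations_per_sec, batch_requests_per_sec):
    """True if SQL Compilations/sec stays under 10% of Batch
    Requests/sec, the preferred value given above. Treating zero
    batch requests as OK is an assumption to avoid division issues."""
    if batch_requests_per_sec == 0:
        return True  # nothing to compare against
    return compilations_per_sec < 0.10 * batch_requests_per_sec

print(compile_ratio_ok(30, 500))   # 6% of batches  -> True
print(compile_ratio_ok(80, 500))   # 16% of batches -> False
```

The same shape of check applies to the next entry: SQL Re-Compilations/sec should in turn stay under 10% of SQL Compilations/sec.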



Object: - SQLServer:SQL Statistics


Counter: - SQL Re-Compilations/sec


Preferred Value: - < 10% of the number of SQL Compilations/sec


Description: - This should be as close to zero as possible. Recompiles can cause deadlocks and take compile locks that are not compatible with any other lock type.


Reference: -












Confidential - © 2008 Hyland Software, Inc.












