When the developers at my current site started to use SQL Monitoring more often, they would occasionally contact me to ask why a statement didn’t appear on the screen, even though they knew for certain that they had run it 3 or 4 hours ago and had selected ‘All’ or ‘24 Hours’ from the ‘Active in last’ drop-down list.
When I investigated, I noticed that some of our busiest test systems only displayed statements from the past hour or two, even when ‘All’ was selected from the drop-down. I asked some friends at Oracle about this and they told me that there is a configurable limit on how many SQL plans will be monitored, controlled by the
_sqlmon_max_plans hidden parameter. It defaults to the number of CPUs * 20 and governs the size of a memory area dedicated to SQL Monitoring information. In retrospect this is probably a sensible approach, because who knows how many long-running SQL statements might be executed over a period of time on any particular system?
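For anyone who wants to check the current value on their own system, here is a sketch of the usual approach: hidden (underscore) parameters aren’t shown in V$PARAMETER by default, so you read them from the underlying x$ fixed tables, which means connecting as SYS. The column names below are from the standard x$ksppi/x$ksppsv layout.

```sql
-- Run as SYS: hidden ("_") parameters are not exposed in V$PARAMETER
-- by default, so read them from the underlying fixed tables instead.
SELECT i.ksppinm  AS parameter_name,
       v.ksppstvl AS current_value,
       v.ksppstdf AS is_default
FROM   x$ksppi  i
JOIN   x$ksppsv v ON v.indx = i.indx
WHERE  i.ksppinm = '_sqlmon_max_plans';
```

On a 32-CPU box you would expect to see 640 (32 * 20) if the parameter has been left at its default.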
I included this small snippet of information in my SQL Monitoring presentations earlier this year because it has become a fairly regular annoyance. I had planned to blog about it months ago, but first I wanted to check which memory area would be increased and whether there would be any significant implications.
Now that I’ve suggested to my client that we increase it across our systems, I’ve had a dig around in various V$ views to try to identify the memory implications, but didn’t notice anything obvious. My educated guess is that the additional memory requirement is unlikely to be onerous on modern systems, but I’d still like to know for sure, so I’ll keep digging. If anyone knows already, I’d be very interested …
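For completeness, a sketch of what the change itself looks like. The value 640 here is purely illustrative, underscore parameters must be double-quoted in ALTER SYSTEM, and as far as I can tell this one is not dynamically modifiable, so it goes into the spfile and takes effect at the next restart. As with any hidden parameter, it’s best changed only with Oracle Support’s blessing.

```sql
-- Illustrative value only: raise the cap on monitored SQL plans.
-- Underscore parameters must be quoted; change them only under
-- Oracle Support's guidance.
ALTER SYSTEM SET "_sqlmon_max_plans" = 640 SCOPE = SPFILE;
-- Takes effect once the instance is restarted.
```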