I have heard a lot about the new features of the Oracle 10g server. Somewhat underrated, in my opinion, is the revolution of its new metering model for performance monitoring.
Up to Oracle 9i there were essentially two (most famous ;-)) methods to track down performance bottlenecks.
Statspack - goodies
- Sampling of global database statistics
- Top SQL statements by buffer gets
- Top SQL statements by disk reads
- Top SQL statements by executions
- HTML-Reporting with www.oraperf.com
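To make the Statspack workflow above concrete, here is a minimal SQL*Plus sketch (assuming Statspack is already installed under the PERFSTAT schema, which is not shown here):

```sql
-- Connect as the PERFSTAT user and take a snapshot before the workload:
EXEC statspack.snap;
-- ... let the workload of interest run for the sampling interval ...
EXEC statspack.snap;
-- Generate the text report; the script prompts for the two snapshot IDs:
@?/rdbms/admin/spreport.sql
```

The resulting text report contains the global statistics and the "Top SQL" sections mentioned above, aggregated between the two snapshots.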
Statspack - not so good
- Missing link between the wait events and the executed SQL or the session
- Aggregated values can mislead the interpretation of the report (if, for
  example, 1000 users each wait 1 second on "db file scattered read", this
  event shows up with 1000 seconds!)
- Hard to decide for an unknown system what a normal load is and what is not
- No time lines - minimum interval of 10 - 15 minutes
Extended trace event 10046 - goodies
- Excellent wait time diagnostics on SQL/Session level
- Generate text reports with tkprof
- Easy to find the root cause of a response time problem
- Excellent timing tree of recursive SQL with the Hotsos viewer
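For reference, a minimal sketch of enabling the extended trace for your own session and formatting the result with tkprof (the sort options shown are one reasonable choice, not the only one):

```sql
-- Enable extended SQL trace, including wait events and bind values:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- (level 8 = wait events only, level 12 = wait events + bind values)

-- ... run the SQL you want to diagnose ...

-- Switch the trace off again:
ALTER SESSION SET EVENTS '10046 trace name context off';

-- Then format the raw trace file from user_dump_dest on the OS level, e.g.:
--   tkprof <tracefile>.trc report.txt sort=prsela,exeela,fchela
```

Tracing a foreign session requires other means (e.g. DBMS_SYSTEM or logon triggers), which is exactly where the "find the right session" problem below starts.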
Extended trace event 10046 - not so good
- It can be difficult to find the right session to monitor
- Not able to monitor reconnecting sessions
- Sophisticated report tools are very expensive (www.hotsos.com/)
- Focus on only one session - difficult to trace programs that allocate more than one database session
Both approaches do not provide time lines for monitoring - all metering data is aggregated. So you do not see the timing of the real peaks in statement execution, or waiting gaps caused, for example, by slow log switches.
Oracle 10G - the new approach
To me it seems that Oracle has closed the gap between the global approach and the session-oriented approach. The thousands of HTML pages of the new Database Control make it easy to overlook what an essential, innovative step in Oracle performance engineering this really is. Here is a subset of the description of the new features (copied from a Metalink note):
New Time Model:
In Oracle 10g the database introduces a different way of storing statistics and metrics regarding the time consumed by the different connected sessions. Without it, it would not be possible for the database to clearly identify the root cause of a problem in an acceptable timeframe. The following describes the characteristics in detail.
Active Session History (ASH): One of the components of the AWR repository is the Active Session History (ASH), which every second collects samples of all sessions that are in status "waiting" from v$session (inactive sessions are not captured).
This feature allows you to report a resource-consuming SQL query together with its wait events (look at V$ACTIVE_SESSION_HISTORY), its time line, its actual execution plans, and its session context. The data is kept in the Oracle repository (DBA_HIST views).
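A minimal sketch of what such a report could look like - this query (interval chosen arbitrarily) counts ASH samples per SQL statement and wait event over the last 30 minutes:

```sql
-- Top SQL and wait events over the last 30 minutes, as sampled by ASH:
SELECT sql_id, event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 30/1440
GROUP  BY sql_id, event
ORDER  BY samples DESC;
```

Because ASH samples once per second, the sample count per SQL_ID and event is a rough approximation of the seconds spent there - exactly the wait-to-SQL-to-session link that Statspack was missing.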
All data is kept in tables and is therefore easy to report. SQL statements are identified by a unique SQL_ID and stored completely (as CLOB) in the repository - easy to aggregate, to format, and to export.
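For example, the full statement text behind a SQL_ID found in an ASH or AWR report can be pulled straight from the repository (the `&sql_id` substitution variable is just a SQL*Plus convenience here):

```sql
-- Retrieve the complete statement text for a given SQL_ID from the repository:
SELECT sql_text
FROM   dba_hist_sqltext
WHERE  sql_id = '&sql_id';
```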
The new Database Control allows a graphical view of this data. And what looks so colorful and easy to maintain is the new Oracle approach - it makes you even more independent of expensive performance tools from non-Oracle providers!
It is clear that the new performance monitoring features are more than just an internalized Statspack.
It's a revolution!
You cannot compare/combine these methods (do not mix the old with the new wine;-)).
Good job Oracle - Please continue!
UPDATE: Found a document that goes into depth on the new Oracle 10g tuning model.
Automatic Performance Diagnosis with Oracle Database 10g Release 2
UPDATE: Found an interesting link about the AWR: The Automatic Workload Repository