1. Use System Dashboard
Key Points
- Accessible via Management Portal home page
- Real-time system health visualization
- Key metrics: CPU, memory, disk I/O, license units, process count
- Color-coded indicators for quick status assessment
- Drill-down capability for detailed investigation
Detailed Notes
Overview and Access
The System Dashboard provides a centralized, real-time view of InterSystems IRIS health and performance metrics accessible through the Management Portal home page. The dashboard presents critical system information in an intuitive, visual format using charts, graphs, and color-coded indicators that enable rapid assessment of system status.
Dashboard Components
Key dashboard components include:
- System overview metrics: instance name, version, uptime, and current status
- License utilization: used versus available license units, with visual warnings as capacity approaches
- Process activity: active process count and recent trends
- CPU utilization: current percentage with historical graphs
- Memory usage: allocated, used, and available memory
- Disk I/O statistics: read/write operations per second
- Database free space: indicators with warnings for databases approaching capacity
- Journal status: current journal file and space utilization
- Network activity: client connections
- Service status: whether critical services (web server, superserver, etc.) are running
Color Coding and Drill-Down
The dashboard uses color coding for rapid status assessment - green indicates normal operations, yellow signals warnings requiring attention, and red highlights critical conditions demanding immediate action. Each dashboard component typically offers drill-down capability: clicking on a metric navigates to detailed views with comprehensive statistics, historical trends, and management controls. For example, clicking on the license utilization section navigates to detailed license management pages; clicking process count navigates to the Processes page for detailed process examination. The dashboard auto-refreshes periodically (configurable interval) to provide near-real-time monitoring without manual page reload.
Best Practices
Best practices for dashboard use include reviewing the dashboard regularly as part of daily operational procedures, investigating yellow/red indicators promptly, establishing baseline understanding of normal dashboard appearance for your environment to recognize anomalies quickly, and training operational staff on dashboard interpretation for effective first-line monitoring. The dashboard serves as the primary entry point for system monitoring and often the first place to look when users report performance issues or system problems. While the dashboard provides excellent overview monitoring, detailed troubleshooting typically requires navigating to specific management pages or using specialized utilities like ^PERFMON.
Documentation References
2. Monitor global and routine buffer performance
Key Points
- Global buffers cache database blocks in memory
- Routine buffers cache compiled routine code in memory
- Buffer hit ratio is key performance indicator (should be >90%)
- Low hit ratios indicate insufficient buffer allocation
- Configure buffer sizes in [config] section of iris.cpf
Detailed Notes
Understanding Buffer Pools
Buffer performance is fundamental to InterSystems IRIS throughput and response time. The system uses two primary buffer pools: global buffers cache database blocks, while routine buffers cache compiled routine code. When application code accesses a global, IRIS first checks the global buffer pool. If the block is found (a "hit"), the access completes immediately from memory. If not found (a "miss"), IRIS must read from disk (orders of magnitude slower) and cache the block in buffers for future access. The buffer hit ratio (hits divided by total references) measures buffer effectiveness - ratios above 90% generally indicate good performance, while lower ratios suggest insufficient buffer allocation causing excessive disk I/O.
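The hit-ratio arithmetic itself is simple; a minimal illustrative helper (plain Python, not an IRIS API) makes the calculation and the 90% guideline concrete:

```python
def buffer_hit_ratio(hits: int, misses: int) -> float:
    """Buffer hit ratio as a percentage: hits divided by total references."""
    total = hits + misses
    if total == 0:
        return 0.0  # no references yet; report 0 rather than divide by zero
    return 100.0 * hits / total

# 980,000 hits against 20,000 misses: 98% of accesses served from memory.
print(buffer_hit_ratio(980_000, 20_000))  # 98.0
print(buffer_hit_ratio(980_000, 20_000) > 90.0)  # True: above the guideline
```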
Monitoring Buffer Performance
Monitor buffer performance through multiple interfaces. The System Dashboard shows current buffer hit ratios. The ^PERFMON utility provides real-time buffer statistics including hit count, miss count, hit ratio percentage, buffer pool size, and disk read operations. The Management Portal System Performance page (System Operations > System Performance) offers detailed buffer analysis with historical trending. Key buffer metrics include global buffer hit ratio (target: >90-95%), routine buffer hit ratio (target: >95%), buffer pool utilization percentage (how much of allocated buffer is actively used), disk read rate (physical reads per second - lower is better when hit ratio is high), and buffer pool aging (how quickly buffers are being recycled).
Diagnosing Low Hit Ratios
Low buffer hit ratios indicate performance problems. Resolution typically involves increasing buffer allocation, though root cause analysis should confirm this is appropriate. Very low hit ratios (<70%) might indicate application design issues like scanning entire large globals inefficiently.
Configuring Buffer Sizes
Buffer sizes are configured in the Configuration Parameter File (iris.cpf) [config] section via the globals parameter (global buffer memory, specified in MB, with separate allocations per database block size) and the routines parameter (routine buffer memory in MB); the same settings are exposed in the Management Portal under System Administration > Configuration > System Configuration > Memory and Startup. Changes require an instance restart. When increasing buffers, ensure adequate physical RAM exists - allocating more buffers than available memory causes operating system paging, which severely degrades performance. To estimate memory consumption from buffer counts: number of global buffers x database block size (default 8KB) + number of routine buffers x routine buffer size = total buffer memory. Monitor operating system memory statistics alongside IRIS buffer metrics to ensure no system-level memory pressure. For optimal performance, size global buffers to hold the working set (frequently accessed data) in memory and routine buffers to hold all compiled code. Periodic buffer monitoring identifies trends - steadily declining hit ratios might indicate growing data volume requiring buffer increases.
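Before resizing, it helps to sanity-check the memory a proposed allocation will consume; this Python sketch applies the blocks-times-block-size arithmetic (the 8KB defaults are assumptions - confirm the block sizes on your instance):

```python
def buffer_memory_mb(global_buffers: int, routine_buffers: int,
                     db_block_bytes: int = 8192,
                     routine_block_bytes: int = 8192) -> float:
    """Estimate total buffer pool memory in MB from buffer counts and block sizes."""
    total_bytes = (global_buffers * db_block_bytes
                   + routine_buffers * routine_block_bytes)
    return total_bytes / (1024 * 1024)

# 262,144 global buffers of 8KB (2 GB) plus 65,536 routine buffers (512 MB).
print(buffer_memory_mb(262_144, 65_536))  # 2560.0
```

Compare the result against physical RAM (minus OS and process overhead) before committing the change, since over-allocation triggers paging.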
Documentation References
3. Track ECP statistics (if applicable)
Key Points
- ECP enables remote database access across IRIS instances
- Monitor ECP connection status and throughput
- Track buffer transfers and network latency
- Application server sends requests; data server responds with blocks
- ECP statistics available via Management Portal and command-line utilities such as ^mgstat
Detailed Notes
ECP Overview and Monitoring Areas
Enterprise Cache Protocol (ECP) is InterSystems' protocol for distributed data access, enabling application servers to access databases located on remote data servers. In ECP configurations, comprehensive monitoring ensures optimal distributed system performance and identifies connectivity or latency issues. ECP monitoring focuses on several key areas: connection health, data transfer statistics, network performance, and buffer efficiency.
Monitoring Tools and Key Metrics
The Management Portal provides ECP monitoring through System Operations > ECP Connections, displaying all active ECP connections with status, bandwidth utilization, and error counts. For command-line monitoring, the ^mgstat utility reports ECP activity alongside its other per-second statistics. Key ECP metrics to monitor include connection status (all configured connections should show "active"), blocks transferred (volume of data blocks sent over ECP), network latency (round-trip time for ECP requests, which should be consistently low), ECP buffer hit ratio (how often requested blocks are found in the application server's ECP cache), connection errors (network issues, timeouts, or failures), and bandwidth utilization (ensuring network capacity is adequate).
ECP Architecture and Data Flow
ECP architecture has application servers requesting data from data servers. When an application server needs a global block, it first checks its local ECP buffers. On miss, it sends an ECP request to the data server, which retrieves the block from its global buffers (or disk if necessary) and transmits it back. Network latency directly impacts this process, so ECP configurations should use high-speed, low-latency networks (typically dedicated gigabit or faster connections).
Common Issues and Troubleshooting
Common ECP issues include connection failures from network problems or configuration errors, high latency from network congestion or geographic distance, insufficient ECP buffers on application servers causing excessive network requests, and data server overload from too many application servers. Troubleshooting ECP problems involves checking connection status to verify all connections are active, reviewing network statistics for packet loss or high latency, examining ECP buffer hit ratios on application servers (low ratios suggest increasing ECP buffer allocation), analyzing data server load to ensure adequate capacity for all application servers, and correlating ECP metrics with application performance to identify impact. Best practices for ECP monitoring include establishing baseline metrics for normal operation, implementing automated monitoring with alerts for connection failures or performance degradation, sizing ECP buffers appropriately on application servers (based on working set size), and documenting ECP topology and dependencies for troubleshooting.
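The troubleshooting steps above reduce to a small set of threshold checks per connection; the following Python sketch encodes them with hypothetical metric names and illustrative thresholds (tune both to your environment - this is not an IRIS API):

```python
def ecp_health(status: str, latency_ms: float, hit_ratio_pct: float,
               errors: int) -> list[str]:
    """Return a list of findings for one ECP connection, based on simple thresholds."""
    findings = []
    if status != "active":
        findings.append("connection not active: check network and configuration")
    if latency_ms > 5.0:  # assumed ceiling for a dedicated local network
        findings.append("high latency: check for congestion or routing issues")
    if hit_ratio_pct < 90.0:
        findings.append("low ECP buffer hit ratio: consider larger ECP buffers")
    if errors > 0:
        findings.append("connection errors observed: review network statistics")
    return findings

# A healthy connection produces no findings.
print(ecp_health("active", 1.2, 96.0, 0))  # []
```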
Documentation References
4. Use System Performance Monitor
Key Points
- Real-time performance monitoring utility
- Access via Terminal: do ^PERFMON
- Displays commands/sec, globals/sec, disk I/O, buffer hits
- Update interval configurable (default 5 seconds)
- Essential for identifying performance bottlenecks
Detailed Notes
Launching and Using ^PERFMON
The System Performance Monitor, accessed via the ^PERFMON utility, is InterSystems IRIS's primary tool for real-time performance analysis and troubleshooting. Unlike the Management Portal dashboard which provides overview monitoring, ^PERFMON delivers detailed, continuously updating metrics essential for in-depth performance investigation. Launch ^PERFMON from Terminal by entering "do ^PERFMON" - the utility displays a continuously updating screen of performance statistics refreshing at configurable intervals (default 5 seconds).
Key Metrics Displayed
Key metrics displayed include activity rates (commands per second, global references per second, routine calls per second), buffer statistics (global buffer hit ratio, routine buffer hit ratio, disk reads per second), disk I/O (physical reads, physical writes, I/O operations per second), lock activity (lock requests per second, lock wait time), journal statistics (journal entries per second, journal bytes written), process information (active process count, waiting processes), memory utilization (buffer pool usage, memory allocation), and network activity (ECP transfers if configured).
Interpreting ^PERFMON Output
Understanding ^PERFMON output enables rapid problem identification. High commands/second with low global references suggests routine-intensive processing. High global references with low buffer hit ratios indicate memory pressure requiring buffer increases. High disk read operations with reasonable hit ratios might suggest poorly designed sequential scans. High lock wait times point to lock contention issues. The utility offers several display modes. The default "standard" mode shows comprehensive statistics on a single screen. Detail modes focus on specific subsystems like disk I/O, lock activity, or ECP statistics. The utility also supports output redirection to log files for historical analysis.
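These interpretation patterns behave like a small decision table; this illustrative Python function (field names and cutoffs are assumptions, not actual ^PERFMON output fields) shows one way to encode them:

```python
def diagnose(global_refs_per_sec: float, hit_ratio_pct: float,
             disk_reads_per_sec: float, lock_wait_ms: float) -> str:
    """Map a combination of observed metrics to the most likely bottleneck."""
    if lock_wait_ms > 100:
        return "lock contention: examine process locks for blocking processes"
    if hit_ratio_pct < 90 and global_refs_per_sec > 1000:
        return "memory pressure: consider increasing global buffers"
    if disk_reads_per_sec > 500 and hit_ratio_pct >= 90:
        return "possible sequential scans: review application access patterns"
    return "no obvious bottleneck in these metrics"

# Busy global access with a poor hit ratio points at undersized buffers.
print(diagnose(2000, 80, 600, 10))
```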
Baseline Comparison and Troubleshooting Workflows
Interpreting ^PERFMON data requires understanding normal performance baselines for your environment. Transaction rates, global reference patterns, and buffer hit ratios vary significantly based on application workload characteristics. Establish baselines during normal operations to recognize abnormal patterns during problems. Common ^PERFMON troubleshooting workflows: user reports slow performance -> run ^PERFMON -> observe low buffer hit ratio and high disk reads -> diagnosis: insufficient global buffers -> resolution: increase globals_buffers configuration. Or: application timeouts -> run ^PERFMON -> observe high lock wait statistics -> examine process locks for contention -> identify blocking processes. ^PERFMON is particularly valuable during live performance problems when immediate diagnosis is needed before symptoms disappear.
Documentation References
5. Configure alerts and notifications
Key Points
- System Monitor provides automated alert generation
- Configure thresholds for disk space, license units, process counts
- Email notifications for alert conditions
- Alert history tracking for trend analysis
- Integration with enterprise monitoring tools possible
Detailed Notes
System Monitor Overview
Proactive monitoring through automated alerts enables administrators to identify and address potential problems before they cause outages or data loss. InterSystems IRIS provides System Monitor functionality for automated monitoring and alerting. Access System Monitor configuration through the Management Portal at System Operations > System Monitor. System Monitor continuously checks configured metrics against defined thresholds and generates alerts when thresholds are exceeded.
Alertable Conditions
Key alertable conditions include database free space dropping below specified percentage, journal directory space approaching capacity, license unit utilization exceeding thresholds, excessive process counts indicating potential runaway processes, service failures (web server, superserver shutdowns), buffer hit ratios dropping below acceptable levels, sustained high CPU utilization, memory allocation approaching limits, and backup failures or delays.
Configuring Alerts
For each metric, configure alert thresholds (warning level and critical level), notification methods (email, SNMP trap, log entry), notification recipients (email distribution lists), and alert suppression rules (prevent alert flooding from transient conditions). System Monitor also implements alert escalation - if a warning condition persists or worsens to critical, it can trigger escalated notifications to additional personnel. Alert history is maintained in the System Monitor database, providing trend analysis capability. Review alert history to identify recurring patterns suggesting underlying issues needing architectural changes. For example, regular disk space alerts might indicate need for larger volumes or more aggressive data archiving.
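Warning/critical thresholds plus suppression of transient conditions can be modeled simply; a Python sketch, assuming suppression means requiring several consecutive over-threshold readings (one of several possible suppression rules, not the System Monitor implementation):

```python
def evaluate_alert(value: float, warning: float, critical: float,
                   consecutive: int, required: int = 3):
    """Classify a metric reading and suppress transient spikes.

    Returns (state, streak). An alert fires only after `required`
    consecutive over-threshold readings; a reading below warning
    resets the streak to zero.
    """
    if value >= critical:
        level = "critical"
    elif value >= warning:
        level = "warning"
    else:
        return ("ok", 0)
    consecutive += 1
    if consecutive >= required:
        return (level, consecutive)
    return ("suppressed", consecutive)

# Disk utilization at 92% with warning=85, critical=95:
# the first two readings are suppressed, the third raises a warning.
print(evaluate_alert(92, 85, 95, 0))  # ('suppressed', 1)
print(evaluate_alert(92, 85, 95, 2))  # ('warning', 3)
```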
Email and Enterprise Integration
Email notification configuration requires SMTP server settings configured in System Monitor preferences. Specify SMTP server address, port, authentication credentials (if required), and sender email address. Test email configuration to verify notifications will deliver successfully. Integration with enterprise monitoring platforms (Nagios, Zabbix, SCOM, etc.) is achievable through SNMP trap generation, email forwarding, or custom integration using InterSystems IRIS APIs to query metrics. Such integration provides centralized monitoring across heterogeneous infrastructure.
Best Practices
Best practices for alert configuration include setting thresholds based on your environment's normal operational patterns (too sensitive creates alert fatigue; too lenient misses real issues), testing alert delivery regularly to ensure notifications work when needed, documenting alert response procedures so on-call staff know how to handle each alert type, and reviewing and tuning alert configurations quarterly based on operational experience. Critical production environments should have 24/7 alert monitoring with documented escalation procedures.
Documentation References
6. Review audit logs
Key Points
- Audit database records security events and system changes
- Track user authentication, privilege escalation, configuration changes
- Access via Management Portal: System Administration > Security > Auditing
- Configure which events to audit
- Regular review required for compliance and security
Detailed Notes
Purpose and Storage
Audit logging provides comprehensive tracking of security-relevant events and system changes for compliance, security monitoring, and forensic investigation. InterSystems IRIS maintains an audit database (typically IRISAUDIT) recording configured event types with detailed information about who performed actions, what was changed, when changes occurred, and from where actions originated.
Event Categories Tracked
The audit log tracks numerous event categories including user authentication (successful and failed logins, logouts, session timeouts), authorization events (privilege escalation using roles, permission denials), configuration changes (system parameter modifications, database configuration changes), user account management (account creation, deletion, password changes, role assignments), database operations (database creation, deletion, mount/dismount), security-sensitive operations (license key changes, security setting modifications), data access (if configured - queries, updates, deletes), and system operations (instance startup, shutdown, service starts/stops).
Reviewing and Configuring Audit Logs
Review audit logs through the Management Portal at System Administration > Security > Auditing > View Audit Database, which provides filtering, searching, and export capabilities. Filters include date range, event type, user, namespace, and outcome (success/failure). Export audit data for long-term archival or import into security information and event management (SIEM) systems. Audit configuration is accessible from the same Auditing menu, where you enable/disable specific event types, configure audit database size and purging policies, and set audit performance parameters. Comprehensive auditing generates significant data volume, so balance security requirements against performance impact and storage consumption.
Review Practices and Compliance
Common audit review practices include daily review of authentication failures to detect potential intrusion attempts, weekly review of privilege escalation events to ensure appropriate usage, monthly review of configuration changes to maintain change control, and on-demand review during security incidents for forensic investigation. Compliance frameworks (HIPAA, PCI-DSS, SOX, etc.) often mandate specific audit logging capabilities and retention periods. Ensure audit configuration meets applicable requirements, including protecting audit logs from tampering (read-only mounting or export to write-once media), retaining audit data for required periods, and demonstrating regular review processes. A scheduled audit purge task manages audit database size by removing old records based on configured retention policies. Configure this carefully to balance storage management with retention requirements. Best practice is exporting audit records to long-term archival storage before purging from the operational audit database.
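The archive-before-purge discipline amounts to splitting records on a retention window; an illustrative Python sketch (the record shape is a simplifying assumption, not the audit database schema):

```python
from datetime import date, timedelta

def purge_candidates(records: list, today: date, retention_days: int):
    """Split audit records into (keep, purge) by a retention window.

    Records older than the window should be exported to archival storage
    before they are removed from the operational audit database.
    """
    cutoff = today - timedelta(days=retention_days)
    keep = [r for r in records if r["date"] >= cutoff]
    purge = [r for r in records if r["date"] < cutoff]
    return keep, purge

# With a 90-day window on 2024-07-01, a January record is a purge candidate.
recs = [{"date": date(2024, 1, 1)}, {"date": date(2024, 6, 1)}]
keep, purge = purge_candidates(recs, date(2024, 7, 1), 90)
print(len(keep), len(purge))  # 1 1
```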
Documentation References
7. Interpret key performance indicators
Key Points
- Buffer hit ratio: >90% indicates adequate memory allocation
- Disk I/O wait time: High values suggest storage bottleneck
- License utilization: Track against capacity for planning
- Transaction rate: Commands/sec and globals/sec indicate load
- Response time: End-user experience metric
Detailed Notes
Primary KPIs Overview
Effective system management requires understanding and interpreting key performance indicators that signal system health and performance. Primary KPIs for InterSystems IRIS environments include buffer hit ratios, disk I/O metrics, CPU utilization, license consumption, transaction rates, response times, and error rates.
Buffer Hit Ratios
Global buffer hit ratio is perhaps the single most important performance indicator - values above 90-95% indicate adequate memory allocation for database caching, while lower values signal memory pressure causing excessive disk I/O. Calculate as: (buffer hits / total global references) x 100%. Routine buffer hit ratio (target >95%) indicates whether compiled code is cached effectively.
Disk, CPU, and License Metrics
Disk I/O metrics include operations per second, average wait time, and queue depth. High disk wait times (>10ms average) or queue depths suggest storage subsystem bottlenecks requiring faster storage or I/O workload reduction. CPU utilization should average 60-70% during peak load, leaving headroom for spikes. Sustained utilization >80% suggests need for additional CPU capacity or application optimization. License utilization tracks used versus available license units. Monitor trends to predict when additional licenses will be needed - enterprise growth planning typically starts procurement when utilization consistently exceeds 70-75%.
Transaction Rates and Response Times
Transaction rate metrics (commands per second, global references per second, routine calls per second) characterize workload volume. Establish baselines and monitor for abnormal deviations. Sudden drops might indicate application problems; unexpected spikes could signal runaway processes or usage anomalies. Response time measures end-user experience - web application page load times, query execution duration, or transaction completion time. Unlike internal metrics like buffer hit ratio, response time directly reflects user experience. Monitor response time at application level and correlate with infrastructure metrics to identify causes when degradation occurs.
Error Rates and Network Metrics
Error rates track failed operations, authentication failures, application errors, and system errors. Low error rates (<0.1% of operations) are normal; significant increases warrant investigation. Network metrics for distributed architectures include bandwidth utilization, latency, and packet loss. High latency (>5ms for local networks, >50ms for WAN) impacts distributed application performance.
Interpreting KPIs Effectively
KPI interpretation requires context - absolute values matter less than trends and deviations from established baselines. Implement KPI tracking dashboards showing historical trends alongside current values. Automated threshold-based alerting notifies administrators when KPIs exceed acceptable ranges. Regular KPI review (weekly or monthly) identifies developing issues requiring proactive attention before they impact service.
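Since trends and deviations from baseline matter more than absolute values, a baseline comparison can be sketched in a few lines of Python (the metric names and the 20% tolerance are illustrative assumptions):

```python
def deviation_pct(current: float, baseline: float) -> float:
    """Percentage deviation of a KPI from its baseline (positive = above)."""
    return 100.0 * (current - baseline) / baseline

def flag_kpis(current: dict, baseline: dict,
              tolerance_pct: float = 20.0) -> list[str]:
    """Flag KPIs that deviate from baseline by more than the tolerance."""
    return [name for name, value in current.items()
            if abs(deviation_pct(value, baseline[name])) > tolerance_pct]

# A 60% drop in command rate is flagged; a 1% hit-ratio dip is not.
baseline = {"commands_per_sec": 5000.0, "hit_ratio": 95.0}
current = {"commands_per_sec": 2000.0, "hit_ratio": 94.0}
print(flag_kpis(current, baseline))  # ['commands_per_sec']
```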
Documentation References
8. Run ^SystemPerformance utility
Key Points
- Collects comprehensive performance data using predefined profiles
- $$run^SystemPerformance("profile") initiates a collection run and returns a run ID
- Collection runs at scheduled intervals; HTML reports are generated from the collected data when a run completes
- Run interactively from Terminal or schedule via Task Manager
- Captures IRIS metrics plus operating system statistics
- Essential for WRC support cases and capacity planning
Detailed Notes
Purpose and Comparison to ^PERFMON
The ^SystemPerformance utility provides extended performance data collection and reporting capabilities beyond real-time monitoring. While ^PERFMON is ideal for live troubleshooting, ^SystemPerformance excels at capturing comprehensive performance snapshots over time periods for trend analysis, capacity planning, and support case documentation.
Running ^SystemPerformance
Launch ^SystemPerformance interactively from Terminal by entering "do ^SystemPerformance", which presents a menu of available profiles. Select a profile to begin data collection. Profiles define what metrics to collect, collection intervals, and duration. The utility then captures performance metrics at regular intervals throughout the specified period, including IRIS-specific statistics (global references, buffer hit ratios, lock activity, journal I/O, process counts) and operating system metrics (CPU utilization, memory usage, disk I/O, network traffic). Starting a run schedules the collection automatically. Upon completion, the collected data is stored in files within the configured output directory and assembled into an HTML report presenting the data in an organized, readable format with charts and tables suitable for sharing with colleagues or InterSystems support.
Key Functions and Scheduling
Key ^SystemPerformance entry points include:
- $$run^SystemPerformance("profile") - Initiates data collection using the specified profile name and returns a run ID
- $$Stop^SystemPerformance(runId) - Stops a run in progress
- $$Preview^SystemPerformance(runId) - Generates a report from the data collected so far, before the run completes
- Additional entry points handle customization: output directory configuration, profile management, and version information
For automated collection, schedule ^SystemPerformance using the Task Manager. Create a scheduled task that calls $$run^SystemPerformance with the desired profile name. This enables regular performance baseline collection (e.g., daily one-hour samples during peak business hours) without manual intervention. The profile defines all collection parameters including duration, interval, and metrics to capture.
Use Cases and Best Practices
The collected data supports multiple use cases: establishing performance baselines for normal operations, identifying trends indicating developing problems, documenting performance for capacity planning decisions, providing comprehensive data for InterSystems WRC support cases, and comparing before/after metrics for change validation. Report contents vary by platform but typically include system configuration summary, CPU and memory statistics, disk I/O metrics, IRIS performance counters, buffer pool statistics, and network activity. Best practices include running regular baseline collections, archiving reports for historical comparison, using consistent collection parameters for comparable results, and including ^SystemPerformance output when opening WRC support cases for performance issues.
Documentation References
9. Determine global sizes
Key Points
- ^%GSIZE utility calculates global storage consumption
- Management Portal System Explorer provides global size information
- ^INTEGRIT integrity checks include global size statistics
- Identify rapidly growing globals for capacity planning
- Essential for database sizing and optimization
- Note: ^GLOSTAT is for buffer pool usage statistics, NOT global sizes
Detailed Notes
Purpose and Available Tools
Understanding global sizes is essential for database capacity planning, storage optimization, and identifying application data growth patterns. InterSystems IRIS provides multiple methods for determining global sizes. It's important to note that ^GLOSTAT is used for monitoring buffer pool usage and performance statistics, NOT for determining global sizes.
Primary Tools for Determining Global Sizes
^%GSIZE Utility: The ^%GSIZE utility is the primary Terminal-based tool for calculating global sizes. To use it:
1. Open Terminal and switch to the namespace containing the globals
2. Run: `do ^%GSIZE`
3. Select globals to analyze (wildcards are accepted, e.g., `MyApp*`)
4. The utility displays size information including total blocks and bytes for each global
^%GSIZE provides accurate size calculations by traversing global structures and counting actual storage utilization. The output shows:
- Global name
- Total blocks used
- Total bytes consumed
- Pointer blocks vs. data blocks
- Block utilization efficiency
Management Portal - System Explorer: Navigate to System Explorer > Globals for GUI-based global size analysis:
- Browse globals in any namespace
- View global list with size columns showing blocks and MB
- Sort by size to identify largest globals
- Click on individual globals to see detailed size breakdowns
- Export global lists with size information for reporting
The Management Portal provides the most user-friendly interface for global size analysis and is ideal for:
- Quick identification of largest globals
- Periodic capacity reviews
- Generating reports for management
- Visual comparison of global sizes across databases
^INTEGRIT Integrity Check: Running ^INTEGRIT (integrity check utility) provides global size information as part of its comprehensive database analysis:
1. Run: `do ^INTEGRIT` from Terminal
2. Select databases to check
3. The integrity report includes for each global:
- Total blocks used
- Data blocks and pointer blocks
- Global block density (utilization percentage)
- Structural integrity status
The integrity check is particularly valuable because it combines size information with structural validation, helping identify globals that may need compaction or optimization.
Additional Programmatic Methods
%Library.GlobalEdit class: For programmatic access, the GetGlobalSize() class method reports a global's allocated and used space for a given database directory (a sketch - confirm the exact signature in the class reference): ``` set sc = ##class(%Library.GlobalEdit).GetGlobalSize(directory, "MyGlobal", .allocated, .used) ``` On success, the returned sizes can be incorporated into monitoring scripts or applications.
%SYS.GlobalQuery class: Global names and sizes for a database can be listed through this class's Size query, run via a result set (a sketch - check the class reference for the exact query parameters and column names): ``` set rs = ##class(%ResultSet).%New("%SYS.GlobalQuery:Size")  do rs.Execute(directory)  while rs.Next() { write rs.Get("Name"),": ",rs.Get("Used"),! } ```
Key Metrics and Analysis
Important metrics from global size analysis:
- Total blocks used: Multiply by block size (typically 8KB) for total bytes
- Data blocks vs. pointer blocks: Ratio indicates global structure efficiency
- High pointer block ratio may indicate inefficient global structure
- Data-heavy globals use primarily data blocks
- Block density/utilization: Percentage of allocated block space actually used
- Low density suggests fragmentation or deletion patterns
- Candidates for global compaction
- Growth rate over time: Compare periodic measurements to project capacity needs
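The blocks-to-bytes and density arithmetic above can be sketched directly (the 8KB block size is the common default, but verify it for each database):

```python
def global_size_mb(blocks_used: int, block_bytes: int = 8192) -> float:
    """Convert a block count to MB (block size is typically 8KB, but verify)."""
    return blocks_used * block_bytes / (1024 * 1024)

def block_density_pct(bytes_stored: int, blocks_used: int,
                      block_bytes: int = 8192) -> float:
    """Percentage of allocated block space actually holding data."""
    return 100.0 * bytes_stored / (blocks_used * block_bytes)

# 131,072 blocks of 8KB occupy 1 GB; 400 KB stored in 100 blocks is 50% dense.
print(global_size_mb(131_072))            # 1024.0
print(block_density_pct(409_600, 100))    # 50.0
```

A low density figure from this calculation marks the global as a compaction candidate, matching the guidance above.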
Administrative Uses
Global size information supports critical administrative tasks:
Capacity Planning:
- Project when databases need expansion based on growth rates
- Identify globals growing faster than expected
- Plan storage upgrades before running out of space
- Establish alerts at specific size thresholds
Performance Optimization:
- Identify oversized globals that might benefit from restructuring
- Detect fragmentation through block density analysis
- Plan global compaction operations
- Evaluate whether to split large globals into partitioned structures
Backup Planning:
- Understand data volumes for backup window estimation
- Identify globals that may need separate backup schedules
- Calculate incremental backup sizes based on daily growth
- Plan backup storage capacity
Application Troubleshooting:
- Detect unexpected data growth indicating application issues
- Identify runaway data accumulation (logs, temp data)
- Verify data archival processes are working correctly
- Investigate memory issues related to large in-memory caching
Proactive Capacity Management
Establish a regular global size monitoring process:
1. Baseline Measurement: Document current global sizes across all production namespaces
2. Periodic Monitoring: Run ^%GSIZE weekly or monthly for key globals
3. Trend Analysis: Track growth rates to identify accelerating growth
4. Threshold Alerts: Set alerts when globals reach specific size thresholds
5. Archival Planning: Identify globals requiring data archival or purging
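The trend analysis step often reduces to projecting when a global will reach a threshold at its observed growth rate; a minimal linear projection in Python (linear growth is an assumption - real growth may accelerate, so re-measure regularly):

```python
def days_until_threshold(current_mb: float, growth_mb_per_day: float,
                         threshold_mb: float):
    """Project days until a global reaches a size threshold at a linear rate.

    Returns None when the global is not growing, 0.0 when the threshold
    has already been reached.
    """
    if growth_mb_per_day <= 0:
        return None  # not growing at this rate; threshold never reached
    remaining = threshold_mb - current_mb
    if remaining <= 0:
        return 0.0
    return remaining / growth_mb_per_day

# An 800 MB global growing 4 MB/day hits a 1000 MB threshold in 50 days.
print(days_until_threshold(800, 4, 1000))  # 50.0
```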
Common Growth Patterns to Monitor:
- Audit/log globals: May grow rapidly and need regular purging
- Transaction globals: Should grow predictably with business volume
- Configuration globals: Should remain stable; sudden growth indicates issues
- Temporary globals: Should not accumulate; growth indicates cleanup problems
Database-Level Size Information: The Management Portal Databases page (System Operations > Databases) shows:
- Total database size and free space
- Growth parameters (expansion size, maximum size)
- Current utilization percentage
- Mount status and journaling state
Understanding the relationship between global sizes and database sizes enables optimal resource allocation. While databases can auto-expand, proactive monitoring of global growth patterns enables better infrastructure planning and prevents unexpected space exhaustion.
Important Distinction: ^GLOSTAT vs. Global Size Tools
^GLOSTAT is frequently confused with global sizing tools but serves a different purpose:
- ^GLOSTAT purpose: Monitors buffer pool performance and global reference activity
- ^GLOSTAT metrics: Global buffer hit ratios, reference counts, block reads/writes
- ^GLOSTAT use case: Performance tuning and cache efficiency analysis
- For global sizes: Use ^%GSIZE, Management Portal, or ^INTEGRIT instead
Exam Preparation Summary
Critical Concepts to Master:
- Dashboard Navigation: Understand how to access and interpret the System Dashboard
- Buffer Performance: Know target buffer hit ratios (>90% global, >95% routine) and how to increase buffers
- ^PERFMON: Memorize how to launch and interpret key ^PERFMON metrics
- ECP Metrics: Understand ECP connection monitoring for distributed architectures
- Alerting: Know how to configure alerts for critical metrics like disk space
- Audit Logging: Understand what events are audited and how to review audit logs
- KPI Interpretation: Recognize normal vs. abnormal values for key performance indicators
Common Exam Scenarios:
- Identifying low buffer hit ratio and recommending buffer increase
- Using ^PERFMON to diagnose performance bottleneck types
- Configuring alerts for proactive monitoring
- Interpreting System Dashboard color-coded indicators
- Reviewing audit logs for security investigation
- Monitoring ECP connections in distributed environments
- Correlating KPIs to diagnose performance problems
Hands-On Practice Recommendations:
- Explore all sections of the System Dashboard
- Run ^PERFMON during various workload conditions
- Configure and test alert notifications
- Practice interpreting buffer hit ratios from ^PERFMON
- Review audit log entries for different event types
- Monitor system during database compaction or backup operations
- Track KPI trends over time to establish baselines
- Test ECP monitoring if distributed environment available
- Configure System Monitor for automated monitoring