1. Selects appropriate error handling strategies (TRY-CATCH, $ZTRAP)
Key Points
- TRY-CATCH: Modern structured exception handling with block-level error processing
- $ZTRAP: Legacy error trap mechanism redirecting control to a specified location
- $ETRAP: Alternative legacy mechanism executing error handling code inline
- Compatibility: TRY-CATCH and $ZTRAP can coexist at different stack levels
- Best practice: Use TRY-CATCH for new development; $ZTRAP prohibited within TRY blocks
- Exception objects: TRY-CATCH uses %Exception.AbstractException for rich error information
Detailed Notes
Overview
InterSystems IRIS provides multiple error handling mechanisms, each suited for different scenarios.
TRY-CATCH Mechanism
The TRY-CATCH mechanism is the recommended modern approach, providing structured exception handling with clear separation between protected code and error handling code. When an error occurs within a TRY block, control transfers to the CATCH block, which receives an exception object containing detailed error information. The exception handler can log errors, perform cleanup, rethrow exceptions using THROW, or allow execution to continue. TRY blocks do not create new stack levels, meaning they don't affect variable scoping with NEW commands.
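The pattern described above can be sketched as follows. This is a minimal illustration, not production code; the label and global name (^MyGlobal) are invented for the example:

```objectscript
Demo() ; TRY-CATCH sketch; ^MyGlobal is an illustrative global name
    TRY {
        SET value = ^MyGlobal("key")    ; throws <UNDEFINED> if the node is missing
        WRITE "Value: ", value, !
    }
    CATCH ex {
        ; ex is a %Exception.AbstractException subclass with rich error detail
        WRITE "Caught: ", ex.DisplayString(), !
        RETURN ex.AsStatus()            ; convert to a %Status for the caller
    }
    RETURN 1
```

Note how the CATCH block can either report and continue, convert the exception to a %Status, or rethrow it with THROW for an outer handler.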
Legacy $ZTRAP Mechanism
The legacy $ZTRAP mechanism sets a special variable to an entry reference (label or routine) where control transfers on error. Within a routine, you can set $ZTRAP to a local label, external routine, or label within an external routine. The asterisk form (SET $ZTRAP="*location") executes in the error context, while the standard form unstacks to the trap location. $ETRAP is an alternative that executes error handling code inline rather than transferring control. When you set $ZTRAP to a non-empty value, it takes precedence over $ETRAP by implicitly executing NEW $ETRAP and setting $ETRAP to empty string.
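The legacy pattern might look like the sketch below (label names are illustrative). NEW $ZTRAP scopes the trap to the current stack level so the caller's trap is restored on QUIT:

```objectscript
Legacy ; $ZTRAP sketch; label names are illustrative
    NEW $ZTRAP                 ; restore the caller's trap when this level exits
    SET $ZTRAP = "OnErr"       ; control transfers to the OnErr label on error
    SET x = 1/0                ; deliberately raise <DIVIDE>
    QUIT
OnErr
    SET $ZTRAP = ""            ; clear the trap to avoid error loops
    WRITE "Trapped: ", $ZERROR, !
    QUIT
```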
Choosing the Right Strategy
Error handling strategies should be selected based on code structure and requirements. TRY-CATCH is ideal for modern class methods and procedures requiring structured exception handling with multiple error types. Use $ZTRAP when maintaining legacy code or when you need error handling at different stack levels. Important restrictions: $ZTRAP may not be used within TRY block protected statements; user-defined errors raised with THROW can only be caught by TRY-CATCH; however, the ZTRAP command may be used with any error processing type. Understanding execution stack behavior is critical: the $ESTACK special variable reports relative execution levels, and error handlers execute at the same level as the error unless explicitly redirected.
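To illustrate the THROW restriction mentioned above, a user-defined exception can be raised with %Exception.General. This is a sketch; the exception name, code, and variable values are invented for the example:

```objectscript
Withdraw ; user-defined exception sketch; names and values are illustrative
    SET balance = 10, amount = 25
    TRY {
        IF amount > balance {
            ; a THROWn exception can only be caught by a CATCH block,
            ; never by $ZTRAP or $ETRAP handlers
            THROW ##class(%Exception.General).%New("InsufficientFunds", 5001)
        }
    }
    CATCH ex {
        WRITE "Error: ", ex.Name, " (code ", ex.Code, ")", !
    }
    QUIT
```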
Documentation References
2. Diagnoses system performance issues
Key Points
- Management Portal: System Operation > Processes page displays real-time process information
- Process metrics: CPU time, device usage, namespace, and execution status
- $SYSTEM.Process utilities: Programmatic access to process and system information
- Performance indicators: Monitor database cache hit ratios, lock waits, and global references
- System Dashboard: Comprehensive view of system health and resource utilization
- SQL Query Performance: Query plan analysis and execution statistics
Detailed Notes
Overview
Diagnosing system performance issues in InterSystems IRIS requires understanding multiple monitoring tools and performance indicators.
Management Portal Process Monitoring
The Management Portal provides comprehensive process monitoring through System Operation > Processes, displaying active processes with key metrics including job number, process ID, total CPU time in milliseconds, username, current device, namespace, and routine location. The Process Details page provides deeper insight into individual processes including execution stack, lock status, and process-specific variables. This information helps identify processes consuming excessive resources or experiencing blocking conditions.
Programmatic Diagnostics and System Dashboard
The %SYSTEM.Process class provides programmatic access to process information, enabling automated monitoring and diagnostics. Methods like $SYSTEM.Process.State() return current process state, and DO ^%SS displays system statistics. The System Dashboard in the Management Portal (System Operation > System Dashboard) presents real-time visualization of critical metrics including CPU utilization, memory consumption, disk I/O rates, and network activity. Monitor database cache efficiency by examining global buffer statistics - poor cache hit ratios indicate insufficient memory allocation or inefficient data access patterns.
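From a Terminal session, the calls mentioned above might be exercised as follows (a sketch only; exact output shapes vary by IRIS version):

```objectscript
    WRITE $SYSTEM.Process.State(), !   ; state of the current process
    WRITE $JOB, " in ", $NAMESPACE, !  ; process ID and current namespace
    DO ^%SS                            ; interactive system status display
```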
Common Performance Bottlenecks
Performance bottlenecks commonly manifest in several areas: excessive lock contention visible through the Locks page (System Operation > Locks), inefficient SQL queries identifiable through query plan analysis, inadequate memory allocation causing frequent disk I/O, and process-level issues such as inefficient algorithms or resource leaks. Use the Terminal to execute diagnostic commands like WRITE $SYSTEM.Process.State() for current process state and DO ^%SS for system statistics. The System Usage pages provide detailed breakdowns of resource consumption including shared memory heap usage, journal activity, and license unit consumption. Regular monitoring establishes performance baselines, enabling quick identification of anomalies requiring investigation.
Documentation References
3. Manages process memory effectively
Key Points
- Database cache: Shared memory buffer for data; allocate 25%+ of system memory for production
- Routine cache: Automatically allocated at 10% of database cache (80MB min, 1020MB max)
- Maximum per-process memory: Configure via bbsiz parameter; recommend -1 (which resolves to the maximum) for most cases
- Process private memory: Used for symbol tables, I/O buffers; allocated as needed until maximum reached
- Memory not deallocated: Process private memory persists until process exits
- <STORE> errors: Indicate process exceeded maximum memory; increase bbsiz or optimize code
Detailed Notes
Overview
Effective process memory management in InterSystems IRIS requires understanding three distinct memory categories: database cache (shared), routine cache (shared), and process private memory (per-process).
Database Cache Configuration
The database cache, also called the global buffer pool, stores frequently accessed data blocks in memory to minimize disk I/O. When first installed, IRIS allocates 25% of physical memory to the database cache, but this initial setting is inappropriate for production use. Before production deployment or performance testing, manually configure database cache using the Management Portal (System Administration > Configuration > System Configuration > Memory and Startup), selecting "Specify Amount" and entering appropriate megabyte allocation. For systems with multiple block sizes enabled, allocate memory separately for each block size.
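Cache sizes can also be changed programmatically. The sketch below assumes the Config.config settings class available in the %SYS namespace; the property name globals8kb and the Get/Modify calling pattern are assumptions based on the standard Config API and should be verified against your version's class reference before use:

```objectscript
    ; run in %SYS; Config.config property names are assumptions
    NEW $NAMESPACE
    SET $NAMESPACE = "%SYS"
    DO ##class(Config.config).Get(.p)            ; load current settings
    WRITE "8KB buffers (MB): ", p("globals8kb"), !
    SET p("globals8kb") = 4096                   ; e.g. 4 GB of 8KB buffers
    SET sc = ##class(Config.config).Modify(.p)   ; may require restart to apply
```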
Routine Cache (Shared Memory)
The routine cache buffers compiled ObjectScript code in shared memory, accessible by all processes. Automatic allocation defaults to 10% of the database cache for 8KB buffers, bounded by 80MB minimum and 1020MB maximum. For typical production instances with properly configured database cache, automatic routine cache allocation suffices, though applications with extensive code libraries may benefit from manual adjustment.
Per-Process Memory Settings
The Maximum Per-Process Memory setting (bbsiz parameter) controls the private memory ceiling for individual processes - this is completely separate from shared memory such as the database cache and routine cache. Per-process memory is used for local variables, symbol tables, and I/O buffers. The allowed range is from 256 KB to 2,147,483,647 KB. InterSystems recommends setting this to -1 (which resolves to maximum value) for most circumstances. When a process exhausts its allocated memory, it encounters a <STORE> error. If a process enters an infinite recursive loop, it may run out of frame stack space and get a <FRAMESTACK> error instead.
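A hedged illustration of trapping a <STORE> error appears below. Do not run this on a production instance - it deliberately consumes process memory until the bbsiz limit is hit:

```objectscript
Exhaust ; deliberately approach the bbsiz limit (test systems only)
    TRY {
        FOR i=1:1:100000000 { SET arr(i) = $JUSTIFY("", 100) }  ; 100-char strings
    }
    CATCH ex {
        IF ex.Name["STORE" {
            WRITE "Exceeded per-process memory: ", ex.DisplayString(), !
        }
    }
    QUIT
```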
Process Private Memory Behavior
Process private memory serves multiple purposes including symbol table allocation for local variables, I/O device access structures and buffers, and various runtime requirements. This memory is allocated in increasing extents as the application demands it, until the bbsiz maximum is reached. Once allocated to a process, private memory is never deallocated until process termination - this design optimizes performance by avoiding allocation overhead, but it requires developers to be conscious of memory accumulation in long-running processes. When processes exceed their maximum memory, they encounter <STORE> errors.
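One practical consequence of this behavior: KILL temporary variables between units of work so the already-allocated memory is reused rather than forcing further growth. A sketch (label and variable names are illustrative):

```objectscript
LongRun ; symbol-table hygiene in a long-running job (sketch)
    FOR batch=1:1:1000 {
        DO ProcessBatch(batch, .work)
        ; KILL frees the symbols for reuse within this process;
        ; the private memory itself stays allocated until the process exits
        KILL work
    }
    QUIT
ProcessBatch(n, ByRef work) ; hypothetical worker building a temporary array
    FOR i=1:1:1000 { SET work(i) = n_"-"_i }
    QUIT
```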
4. Implements process management best practices
Key Points
- JOB command: Spawn new processes with proper device and namespace parameters
- Process suspension: Use Suspend/Resume for debugging; avoid in production
- Clean termination: Allow processes to complete gracefully before forcing termination
- Background jobs: Configure with appropriate error traps and logging mechanisms
- Process monitoring: Regularly review process list for hung or runaway processes
- Resource cleanup: Ensure locks released and transactions committed before process exit
Detailed Notes
Overview
Process management in InterSystems IRIS encompasses creation, monitoring, control, and termination of system processes.
Creating Background Jobs
The JOB command creates new processes (background jobs) that execute independently from the parent process. When spawning jobs, specify appropriate parameters including device for I/O redirection, namespace for execution context, and priority for scheduling. Jobs inherit certain environmental settings from parent processes but execute in separate memory spaces with independent error handling contexts. Proper job design includes establishing error traps ($ZTRAP or TRY-CATCH), implementing logging mechanisms for tracking execution and errors, and ensuring resource cleanup on both normal and abnormal termination.
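A sketch of this pattern, with a startup timeout on the JOB command and a child-side error trap (label and global names are illustrative):

```objectscript
Parent ; spawn a background worker with a 5-second startup timeout
    JOB Worker("batch42")::5
    IF '$TEST { WRITE "Job failed to start", !  QUIT }
    WRITE "Started child PID ", $ZCHILD, !
    QUIT
Worker(id) ; runs in its own process; ^WorkerLog is an illustrative global
    TRY {
        SET ^WorkerLog(id, "start") = $ZDATETIME($HOROLOG, 3)
        ; ... real work here ...
        SET ^WorkerLog(id, "done") = $ZDATETIME($HOROLOG, 3)
    }
    CATCH ex {
        SET ^WorkerLog(id, "error") = ex.DisplayString()
    }
    QUIT
```

$TEST is set by the timed form of JOB, and $ZCHILD holds the process ID of the most recently spawned child.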
Process Control and Termination
The Management Portal Processes page (System Operation > Processes) provides centralized process control including display, suspension, resumption, and termination capabilities. Process suspension pauses execution for debugging purposes but should be avoided in production environments as it consumes resources without progress. When terminating processes, distinguish between graceful termination (allowing cleanup code to execute) and forced termination, which ends the process immediately and can leave locks held or transactions uncommitted.
Best Practices for Long-Running Processes
Process management best practices include regular monitoring to identify performance issues or resource leaks, establishing conventions for background job naming and logging, implementing timeout mechanisms for operations expected to complete within specific timeframes, and documenting process interdependencies to avoid inadvertent disruption. For long-running processes, implement heartbeat mechanisms that periodically update status indicators, enabling monitoring systems to detect hung processes. Use broadcast messaging capabilities sparingly to communicate with active terminal processes. Critical processes should implement robust error handling to prevent cascading failures - if a process encounters errors, it should log comprehensive diagnostic information, clean up resources, and exit gracefully rather than remaining in a failed state consuming resources.
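The heartbeat mechanism described above can be as simple as timestamping a status global each cycle. A sketch (^JobStatus is an illustrative global name; HANG stands in for real work):

```objectscript
Monitor ; heartbeat sketch for a long-running worker
    FOR step=1:1:5 {
        ; ... one unit of real work here ...
        HANG 1                                         ; simulate work
        SET ^JobStatus($JOB, "heartbeat") = $ZDATETIME($HOROLOG, 3)
        SET ^JobStatus($JOB, "step") = step
    }
    KILL ^JobStatus($JOB)   ; clean up status on normal exit
    QUIT
```

A monitoring job can then flag any process whose heartbeat timestamp has not advanced within an expected interval.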
Documentation References
5. Understands system limits and constraints
Key Points
- Database limit: Maximum 15,998 databases per instance
- Database size: Up to 32 terabytes with default 8KB block size
- Process memory: 256 KB to 2,147,483,647 KB per process (bbsiz parameter)
- String length: Maximum string size varies by context and configuration
- Lock resources: System-dependent limits on concurrent lock names
- Routine cache: 80 MB minimum to 1,020 MB maximum when auto-configured
Detailed Notes
Overview
Understanding system limits and constraints is essential for designing scalable InterSystems IRIS applications.
Database Limits
The absolute limit on the number of databases that can be configured within a single IRIS instance is 15,998, given sufficient storage space. This limit accommodates even the largest enterprise deployments with extensive data partitioning requirements. Individual database size limits depend on block size - with the default 8KB blocks, databases can grow to 32 terabytes. Larger block sizes (16KB, 32KB, 64KB) proportionally increase maximum database size, though they also affect memory utilization and I/O patterns. Databases expand dynamically as needed when free storage is available, though administrators can specify maximum size constraints to prevent uncontrolled growth.
Process and Memory Constraints
Process-level constraints include maximum per-process memory configurable from 256 KB to 2,147,483,647 KB through the bbsiz parameter. The default value and recommended setting of -1 (which resolves to maximum) eliminates artificial memory constraints for most applications. However, resource-constrained environments or multi-tenant systems may benefit from explicit limits preventing individual processes from consuming excessive memory. String length limits vary by context - strings stored in globals or local variables can be very large, but specific operations or functions may impose practical limits. Developers should consult documentation for context-specific constraints when working with extremely large strings or binary data.
Concurrency and Cache Constraints
Concurrency constraints include lock resource limits, which are system-dependent but generally sufficient for normal operations. Excessive lock requests within a single process may indicate design issues requiring refactoring. The routine cache has minimum (80 MB) and maximum (1,020 MB) bounds when using automatic allocation, though manual configuration can override these limits if applications require larger code caches. Shared memory heap (gmheap) allocations must accommodate SQL query structures, ECP data structures, and various system needs - applications with large numbers of concurrent SQL queries or extensive ECP configurations require increased gmheap allocation. Understanding these limits during design phase prevents costly refactoring when applications scale. Most constraints are generous and rarely reached in typical applications, but awareness enables proactive planning for exceptional requirements.
Documentation References
Exam Preparation Summary
Critical Concepts to Master:
- Error Handling: Understand when to use TRY-CATCH vs $ZTRAP; know compatibility rules
- Exception Objects: Familiarize with %Exception.AbstractException class and THROW command
- Performance Monitoring: Know how to access process information via Management Portal
- Memory Configuration: Understand database cache, routine cache, and per-process memory settings
- Process Memory Behavior: Remember that private memory never deallocates until process exit
- System Limits: Know maximum database count (15,998) and database size (32TB at 8KB blocks)
- Memory Troubleshooting: Recognize that <STORE> errors indicate exceeded bbsiz limits
- Best Practices: Use TRY-CATCH for new code, monitor processes regularly, clean up resources
Common Exam Scenarios:
- Selecting appropriate error handling mechanism for a given code structure
- Diagnosing performance issues based on process metrics and system indicators
- Calculating appropriate memory allocations for database and routine caches
- Identifying causes of <STORE> errors and selecting resolution strategies
- Determining if system design exceeds architectural limits
- Troubleshooting hung or runaway processes
- Implementing proper resource cleanup in error handlers
Hands-On Practice Recommendations:
- Write code using both TRY-CATCH and $ZTRAP error handling
- Deliberately trigger errors and observe exception handler behavior
- Monitor processes via Management Portal while running test applications
- Configure database and routine cache allocations and observe performance impact
- Create processes that approach memory limits to observe <STORE> errors
- Practice using diagnostic tools like ^%SS and $SYSTEM.Process methods
- Experiment with JOB command parameters and background process management
- Review system limits documentation and calculate capacity for hypothetical scenarios
Key Documentation Sections to Review:
- GCOS.pdf Chapter 23: "Using TRY-CATCH" (comprehensive error handling)
- GCOS.pdf Appendix B: "Traditional Error Processing" ($ZTRAP and $ETRAP)
- GSA.pdf Chapter 2: "Memory and Startup Settings" (memory configuration)
- GSA.pdf Chapter 19: "Controlling InterSystems IRIS Processes" (process management)
- GSA.pdf Chapter 5: "Configuring Local Databases" (system limits and constraints)
Important Command Reference:
- TRY/CATCH/THROW: Modern exception handling
- SET $ZTRAP: Legacy error trap configuration
- JOB: Spawn background processes
- LOCK: Concurrency control (related to deadlock avoidance)
- $SYSTEM.Process methods: Programmatic process management
- ^%SS: System statistics utility
Common Pitfalls to Avoid:
- Using $ZTRAP within TRY block protected statements (prohibited)
- Assuming process private memory will be deallocated during execution
- Setting database cache to "Initial" (25%) for production use
- Failing to implement error handling in background jobs
- Ignoring system limits when designing large-scale applications
- Not monitoring for <STORE> errors in long-running processes