T1.3: Plans Data Lifecycle

Knowledge Review - InterSystems IRIS Development Professional

1. Evaluates storage and retrieval strategies (MDX, SQL, object access)

Key Points

  • Object Access: Direct object manipulation via %OpenId(), %Save(), and property methods
  • SQL Access: Relational queries with embedded SQL or dynamic SQL for set-based operations
  • MDX Access: Multidimensional queries for OLAP/Business Intelligence pivot tables
  • Strategy Selection: Choose based on use case - transactional vs. analytical vs. navigational
  • Performance Trade-offs: Object access for single records, SQL for sets, MDX for aggregations
  • Hybrid Approaches: Combine methods within applications for optimal performance

Detailed Notes

Overview

InterSystems IRIS provides three primary data access patterns, each optimized for different use cases.

Object Access

  • Programming Model: Most natural for transactional applications
  • Operations: Use %OpenId() to load objects from disk into memory, modify properties via dot syntax, and persist changes with %Save() (see the sketch after this list)
  • Swizzling: Automatically handles lazy loading of referenced objects
  • Referential Integrity: Maintains integrity across related objects
  • Ideal For: Single-record operations, complex object graphs, business logic encapsulated in class methods
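
As a concrete illustration, here is a minimal ObjectScript sketch of the open-modify-save cycle, assuming a persistent class Sample.Person with a Name property (class and property names are placeholders):

```objectscript
 // Open the instance with ID 1 (default concurrency); sc receives a %Status
 Set person = ##class(Sample.Person).%OpenId(1, , .sc)
 If '$IsObject(person) {
     Do $system.Status.DisplayError(sc)
     Quit
 }

 // Modify a property via dot syntax, then persist the change
 Set person.Name = "Jane Smith"
 Set sc = person.%Save()
 // A failed %Status evaluates to 0 (false)
 If 'sc {
     Do $system.Status.DisplayError(sc)
 }
```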

SQL Access

  • Capabilities: Set-based operations optimized for querying, filtering, and aggregating data across multiple records
  • Implementation Options: Embedded SQL (&sql()) and dynamic SQL (%SQL.Statement); see the sketch after this list
  • Compatibility: Standards-based relational access to the same persistent objects accessible via object methods
  • Query Optimization: SQL query optimizer automatically selects indexes and generates execution plans
  • Ideal For: Reporting, batch updates, data analysis, and integration with external tools via JDBC/ODBC
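
A hedged dynamic SQL sketch using %SQL.Statement against the same hypothetical Sample.Person table:

```objectscript
 // Prepare a parameterized query; %Prepare returns a %Status
 Set stmt = ##class(%SQL.Statement).%New()
 Set sc = stmt.%Prepare("SELECT ID, Name FROM Sample.Person WHERE Name %STARTSWITH ?")
 If 'sc {
     Do $system.Status.DisplayError(sc)
     Quit
 }

 // Execute with a parameter value and iterate the result set
 Set rset = stmt.%Execute("S")
 While rset.%Next() {
     Write rset.%Get("Name"), !
 }
 If rset.%SQLCODE < 0 {
     Write "SQLCODE ", rset.%SQLCODE, ": ", rset.%Message, !
 }
```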

MDX Access

  • Purpose: Specialized query language for OLAP databases used in Business Intelligence applications
  • Operation: Queries operate against cubes defined in BI models, returning result sets displayed as pivot tables (see the example after this list)
  • Capabilities: Complex analytical queries involving dimensions, hierarchies, levels, and measures, with automatic aggregation
  • Key Difference: Object access and SQL operate on individual records, whereas MDX aggregates values across groups and returns a single computed value for each group
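
A hedged sketch of running MDX programmatically through %DeepSee.ResultSet; the cube, level, and measure names below are placeholders, not assumptions about any particular model:

```objectscript
 // Build and prepare an MDX query against a hypothetical BI cube
 Set rs = ##class(%DeepSee.ResultSet).%New()
 Set query = "SELECT MEASURES.[%COUNT] ON COLUMNS, "
 Set query = query_"[HomeD].[H1].[City].MEMBERS ON ROWS "
 Set query = query_"FROM [Patients]"
 Set sc = rs.%PrepareMDX(query)
 If 'sc { Do $system.Status.DisplayError(sc) Quit }

 // Execute and print the aggregated result: one row per city, one count per row
 Set sc = rs.%Execute()
 If 'sc { Do $system.Status.DisplayError(sc) Quit }
 Do rs.%Print()
```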

Choosing the Right Strategy

  • Transactional Systems: Favor object access for strong typing and business logic encapsulation
  • Reporting and Analytics: Favor SQL for set-based operations and standard query syntax
  • Business Intelligence: Require MDX for multidimensional analysis
  • High-Performance Applications: Often combine all three approaches for optimal performance

2. Manages CRUD operations effectively

Key Points

  • Create: Use %New() to instantiate, set properties, call %Save() within transactions
  • Read: %OpenId() for single objects, SQL SELECT for sets, swizzling for related objects
  • Update: Modify properties on opened objects, %Save() commits changes atomically
  • Delete: %DeleteId() for single objects, %DeleteExtent() for entire class extents
  • Transaction Control: All CRUD operations support TSTART/TCOMMIT for data consistency
  • Callback Hooks: %OnBeforeSave(), %OnAfterSave(), %OnDelete() for business logic

Detailed Notes

Overview

Effective CRUD operation management in InterSystems IRIS requires understanding the persistence API and transaction semantics.

Create Operations

  • Process: Instantiate with %New(), set properties via dot syntax, and call %Save() to persist (see the sketch after this list)
  • Return Value: %Save() returns a %Status value indicating success or failure
  • Failure Causes: Invalid property values, violated uniqueness constraints, or failed validation
  • Object ID Assignment: System automatically assigns an Object ID using $Increment by default, or uses user-provided ID based on IDKEY index properties
  • Atomic Transaction: %Save() automatically saves all modified embedded objects, collections, streams, referenced objects, and relationships, rolling back completely if any component fails
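
A minimal create sketch, again assuming the hypothetical Sample.Person class:

```objectscript
 // Instantiate a new in-memory object and set its properties
 Set person = ##class(Sample.Person).%New()
 Set person.Name = "John Doe"

 // %Save() persists the object and returns a %Status
 Set sc = person.%Save()
 If 'sc {
     // Failure: validation error, unique-constraint violation, etc.
     Do $system.Status.DisplayError(sc)
 } Else {
     // The system-assigned Object ID is available after a successful save
     Write "Saved with ID ", person.%Id(), !
 }
```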

Read Operations

  • Primary Method: %OpenId(id, concurrency, .sc) returns an OREF on success or the null string if the object doesn't exist (see the sketch after this list)
  • Error Information: The sc parameter receives a %Status value for detailed error information
  • Swizzling: Referenced objects are automatically opened (lazy loaded) when first accessed via dot syntax
  • Manual Control: Use the generated <Property>GetSwizzled() methods to control swizzling
  • Bulk Reading: SQL SELECT statements offer superior performance through set-based operations
  • Existence Check: Use %ExistsId() or SQL SELECT %ID queries
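
A hedged read sketch showing an existence check, an open with explicit concurrency, and status handling:

```objectscript
 // Check existence without opening the object into memory
 If '##class(Sample.Person).%ExistsId(42) {
     Write "No person with ID 42", !
     Quit
 }

 // Open with concurrency 1 (atomic read); sc receives the detailed %Status
 Set person = ##class(Sample.Person).%OpenId(42, 1, .sc)
 If '$IsObject(person) {
     Do $system.Status.DisplayError(sc)
     Quit
 }
 Write person.Name, !

 // If the class had a reference property (say, person.Spouse), the referenced
 // object would be swizzled into memory on its first dot-syntax access
```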

Update Operations

  • Process: Open with %OpenId(), modify properties, and call %Save() to commit (see the sketch after this list)
  • Atomicity: Updates are atomic when using appropriate concurrency settings (concurrency > 0)
  • Viewing Original Values: Use the generated <Property>GetStored(id) class method to query the database directly without affecting the in-memory object
  • Bulk Updates: SQL UPDATE statements for set-based modifications across multiple records
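
A hedged update sketch; the <Property>GetStored() call assumes a Name property and default storage, which generates that class method:

```objectscript
 // Open with concurrency 2 (shared lock) so others cannot modify the object meanwhile
 Set person = ##class(Sample.Person).%OpenId(42, 2, .sc)
 If '$IsObject(person) {
     Do $system.Status.DisplayError(sc)
     Quit
 }

 // Read the stored (on-disk) value without disturbing the in-memory object
 Write "On disk: ", ##class(Sample.Person).NameGetStored(42), !

 // Modify in memory and commit the change
 Set person.Name = "Jane Q. Smith"
 Set sc = person.%Save()
 If 'sc {
     Do $system.Status.DisplayError(sc)
 }
```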

Delete Operations

  • Primary Method: %DeleteId(id, concurrency) removes individual objects including associated stream data
  • Callbacks:
      • %OnDelete(): Called before deletion for validation logic
      • %OnAfterDelete(): Called after successful deletion for cleanup
      • %OnDeleteFinally(): Executes after transaction completion
  • Bulk Deletion: %DeleteExtent() iterates through the entire extent, invoking %Delete() on each instance
  • Memory Note: %DeleteId() does not affect objects already loaded in memory

All CRUD operations integrate with TSTART/TCOMMIT transaction control for multi-operation consistency; the sketch below shows a delete and related cleanup wrapped in an explicit transaction.
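
A hedged sketch only: Sample.Person and Sample.AuditLog are hypothetical classes, and the code assumes it runs inside a class method so that embedded SQL is available.

```objectscript
 Set id = 42
 TSTART
 // Delete the object (and any associated stream data)
 Set sc = ##class(Sample.Person).%DeleteId(id)
 If 'sc {
     TROLLBACK
     Do $system.Status.DisplayError(sc)
     Quit
 }
 // Related cleanup in the same transaction via embedded SQL
 &sql(DELETE FROM Sample.AuditLog WHERE PersonId = :id)
 If SQLCODE < 0 {
     TROLLBACK
 } Else {
     TCOMMIT
 }
```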

3. Predicts application performance based on data volumes, user counts, and concurrent processes

Key Points

  • Data Volume Impact: Indexes critical for large tables, query plans vary with row counts
  • Concurrency Levels: Values 0-4 control locking behavior and atomic operation guarantees
  • Lock Escalation: Multiple concurrent users require appropriate concurrency settings
  • Query Optimization: Use EXPLAIN/SHOW PLAN to analyze execution plans and costs
  • Performance Metrics: Monitor prepare time, execute time, globals, commands, disk I/O
  • Scalability Patterns: Design for horizontal scaling with proper transaction boundaries

Detailed Notes

Overview

Predicting application performance requires analyzing the interaction between data volumes, concurrent user counts, and process contention.

Data Volume Impact

  • Small Tables: Hundreds of rows may perform adequately with full table scans
  • Large Tables: Millions of rows require proper indexes on frequently queried columns
  • Query Optimization: SQL query optimizer generates execution plans based on table statistics
  • Plan Analysis: Use EXPLAIN and SHOW PLAN commands to review plans and identify missing indexes or suboptimal join orders
  • Query Plan Information: Plans show relative cost values, index usage, and execution strategy, supporting performance predictions (see the sketch after this list)
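
A hedged sketch of requesting a plan programmatically; EXPLAIN returns the plan as XML text, and the table name is a placeholder:

```objectscript
 // Ask the optimizer for its plan without executing the underlying query
 Set stmt = ##class(%SQL.Statement).%New()
 Set sc = stmt.%Prepare("EXPLAIN SELECT Name FROM Sample.Person WHERE Name = 'Smith'")
 If 'sc {
     Do $system.Status.DisplayError(sc)
     Quit
 }

 // The result set contains the XML plan describing index usage and join order
 Set rset = stmt.%Execute()
 While rset.%Next() {
     Write rset.%GetData(1), !
 }
```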

Concurrency Levels

InterSystems IRIS provides five concurrency levels (0-4) that determine locking behavior:

  • Concurrency 0: No locking; maximum performance but no atomic write guarantees (read-only or single-user only)
  • Concurrency 1: Atomic read (default); acquires and releases shared locks as needed with minimal lock retention
  • Concurrency 2: Shared locks; always acquires shared locks when opening objects, preventing modifications by others
  • Concurrency 3: Shared/retained locks; retains locks after saving new objects for consistency
  • Concurrency 4: Exclusive/retained locks; maximum isolation, preventing concurrent access

Set concurrency via method arguments, the DEFAULTCONCURRENCY class parameter, or $system.OBJ.SetConcurrencyMode(); the sketch below shows the class parameter and a per-call override.
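
A hedged sketch using a hypothetical Sample.Invoice class:

```objectscript
Class Sample.Invoice Extends %Persistent
{

/// Class-wide default: exclusive/retained locks (concurrency 4)
Parameter DEFAULTCONCURRENCY = 4;

Property Total As %Numeric;

/// Per-call override: open with no locking (concurrency 0) for read-only access
ClassMethod ReadTotal(id As %String) As %Numeric
{
    Set inv = ..%OpenId(id, 0, .sc)
    If '$IsObject(inv) Quit ""
    Quit inv.Total
}

}
```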

Managing Lock Contention

  • Transaction Design: Short transactions with narrow scope reduce contention
  • Bottleneck Cause: Long-running transactions holding locks on frequently accessed objects
  • Swizzling Awareness: Traversing object relationships can trigger cascading lock acquisitions
  • High-Concurrency Strategies: Optimistic locking patterns, queuing mechanisms, or data partitioning

Performance Metrics

The SQL Shell and %SYSTEM.SQL classes expose key metrics (see the sketch after this list):

  • Prepare Time: Statement compilation and cache lookup
  • Execute Time: Elapsed execution duration
  • Global References: Database block accesses
  • Commands: ObjectScript commands executed
  • Disk I/O Operations: Physical disk access count
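
Two hedged ways to surface these numbers: the interactive SQL Shell, which prints its statistics after each statement, and simple elapsed-time measurement around dynamic SQL (Sample.Person is a placeholder table):

```objectscript
 // Start the interactive SQL Shell; it reports the metrics listed above per statement
 Do $SYSTEM.SQL.Shell()

 // Alternatively, measure elapsed execution time around a dynamic SQL call
 Set stmt = ##class(%SQL.Statement).%New()
 Do stmt.%Prepare("SELECT COUNT(*) FROM Sample.Person")
 Set start = $zhorolog
 Set rset = stmt.%Execute()
 Do rset.%Next()
 Write "Row count: ", rset.%GetData(1), !
 Write "Execute+fetch seconds: ", $zhorolog - start, !
```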

Capacity Planning

  • Extrapolate from sample metrics (e.g., 100ms for 1,000 rows) to predict behavior at scale
  • Conduct load testing with production-like data volumes and concurrent user counts
  • Monitor production metrics for continuous optimization and proactive scaling decisions
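
As a quick worked example of that extrapolation: if an un-indexed scan completes in roughly 100 ms at 1,000 rows, linear scaling predicts on the order of 10 s at 100,000 rows (100 ms × 100), while an indexed single-row lookup should stay nearly flat; treat such figures as planning estimates to be confirmed by load testing.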

Exam Preparation Summary

Critical Concepts to Master:

  1. Access Pattern Selection: Understand when to use object access vs. SQL vs. MDX
  2. CRUD API Methods: Memorize %Save(), %OpenId(), %DeleteId(), and their parameters
  3. Transaction Semantics: Know that %Save() is atomic and rolls back on failure
  4. Concurrency Values: Understand levels 0-4 and their locking behavior differences
  5. Swizzling Mechanism: Recognize automatic loading of related objects and performance implications
  6. Query Optimization: Use EXPLAIN/SHOW PLAN to predict query performance
  7. Performance Metrics: Interpret prepare/execute time, globals, commands, disk metrics

Common Exam Scenarios:

  • Selecting appropriate access method for reporting vs. transactional operations
  • Designing CRUD operations with proper error handling and transactions
  • Choosing concurrency levels for single-user vs. multi-user applications
  • Predicting performance impact of increasing data volumes on un-indexed queries
  • Analyzing query plans to identify performance bottlenecks
  • Understanding lock contention in concurrent update scenarios
  • Recognizing when swizzling impacts application performance

Hands-On Practice Recommendations:

  • Implement CRUD operations using object methods with proper error handling
  • Write equivalent operations using embedded SQL and compare performance
  • Create MDX queries for Business Intelligence cubes
  • Experiment with different concurrency values and observe locking behavior
  • Use EXPLAIN to analyze query plans for indexed vs. non-indexed queries
  • Monitor SQL Shell performance metrics during query execution
  • Test concurrent updates with multiple terminal sessions
  • Profile application performance with varying data volumes (1K, 10K, 100K rows)
  • Practice reading and interpreting query execution plans
  • Implement callback methods (%OnBeforeSave, %OnDelete) for business logic

Key Documentation Sections to Review:

  • GOBJ.pdf Chapter 15: Complete persistence API reference
  • GSQL.pdf Chapter 8: Dynamic SQL programming patterns
  • GSQL.pdf Chapter 9: SQL Shell commands and performance analysis
  • D2GMDX.pdf Chapter 1-2: MDX fundamentals and query syntax
  • GOBJ.pdf Chapter 16: Concurrency options and locking strategies
