1. Resize databases (compact, truncate)
Key Points
- Compact: Moves free space distributed throughout database to its end by relocating global blocks
- Truncate: Returns free space from the end of database volumes to the underlying file system
- Operations often performed together: compact first, then truncate
- Can cross volume boundaries in multivolume databases
Detailed Notes
Understanding Compact and Truncate
Database space management in InterSystems IRIS involves two complementary operations. Compacting a database reorganizes the physical layout by moving free space blocks to the end of the database file. When you compact, you specify the target amount of free space desired at the end - the operation relocates global blocks to consolidate free space without creating new free space. For example, if a 50 MB database has 15 MB of free space with only 5 MB at the end, compacting with a 15 MB target moves all global blocks forward to position all free space at the end.
Returning Space to the File System
Truncating returns this consolidated free space to the operating system's file system. You specify a target database size when truncating - if sufficient free space exists at the end, the database shrinks to that size. Entering 0 as the target removes all possible free space.
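Both steps can also be driven programmatically from the %SYS namespace. A minimal sketch, assuming the SYS.Database class exposes FileCompact and ReturnUnusedSpace as in recent IRIS versions (method signatures and the directory path are illustrative - verify against the class reference for your version):

```objectscript
 // Run in the %SYS namespace
 zn "%SYS"
 set dir = "/opt/iris/mgr/mydb/"   // illustrative database directory

 // Step 1: compact - consolidate free space at the end of the file
 // (target amount of free space at the end, in MB)
 set sc = ##class(SYS.Database).FileCompact(dir, 15)
 if $system.Status.IsError(sc) do $system.Status.DisplayError(sc)

 // Step 2: truncate - return the consolidated free space to the
 // file system (target size 0 = shrink as much as possible)
 set sc = ##class(SYS.Database).ReturnUnusedSpace(dir, 0, .newSizeMB)
 write "Database size is now ",newSizeMB," MB",!
```

Running compact first maximizes what truncate can return, which is why the two operations are typically paired in that order.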
Operational Considerations
These operations run concurrently with normal database activity but consume system resources. InterSystems recommends running them during off-peak hours and executing only one database reorganization operation at a time per system. Both operations are accessible via Management Portal (System Operations > Databases > Database Details page) or the ^DATABASE utility.
Documentation References
2. Use integrity check and run in batch mode
Key Points
- Validates structural integrity of database files
- Can run via Management Portal or ^INTEGRIT utility
- Batch mode available through Task Manager
- Examines data structures, pointers, and block relationships
- Outputs detailed reports including global block density
Detailed Notes
Purpose and Scope
Database integrity checking is a critical maintenance operation that validates the structural soundness of database files. The integrity check examines database structures including global block organization, pointer validity, and data structure consistency.
Running Integrity Checks
Administrators can run integrity checks interactively through the Management Portal (System Operations > Databases, then click the Integrity Check button) or from the Terminal using the ^INTEGRIT utility. For production environments, integrity checks should be scheduled during maintenance windows using the Task Manager: the IntegrityCheck task type is available (though disabled by default) and can be configured to run on specific databases at scheduled intervals.
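An interactive run from the Terminal might look like the following (a representative session - exact prompts and wording vary by version, and the output is abbreviated):

```objectscript
%SYS>do ^INTEGRIT
 // Prompted choices (abbreviated): where to write the report
 // (device or file), then which databases/directories to check.
 //
 // For each global, the report includes per pointer and data level:
 // the number of blocks used and the block density (packing) -
 // the figure that tells you whether compaction is worthwhile.
```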
Interpreting Results
The integrity check produces a detailed report showing the health of each global, including metrics like data level structure and global block density. This density information is valuable when planning compact operations - the report shows current block utilization which helps determine if compacting will yield significant space savings. Regular integrity checking is an essential preventive maintenance practice, helping identify corruption early before it impacts production operations. The output files should be retained and reviewed as part of standard database health monitoring procedures.
Documentation References
3. Defragment a database
Key Points
- Rearranges global blocks so blocks for each global are in consecutive sequence
- Automatically compacts all globals at 70% target density
- Not necessary on regular basis; benefits sequential read workloads
- Requires free space at end of database; may auto-expand if needed
- IRISTEMP database cannot be defragmented
Detailed Notes
How Defragmentation Works
Defragmenting a database reorganizes global blocks within the database so that all blocks containing data for a given global are stored in consecutive physical sequence. This operation differs from basic compacting in that it optimizes for data locality rather than just consolidating free space. The defragmentation process automatically includes global compaction at a 70% target density (compared to the normal 90% default for standalone compact operations).
When to Defragment
InterSystems IRIS data structures are self-balancing and do not inherently suffer performance degradation over time, so defragmentation is generally not required as routine maintenance. However, workloads that perform large sequential scans of databases can benefit from the improved data locality that defragmentation provides.
Space and Resource Requirements
The operation requires free space at the end of the database in which to rearrange blocks. If insufficient free space exists, the system will either expand the database as necessary or inform you that compacting first would reduce the required expansion. An important warning: defragmentation relocates all data in the database regardless of its existing fragmentation level, consuming significant I/O and CPU; running it again immediately provides no additional benefit but consumes similar resources.
Execution
Execute defragmentation via Management Portal (System Operations > Databases > Database Details > Defragment button) during off-peak hours, and track progress via the Background Tasks page.
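For scripted maintenance windows, defragmentation can also be started programmatically. A sketch assuming SYS.Database exposes a Defragment classmethod, as in recent versions (verify the method name and signature in your class reference; the path is illustrative):

```objectscript
 zn "%SYS"
 set dir = "/opt/iris/mgr/appdata/"   // illustrative database directory
 set sc = ##class(SYS.Database).Defragment(dir)
 if $system.Status.IsError(sc) do $system.Status.DisplayError(sc)
```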
Documentation References
4. Mount and dismount databases
Key Points
- Mount: Makes database accessible to the system and users
- Dismount: Makes database temporarily unavailable without removing from configuration
- Status remains until explicitly changed or instance restart
- Permanent dismount requires removing database from configuration
- Accessible via Management Portal Database Details page
Detailed Notes
Mount vs. Dismount
Database mounting and dismounting provides dynamic control over database availability within an InterSystems IRIS instance. A mounted database is active and accessible for reads and writes; a dismounted database remains in the configuration but is unavailable for access. This distinction is important - dismounting does not delete or remove the database from the system configuration; it merely changes the operational state.
Common Use Cases
Common use cases for dismounting include performing file-level backups, moving database files, investigating database corruption, or temporarily reducing system resource utilization.
How to Mount and Dismount
To mount or dismount a database through the Management Portal, navigate to System Operations > Databases, click the database name to access its Database Details page, then click the Mount or Dismount button on the ribbon. The new mount status takes effect immediately and persists until you explicitly change it or restart the IRIS instance.
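The same pair of actions is available programmatically, which is convenient for scripted file-level backups. A sketch assuming SYS.Database exposes MountDatabase and DismountDatabase as in recent versions (method names and the path are illustrative - check your class reference):

```objectscript
 zn "%SYS"
 set dir = "/opt/iris/mgr/appdata/"   // illustrative database directory

 // Dismount before copying or moving the IRIS.DAT file
 set sc = ##class(SYS.Database).DismountDatabase(dir)
 if $system.Status.IsError(sc) do $system.Status.DisplayError(sc)

 // ... perform the file-level backup or move here ...

 // Remount to restore availability
 set sc = ##class(SYS.Database).MountDatabase(dir)
 if $system.Status.IsError(sc) do $system.Status.DisplayError(sc)
```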
Permanent Removal
To permanently prevent a database from mounting at startup, you must remove it from the instance configuration entirely, not just dismount it. The Database Details page shows the current mount status along with other important information including whether the database is mounted as read-only and the reason for any read-only status. Database mount operations are fundamental to database lifecycle management and disaster recovery procedures.
Documentation References
5. Compact globals
Key Points
- Consolidates global data into fewer blocks, increasing database free space
- Run via ^DATABASE utility from Terminal
- Default target density is 90%; normal allocation is ~70%
- Can select specific globals or all globals in database
- Temporary database expansion possible during operation
Detailed Notes
Purpose of Global Compaction
Compacting globals is a targeted space management technique that consolidates data within individual globals to increase block utilization and free up database space. InterSystems IRIS normally allocates global data at approximately 70% block capacity to accommodate growth and maintain performance. Over time, particularly with nonsequential deletion patterns, average global block density can decrease significantly. The compact globals operation rearranges data to achieve a specified target density (90% by default), consolidating data that might be spread across three blocks into two blocks, thereby increasing available free space.
Using the ^DATABASE Utility
To execute global compaction, open Terminal, switch to the %SYS namespace, run `do ^DATABASE`, and select option 7 (Compact globals in a database). You can then specify which database to operate on, choose specific globals or all globals, set the target density, and confirm. The utility can also compact several databases in one pass by entering their numbers from the displayed list.
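A representative ^DATABASE session (menu numbering and prompt wording vary by version; the directory and answers are illustrative):

```objectscript
%SYS>do ^DATABASE

    ...
    7) Compact globals in a database
    ...
Option? 7
Database directory? /opt/iris/mgr/appdata/
Global (* for all)? *
Target density (90)? 90
```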
Important Considerations
An important consideration: if the target density you specify is lower than a global's current density, that global is left unchanged - compaction never spreads data out to lower its density, so the database will not grow as a result. Global compaction can involve temporary database expansion, so if the database reaches its configured maximum size or the storage volume fills, the operation cancels. Like other database reorganization operations, compact globals runs concurrently with normal activity but should be scheduled during off-peak hours, with only one such operation running system-wide at a time.
Documentation References
6. Delete a database
Key Points
- Two-step process: Remove from configuration, then delete physical files
- Must first dismount database before deletion
- Requires appropriate administrative privileges
- No automatic recovery after physical file deletion
- Critical to verify database contents before deletion
Detailed Notes
Deletion Process Overview
Deleting a database from InterSystems IRIS is a permanent, irreversible operation requiring careful planning and execution. The process involves two distinct steps: first removing the database from the instance configuration, then deleting the physical database files from the file system. Before deletion, the database must be dismounted to ensure no active processes are accessing it.
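Both steps can be scripted from the %SYS namespace. A sketch assuming the Config.Databases and SYS.Database classes expose Delete and DeleteDatabase as in recent versions (method names, signatures, and paths are illustrative - and the operation is irreversible, so treat this as documentation rather than a copy-paste recipe):

```objectscript
 zn "%SYS"

 // Step 1: remove the database definition from the configuration
 set sc = ##class(Config.Databases).Delete("MYDB")
 if $system.Status.IsError(sc) do $system.Status.DisplayError(sc)

 // Step 2: delete the physical IRIS.DAT from the file system
 set sc = ##class(SYS.Database).DeleteDatabase("/opt/iris/mgr/mydb/")
 if $system.Status.IsError(sc) do $system.Status.DisplayError(sc)
```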
Pre-Deletion Verification
Through the Management Portal, the configured databases are listed at System Operations > Databases, while deletion itself is performed from System Administration > Configuration > System Configuration > Local Databases, where the Delete option removes the database from the configuration and can optionally delete the underlying database file. It is absolutely critical to verify several factors before proceeding with database deletion: confirm the database contains no valuable data (or that complete, tested backups exist), ensure no namespaces or applications depend on the database, verify no global mappings point to the database, and confirm the database is not required for instance operation (system databases such as IRISSYS, IRISLIB, and IRISAUDIT must never be deleted).
Best Practices
Once the database is removed from configuration and the physical files are deleted from the file system, recovery is only possible from backups. Best practices include documenting the reason for deletion, obtaining appropriate approvals, backing up the database before deletion, testing the deletion in non-production environments first, and verifying system functionality after the deletion is complete. Database deletion is an advanced administrative operation that should be approached with appropriate caution and planning.
Documentation References
7. Manage data and routine operations
Key Points
- ^%GO (Global Output): Export globals to sequential files for backup or transfer
- ^%GI (Global Input): Import globals from sequential files into databases
- ^%RO (Routine Output): Export routines (MAC, INT, INC) to sequential files
- ^%RI (Routine Input): Import routines from sequential files
- File formats: GOF (Global Output Format) for globals, standard text format for routines
- Management Portal: System Explorer provides GUI-based import/export options
- Use cases: Data migration, development workflows, selective backups, cross-system transfers
Detailed Notes
Overview
InterSystems IRIS provides powerful utilities for importing and exporting data (globals) and code (routines). These operations are essential for system administration tasks including data migration between environments, creating selective backups, moving code between namespaces, and transferring data across systems. The primary tools are Terminal-based utilities (^%GO, ^%GI, ^%RO, ^%RI) and Management Portal's System Explorer interface.
Global Export and Import Operations
^%GO (Global Output): The Global Output utility exports globals from a database to a sequential file in GOF (Global Output Format). This format preserves the global structure and data integrity during export.
How to use ^%GO:
1. Open Terminal and switch to the namespace containing the globals to export
2. Run: `do ^%GO`
3. Select globals to export (wildcards supported, e.g., `MyApp*` exports all globals starting with "MyApp")
4. Specify the output device/file path
5. Confirm the export parameters
Common options:
- Export specific globals by name or with wildcards (e.g., `^Orders`, `MyApp*`)
- Export to file or device
- Include or exclude specific subscript levels
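A representative ^%GO session (prompts abbreviated and may differ by version; names and the path are illustrative):

```objectscript
USER>do ^%GO

Device: /data/export/myapp.gof
Parameters? ("WNS") =>            // accept the defaults
All Globals? No => No
Global ^MyApp*                    // wildcard: everything starting with MyApp
Global ^                          // empty entry ends the selection
```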
^%GI (Global Input): The Global Input utility imports globals from GOF format files created by ^%GO.
How to use ^%GI:
1. Open Terminal and switch to the target namespace
2. Run: `do ^%GI`
3. Specify the input file path containing the exported globals
4. Confirm the namespace and database for import
5. Choose whether to replace existing data or merge
Important considerations:
- Existing globals with the same name are replaced by default unless merge option is selected
- Ensure sufficient database space before importing large globals
- Verify permissions on target database (must have write access)
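A representative ^%GI session (prompts abbreviated and version-dependent; the path is illustrative):

```objectscript
USER>do ^%GI

Device: /data/export/myapp.gof
 // The utility displays the header recorded by ^%GO (description
 // and date), then asks which of the saved globals to load and
 // whether to proceed with the import.
```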
GOF File Format: The Global Output Format (GOF) is a platform-independent, sequential file format that records each exported global node as a global-reference/value pair, in order. Although string data in a GOF file is partly readable, treat these files as transfer artifacts rather than something to edit by hand. The format preserves:
- Global names and subscript structure
- All data types (strings, numbers, binary data)
- Character encoding information
Routine Export and Import Operations
^%RO (Routine Output): The Routine Output utility exports routines (MAC source, INT compiled code, or INC include files) to sequential text files.
How to use ^%RO:
1. Open Terminal and switch to the namespace containing the routines
2. Run: `do ^%RO`
3. Select routines to export (wildcards supported)
4. Specify the output file path
5. Choose the format: MAC (source), INT (compiled), or INC (include files)
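A representative ^%RO session (prompts abbreviated and version-dependent; names and the path are illustrative):

```objectscript
USER>do ^%RO

Routine(s): MyApp*.MAC            // wildcard; .MAC selects source routines
Routine(s):                       // empty entry ends the selection
Device: /data/export/myapp.rtn
Parameters? ("WNS") =>            // accept the defaults
```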
Common use cases:
- Exporting source code for version control
- Creating portable routine backups
- Moving code between development/test/production environments
- Sharing utility routines across systems
^%RI (Routine Input): The Routine Input utility imports routines from files created by ^%RO or from standard text files.
How to use ^%RI:
1. Open Terminal and switch to the target namespace
2. Run: `do ^%RI`
3. Specify the input file path
4. Choose whether to compile the routines after import
5. Confirm the import operation
Important considerations:
- MAC routines can be imported as source and optionally compiled
- INT routines should be recompiled for the target platform
- Existing routines with the same name are replaced
- Compilation errors are reported but don't stop the import process
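A representative ^%RI session (prompts abbreviated and version-dependent; the path is illustrative):

```objectscript
USER>do ^%RI

Device: /data/export/myapp.rtn
 // The utility lists the routines found in the file, lets you
 // select which to load, and asks whether to compile them
 // (usually yes for MAC source).
```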
Routine File Format: Routine export files use a standardized text format with:
- Routine name on first line (e.g., `ROUTINE MyRoutine [Type=MAC]`)
- Routine source code following
- Special markers for routine boundaries in multi-routine exports
Management Portal Methods
System Explorer - Globals: Navigate to System Explorer > Globals to access GUI-based global operations:
- Browse global structure and data
- Export selected globals to GOF or XML format
- Import globals from GOF or XML files
- Search for specific globals or global patterns
- View global properties (size, block distribution)
System Explorer - Routines: Navigate to System Explorer > Routines for routine operations:
- Browse available routines by type (MAC, INT, INC)
- Export routines to files
- Import routines from files
- View routine source code
- Compile routines
XML Export Format Option: The Management Portal also supports XML format for exports, which provides:
- More verbose, structured representation
- Better support for metadata and documentation
- Integration with external tools and version control systems
- Cross-platform compatibility
Common Use Cases
Data Migration Between Environments: When moving application data from development to test or production:
1. Use ^%GO to export the relevant globals from the source system
2. Transfer the GOF files to the target system (secure file transfer)
3. Use ^%GI to import the globals into the target namespace
4. Verify data integrity after import using integrity checks
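End to end, the migration steps above can be sketched as a pair of Terminal sessions (paths, global names, and prompts are illustrative):

```objectscript
// On the source system, in the application namespace:
SOURCE>do ^%GO        // export ^Orders* to /data/export/orders.gof

// Transfer orders.gof to the target host over a secure channel (e.g., sftp).

// On the target system, in the target namespace:
TARGET>do ^%GI        // import /data/export/orders.gof

// Finally, run an integrity check on the target database to confirm.
```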
Development Workflow: Moving code between namespaces during development:
- Export routines from development namespace using ^%RO
- Import into integration namespace using ^%RI with compile option
- Test functionality in isolation before production deployment
Selective Backups: Creating backups of specific globals or routines:
- Complement full database backups with selective exports
- Export critical configuration globals separately for rapid restore
- Version control of code through routine exports
Disaster Recovery: Extracting data from damaged databases:
- If database is partially accessible, use ^%GO to export intact globals
- Transfer data to recovery system
- Restore to clean database using ^%GI
Cross-System Data Transfer: Sharing reference data or configuration between systems:
- Export lookup tables, configuration globals using ^%GO
- Distribute GOF files to multiple systems
- Import consistently across all target systems
Best Practices
Before Export:
- Verify source data integrity using integrity checks
- Document export scope and parameters
- Ensure sufficient disk space for export files
- Consider database locking/quiescing for consistent exports of related globals
During Import:
- Verify target database has sufficient free space
- Back up target database before importing (allows rollback)
- Test imports in non-production environment first
- Monitor import progress for large datasets
After Import:
- Run integrity check on target database
- Verify data completeness (row counts, key records)
- Test application functionality with imported data/code
- Document import details (source, timestamp, scope)
File Management:
- Use descriptive filenames with timestamps (e.g., `Orders_Export_20260107.gof`)
- Store export files securely with appropriate access controls
- Consider compression for large export files
- Document file format and encoding for cross-platform transfers
Character Encoding:
- Be aware of character set differences between systems
- UTF-8 is recommended for maximum compatibility
- Test with sample data when transferring between different platforms (Windows/Linux)
- Document encoding used in export files
Version Control Integration:
- Export MAC routines regularly to version control systems
- Use XML format for better diff/merge capabilities
- Automate routine exports as part of CI/CD pipelines
- Tag exports with version numbers or release identifiers
Documentation References
Exam Preparation Summary
Critical Concepts to Master:
- Compact vs. Truncate: Understand that compact moves free space to the end; truncate returns it to the file system
- Defragmentation Purpose: Know when defragmentation benefits performance (sequential read workloads)
- Integrity Checks: Understand how to schedule and interpret integrity check results
- Mount/Dismount: Distinguish between dismounting (temporary) and deleting (permanent)
- Global Compaction: Recognize when global compaction is appropriate vs. full database compaction
- Resource Considerations: Remember that all reorganization operations should run during off-peak hours
Common Exam Scenarios:
- Determining correct sequence of operations (compact before truncate)
- Identifying when defragmentation provides performance benefits
- Troubleshooting databases that won't mount
- Planning maintenance windows for integrity checks
- Recovering database space efficiently
Hands-On Practice Recommendations:
- Practice compact and truncate operations on test databases
- Run integrity checks and interpret output reports
- Use ^DATABASE utility for global compaction
- Monitor free space using ^%FREECNT
- Observe database reorganization via Background Tasks page
- Mount and dismount databases through Management Portal