1. Edge Gateway Data Pipeline
Key Points
- Business Services: Inbound components that receive data from external systems (HL7 feeds, FHIR endpoints, CDA documents)
- Business Processes: Routing and transformation components that normalize and route data through the pipeline
- Business Operations: Outbound components that store data in the ECR and communicate with the Hub
- PreProcessor/PostProcessor: Hook points for custom processing before and after standard pipeline stages
Detailed Notes
Overview
The Edge Gateway is the data collection point in a UCR federation. Each facility contributing data has an Edge Gateway that receives clinical data from source systems, transforms it into SDA format, stores it in the local Edge Cache Repository (ECR), and notifies the Hub of data availability. The Edge Gateway production is built on the InterSystems IRIS interoperability framework, consisting of business services, business processes, and business operations working together as a pipeline.
Understanding the flow of data through these business hosts is essential for configuring, troubleshooting, and extending data feeds. The standard pipeline receives raw messages, transforms them to SDA, applies terminology normalization, stores the SDA in the ECR, and sends notifications to the Hub registry.
Business Host Types
- Business Services: Entry points for data into the Edge Gateway. They listen for incoming messages via adapters (TCP for HL7, HTTP for FHIR/CDA, file system for batch). Each service is configured with an adapter class, connection settings, and a target for routing.
- Business Processes: Intermediate components that perform routing, transformation, and orchestration. They apply DTL transformations, invoke translation profiles, and route messages to the appropriate downstream components.
- Business Operations: Terminal components that perform actions such as writing SDA to the ECR, sending notifications to the Hub, or forwarding data to external systems.
Pipeline Flow
1. Source system sends data to an inbound Business Service
2. Business Service passes the message to a routing Business Process
3. Business Process applies transformations (DTL) and normalization
4. Transformed SDA is sent to the ECR storage Business Operation
5. Hub notification Business Operation informs the Registry of new data
6. PreProcessor and PostProcessor hooks execute custom logic at defined stages
PreProcessor and PostProcessor Hooks
- PreProcessor: Executes before the standard processing pipeline. Use for pre-validation, message filtering, or custom routing logic before data enters the main transformation flow.
- PostProcessor: Executes after the standard processing pipeline. Use for post-processing tasks such as sending acknowledgments, triggering alerts, or logging.
- Both hooks are configured as settings on the Edge Gateway production components and reference custom business process or operation classes.
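The ordering above can be sketched in pseudocode form. This is a conceptual illustration of the stage sequence and hook points only; the function and parameter names are illustrative stand-ins, not actual UCR classes or APIs.

```python
# Conceptual sketch of the Edge Gateway pipeline stages and hook points.
# All names here are illustrative, not actual UCR components.

def transform_to_sda(message):
    """Stand-in for the DTL transformation stage."""
    return {"sda": message}

def normalize_codes(sda):
    """Stand-in for terminology normalization via translation profiles."""
    return sda

def run_pipeline(message, ecr, hub, preprocessor=None, postprocessor=None):
    # PreProcessor runs before the standard stages; it may modify the
    # message or filter it out entirely (returning None here).
    if preprocessor is not None:
        message = preprocessor(message)
        if message is None:
            return None
    sda = transform_to_sda(message)   # receive -> transform
    sda = normalize_codes(sda)        # -> normalize
    ecr.append(sda)                   # -> store in the ECR
    hub.append(sda["sda"]["id"])      # -> notify the Hub of availability
    # PostProcessor runs after storage/notification: acks, alerts, logging.
    if postprocessor is not None:
        postprocessor(sda)
    return sda
```

Note that a PreProcessor can veto a message before any standard processing occurs, while a PostProcessor only observes the result; this matches the validation-versus-notification split described above.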
---
2. Inbound Adapters and Routing Rules
Key Points
- HL7 Adapter: TCP-based adapter for HL7 v2 messages, configurable port and acknowledgment settings
- CDA Adapter: HTTP/file-based adapter for CDA documents
- FHIR Adapter: HTTP REST adapter for FHIR resources
- X12 Adapter: File or HTTP adapter for X12 transactions (834, 837)
- Routing Rules: Direct messages to appropriate processing components based on message type and content
Detailed Notes
Overview
Inbound adapters are the mechanism by which the Edge Gateway connects to external source systems. Each adapter type is designed for a specific message format and transport protocol. Routing rules then determine how incoming messages flow through the production based on message characteristics such as type, source facility, or content.
Proper adapter configuration ensures reliable data reception, while well-designed routing rules ensure that each message type is processed by the correct transformation and storage components.
Adapter Configuration
- HL7 v2: Received by an HL7 TCP service (typically EnsLib.HL7.Service.TCPService, which uses EnsLib.HL7.Adapter.TCPInboundAdapter). Configure IP address, port, acknowledgment mode (immediate or application), character encoding, and MLLP framing characters.
- CDA: Typically uses an HTTP adapter or file drop adapter. Configure endpoint URL, SSL/TLS settings, and file polling paths.
- FHIR: Uses an HTTP REST adapter. Configure endpoint URL, supported FHIR version (R4), authentication, and resource type filtering.
- X12: Uses file-based or HTTP adapters for EDI transactions. Configure file paths, delimiters, and transaction set identification.
Routing Rules
- Routing rules are defined in the business process that acts as the message router
- Rules evaluate message properties (message type, trigger event, source facility) to determine the target component
- Use the TargetConfigNames setting to specify default routing targets
- Complex routing can use rule definitions or custom BPL (Business Process Language) logic
- Routing rules can send a single message to multiple targets for parallel processing
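The rule-evaluation behavior described above (ordered rules, first-match or all-matches, multiple targets) can be sketched as follows. The rule predicates and target names are hypothetical examples, not shipped UCR components.

```python
# Conceptual sketch of content-based routing rule evaluation.
# Rule criteria and target names are hypothetical.

RULES = [
    # (predicate over message properties, target component)
    (lambda m: m["type"] == "ADT",          "ADTProcess"),
    (lambda m: m["type"] == "ORU",          "ResultsProcess"),
    (lambda m: m["facility"] == "ClinicB",  "ClinicBProcess"),
]

def route(message, first_match_only=True):
    """Return the target(s) whose rule matches, in rule order."""
    targets = []
    for predicate, target in RULES:
        if predicate(message):
            targets.append(target)
            if first_match_only:
                break   # first-match mode stops at the first hit
    return targets      # all-matches mode may fan out to several targets
```

For example, an ADT message from ClinicB goes only to `ADTProcess` in first-match mode, but to both `ADTProcess` and `ClinicBProcess` when all matches are evaluated.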
---
3. DTL Transformations
Key Points
- DTL Purpose: Transform source data formats (HL7, CDA, FHIR) into SDA for ECR storage
- Visual Editor: DTL editor in the Management Portal provides drag-and-drop mapping
- Source and Target: Define source schema (e.g., HL7 2.5) and target schema (SDA3)
- Functions: Built-in functions for string manipulation, date formatting, code translation
- Subtransforms: Reusable transformation components that can be called from parent transforms
Detailed Notes
Overview
Data Transformation Language (DTL) is the primary mechanism for mapping source data into SDA format within the Edge Gateway. DTL transformations define field-by-field mappings between source message schemas and the target SDA schema. The visual DTL editor in the Management Portal allows developers to create and test transformations without writing code, though code-based DTL is also supported for complex scenarios.
Every data feed entering the Edge Gateway must ultimately produce SDA-formatted data for storage in the ECR. DTL transformations are the bridge between the diverse formats of source systems and the standardized SDA format used internally by UCR.
Creating DTL Transformations
1. Open the Management Portal and navigate to Interoperability > Build > Data Transformations
2. Create a new transformation, specifying source class/schema and target class/schema
3. Use the visual mapper to drag fields from source to target
4. Apply functions for data conversion (date formatting, string manipulation, lookups)
5. Add conditional logic using IF/THEN/ELSE constructs
6. Test the transformation with sample messages
7. Compile and deploy to the production
Key DTL Concepts
- Direct Mapping: Simple field-to-field copy from source to target
- Computed Mapping: Apply functions or expressions to derive target values
- Conditional Mapping: Map values based on conditions (IF source field = X, THEN set target to Y)
- Iteration: Loop over repeating segments/groups in source messages
- Subtransforms: Encapsulate reusable mapping logic in separate DTL classes that can be called from a parent transform
- Lookup Tables: Reference external lookup tables for code translation within DTL
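The mapping styles above can be illustrated on a simplified, dictionary-shaped source message. This is a conceptual sketch only: the field names and lookup values are made up for illustration and do not reflect actual DTL syntax.

```python
# Conceptual sketch of the DTL mapping styles listed above, applied to a
# simplified HL7-like source dict. Field names are hypothetical.

SEX_LOOKUP = {"M": "Male", "F": "Female"}   # lookup-table style translation

def transform(source):
    target = {}
    # Direct mapping: field-to-field copy
    target["MRN"] = source["PID3"]
    # Computed mapping: derive the value with functions/expressions
    target["Name"] = source["PID5_family"].upper() + ", " + source["PID5_given"]
    # Conditional mapping: IF source = X, THEN set target to Y
    target["Gender"] = SEX_LOOKUP.get(source["PID8"], "Unknown")
    # Iteration: loop over repeating segments (e.g., OBX results)
    target["Results"] = [
        {"code": obx["code"], "value": obx["value"]} for obx in source["OBX"]
    ]
    return target
```

A subtransform would correspond to factoring one of these mapping groups (for example, the results loop) into its own reusable function called from the parent.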
---
4. Message Routing
Key Points
- TargetConfigNames: Setting that specifies which business host(s) receive output from a component
- Multi-Target Routing: Send to multiple targets by comma-separating names in TargetConfigNames
- Routing Rules: Rule-based routing for conditional message distribution
- Content-Based Routing: Route based on message content, type, or source facility
Detailed Notes
Overview
Message routing in the Edge Gateway production determines how data flows from one business host to another. The primary mechanism is the TargetConfigNames setting, which specifies the downstream component(s) that should receive the output of a given business host. For more complex routing scenarios, business process routers with rule definitions provide conditional routing based on message properties.
Proper routing configuration ensures that each message type follows the correct processing path through the pipeline, receiving appropriate transformations and reaching the correct storage and notification components.
TargetConfigNames Configuration
- Set on business services and business processes to define downstream targets
- Multiple targets are comma-separated: `TargetConfigNames = "Process1,Process2"`
- Messages are sent to all listed targets (fan-out pattern)
- Changes take effect when the production component is restarted or updated
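The fan-out behavior of a comma-separated TargetConfigNames value can be sketched as below. The parsing and send logic are a conceptual illustration, not the framework's implementation.

```python
# Conceptual sketch of the fan-out implied by a comma-separated
# TargetConfigNames value. Component names are hypothetical.

def fan_out(target_config_names, message, send):
    """Send the same message to every listed target, in order."""
    targets = [t.strip() for t in target_config_names.split(",") if t.strip()]
    for target in targets:
        send(target, message)
    return targets
```

With `TargetConfigNames = "Process1,Process2"`, both processes receive a copy of every message the component emits.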
Routing Business Processes
- Use a routing business process when conditional routing is needed
- Define routing rules that evaluate message properties
- Rules can check message type, source, content values, or custom criteria
- Each rule specifies a target component and optional transformation
- Rules are evaluated in order; first match or all matches can be configured
Common Routing Patterns
- Sequential Processing: Service → Transform Process → Storage Operation
- Parallel Processing: Service → Router → (Transform A, Transform B) for different data types
- Conditional Routing: Router sends HL7 ADT to one process, HL7 ORU to another
- Hub Notification: After ECR storage, route to Hub notification operation
---
5. Pipeline and Processing Settings
Key Points
- Pipeline Setting: Controls the processing mode of the Edge Gateway (standard, custom, or hybrid)
- PreProcessor Setting: Class reference for custom processing before the standard pipeline
- PostProcessor Setting: Class reference for custom processing after the standard pipeline
- Processing Modes: Different pipeline configurations for different data handling requirements
Detailed Notes
Overview
The Pipeline setting on the Edge Gateway production controls how data is processed through the system. It determines whether the standard processing pipeline is used, a custom pipeline replaces it, or custom hooks augment the standard flow. PreProcessor and PostProcessor settings allow insertion of custom logic at specific points in the pipeline without replacing the entire standard flow.
These settings provide the flexibility to customize data processing while preserving the standard UCR pipeline behavior for most scenarios.
Pipeline Setting
- The Pipeline setting determines the overall processing strategy for the Edge Gateway
- Standard pipeline handles the full flow: receive → transform → normalize → store → notify
- Custom pipeline allows replacement of the standard flow with organization-specific logic
- Configure via the Edge Gateway production settings in the Management Portal
PreProcessor Configuration
- Specifies a class that executes before the standard pipeline processes a message
- Common uses: message validation, filtering, enrichment, custom logging
- The PreProcessor receives the raw inbound message and can modify it before standard processing
- Configure by setting the PreProcessor property on the appropriate production component
- Must implement the expected interface for the pipeline to invoke it correctly
PostProcessor Configuration
- Specifies a class that executes after the standard pipeline completes processing
- Common uses: sending notifications, triggering downstream systems, audit logging
- The PostProcessor receives the processed output and can perform additional actions
- Configure by setting the PostProcessor property on the appropriate production component
- Does not modify the data stored in the ECR but can trigger additional workflows
---
6. Translation Profiles
Key Points
- InboundCodeSystemProfile: Defines which coding systems are expected from a data source
- TranslationProfile: Maps source codes to target standard codes
- Normalization Goal: Convert facility-specific codes to federation-standard terminology
- Configuration Location: Set on Edge Gateway business hosts or production settings
Detailed Notes
Overview
Translation profiles are a key mechanism for terminology normalization in UCR. When data arrives at the Edge Gateway from different facilities, it may use different coding systems and code values for the same clinical concepts. Translation profiles define how these source codes should be mapped to the federation's standard terminology, ensuring that data from all facilities can be searched, aggregated, and displayed consistently.
The InboundCodeSystemProfile identifies the coding systems used by a particular data source, while the TranslationProfile defines the actual code-to-code mappings. Together, they ensure consistent terminology across the federation.
InboundCodeSystemProfile
- Defines the coding systems expected from a specific data source or facility
- Associates source coding systems with translation maps
- Set as a property on Edge Gateway components processing inbound data
- Allows different facilities to use different source terminologies while achieving the same normalized output
TranslationProfile
- Contains the actual translation maps that convert source codes to target codes
- Each translation map handles a specific coding system or data category
- Maps are typically populated through the Coded Entry Registry or bulk import
- Can map one-to-one or many-to-one (multiple source codes to a single standard code)
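The many-to-one behavior can be pictured as a lookup keyed by source facility and local code. The specific codes below are invented for illustration; real maps would be populated through the Coded Entry Registry or bulk import.

```python
# Conceptual sketch of a translation map: local lab codes from two
# facilities normalized to one standard (LOINC-style) code.
# All codes here are made up for illustration.

TRANSLATION_MAP = {
    # many-to-one: several local codes map to one standard code
    ("FacilityA", "GLU"):    "2345-7",
    ("FacilityA", "GLUC"):   "2345-7",
    ("FacilityB", "LAB001"): "2345-7",
}

def translate(facility, local_code):
    """Return the normalized code, or the local code if no map entry exists."""
    return TRANSLATION_MAP.get((facility, local_code), local_code)
```

The pass-through default for unmapped codes is one possible policy; a stricter configuration might instead flag unmapped codes for review.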
Configuration Steps
1. Define the standard coding systems for the federation (e.g., LOINC for labs, SNOMED for diagnoses)
2. Create translation maps for each source facility's local codes
3. Create an InboundCodeSystemProfile that references the appropriate translation maps
4. Set the InboundCodeSystemProfile on the Edge Gateway production components
5. Set the TranslationProfile to activate translation during processing
6. Test with sample messages to verify correct code translation
---
7. HL7 v2 Processing
Key Points
- HL7 Business Service: Receives HL7 v2 messages via TCP/MLLP
- Message Types: ADT (admit/discharge/transfer), ORU (results), ORM (orders), SIU (scheduling), MDM (documents)
- HL7 to SDA Transform: DTL transformation maps HL7 segments to SDA streamlets
- Acknowledgments: Configure immediate or application-level ACK/NAK responses
- Error Handling: Manage bad messages, transformation failures, and connection issues
Detailed Notes
Overview
HL7 v2 is the most common data feed format for UCR Edge Gateways, as it is widely used by hospital information systems, EHRs, and ancillary systems. Configuring HL7 v2 processing involves setting up inbound business services to receive messages, DTL transformations to convert HL7 to SDA, and appropriate routing and error handling to ensure reliable data ingestion.
The Edge Gateway includes standard HL7 processing components that can be configured and extended for specific facility requirements.
Configuring HL7 Business Services
- Add an HL7 TCP business service to the Edge Gateway production
- Configure the TCP port, IP binding, and MLLP framing
- Set character encoding (UTF-8 is recommended)
- Configure acknowledgment behavior: immediate ACK, application ACK, or deferred ACK
- Set TargetConfigNames to route received messages to the appropriate business process
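MLLP framing, mentioned above, wraps each HL7 v2 message in a start byte (0x0B) and an end-of-block sequence (0x1C 0x0D). A minimal sketch of that framing:

```python
# Minimal sketch of MLLP framing for HL7 v2 over TCP: a 0x0B start byte,
# the message bytes, then 0x1C 0x0D as the end-of-block sequence.

SB, EB, CR = b"\x0b", b"\x1c", b"\x0d"   # start block, end block, carriage return

def mllp_wrap(hl7_message: bytes) -> bytes:
    """Frame an HL7 message for transmission over a TCP connection."""
    return SB + hl7_message + EB + CR

def mllp_unwrap(frame: bytes) -> bytes:
    """Strip MLLP framing, rejecting malformed frames."""
    if not (frame.startswith(SB) and frame.endswith(EB + CR)):
        raise ValueError("not a valid MLLP frame")
    return frame[1:-2]
```

The framing characters configured on the business service correspond to these bytes; mismatched framing settings between the source system and the service are a common cause of connection-level failures.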
HL7 to SDA Transformation
- Standard DTL transformations map common HL7 message types to SDA
- ADT messages map to Patient and Encounter containers
- ORU messages map to LabOrder or RadOrder streamlets and their associated results
- ORM messages map to order-related streamlets
- Custom DTL may be needed for site-specific Z-segments or non-standard usage
- Transformations handle repeating segments and complex data types
Error Handling
- Configure error handling for malformed messages, transformation failures, and connectivity issues
- Use the Alert on Error setting to generate alerts for processing failures
- Bad messages can be routed to an error queue for manual review
- Retry logic handles transient connectivity issues with source systems
- Monitor the message trace in the Management Portal for troubleshooting
---
8. FHIR Ingestion
Key Points
- FHIR Business Service: HTTP REST endpoint for receiving FHIR resources
- FHIR R4 Support: UCR supports FHIR R4 resource ingestion
- FHIR to SDA Transform: Maps FHIR resources to SDA streamlets
- Payer-to-Payer Edge: Specialized FHIR edge configuration for payer data exchange
- Authentication: OAuth2 or API key authentication for FHIR endpoints
Detailed Notes
Overview
FHIR (Fast Healthcare Interoperability Resources) is increasingly used as a data feed format for UCR Edge Gateways. UCR supports FHIR R4 resource ingestion, allowing modern EHR systems and health information exchanges to contribute data using FHIR APIs. The FHIR edge configuration includes specialized business hosts for receiving, transforming, and storing FHIR data.
Additionally, the Payer-to-Payer edge configuration supports health plan data exchange scenarios where payer organizations exchange clinical and claims data using FHIR.
Configuring FHIR Business Hosts
- Add FHIR-specific business services to the Edge Gateway production
- Configure the HTTP endpoint path, port, and SSL/TLS settings
- Set up authentication (OAuth2 bearer tokens or API keys)
- Configure supported FHIR resource types (Patient, Condition, Observation, MedicationRequest, etc.)
- Set TargetConfigNames for routing FHIR resources to transformation components
FHIR to SDA Transformation
- Standard transformations map FHIR R4 resources to SDA streamlets
- Patient resource maps to Patient container
- Condition maps to Diagnosis streamlet
- Observation maps to LabOrder or other result streamlets
- MedicationRequest maps to Medication streamlet
- Bundle resources are unpacked and each entry is processed individually
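Bundle unpacking and per-resource dispatch can be sketched as follows. The resource-to-SDA mapping table mirrors the list above; the function itself is a conceptual illustration, not the product's FHIR processing code.

```python
# Conceptual sketch of unpacking a FHIR R4 Bundle so each entry can be
# routed to a resource-specific transformation. Target labels are
# illustrative descriptions, not class names.

RESOURCE_TO_SDA = {
    "Patient":           "Patient container",
    "Condition":         "Diagnosis streamlet",
    "Observation":       "Result streamlet",
    "MedicationRequest": "Medication streamlet",
}

def unpack_bundle(bundle):
    """Yield (resourceType, sda_target, resource) for each Bundle entry."""
    for entry in bundle.get("entry", []):
        resource = entry["resource"]
        rtype = resource["resourceType"]
        # Unmapped resource types would need custom transformation logic.
        yield rtype, RESOURCE_TO_SDA.get(rtype, "unmapped"), resource
```

Each yielded entry would then be routed independently, so one Bundle can produce streamlets of several different types in a single submission.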
Payer-to-Payer Edge
- Specialized edge configuration for health plan data exchange
- Supports CMS Interoperability rules for payer-to-payer data sharing
- Handles FHIR Bundles containing clinical and claims data
- Configuration includes payer-specific endpoints and authentication
- Maps payer FHIR profiles to SDA for storage in the ECR
---
9. X12 Processing
Key Points
- X12 834: Enrollment/benefit data transactions
- X12 837: Healthcare claims data transactions
- X12 Business Service: Receives X12 transactions via file or HTTP
- X12 to SDA Transform: Maps X12 segments to SDA streamlets for clinical and administrative data
- Configuration: Adapter settings, delimiters, and transaction set identification
Detailed Notes
Overview
X12 EDI transactions are used to bring enrollment and claims data into the UCR federation. The two primary X12 transaction types supported are 834 (enrollment/benefit enrollment and maintenance) and 837 (healthcare claims). Processing X12 data in the Edge Gateway extends UCR's data scope beyond traditional clinical data to include administrative and financial information.
X12 processing requires specialized business hosts that understand EDI segment structure and can map the data to SDA format for storage in the ECR.
X12 834 Processing (Enrollment)
- 834 transactions contain member enrollment, demographic, and benefit information
- Business service receives 834 files via file adapter or HTTP endpoint
- Transformation maps member data to Patient container and related SDA elements
- Coverage and benefit information maps to relevant SDA streamlets
- Handles add, change, and terminate enrollment actions
X12 837 Processing (Claims)
- 837 transactions contain healthcare claim information (professional, institutional, dental)
- Business service receives 837 files via file adapter or HTTP endpoint
- Transformation maps claim data to Encounter, Diagnosis, Procedure, and other SDA streamlets
- Handles claim header, service lines, diagnosis codes, and provider information
- Supports 837P (professional), 837I (institutional), and 837D (dental) variants
Configuration
- Configure X12 business services with appropriate file paths or HTTP endpoints
- Set delimiter characters (segment terminator, element separator, sub-element separator)
- Configure transaction set identification for routing 834 vs. 837
- Set up appropriate DTL transformations for each transaction type
- Test with sample X12 files to verify correct parsing and transformation
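The delimiter characters mentioned above are declared by the interchange itself: in the standard fixed-length ISA header, the element separator is the 4th character, the sub-element (component) separator sits at position 105, and the segment terminator follows at position 106. A sketch of reading them, assuming that fixed-length encoding:

```python
# Sketch of reading X12 delimiters from the fixed-length ISA segment.
# Assumes the common 106-character ISA encoding.

def read_x12_delimiters(interchange: str):
    """Return the delimiter set declared by an X12 interchange header."""
    if not interchange.startswith("ISA") or len(interchange) < 106:
        raise ValueError("not a valid ISA segment")
    return {
        "element":     interchange[3],    # follows the "ISA" tag
        "sub_element": interchange[104],  # ISA16, component separator
        "segment":     interchange[105],  # segment terminator
    }
```

Reading the delimiters from the data rather than hard-coding them lets one configuration handle trading partners that use different separator characters.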
---
10. Custom Processing Hooks
Key Points
- Custom Business Processes: Extend standard classes to add custom processing logic
- Custom Business Operations: Create operations for specialized storage or notification requirements
- Integration Points: Hook into the pipeline at PreProcessor, PostProcessor, or custom routing points
- ObjectScript/Python: Implement hooks using ObjectScript or Embedded Python
- Testing: Test custom hooks in isolation before deploying to production
Detailed Notes
Overview
Custom processing hooks allow organizations to extend the standard Edge Gateway pipeline with specialized logic. These hooks can perform custom validation, enrichment, transformation, routing, or notification tasks that are not covered by the standard UCR processing components. Hooks are implemented as custom business processes or operations and integrated into the production at appropriate points.
Custom hooks are particularly useful for organization-specific requirements such as custom data validation rules, integration with non-standard systems, specialized alerting, or data enrichment from external sources.
Creating Custom Business Processes
- Extend the appropriate base class for Edge Gateway business processes
- Implement the OnRequest or OnMessage method with custom logic
- Register the custom class in the production configuration
- Set appropriate TargetConfigNames for downstream routing
- Handle errors and generate appropriate responses
Creating Custom Business Operations
- Extend the appropriate base class for Edge Gateway business operations
- Implement custom logic for data storage, notification, or external system integration
- Configure adapter settings if the operation communicates with external systems
- Register in the production and configure as a target for upstream components
Integration Approaches
- PreProcessor Hook: Insert custom class as PreProcessor for validation or enrichment before standard processing
- PostProcessor Hook: Insert custom class as PostProcessor for notifications or logging after standard processing
- Custom Router: Replace or augment the standard routing process with custom routing logic
- Parallel Processing: Add custom operations as additional targets alongside standard components
- Error Handler: Create custom error handling components for specialized error recovery
Best Practices
- Keep custom hooks focused on a single responsibility
- Use configuration settings rather than hard-coded values for flexibility
- Implement comprehensive logging for troubleshooting
- Test hooks in a development environment with representative data
- Document custom hooks for maintenance and knowledge transfer
- Consider performance impact, especially for PreProcessor hooks that run on every message
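A PreProcessor-style validation hook following these practices (single responsibility, configurable rather than hard-coded, with logging) might look like the sketch below. This is Python pseudocode for the pattern; an actual hook would be an ObjectScript or Embedded Python class implementing the expected interface.

```python
# Conceptual sketch of a focused PreProcessor-style validation hook:
# one responsibility (required-field check), driven by configuration,
# with logging for troubleshooting. All names are illustrative.
import logging

log = logging.getLogger("edge.prehook")

def make_required_fields_hook(required_fields):
    """Build a hook that drops messages missing any required field."""
    def hook(message):
        missing = [f for f in required_fields if not message.get(f)]
        if missing:
            # Filtered messages are logged, then dropped (None = filtered out).
            log.warning("dropping message, missing fields: %s", missing)
            return None
        return message
    return hook
```

Because a hook like this runs on every inbound message, the check is kept to a cheap field scan; anything heavier (external lookups, database calls) deserves the performance scrutiny the last bullet recommends.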
---
Exam Preparation Summary
Critical Concepts to Master:
- Edge Gateway Pipeline: Understand the flow from Business Service to Business Process to Business Operation, including PreProcessor and PostProcessor hook points
- DTL Transformations: Know how to create and configure DTL transformations for mapping source data to SDA format
- TargetConfigNames: Understand how this setting controls message routing between business hosts
- Translation Profiles: Know the relationship between InboundCodeSystemProfile and TranslationProfile for terminology normalization
- Multi-Format Ingestion: Understand the configuration differences for HL7, FHIR, X12, and CDA data feeds
- Pipeline Settings: Know how Pipeline, PreProcessor, and PostProcessor settings control processing behavior
Common Exam Scenarios:
- Configuring a new HL7 v2 data feed from a hospital system to the Edge Gateway
- Setting up FHIR ingestion for a modern EHR integration
- Creating DTL transformations to map non-standard HL7 segments to SDA
- Configuring translation profiles to normalize terminology from a new facility
- Adding custom processing hooks for organization-specific validation requirements
- Troubleshooting message routing issues in the Edge Gateway production
- Configuring X12 834/837 processing for payer data ingestion
Hands-On Practice Recommendations:
- Set up an Edge Gateway production and configure HL7 v2 inbound processing
- Create DTL transformations using the visual editor to map HL7 to SDA
- Configure translation profiles and test terminology normalization
- Add FHIR business hosts and test with sample FHIR resources
- Create a custom PreProcessor hook and verify it executes in the pipeline
- Use the message trace in the Management Portal to follow messages through the production
- Practice configuring TargetConfigNames for multi-target routing scenarios