Posts

Showing posts from 2015

WebLogic SAF Configuration

WebLogic SAF (Store-and-Forward) agents allow us to store messages in a local persistent store when the remote server is not available. Once the remote server is up, the SAF agent forwards the stored messages to the destination. This overcomes message loss when the destination is unavailable. The following configuration is needed to set up the SAF mechanism.

At the destination side:
1. Create a JMS server
2. Create a JMS module
3. Create a sub-deployment targeted to the JMS server
4. Create a JMS topic or queue targeted to the sub-deployment
5. Create a connection factory

At the source side:
1. Create a JMS server
2. Create a JMS module
3. Create a sub-deployment targeted to the SAF agent
4. Create a connection factory
5. Create a SAF remote context (this holds the remote server URL, login ID and password)
6. Create a SAF imported destination (using the SAF remote context)
7. Create a topic/queue under the SAF imported destination
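The store-and-forward behaviour described above can be sketched in a few lines of plain Python. This is a toy simulation of the mechanism, not WebLogic code: the class and method names (`SAFAgent`, `on_remote_up`, and so on) are illustrative assumptions.

```python
from collections import deque

class RemoteDestination:
    """Stands in for the remote JMS destination."""
    def __init__(self):
        self.available = False
        self.received = []

    def deliver(self, message):
        self.received.append(message)

class SAFAgent:
    """Toy store-and-forward agent: messages are kept in a local store
    while the remote destination is down, then forwarded once it is up."""
    def __init__(self, remote):
        self.remote = remote
        self.local_store = deque()   # stands in for the local persistent store

    def send(self, message):
        if self.remote.available:
            self.remote.deliver(message)
        else:
            self.local_store.append(message)   # persist locally instead of losing it

    def on_remote_up(self):
        # forward everything stored while the destination was unreachable
        while self.local_store:
            self.remote.deliver(self.local_store.popleft())

remote = RemoteDestination()
agent = SAFAgent(remote)
agent.send("order-1")     # remote is down: stored locally
agent.send("order-2")
remote.available = True
agent.on_remote_up()      # stored messages are forwarded in order
print(remote.received)    # → ['order-1', 'order-2']
```

The key property, as in WebLogic SAF, is that a send never fails just because the destination is down; delivery is deferred, and message order is preserved.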

SOA - FTP Adapter

SOA File/FTP adapters enable a BPEL process to exchange files on local or remote file systems.

Steps to create a File Adapter:
- Drag and drop the File Adapter component onto the composite from the component palette.
- Provide the File Adapter name under Service Name.
- Select the "Define from operation and schema (specified later)" option.
- Select the operation: Read File, Write File, Synchronous Read File, or List Files.
- Mention the physical or logical path on the next page.
- Mention the archive path if the file needs to be archived after reading.
- Select the "Delete files after successful retrieval" option if the file should be deleted after a successful read.
- Mention the type of files that need to be processed.
- Mention the file polling frequency.
- Create a schema from an existing sample file in the message window by clicking the settings icon.
- After creating the schema, select Finish.

Difference between the Read File and Synchronous Read File operations: Synchronous Read: it is a synchronous operation and follow r
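What a "Read File" activation does on each polling cycle can be sketched with a small Python function. This is a simplified stand-in for the adapter's behaviour, assuming hypothetical names (`poll_directory`, `archive_dir`, `delete_after_read`), not the actual adapter implementation.

```python
import pathlib
import shutil
import tempfile

def poll_directory(inbound, pattern="*.txt", archive_dir=None, delete_after_read=False):
    """One polling cycle: pick up files matching a pattern, collect their
    contents for the process, then archive or delete the originals."""
    payloads = []
    for f in sorted(pathlib.Path(inbound).glob(pattern)):
        payloads.append(f.read_text())
        if archive_dir is not None:
            # mirrors the "archive after reading" option
            shutil.move(str(f), str(pathlib.Path(archive_dir) / f.name))
        elif delete_after_read:
            # mirrors "delete files after successful retrieval"
            f.unlink()
    return payloads

# usage: simulate one polling cycle against a temporary directory
inbound = tempfile.mkdtemp()
archive = tempfile.mkdtemp()
(pathlib.Path(inbound) / "a.txt").write_text("hello")
(pathlib.Path(inbound) / "skip.csv").write_text("ignored")   # wrong type, not picked up

print(poll_directory(inbound, "*.txt", archive_dir=archive))  # → ['hello']
```

Note that the file-type filter and the archive/delete choice correspond directly to the wizard options listed above.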

SOA Dehydration Process

Oracle BPEL Process Manager uses the dehydration store database to maintain long-running asynchronous processes and their current state information in a database while they wait for asynchronous callbacks. Storing the process in a database preserves the process and prevents any loss of state or reliability if a system shuts down or a network problem occurs. There are two types of processes in Oracle BPEL Process Manager, and they impact the dehydration store database in different ways.

Transient processes: this process type does not incur any intermediate dehydration points during process execution. If there are unhandled faults or there is system downtime during process execution, the instances of a transient process do not leave a trace in the system. Instances of transient processes cannot be saved in-flight (whether they complete normally or abnormally). Transient processes are typically short-lived, request-response style processes.

Durable processes: this p
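The dehydrate/rehydrate cycle can be illustrated with a minimal sketch using SQLite as a stand-in for the dehydration store. The schema and function names here are illustrative simplifications, not the actual Oracle dehydration-store schema.

```python
import json
import sqlite3

# In-memory database stands in for the dehydration store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cube_instance (id TEXT PRIMARY KEY, state TEXT, payload TEXT)")

def dehydrate(instance_id, state):
    """Persist the in-flight process state so it survives a shutdown
    while the instance waits for its asynchronous callback."""
    db.execute("INSERT OR REPLACE INTO cube_instance VALUES (?, 'WAITING', ?)",
               (instance_id, json.dumps(state)))
    db.commit()

def rehydrate(instance_id, callback_data):
    """Reload the saved state when the callback arrives and resume."""
    row = db.execute("SELECT payload FROM cube_instance WHERE id = ?",
                     (instance_id,)).fetchone()
    state = json.loads(row[0])
    state.update(callback_data)   # resume with the callback result merged in
    db.execute("UPDATE cube_instance SET state = 'COMPLETED' WHERE id = ?",
               (instance_id,))
    db.commit()
    return state

dehydrate("inst-42", {"order": "o-1", "status": "awaiting-approval"})
print(rehydrate("inst-42", {"status": "approved"}))
# → {'order': 'o-1', 'status': 'approved'}
```

A transient process, by contrast, would never call `dehydrate` mid-flight, which is exactly why its instances leave no trace after a crash.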

BPEL - CallBack Message Handling

The BPEL engine maintains all asynchronous callback messages in a database table called dlv_message. We can see all such messages in the BPEL console's callback manual-recovery area. The query used by the BPEL console joins the dlv_message and work_item tables; it simply picks up all callback messages that are undelivered and have not been modified within a certain threshold time.

Callback messages are processed in the following steps:
1. The BPEL engine assigns the callback message to the delivery service.
2. The delivery service saves the message into the dlv_message table with state UNDELIVERED (0).
3. The delivery service schedules a dispatcher thread to process the message asynchronously.
4. The dispatcher thread enqueues the message into a JMS queue.
5. The message is picked up by an MDB.
6. The MDB delivers the message to the actual BPEL process waiting for the callback and changes the state to HANDLED (2).

Given the steps above, there is always a possibility that a message is available in the dlv_message table but the MDB failed in deliverin
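The state transitions and the recovery query described above can be sketched against a simplified dlv_message table. The schema below is a deliberately reduced, illustrative version; the state numbers (0 = undelivered, 2 = handled) follow the text above.

```python
import sqlite3

# Simplified stand-in for the dlv_message table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dlv_message (id TEXT, state INTEGER, modify_date REAL)")

UNDELIVERED, HANDLED = 0, 2

def save_callback(msg_id, now):
    """Step 2: delivery service saves the message with state UNDELIVERED."""
    db.execute("INSERT INTO dlv_message VALUES (?, ?, ?)",
               (msg_id, UNDELIVERED, now))

def mark_handled(msg_id, now):
    """Step 6: MDB delivers the message and flips the state to HANDLED."""
    db.execute("UPDATE dlv_message SET state = ?, modify_date = ? WHERE id = ?",
               (HANDLED, now, msg_id))

def recoverable(now, threshold):
    """Mirrors the console query: undelivered messages not modified
    within the threshold window are candidates for manual recovery."""
    rows = db.execute("SELECT id FROM dlv_message WHERE state = ? AND modify_date < ?",
                      (UNDELIVERED, now - threshold)).fetchall()
    return [r[0] for r in rows]

t0 = 1000.0
save_callback("cb-1", t0)
save_callback("cb-2", t0)
mark_handled("cb-1", t0 + 5)                       # the MDB delivered this one
print(recoverable(now=t0 + 600, threshold=300))    # → ['cb-2']
```

Here cb-2 is exactly the failure case the text describes: the row exists in dlv_message with state UNDELIVERED, but the MDB never delivered it, so the recovery query surfaces it once the threshold has passed.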