Monday, August 16, 2010

XML messages in the MRM and XML domains !!

If your messages are in XML, you can use either the XML Wire Format in the MRM domain, or one of the dedicated XML domains. Three XML domains are supported. The XMLNSC and XMLNS domains provide namespace support, but the XML domain does not support XML namespaces and is provided only for compatibility with WebSphere MQ Integrator Version 2.

Whether you choose the MRM domain, or one of the dedicated XML domains (XMLNSC or XMLNS), depends on the nature of your XML messages and the transformation logic that you want to use. The differentiating features of each domain are described below.

* The parser for the MRM domain is model-driven, using a message dictionary that is generated from a message set. This message dictionary enables the MRM parser to interpret the data in an XML message.
For example:
o The MRM parser can validate XML messages against the model when parsing and serializing.
o The model indicates the real data type of a field in the message instead of always treating it as a character string.
o Base64 binary data can be automatically decoded.
o Date and time information can be extracted from a data value using a specified format string.
o When creating output messages, the MRM parser can automatically generate the XML declaration, and other XML constructs, based on options in the model; this simplifies the transformation logic.
* The parsers for the XMLNSC and XMLNS domains are programmatic and do not use a model when parsing.
For example:
o All data in an XML message is treated as character strings.
o Validation is not possible when parsing and serializing.
o Transformation logic must explicitly create all constructs in an output message.
o Both parsing and serializing are quicker than with the MRM domain.
* The MRM parser discards some parts of an XML message when parsing; for example, the XML declaration, namespace declarations, white space formatting, XML comments, XML processing instructions, and inline DTDs. If you use this parser, you cannot create these constructs when building an output message.
* The XMLNSC parser, by default, discards white space formatting, XML comments, XML processing instructions, and inline DTDs. However, options are provided to preserve all of these constructs, except inline DTDs. You can create them all, except inline DTDs, when constructing an output message.
* The XMLNS parser preserves all parts of an XML document, including white space formatting. You can create all XML constructs when constructing an output message.
* The MRM and XMLNSC parsers build compact message trees that use fewer syntax elements than the XMLNS parser for attributes and simple elements, thus making these parsers more suitable for parsing very large XML messages.
* The XMLNS parser builds a message tree that conforms more closely to the XML data model. You might want to use this parser if you are using XPath to access the message tree, and the relative position of parent and child nodes is important, or if you are accessing text nodes directly.

Tip: If you need to validate the content and values in XML messages, use the MRM domain.

Tip: If performance is critical, and you do not need to validate XML messages, use the XMLNSC domain.

Tip: If you need to preserve formatting in XML messages on output, use the XMLNSC domain with the option to retain mixed content.

Tip: If you are using XPath to access the message tree, and you require the message tree to conform as closely as possible to the XML data model, use the XMLNS domain.

Tip: If you are taking non-XML data that has been parsed by the CWF or TDS formats of the MRM domain, and transforming the data to the equivalent XML, use the MRM domain.
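
To make these differences concrete, consider a minimal, hypothetical input message containing the constructs discussed above:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE order [ <!ELEMENT order ANY> ]>
    <!-- audit: created by batch job -->
    <?render compact?>
    <order xmlns="http://example.com/orders">
        <item qty="2">widget</item>
    </order>

The MRM parser discards the XML declaration, the namespace declaration, the inline DTD, the comment, and the processing instruction. The XMLNSC parser always discards the inline DTD, and by default also discards the comment, the processing instruction, and the white space between the elements (unless you set the preservation options). The XMLNS parser preserves everything, including the white space.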

Sunday, June 27, 2010

Communicating with a Queue Manager outside the Cluster !!

1. You already have a Queue Manager cluster (named JOSEPH) set up with two Queue Managers, QM1 and QM2, and all the required channels are set up for communication between QM1 and QM2.
2. You have QM3, which is outside the cluster
3. QM2 has a Queue [Q2] defined at cluster level.
In this tutorial, let us send messages from QM3 to QM2's Q2, and back from QM2 to QM3's Q3.
In this case, one of your Queue Managers must act as a gateway. For example, let us make QM1 the gateway for the cluster, while QM3 stays outside the cluster.
Steps:
1. The queue manager outside the cluster must have a QREMOTE definition for each queue in the cluster that it wants to put messages to.
    DEFINE QREMOTE(Q2) RNAME(Q2) RQMNAME(QM2) XMITQ(TQ1) [in this case Q2 is a cluster queue and I want to put messages from QM3 -> QM2]
2. QM3 must have a sender (SDR) channel and a transmission queue to QM1, and QM1 must have the corresponding receiver (RCVR) channel.
For replies:
3. The gateway (QM1) advertises a queue-manager alias for the queue manager outside the cluster. It advertises this alias to the whole cluster by adding the cluster attribute to its queue-manager alias definition.
DEFINE QREMOTE(QM3) RNAME(' ') RQMNAME(QM3) CLUSTER(JOSEPH)
4. QM1 must have a sender (SDR) channel and a transmission queue to QM3, and QM3 must have the corresponding receiver (RCVR) channel.
[Diagram: QM3, outside the cluster, connects point-to-point to the gateway QM1; QM1 and QM2 are inside cluster JOSEPH, with the cluster queue Q2 hosted on QM2.]
Now, define the following objects for communication from QM3 to the cluster and from the cluster back to QM3.
on QM1:
DEFINE QL(TQ3) USAGE(XMITQ)
DEFINE LISTENER(QM1_LIST) TRPTYPE(TCP) PORT(1111)
DEFINE CHANNEL(TO.QM3) CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('localhost(3333)') XMITQ(TQ3)
DEFINE CHANNEL(TO.QM1) CHLTYPE(RCVR) TRPTYPE(TCP) CONNAME('localhost(1111)')
DEFINE QREMOTE(QM3) RNAME(' ') RQMNAME(QM3) XMITQ(TQ3) CLUSTER(JOSEPH)
on QM2:
DEFINE QL(Q2) CLUSTER(JOSEPH)
DEFINE LISTENER(QM2_LIST) TRPTYPE(TCP) PORT(2222)
on QM3:
DEFINE QL(Q3)
DEFINE QL(TQ1) USAGE(XMITQ)
DEFINE LISTENER(QM3_LIST) TRPTYPE(TCP) PORT(3333)
DEFINE CHANNEL(TO.QM1) CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('localhost(1111)') XMITQ(TQ1)
DEFINE CHANNEL(TO.QM3) CHLTYPE(RCVR) TRPTYPE(TCP) CONNAME('localhost(3333)')
DEFINE QREMOTE(Q2) RNAME(Q2) RQMNAME(QM2) XMITQ(TQ1)
Start the listeners on QM1, QM2, and QM3, and ensure all channels are running. First test that you can communicate between QM1 and QM2.
Then, you should be able to put messages
QM3[Q3] -> QM2[Q2]
QM2[Q2] -> QM3[Q3]
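You can exercise the route with the amqsput and amqsget sample programs shipped with WebSphere MQ (assuming the samples are installed; amqsput puts the lines you type, amqsget removes and prints the messages):

    amqsput Q2 QM3
    amqsget Q2 QM2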
Similarly, if you have any queue on QM1, create a corresponding QREMOTE definition on QM3 (which is outside the cluster), as in the sketch below.
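For example, if QM1 hosts a local queue named Q1 (a hypothetical name), the matching definition on QM3 would be:

    DEFINE QREMOTE(Q1) RNAME(Q1) RQMNAME(QM1) XMITQ(TQ1)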


Questions :

1. Follow-up question: If QM1 should die, what happens to QM3? Can QM3 be configured to reattach to a live queue manager in the cluster?
ANS: The communication between QM1 and QM3 is point-to-point, so if QM1 is not reachable, there is no communication between QM1 and QM3 (and therefore none between QM3 and the cluster).



Websphere MQ Quick reference
MQSC: indicates a runmqsc command, which can be executed interactively inside runmqsc [QmgrName] or as a one-line command using:
echo command | runmqsc [QmgrName]
On UNIX platforms, add double quotes:
echo "command" | runmqsc [QmgrName]
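
For example, to display queue manager attributes or list local queues as one-line commands (DISPLAY QMGR and DISPLAY QLOCAL are standard MQSC commands; QM1 is just an illustrative queue manager name):

echo "DISPLAY QMGR" | runmqsc QM1
echo "DISPLAY QLOCAL(*)" | runmqsc QM1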


All about the MQInput Node !!

MQInput node
Use the MQInput node to receive messages from clients that connect to the broker by using the WebSphere® MQ Enterprise Transport, and that use the MQI and AMI application programming interfaces.

Purpose

The MQInput node receives a message from a WebSphere MQ message queue that is defined on the broker's queue manager. The node uses MQGET to read a message from a specified queue, and establishes the processing environment for the message. If appropriate, you can define the input queue as a WebSphere MQ clustered queue or shared queue.
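For example, a minimal MQSC definition of such an input queue on the broker's queue manager might look like this sketch (the queue and cluster names are hypothetical):

DEFINE QLOCAL(BROKER.IN.Q)
* or, as a clustered queue:
DEFINE QLOCAL(BROKER.IN.Q) CLUSTER(MYCLUSTER)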
Message flows that handle messages that are received across WebSphere MQ connections must always start with an MQInput node. You can set the properties of the node to control the way in which messages are received, by causing appropriate MQGET options to be set. For example, you can indicate that a message is to be processed under transaction control. You can also request that data conversion is performed on receipt of every input message.

If you include an output node in a message flow that starts with an MQInput node, the output node can be any one of the supported output nodes, including user-defined output nodes; you do not need to include an MQOutput node. You can create a message flow that receives messages from WebSphere MQ clients and generates messages for clients that use any of the supported transports to connect to the broker, because you can configure the message flow to request that the broker provides any conversion that is necessary.
If you create a message flow to use as a subflow, you cannot use a standard input node; you must use an Input node as the first node to create an In terminal for the subflow.
If your message flow does not receive messages across WebSphere MQ connections, you can choose one of the supported input nodes.

Connecting the terminals

The MQInput node routes each message that it retrieves successfully to the Out terminal. If this action fails, the message is retried. If the backout count is exceeded (as defined by the BackoutThreshold attribute of the input queue), the message is routed to the Failure terminal; you can connect nodes to this terminal to handle this condition. If you have not connected the Failure terminal, the message is written to the backout queue.
If the message is caught by this node after an exception has been thrown further on in the message flow, the message is routed to the Catch terminal. If you have not connected the Catch terminal, the message loops continually through the node until the problem is resolved.
You must define a backout queue or a dead-letter queue (DLQ) to prevent the message from looping continually through the node.
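A sketch of the corresponding MQSC, using the hypothetical input queue BROKER.IN.Q (BOTHRESH and BOQNAME are the standard backout threshold and backout requeue queue attributes):

DEFINE QLOCAL(BROKER.IN.Q.BACKOUT)
ALTER QLOCAL(BROKER.IN.Q) BOTHRESH(3) BOQNAME(BROKER.IN.Q.BACKOUT)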

Configuring for coordinated transactions
When you include an MQInput node in a message flow, the value that you set for Transaction mode defines whether messages are received under sync point:
  • If you set the property to Automatic, the message is received under sync point if the incoming message is marked as persistent; otherwise, it is not received under sync point. Any message that is sent subsequently by an output node is put under sync point, as determined by the incoming persistence property, unless the output node has overridden this explicitly.
  • If you set the property to Yes (the default), the message is received under sync point; that is, within a WebSphere MQ unit of work. Any messages that are sent subsequently by an output node in the same instance of the message flow are put under sync point, unless the output node has overridden this explicitly.
  • If you set the property to No, the message is not received under sync point. Any messages that are sent subsequently by an output node in the message flow are not put under sync point, unless an individual output node has specified that the message must be put under sync point.
The MQOutput node is the only output node that you can configure to override this option.

MQGET buffer size
The MQGET buffer size is implemented internally by the broker and you cannot change it. The following description is provided for information only. You must not rely on it when you develop your message flows, because the internal implementation might change.
When the node initializes, it sets the size of the default buffer for the first MQGET to 4 KB. The MQInput node monitors the size of messages and increases or reduces the size of the buffer:
  1. If an MQGET fails because the message is larger than the size of the buffer, the node immediately increases the size of the buffer to accommodate the message, issues the MQGET again, and zeros a message count.
  2. When 10 messages have been counted since the increase in the size of the buffer, the node compares the size of the largest of the 10 messages with the size of the buffer. If the size of the largest message is less than 75% of the size of the buffer, the buffer is reduced to the size of the largest of the 10 messages. If an MQGET fails during the 10 messages because the message is larger than the size of the buffer, the node takes action 1.
For example, if the first message that the node receives is 20 MB, and the next 10 messages are each 14 MB, the size of the buffer is increased from 4 KB to 20 MB and remains at 20 MB for 10 messages. However, after the 10th message the size of the buffer is reduced to 14 MB.

Friday, December 18, 2009

AMQ9213: A communications error for CPI-C occurred.

Problem Description: A user gets AMQ9213 when starting a sender channel.

Reference: Woloc user

Affected: Sender channels using the SNA protocol are in retry status. All sender channels with the TCP/IP protocol are in running state.

MQ Version: 5.3 CSD12

OS: Unix

Action: Check /var/mqm/qmgrs/BPDS11/errors/AMQERR01.LOG. The error reads as follows:

-------------------------------------------------------------------------------
11/27/06 13:24:57
AMQ9213: A communications error for CPI-C occurred.

EXPLANATION: An unexpected error occurred in communications.
ACTION: The return code from the CPI-C (cmrcv) call was 6 (X'6'). Record these values and tell the systems administrator.
-------------------------------------------------------------------------------
11/27/06 13:25:14
AMQ9213: A communications error for CPI-C occurred.

EXPLANATION: An unexpected error occurred in communications.
ACTION: The return code from the CPI-C (cmrcv) call was 27 (X'1B'). Record these values and tell the systems administrator.
-------------------------------------------------------------------------------
11/27/06 13:25:14
AMQ9999: Channel program ended abnormally.

EXPLANATION: Channel program 'MQS1.BPDS11.INQ.01' ended abnormally.
ACTION: Look at previous error messages for channel program 'MQS1.BPDS11.INQ.01' in the error files to determine the cause of the failure.
-------------------------------------------------------------------------------


2024 MQRC_SYNCPOINT_LIMIT_REACHED Problem


Problem Description: 1200 cylinders of DASD have been dedicated to the CIF box (NCIF) in MBB1, and another 1200 cylinders in MBB5. However, during a recent batch propagation, we were unable to send even 30,000 records. The error is 2024 MQRC_SYNCPOINT_LIMIT_REACHED (see below).
Please explain why we still experience this problem even though 1200 cylinders have been dedicated to CIF.
This limitation is not acceptable, as we are processing bulk data. We cannot afford to send only 10,000 records at a time for 1.2 million customers; 10,000 records at a time was the limit in the previous environment (where DASD was shared with other applications).
Explanation: An MQGET, MQPUT, or MQPUT1 call failed because it would have caused the number of uncommitted messages in the current unit of work to exceed the limit defined for the queue manager (see the MaxUncommittedMsgs queue-manager attribute). The number of uncommitted messages is the sum of the following since the start of the current unit of work:
  • Messages put by the application with the MQPMO_SYNCPOINT option
  • Messages retrieved by the application with the MQGMO_SYNCPOINT option
  • Trigger messages and COA report messages generated by the queue manager for messages put with the MQPMO_SYNCPOINT option
  • COD report messages generated by the queue manager for messages retrieved with the MQGMO_SYNCPOINT option
  • On Compaq NonStop Kernel, this reason code occurs when the maximum number of I/O operations in a single TM/MP transaction has been exceeded.
Corrective action: Check whether the application is looping. If it is not, consider reducing the complexity of the application. Alternatively, increase the queue-manager limit for the maximum number of uncommitted messages within a unit of work.
Suggestion: If we are confident that the application is not looping, and the application cannot send these messages in smaller batches (reducing the complexity), then by all means increase this value to accommodate the predicted volumes.
Please also check that the local queue these messages will be delivered to can handle the message total; by IBM default, the maximum depth is set to the highest value:
Maximum queue depth . . . . : 999999999 0 - 999999999
Also, for performance, you may want to increase the buffers allocated to the page set (PSID) associated with the local queue that these messages are sent to.
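
In MQSC, the relevant attributes can be displayed and raised like this sketch (MAXUMSGS is the queue-manager attribute behind MaxUncommittedMsgs; the queue name is hypothetical, and the values are illustrative):

DISPLAY QMGR MAXUMSGS
ALTER QMGR MAXUMSGS(100000)
DISPLAY QLOCAL(CIF.TARGET.Q) MAXDEPTH
ALTER QLOCAL(CIF.TARGET.Q) MAXDEPTH(999999999)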