Mule Best Practices - They Can Save Your Day

From various blogs and articles I have collected the following concise best practices for Mule, which can save your day:

Let's start with the fundamental unit of a Mule flow: the message (or payload). One core piece of metadata on a Mule message is the MessageID, and it is imperative to track it when monitoring and troubleshooting any critical production issue. So it is highly recommended to log it.

Why Is the MessageID Critical?


In any environment where there are lots of messages flying around, there are three critical things you need to know about any particular message:
  1. Did it arrive as intended?
An integration team *constantly* gets asked, “Did you get my message?” “Did you send my message?” There must be traceability of messages and whether they arrived or not, with business information attached in the log, like the order number of the message, along with the generated MessageID, so you can search on either.
  2. Why did that error occur?
Messages fail. The question is why? In 99.x% of environments you can’t log a full message payload to any logger (except at the TRACE or DEBUG level), so you have to have a way of knowing what the payload of the message that caused the error was (see the upcoming post about message auditing). You also need to know which message caused the error. If you are logging message IDs, you can easily filter for log lines that contain the message ID and get the entire history of the flow and its transformations.
  3. Where did the message go next?
Many systems both upstream and downstream of messaging buses can take in an ID of some sort and correlate it with their logs. So send the MessageID as an outbound property and have it in the next system’s logs too.
Message ID Logging
Generate a MessageID and log it. Do this as the first thing in a flow, right after the inbound endpoint:
<set-session-variable variableName="messageID" value="#[java.util.UUID.randomUUID().toString()]" />
<logger level="INFO" message="Received message with messageID=#[messageID] from queue my_queue"
category="com.company.myapp.flowname" doc:name="INFO" />

From this point forward, *EVERY LOGGER* has the #[messageID] variable in it.  Thus you get traceability, and logging is awesome.  Other ideas:
  1. Set an outbound property of messageID when you send the message somewhere. This works over most transports, like JMS and HTTP, so the receiving system can get your messageID (see the sketch after this list).
  2. See if whatever system is sending you the message can also generate and send a messageID, and use that one instead for traceability backwards. Instead of generating the messageID, you’re using an inbound property.
  3. Correlate messageIDs with payloads in a database, so that you can look up the message payload in a persisted way that’s more secure than a log file (upcoming blog post).
  4. If you can’t log messages to a database, consider using a separate log file (configured in log4j2.xml) that stores message payloads and messageIDs only. Consider also putting it somewhere with better security than the log directory.
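
A minimal sketch of idea 1, assuming Mule 3 syntax, a JMS outbound endpoint, and the sessionVars.messageID variable set earlier; the queue and connector names are placeholders:
<!-- Sketch only: propagate the generated messageID as an outbound property (Mule 3).
     Queue and connector names are placeholders. -->
<set-property propertyName="messageID" value="#[sessionVars.messageID]" doc:name="Propagate messageID" />
<logger level="INFO" message="Sending messageID=#[sessionVars.messageID] to queue next_queue"
    category="com.company.myapp.flowname" doc:name="INFO" />
<jms:outbound-endpoint queue="next_queue" connector-ref="jmsConnector" doc:name="JMS" />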

Here’s an example where it’s super useful to have MessageIDs. First, the version without MessageIDs. Notice that there is an error here, and also that there are two requests; the second request comes in at the “Got Records 2” line.
[12-02 11:26:08] INFO flow1: Got Records 4
[12-02 11:26:08] INFO Transformer: Update Case to be mapped requestAction=UpdateJIRA, jiraId=STAR-10491
[12-02 11:26:08] INFO sfdc-flow: Outbound to SFDC recordId:303487
[12-02 11:26:08] INFO sfdc-flow: SFDC call success recordId:303487
[12-02 11:26:08] INFO update-record: After Update 303487
[12-02 11:26:08] INFO Transformer: Update Case Payload to be mapped requestAction=UpdateJIRA, jiraId=STAR-10496
[12-02 11:26:08] INFO sfdc-flow: Outbound to SFDC recordId:303488
[12-02 11:26:08] INFO sfdc-flow: SFDC call success recordId:303488
[12-02 11:26:08] INFO update-record: After Update 303488
[12-02 11:26:08] INFO Transformer: Update Case Payload to be mapped requestAction=UpdateJIRA, jiraId=STAR-10491
[12-02 11:26:08] INFO sfdc-flow: Outbound to SFDC recordId:303489
[12-02 11:26:08] INFO flow1: Got Records 2
[12-02 11:26:08] INFO Transformer: Update Case Payload to be mapped requestAction=UpdateJIRA, jiraId=HRCOMP-23681
[12-02 11:26:08] INFO sfdc-flow: Outbound to SFDC recordId:303491
[12-02 11:26:08] INFO sfdc-flow: SFDC call success recordId:303489
[12-02 11:26:08] INFO update-record: After Update 303489
[12-02 11:26:08] ERROR Transformer: Update Case ERROR INVALID requestAction. Not mapped
[12-02 11:26:08] INFO sfdc-flow: SFDC call success recordId:303491
[12-02 11:26:08] INFO update-record: After Update 303491
[12-02 11:26:08] INFO Transformer: Update Case Payload to be mapped requestAction=UpdateJIRA, jiraId=PERFCUST-298
[12-02 11:26:08] INFO sfdc-flow: Outbound to SFDC recordId:303492
[12-02 11:26:08] INFO sfdc-flow: SFDC call success recordId:303492
[12-02 11:26:08] INFO update-record: After Update 303492
Is the error above part of the first request or the second? It’s probably something you can figure out, but it adds time to support cases, and we are looking for supportability by someone other than an expensive developer.
Now the version with MessageIDs. You’ll notice the log gets a lot clearer about which message failed. Notice that the next request that came in got a new messageID, and you can easily see which log messages are associated with each request.
[12-02 11:26:08] INFO flow1: 18a72d84cb3b : Got Records 4
[12-02 11:26:08] INFO Transformer: 18a72d84cb3b : Update Case to be mapped requestAction=UpdateJIRA, jiraId=STAR-10491
[12-02 11:26:08] INFO sfdc-flow: 18a72d84cb3b : Outbound to SFDC recordId:303487
[12-02 11:26:08] INFO sfdc-flow: 18a72d84cb3b : SFDC call success recordId:303487
[12-02 11:26:08] INFO update-record: 18a72d84cb3b : After Update 303487
[12-02 11:26:08] INFO Transformer: 18a72d84cb3b : Update Case Payload to be mapped requestAction=UpdateJIRA, jiraId=STAR-10496
[12-02 11:26:08] INFO sfdc-flow: 18a72d84cb3b : Outbound to SFDC recordId:303488
[12-02 11:26:08] INFO sfdc-flow: 18a72d84cb3b : SFDC call success recordId:303488
[12-02 11:26:08] INFO update-record: 18a72d84cb3b : After Update 303488
[12-02 11:26:08] INFO Transformer: 18a72d84cb3b : Update Case Payload to be mapped requestAction=UpdateJIRA, jiraId=STAR-10491
[12-02 11:26:08] INFO sfdc-flow: 18a72d84cb3b : Outbound to SFDC recordId:303489
[12-02 11:26:08] INFO flow1: 2d98498d1274 : Got Records 2
[12-02 11:26:08] INFO Transformer: 2d98498d1274 : Update Case Payload to be mapped requestAction=UpdateJIRA, jiraId=HRCOMP-23681
[12-02 11:26:08] INFO sfdc-flow: 2d98498d1274 : Outbound to SFDC recordId:303491
[12-02 11:26:08] INFO sfdc-flow: 18a72d84cb3b : SFDC call success recordId:303489
[12-02 11:26:08] INFO update-record: 18a72d84cb3b : After Update 303489
[12-02 11:26:08] ERROR Transformer: 18a72d84cb3b : Update Case ERROR INVALID requestAction. Not mapped
[12-02 11:26:08] INFO sfdc-flow: 2d98498d1274 : SFDC call success recordId:303491
[12-02 11:26:08] INFO update-record: 2d98498d1274 : After Update 303491
[12-02 11:26:08] INFO Transformer: 2d98498d1274 : Update Case Payload to be mapped requestAction=UpdateJIRA, jiraId=PERFCUST-298
[12-02 11:26:08] INFO sfdc-flow: 2d98498d1274 : Outbound to SFDC recordId:303492
[12-02 11:26:08] INFO sfdc-flow: 2d98498d1274 : SFDC call success recordId:303492
[12-02 11:26:08] INFO update-record: 2d98498d1274 : After Update 303492

The second piece that makes everyone’s apps even more supportable is storing the payload received at the “Got Records” log step in a message database and associating that row with the generated MessageID. Any support person can then look in the DB, do a select on the messageID and immediately get the payload that caused the error.
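
A minimal sketch of that idea, assuming the Mule 3 database connector and a hypothetical message_audit table (the config-ref, table and column names are placeholders):
<!-- Sketch only: persist the inbound payload keyed by messageID (Mule 3 database connector).
     "Database_Configuration" and the message_audit table are hypothetical placeholders. -->
<db:insert config-ref="Database_Configuration" doc:name="Audit message">
    <db:parameterized-query><![CDATA[
        INSERT INTO message_audit (message_id, payload, received_at)
        VALUES (#[sessionVars.messageID], #[message.payloadAs(java.lang.String)], CURRENT_TIMESTAMP)
    ]]></db:parameterized-query>
</db:insert>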

Mule Development Best Practices:

  1. Keep INFO level logging concise but informative. Keep DEBUG verbose. Your support workload depends on this.
  2. Use the MessageID to identify log entries per event.
  3. Store messages coming in (and maybe going out), and associate them with the MessageID you generated in step 2.
  4. Create a parent POM file that has all the usual Mule stuff, and put only application-specific needs into project POM files (see the sketch after this list).
  5. Always deploy into production or pre-production from a Maven package or Maven release artifact, never from Studio (though for dev that is probably fine).
  6. Use Maven for builds and testing. Even better: get a CI build tool like Jenkins or Bamboo and make it do the work. Pair it with Artifactory for an enterprise-level environment feel.
  7. Create unit tests for every application, even if they are basic. Your weekends depend on this.
  8. Use Java transformers for anything more complicated than the 15-minute rule: if you can’t get the script/DataMapper/DataWeave transformation right in under 15 minutes, do it in Java. The exception to this rule is XSLT-based transformations, which can be just as powerful.
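
Here is a minimal sketch of the parent POM idea from item 4; the coordinates, versions and property values are illustrative placeholders, with the shared build configuration kept in one place:
<!-- Sketch only: a parent POM holding the shared Mule build configuration.
     Coordinates and versions below are placeholders. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.company.esb</groupId>
    <artifactId>mule-parent</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>

    <properties>
        <mule.version>3.6.2</mule.version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <!-- Shared <dependencyManagement>, <pluginManagement> and repository definitions go here.
         Each application POM declares this artifact as its <parent> and adds only its own dependencies. -->
</project>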

Mule Server Best Practices

1. Mule Server as a Service on Linux


Too many times I’ve seen installations on Linux that are unstable, hard to customize and not repeatable.  Here’s the fix.
Please refer to the detailed instructions and files in the Git repository.
Init.d script for running Mule ESB as a service on Debian and Ubuntu; RHEL is similar.
1) Create directory /usr/local/mule
2) Untar Mule ESB into /usr/local/mule creating ./mule-enterprise-3.6.2
3) Instead of pointing at /usr/local/mule/mule-enterprise-3.6.2 directly (which is what you get when you unzip), create a symbolic link:
ln -s /usr/local/mule/mule-enterprise-3.6.2 /usr/local/mule/mule-dev
This way, when you install a new version, you can unzip it into another folder and simply move the symlink.

Init.d script

Place the etc/init.d/mule-dev script in /etc/init.d/
sudo chown root:root mule-dev
sudo chmod 755 mule-dev
sudo update-rc.d mule-dev defaults

Init.d parameter file

Place the etc/mule/mule-dev parameter file in /etc/mule (you need to create the directory first):
sudo chown -R root:root /etc/mule/
sudo chmod 700 /etc/mule
sudo chmod 600 /etc/mule/mule-dev
Edit the parameter file for the variables that you need. I suggest also renaming the scripts as appropriate, especially if you have multiple Mule instances on the same server, for example mule-qa1, mule-qa2, mule-highvolume.

Wrapper.conf

Notice that the wrapper.conf file has variables in a few places and that these variables are set from the init script. Before you overwrite your wrapper.conf, note down any customizations you have made to the file, and optionally add them to the init.d script or the parameter file.
Wrapper.conf goes in
/usr/local/mule/mule-dev/conf

Other things to note about Mule Instance Installation

  • Place a fresh copy of JDK 1.7 into the /usr/local/java directory. You may also want a copy of 1.8, but at the time of this writing 1.7 was the supported version for Mule 3.6.x.
  • In general, change the user that Mule runs under. I have it listed here as ‘root’, which is required under some circumstances (opening ports under 1024, for example). If it’s possible to run Mule under its own user, do so. Make sure to get the hidden directories as well:
sudo chown -R mule /usr/local/mule/mule-dev
sudo chown -R mule /usr/local/mule/mule-dev/.mule
sudo chown -R mule /usr/local/mule/mule-dev/.mule/.agent

Creating Symbolic Links to Log Files



There are many reasons you may want to aggregate log files into one place. Using Splunk to scrape the log files is a great reason. To do this you need to create symlinks to the server log files in a consolidated location.
I chose:
/var/log/mule
Run
sudo mkdir /var/log/mule
sudo ln -s /usr/local/mule-enterprise-3.6.3/logs /var/log/mule/mule363-1
Repeat for any other MuleSoft servers. Voilà, you now have a place where your logs are aggregated.

Log4j2.xml tweaks

This is one I’ve seen recently in some unedited log4j2.xml files:
<RollingFile name="RollingFile"
    fileName="/usr/local/mule-enterprise-standalone-3.6.1/logs/mule-app-myapp.log"
    filePattern="/usr/local/mule-enterprise-standalone-3.6.1/logs/$${date:yyyy-MM}/mule-app-myapp-%d{yyyy-MM-dd-HH}-%i.log.gz">
What’s wrong with this?  So many things…
What happens if I move this server?  What happens if I move the app?  Especially if I have two servers on the same machine.
Use instead:
<Properties>
   <Property name="logDir">${sys:mule.home}/logs</Property>
   <Property name="appName">myapp/Property>
</Properties>
<Appenders>
    <RollingFile name="diagnostic" fileName="${logDir}/mule-app-${appName}.log"
 append="true"
 filePattern="${logDir}/$${date:yyyy-MM}/mule-app-${appName}-%d{MM-dd-yyyy}-%i.log.gz">
    <PatternLayout pattern="[%d{MM-dd HH:mm:ss}] %-5p %c{1} [%t]: %m%n" />
    <TimeBasedTriggeringPolicy />
    </RollingFile>
...
Now you have a variable at the top of every log4j2.xml file that holds the app name, and you can use a basically universal config file for all your apps.
To make your logger setup complete, you also need to add the category attribute to every Mule logger. The naming convention I use is:
com.mycompany.mule.myappname.flowname
  1. This gives you the ability to set the logging level for your own loggers in an app independently of any built-in Mule loggers.
  2. This logs the name of the flow that produced a particular log line; in the logging config above, the %c{1} pattern enables this.
  3. <Logger name="com.mycompany" level="INFO"/> is now customizable in the log4j2.xml.
One final note about this: the ${sys:mule.home} variable requires that the Mule home system property is set correctly on your server. This is sometimes inconsistent on Mule servers started in an ad-hoc way; use the Mule-as-a-service setup above to make this enterprise-grade.
Full example log4j2.xml file

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Properties>
        <Property name="logDir">${sys:mule.home}/logs</Property>
        <Property name="appName">myapp</Property>
    </Properties>
    <Appenders>
        <RollingFile name="diagnostic" fileName="${logDir}/mule-app-${appName}.log"
            append="true"
            filePattern="${logDir}/$${date:yyyy-MM}/mule-app-${appName}-%d{MM-dd-yyyy}-%i.log.gz">
            <PatternLayout pattern="[%d{MM-dd HH:mm:ss}] %-5p %c{1} [%t]: %m%n" />
            <TimeBasedTriggeringPolicy />
        </RollingFile>
        <RollingFile name="monitoring" fileName="${logDir}/mule-app-${appName}-monitoring.log"
            append="true"
            filePattern="${logDir}/$${date:yyyy-MM}/mule-app-${appName}-monitoring-%d{MM-dd-yyyy}-%i.log.gz">
            <PatternLayout pattern="[%d{MM-dd HH:mm:ss}] %-5p %c{1} [%t]: %m%n" />
            <TimeBasedTriggeringPolicy />
        </RollingFile>
    </Appenders>
    <Loggers>
        <!-- CXF is used heavily by Mule for web services -->
        <Logger name="org.apache.cxf" level="WARN"/>
        <!-- Apache Commons tend to make a lot of noise which can clutter the log -->
        <Logger name="org.apache" level="WARN"/>
        <!-- Reduce startup noise -->
        <Logger name="org.springframework.beans.factory" level="WARN"/>
        <!-- Prevent Default Logger ERROR level messages (which display the payload) -->
        <Logger name="org.mule.exception.CatchMessagingExceptionStrategy" level="FATAL"/>
        <Logger name="org.mule.exception.DefaultMessagingExceptionStrategy" level="FATAL"/>
        <!-- Mule classes -->
        <Logger name="httpclient.wire" level="INFO" />
        <Logger name="org.apache.commons.httpclient" level="INFO" />
        <Logger name="org.mule" level="INFO"/>
        <Logger name="com.mulesoft" level="INFO"/>
        <!-- Reduce DM verbosity -->
        <Logger name="org.jetel" level="WARN"/>
        <Logger name="Tracking" level="WARN"/>
        <Logger name="org.glassfish.grizzly" level="INFO"/>
        <Logger name="org.asynchttpclient" level="INFO"/>
        <Logger name="org.mule.util.queue.DualRandomAccessFileQueueStoreDelegate" level="INFO" />
        <Logger name="org.mule.transformer" level="WARN" />
        <Logger name="org.jetel.graph" level="WARN" />
        <Logger name="org.mule.transport.service.DefaultTransportServiceDescriptor" level="WARN" />
        <Logger name="com.sforce" level="INFO" />
        <Logger name="org.mule.module.launcher.DeploymentDirectoryWatcher" level="WARN" />
        <Logger name="org.mule.transformer.graph" level="WARN" />
        <Logger name="com.mulesoft.module.batch" level="INFO" />
        <Logger name="org.mule.transport.jms.JmsMessageUtils" level="ERROR" />
        <Logger name="org.mule.lifecycle.AbstractLifecycleManager" level="WARN" />
        <Root level="INFO">
            <AppenderRef ref="diagnostic"/>
        </Root>
    </Loggers>
</Configuration>

Please refer to this excellent Mule blog for more details.
