Showing posts from 2013

Solaris Memory Utilization

Solaris servers use a percentage of free memory for the I/O cache; how much depends on the ZFS configuration, which can be checked with root or kernel access. When no free memory is available, this cache memory is reclaimed: if an application needs memory, the I/O cache releases it. The "sr" (scan rate) column in vmstat output shows how aggressively the page scanner is searching for pages to free. As long as the "sr" value stays at zero, there is no memory pressure.
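As a rough sketch of the check described above, the snippet below flags memory pressure from a vmstat data line by reading the "sr" column. The column position (12th field: r b w swap free re mf pi po fr de sr ...) is assumed from typical Solaris vmstat output, and the sample lines are hypothetical; verify the layout on your own system.

```java
// Sketch: detect memory pressure from a Solaris vmstat data line by
// checking the "sr" (scan rate) column. Assumes the typical column order
// r b w swap free re mf pi po fr de sr ..., i.e. "sr" is the 12th field.
public class VmstatCheck {

    // Returns true if the given vmstat data line shows a non-zero scan rate.
    static boolean underMemoryPressure(String vmstatLine) {
        String[] fields = vmstatLine.trim().split("\\s+");
        int sr = Integer.parseInt(fields[11]); // "sr" column (0-based index 11)
        return sr > 0;
    }

    public static void main(String[] args) {
        // Hypothetical sample lines, not real measurements
        String healthy  = "0 0 0 4521984 1239872 12 45 0 0 0 0 0 0 0 0 0 312 540 280 1 1 98";
        String stressed = "0 0 0 102400 20480 90 300 12 8 8 0 812 0 0 0 0 900 1400 700 20 30 50";
        System.out.println(underMemoryPressure(healthy));   // false
        System.out.println(underMemoryPressure(stressed));  // true
    }
}
```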

HTTP - Access Log Entries

c-ip date time cs-method cs-uri cs-status bytes time-taken
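A minimal sketch of splitting one such entry into the fields listed above; the sample log line is hypothetical, and the parser assumes whitespace-separated fields in exactly that order.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: map one whitespace-separated access-log entry onto the field
// names listed above. The sample line in main() is hypothetical.
public class AccessLogParser {
    static final String[] FIELDS = {
        "c-ip", "date", "time", "cs-method", "cs-uri", "cs-status", "bytes", "time-taken"
    };

    static Map<String, String> parse(String line) {
        String[] parts = line.trim().split("\\s+");
        Map<String, String> entry = new HashMap<>();
        for (int i = 0; i < FIELDS.length && i < parts.length; i++) {
            entry.put(FIELDS[i], parts[i]);
        }
        return entry;
    }

    public static void main(String[] args) {
        String line = "10.0.0.15 2013-06-01 12:30:45 GET /app/index.jsp 200 5120 0.042";
        Map<String, String> e = parse(line);
        System.out.println(e.get("cs-method") + " " + e.get("cs-uri") + " -> " + e.get("cs-status"));
    }
}
```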

Weblogic DataSource

Data Source Connection Pool Sizing By Steve Felts on Dec 10, 2012 One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly. Configuring the size of the pool in the data source is somewhere between an art and a science – this article will try to move it closer to science. From the beginning, WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn’t want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to
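The sizing semantics described above can be sketched as a toy pool: created at initial capacity, growing on demand up to maximum capacity, and never shrinking below minimum capacity. This is an illustration only, not the WebLogic implementation, and the class and method names are made up for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy sketch of WLS-style pool sizing semantics: start at initial capacity,
// grow up to maximum capacity, shrink no lower than minimum capacity.
// Illustrative only; not the WebLogic data source implementation.
public class SizedPool {
    private final int minCapacity, maxCapacity;
    private final Deque<Object> idle = new ArrayDeque<>();
    private int total; // connections currently created (idle + in use)

    SizedPool(int initialCapacity, int minCapacity, int maxCapacity) {
        this.minCapacity = minCapacity;
        this.maxCapacity = maxCapacity;
        for (int i = 0; i < initialCapacity; i++) { idle.push(createConnection()); total++; }
    }

    private Object createConnection() { return new Object(); } // stand-in for a JDBC connection

    // Hand out an idle connection, or grow the pool if below maximum capacity.
    synchronized Object reserve() {
        if (!idle.isEmpty()) return idle.pop();
        if (total < maxCapacity) { total++; return createConnection(); }
        throw new IllegalStateException("pool exhausted at maximum capacity " + maxCapacity);
    }

    synchronized void release(Object conn) { idle.push(conn); }

    // Periodic shrink: discard idle connections, but never drop below minimum capacity.
    synchronized void shrink() {
        while (total > minCapacity && !idle.isEmpty()) { idle.pop(); total--; }
    }

    synchronized int size() { return total; }
}
```

For example, a pool created with initial capacity 0, minimum 2 and maximum 5 grows as connections are reserved, and a later shrink discards idle connections only down to 2.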

Weblogic JMS Queue

A JMS queue in Weblogic Server is associated with a number of additional resources: JMS Server A JMS server acts as a management container for resources within JMS modules. Some of its responsibilities include the maintenance of persistence and state of messages and subscribers. A JMS server is required in order to create a JMS module. JMS Module A JMS module is a definition which contains JMS resources such as queues and topics. A JMS module is required in order to create a JMS queue. Subdeployment JMS modules are targeted to one or more WLS instances or a cluster. Resources within a JMS module, such as queues and topics, are also targeted to a JMS server or WLS server instances. A subdeployment is a grouping of targets. It is also known as advanced targeting. Connection Factory A connection factory is a resource that enables JMS clients to create connections to JMS destinations. JMS Queue A JMS queue (as opposed to a JMS topic) is a point-to-point des
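The containment and targeting relationships above can be sketched as plain Java objects: a module holds queues, and a subdeployment groups the JMS servers a queue's messages are hosted on. These classes are illustrative stand-ins, not WebLogic APIs, and all names are made up for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model (plain Java, not WLS APIs) of the relationships above:
// a JMS module contains queues; a subdeployment groups targets; a queue's
// messages are hosted on the JMS servers its subdeployment targets.
public class JmsTopologySketch {
    static class JmsServer {      // management container for module resources
        final String name;
        JmsServer(String name) { this.name = name; }
    }

    static class Subdeployment {  // "advanced targeting": a named grouping of targets
        final String name;
        final List<JmsServer> targets = new ArrayList<>();
        Subdeployment(String name) { this.name = name; }
    }

    static class JmsQueue {       // point-to-point destination inside a module
        final String jndiName;
        final Subdeployment subdeployment;
        JmsQueue(String jndiName, Subdeployment sd) { this.jndiName = jndiName; this.subdeployment = sd; }
        List<JmsServer> hosts() { return subdeployment.targets; }
    }

    static class JmsModule {      // definition containing JMS resources
        final String name;
        final List<JmsQueue> queues = new ArrayList<>();
        JmsModule(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        JmsServer server = new JmsServer("JMSServer-1");
        Subdeployment sd = new Subdeployment("OrderSubdeployment");
        sd.targets.add(server);
        JmsModule module = new JmsModule("OrderModule");
        JmsQueue queue = new JmsQueue("jms/OrderQueue", sd);
        module.queues.add(queue);
        System.out.println(queue.jndiName + " hosted on " + queue.hosts().get(0).name);
    }
}
```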

XA and Non XA DataSource

An XA transaction, in the most general terms, is a "global transaction" that may span multiple resources. A non-XA transaction always involves just one resource.  An XA transaction involves a coordinating transaction manager, with one or more databases (or other resources, like JMS) all involved in a single global transaction. Non-XA transactions have no transaction coordinator, and a single resource is doing all its transaction work itself (this is sometimes called local transactions).  XA transactions come from the X/Open group specification on distributed, global transactions. JTA includes the X/Open XA spec, in modified form.  Most stuff in the world is non-XA - a Servlet or EJB or plain old JDBC in a Java application talking to a single database. XA gets involved when you want to work with multiple resources - 2 or more databases, a database and a JMS connection, all of those plus maybe a JCA resource - all in a single transaction. In this scenario, you'll hav
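As a simplified illustration of what the coordinating transaction manager does across multiple resources, the sketch below runs a two-phase commit over two in-memory stand-ins (a "database" and a "JMS" resource): every resource must vote yes in the prepare phase before any of them commits. This is a toy model, not JTA or a real XA driver.

```java
import java.util.Arrays;
import java.util.List;

// Toy sketch of an XA-style global transaction: a coordinator runs
// two-phase commit over multiple resources. Simplified illustration only;
// not JTA and not a real transaction manager.
public class TwoPhaseCommitSketch {
    interface Resource {
        boolean prepare();  // phase 1: vote on whether commit is possible
        void commit();      // phase 2: make the work permanent
        void rollback();
    }

    static class InMemoryResource implements Resource {
        final String name;
        String state = "active";
        InMemoryResource(String name) { this.name = name; }
        public boolean prepare()  { state = "prepared"; return true; }
        public void commit()      { state = "committed"; }
        public void rollback()    { state = "rolled back"; }
    }

    // Global transaction: commit only if every resource votes yes in phase 1;
    // otherwise roll all of them back.
    static boolean commitGlobal(List<Resource> resources) {
        for (Resource r : resources) {
            if (!r.prepare()) {
                for (Resource x : resources) x.rollback();
                return false;
            }
        }
        for (Resource r : resources) r.commit();
        return true;
    }

    public static void main(String[] args) {
        InMemoryResource db = new InMemoryResource("database");
        InMemoryResource jms = new InMemoryResource("jms");
        System.out.println(commitGlobal(Arrays.asList(db, jms))); // true
        System.out.println(db.state + ", " + jms.state);          // committed, committed
    }
}
```

A non-XA (local) transaction, by contrast, would skip the coordinator entirely: a single resource does its own commit or rollback.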

OSB Service Calls

OSB, Service Callouts and OQL - Part 1 Oracle Fusion Middleware customers use Oracle Service Bus (OSB) for virtualizing Service endpoints and implementing stateless service orchestrations. Behind the performance and speed of OSB, there are a couple of key design implementations that can affect application performance and behavior under heavy load. One of the heavily used features in OSB is the Service Callout pipeline action for message enrichment and invoking multiple services as part of one single orchestration. Overuse of this feature, without understanding its internal implementation, can lead to serious problems. This post will delve into OSB internals, the problem associated with usage of Service Callout under high loads, diagnosing it via thread dump and heap dump analysis using tools like ThreadLogic and OQL (Object Query Language) and resolving it. The first section in the series will mainly cover the threading model used internally by OSB for implementing Route Vs. S

OSB Coherence

Caching using Coherence in OSB is very simple to activate and use. The following figure illustrates what is going on behind the scenes. Because in most cases Coherence will be able to retrieve the result from the in-memory grid on the same application server, there will be no latency introduced by network or database I/O. This should greatly reduce the response time of your service, assuming frequent requests for the same data are made.
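The behind-the-scenes flow amounts to the cache-aside pattern: look the result up in the in-memory cache first, and only hit the backend (database or service) on a miss. The sketch below uses a plain ConcurrentHashMap as a stand-in for the Coherence grid; the class and names are made up for the example.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Sketch of the cache-aside pattern described above: serve repeat requests
// from an in-memory cache and invoke the backend only on a miss.
// A ConcurrentHashMap stands in for the Coherence grid here.
public class ResultCache {
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    final AtomicInteger backendCalls = new AtomicInteger(); // visible for the usage example

    // Returns the cached result, invoking the backend only on a cache miss.
    String get(String key, Function<String, String> backend) {
        return cache.computeIfAbsent(key, k -> {
            backendCalls.incrementAndGet();
            return backend.apply(k);
        });
    }
}
```

Asking for the same key twice invokes the backend once; the second request is served from memory, which is where the response-time win comes from.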

Using jps and jstat to get JVM GC statistics

Ensure that the JDK bin directory is on your path. Issue the command 'jps' to find the running JVM's PID (it will show all running Java processes). Once you have identified the PID, issue the command 'jstat -gcutil [PID] 5000', replacing [PID] with the one identified above. The console should now update every 5 seconds with statistics. Fields available in jstat gcutil output: the first set of columns (S0, S1, E, O, P) describes the utilisation of the various memory heaps (survivor spaces, Eden - young generation, old generation and perm heap space). Next, YGC and YGCT show the number of young (Eden) space collections and the total time taken so far doing these collections. Columns FGC and FGCT show the number and time taken doing old space collections. Lastly, GCT shows the total time taken performing garbage collection so far. Sample Output: #Timestamp         S0     S1     E      O      P     YGC
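As a small sketch of reading those columns programmatically, the snippet below parses one jstat -gcutil data line (assuming the column order S0 S1 E O P YGC YGCT FGC FGCT GCT described above) and pulls out the old-generation occupancy and total GC time. The sample values are hypothetical.

```java
// Sketch: parse one jstat -gcutil data line, assuming the column order
// S0 S1 E O P YGC YGCT FGC FGCT GCT, and report old-gen occupancy (%)
// and total GC time (seconds). Sample values below are hypothetical.
public class GcUtilParser {

    static double oldGenPercent(String line) {
        return Double.parseDouble(line.trim().split("\\s+")[3]); // "O" column
    }

    static double totalGcSeconds(String line) {
        return Double.parseDouble(line.trim().split("\\s+")[9]); // "GCT" column
    }

    public static void main(String[] args) {
        String sample = "0.00 45.12 63.50 28.74 99.30 120 1.842 4 0.511 2.353";
        System.out.println("Old gen occupancy: " + oldGenPercent(sample) + "%");
        System.out.println("Total GC time: " + totalGcSeconds(sample) + "s");
    }
}
```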