

Performance tuning is done separately in YAMBAS and in MongoDB. Both parts are explained here.


Linux: Use Tomcat native libs (minor performance increase)

as root:


aptitude install libapr1 libaprutil1 libapr1-dev libssl-dev make
export JAVA_HOME=/opt/jdk
cd /opt/tomcat7/temp
tar zxvf ../bin/tomcat-native.tar.gz
cd tomcat-native*/jni/native
./configure --with-apr=/usr/bin/apr-1-config
make install
ln -s /usr/local/apr/lib/libtcnative-1.so /usr/lib/libtcnative-1.so

Restart Tomcat; the logs should then show:


INFO: Loaded APR based Apache Tomcat Native Library 1.1.120

Tomcat connector settings

The following values can be set in the <Connector> section of /opt/tomcat/conf/server.xml:

  • maxThreads: the default is 200

  • connectionTimeout: Must be set at least as high as a complete deploy on all nodes takes, to prevent connection timeouts during this longest request


  • yambas.limits.mongodbConnections: the default is 100; this value applies per app server, so with 3 app servers there are up to 300 open connections to the database. Measurements showed 100 to be a good value.

  • yambas.limits.maxResults: maximum number of results to return per query (for more, LIMIT/OFFSET must be used). In YAMBAS 3.1.3 and later, this is set to unlimited (0) by default! In versions < 3.1.3, it defaults to 1000. Setting this value too high may lead to blocking operations in the database if indexes are missing.
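The two connector attributes above can be sketched in server.xml as follows. Note that port, protocol, and the timeout value are illustrative only (here five minutes); adapt them to your installation and to the duration of your longest deploy:

```
<!-- /opt/tomcat/conf/server.xml -- illustrative values -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           connectionTimeout="300000"
           redirectPort="8443" />
```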


The following JVM options are set in /etc/apiomat/tomcat7:

  • -XX:+UseSerialGC - improves speed compared to -XX:+UseConcMarkSweepGC

  • -Xmx12g - sets the maximum available memory for Java. Should be about 80% of the machine's RAM (12 GB in this example)

  • -Xms12g - sets the initial memory for Java. Should be as high as -Xmx for most installations (12 GB in this example)
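Putting the three options together, the relevant line in /etc/apiomat/tomcat7 might look like the sketch below. The JAVA_OPTS variable name follows the Debian-style Tomcat defaults file convention and is an assumption; check which variable your installation actually reads:

```shell
# Sketch of /etc/apiomat/tomcat7 (Debian-style defaults file; the exact
# variable name may differ in your installation).
# Serial GC plus a fixed 12 GB heap, i.e. -Xms equal to -Xmx:
JAVA_OPTS="-XX:+UseSerialGC -Xmx12g -Xms12g"
export JAVA_OPTS
```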


Set the ApiOmat log level to WARN in log4j.xml!


To set up MongoDB in production environments, please consider the MongoDB Production Notes.


General Advice

Use indexes for every query that is executed in a production environment. A single query on an unindexed collection can stall the processing of other requests once the collection contains millions of entries!

By default, all queries lasting longer than 300 ms are logged to /var/log/mongodb/mongod.log.
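A quick way to pull those slow operations out of the log is to filter on the duration that mongod appends to each line. This is only a sketch: the log line format varies between MongoDB versions, and the sample lines below are illustrative, not real log output:

```shell
# Sketch: filter slow operations out of the MongoDB log. The log line
# format varies between MongoDB versions; the sample lines below are
# illustrative, not real log output.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Mon Jan  5 10:00:00 [conn1] query app123.Users ... 412ms
Mon Jan  5 10:00:01 [conn2] query app123.Posts ... 12ms
EOF
# keep only operations that took 300 ms or longer (duration is the last field)
awk '$NF ~ /^[0-9]+ms$/ { d=$NF; sub(/ms$/, "", d); if (d+0 >= 300) print }' "$LOG"
```

Against the real log, replace the temporary file with /var/log/mongodb/mongod.log.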

To get more information and to specify the threshold above which queries are logged, use the profiler:

  1. Go to the application's database

  2. The following command catches every operation that lasts longer than 1000 ms:


    db.setProfilingLevel(1, 1000);


    Or print all operations, not only the slow ones:


    db.setProfilingLevel(2);


  3. Look into the profiling collection system.profile to see the queries:


    db.system.profile.find().pretty()

  4. To see only the last results sorted by date:


    db.system.profile.find().limit(10).sort( { ts : -1 } ).pretty()

  5. Add indexes for every result that has a high value at "nscanned" or whose plan is of type "COLLSCAN"
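As an example of step 5, assuming the profiler reported a slow query filtering on a userName attribute (the collection and attribute names here are hypothetical), an index can be added in the mongo shell and verified with explain():

```
// mongo shell -- collection and attribute names are examples only
db.Users.ensureIndex( { userName : 1 } )   // use createIndex() on MongoDB 3.0+

// verify: explain() should no longer report a full collection scan
db.Users.find( { userName : "john" } ).explain()
```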

More info: