Performance
Performance tweaks are done in YAMBAS and MongoDB separately. Both parts are explained here.
YAMBAS
Linux: Use Tomcat native libs (minor performance increase)
as root:
aptitude install libapr1 libaprutil1 libapr1-dev libssl-dev make
# The "JAVA_HOME" environment variable should already be set (during the installation),
# but in case it's not set, set it to the installation path of the JDK ("/opt/jdk" by default):
#export JAVA_HOME=/opt/jdk
cd /opt/tomcat7/temp
tar zxvf ../bin/tomcat-native.tar.gz
cd tomcat-native*/jni/native
./configure --with-apr=/usr/bin/apr-1-config
make
make install
ln -s /usr/local/apr/lib/libtcnative-1.so /usr/lib/libtcnative-1.so
Restart Tomcat; afterwards the logs should show:
INFO: Loaded APR based Apache Tomcat Native Library 1.1.120
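To verify, you can grep the Catalina log after the restart; the log path below is an assumption based on the /opt/tomcat7 layout used above and may differ in your installation:
grep "Apache Tomcat Native" /opt/tomcat7/logs/catalina.out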
Tomcat connector settings
The following values can be set in the <Connector> element of /opt/tomcat/conf/server.xml (an example snippet follows this list):
- maxThreads: the maximum number of request processing threads; the default is 200
- connectionTimeout: must be at least as high as the duration of a complete deploy across all nodes, to prevent connection timeouts during this longest request
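A minimal, illustrative <Connector> sketch: the port, protocol, and timeout values below are assumptions, not recommendations from this guide, and connectionTimeout is given in milliseconds:
<!-- illustrative HTTP connector; adjust values to your environment -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           connectionTimeout="600000"
           redirectPort="8443" />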
Yambas.conf
- mongodbConnections: the default is 100; this value is multiplied by the number of app servers, i.e. with 3 app servers there are 300 open connections to the database. Measurements showed 100 to be a good value.
- maxResults: the maximum number of results returned per query (for more, LIMIT/OFFSET must be used). Default: 1000. Setting this value too high may lead to blocking operations in the database if indexes are missing. (An example excerpt follows this list.)
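A sketch of how these settings might appear in yambas.conf, assuming a simple key=value format; the exact property names and file syntax should be verified against your installation:
# yambas.conf excerpt (illustrative, assumed key=value syntax)
mongodbConnections=100
maxResults=1000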
JVM
Values are set in /etc/apiomat/tomcat7:
- -XX:+UseSerialGC - improves speed compared to -XX:+UseConcMarkSweepGC
- -Xmx12g - sets the maximum available memory for Java. Should be about 80% of the machine's RAM (12 GB in this example)
- -Xms12g - sets the initial memory for Java. Should be as high as -Xmx for most installations (12 GB in this example); see the example line after this list
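Assuming /etc/apiomat/tomcat7 is sourced by the Tomcat start script and defines a JAVA_OPTS variable (an assumption; check the file's existing structure), the flags could be combined like this:
# illustrative; append to the existing JAVA_OPTS definition
JAVA_OPTS="$JAVA_OPTS -XX:+UseSerialGC -Xms12g -Xmx12g"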
Logs
Set the ApiOmat loggers to WARN in log4j.xml!
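In log4j 1.x XML syntax this could look like the following sketch; the logger name com.apiomat is an assumption and should be adapted to the logger names actually present in your log4j.xml:
<!-- illustrative: raise the ApiOmat logger to WARN; logger name is assumed -->
<logger name="com.apiomat">
    <level value="WARN"/>
</logger>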
MongoDB
To set up MongoDB in production environments, please consult the MongoDB Production Notes.
Indexes
General Advice
Use indexes for every query that is executed in a production environment. A single query against an unindexed collection can effectively block other requests from being processed once the collection contains millions of entries!
By default, all queries lasting longer than 300 ms are logged to /var/log/mongodb/mongod.log.
To get more information and to specify the threshold above which queries should be logged, use the profiler:
- Go to the application's database
- The following command will log every operation that takes longer than 1000 ms:
db.setProfilingLevel(1, 1000);
http://docs.mongodb.org/manual/reference/method/db.setProfilingLevel/#db.setProfilingLevel
Or print all commands, not only the slow ones:
db.setProfilingLevel(2, 0);
- Look into the profiling collection to see the queries:
db.system.profile.find()
- To see only the latest results, sorted by date:
db.system.profile.find().limit(10).sort( { ts : -1 } ).pretty()
- Add an index for every result that has a high value for "nscanned" or whose plan is a collection scan ("COLLSCAN"); see the example below.
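For illustration, creating such an index in the mongo shell might look like this; the collection and field names are assumptions and must be replaced with the collection and query fields reported by the profiler:
// illustrative only: collection and field names are assumed
db.getCollection("MyModel").createIndex({ "userName" : 1 });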
More info:
- Profiler: http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/
- Interpret profiler output: http://docs.mongodb.org/manual/reference/database-profiler/