EC2 Server Deployment and Troubleshooting
This document still needs to be adapted for the OLE test servers; those servers are based on the KFS Test Drive deployment model described below.
Overview
KFS Test Drive is deployed within the Amazon EC2 infrastructure. It is created from a selected branch of the KFS code base (presently branches/release-4-0) for both the application server and the database. The data within the environment is the same as the released demo data set, plus some additional data manually entered by the functional users (see the "Customizing Test Drive Data" section below). The database is a MySQL database hosted by Amazon within their RDS service. This infrastructure makes it trivial to refresh the database in a short period of time.
The server and database are created/deployed/loaded by the KFS Hudson Server. Batch scripts on that server orchestrate the deployment.
Refresh Schedule
KFS Test Drive's database is refreshed on a periodic basis to prevent the build-up of data entered by users testing out the system. This results in a period of down-time while the database is destroyed and reloaded.
Additionally, on a less frequent basis, the test drive servers are re-created from scratch to guard against the build-up of log files. However, the default configuration of the Tomcat server on the instance appears to zip up each week's main catalina.out file automatically, so this may not need to be done very often, as the KFS logs are mostly self-limiting.
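The weekly zipping is presumably the stock log rotation installed with the tomcat6 package rather than anything KFS-specific. A minimal sketch of such a rule (the file name and options here are assumptions, not copied from the instance):

```
# Hypothetical /etc/logrotate.d/tomcat6 entry - verify against the actual instance
/usr/share/tomcat6/logs/catalina.out {
    weekly
    rotate 4
    compress
    missingok
    copytruncate
}
```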
| Event | Schedule | Notes |
|---|---|---|
| Refresh database | every other week at ?? am | Hudson Job: |
| Restart KFS Instance | ?? | To clear out memory |
| Redeploy Test Drive Server | Not yet determined | |
Core Template Jobs
These jobs can all be reviewed at https://ci.ole.kuali.org/view/Template%20Jobs/. The template jobs are never used directly. They are used as "includes" in other scripts that either take parameters or are hard-wired for a specific instance for convenience.
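For example, a concrete job typically just hard-wires the variables a template expects and then includes the template body. A minimal sketch (the job layout and file names here are hypothetical):

```
#!/bin/bash
# Hypothetical wrapper job: fix the instance parameters, then include the template
INSTANCE=dev
EC2_RDS_INSTANCE_TYPE=db.m1.small
SNAPSHOT_NAME=kfs-dev-baseline
# Source the shared template so it runs with the variables set above
. ./Template-Restore-RDS-Instance.sh
```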
| Job Name | Description |
|---|---|
| Template-Create-RDS-Instance | Creates a new Amazon RDS MySQL database instance. (NOT YET CONVERTED FOR OLE) |
| Template-Oracle-DB-Refresh | Checks out the OLE project from SVN and loads a combined OLE/Rice database into Oracle using the impex tool. |
| Template-MySql-DB-Refresh | Checks out the OLE project from SVN and loads a combined OLE/Rice database into MySQL using the impex tool. |
| Template-Restore-RDS-Instance | Restores the database from a named Amazon RDS snapshot. (A snapshot can be created after running the refresh job above once.) |
| Template-Launch-EC2-Server | Creates a new EC2 instance and loads the necessary software (Java/Tomcat). Also registers the server's new public IP address with DNS. |
| Template-Deploy-OLE-Server | Builds and deploys the server WAR file. This must be run only after the database (created by one of the above jobs) is completely functional, as it uses the EC2 APIs to find the database location. |
| TestDrive-RedeployServerWar | Used to deploy new code to the server without having to re-create the instance. |
| Template-Terminate-RDS-Instance | Destroys the database instance. |
| Template-Terminate-EC2-Instance | Destroys an EC2 instance. The instance cannot be recovered after this operation; it's equivalent to pulling the plug and degaussing the hard drives. |
| Template-Server-Start | Starts up the server. |
| Template-Server-Stop | Stops the server. |
| Template-DB-Export | |
| Template-Start-Instance | |
| Template-Stop-Instance | |
Test Drive Configuration
| Host Name | |
|---|---|
| Instance URL | |
| IP Address | |
| Administrative User Account | |
| Database Type | MySQL 5.1.50 (Amazon RDS) |
| Database Host (probably) | |
| EC2 Instance Type | |
* One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. (Per Amazon EC2 Instance Type Documentation)
Building Test Drive
Creating the Database Instance
The database instance is created within the Amazon RDS (Relational Database Services) infrastructure. A (fairly) simple shell script is used to build the database. The script below:
- Sets a number of variables
- Checks whether an instance of the given name already exists
- Requests that a new instance be allocated with the given parameters
- Waits for the instance to finish creation. (This takes 10-15 minutes.)
```
EC2_RDS_INSTANCE_TYPE=db.m1.small
EC2_RDS_SECURITY_GROUP=kfs-rds-servers
EC2_OWNER_ID=${KFS_EC2_OWNER}
EC2_RDS_PARAMETER_GROUP=kfs-parameter-group
EC2_AVAILABILITY_ZONE=us-east-1b
EC2_RDS_KFS_ADMIN_USER=${RDS_ADMIN_USER}
EC2_RDS_KFS_ADMIN_PASSWORD=${RDS_ADMIN_PW}
INSTANCE_NAME=kfs-${INSTANCE}

mkdir -p $INSTANCE_NAME

echo Checking for an existing instance for this service: $INSTANCE_NAME
OLD_STATUS=`rds-describe-db-instances $INSTANCE_NAME --show-long | grep DBINSTANCE | cut -f 8 -d ,`
if [ "$OLD_STATUS" = "available" ]; then
    echo An Instance for $INSTANCE_NAME is already running:
    rds-describe-db-instances $INSTANCE_NAME
    exit 0;
fi
echo No Existing instances running with this name - proceeding with launch

rds-create-db-instance $INSTANCE_NAME \
    --allocated-storage 5 \
    --db-instance-class ${EC2_RDS_INSTANCE_TYPE} \
    --engine MySQL5.1 \
    --backup-retention-period 0 \
    --master-username $EC2_RDS_KFS_ADMIN_USER \
    --master-user-password $EC2_RDS_KFS_ADMIN_PASSWORD \
    --availability-zone $EC2_AVAILABILITY_ZONE \
    --db-parameter-group-name $EC2_RDS_PARAMETER_GROUP \
    --db-security-groups $EC2_RDS_SECURITY_GROUP

# Wait until the instance is available
echo Starting instance $INSTANCE_NAME - this takes a few minutes
DONE="false"
while [ "$DONE" = "false" ]
do
    STATUS=`rds-describe-db-instances $INSTANCE_NAME --show-long | grep DBINSTANCE | cut -f 8 -d ,`
    if [ "$STATUS" = "available" ]; then
        DONE="true"
    else
        echo Waiting...$STATUS
        sleep 10
    fi
done
echo $INSTANCE_NAME Is Available
```
Importing Rice and KFS Data
After the instance is brought up, we first import the Rice base data and then overlay the KFS Rice data and the KFS database tables.
Rice Baseline Data
We import the Rice baseline data by using the Kuali database impex tool and a copy of the database stored in SVN. The script below (and associated ant commands) does the following:
- Finds the Amazon RDS Host Name (to construct the JDBC URL)
- Builds the impex-build.properties file needed by the import process.
- Fixes the case of the table names in the data import graphs, since Amazon RDS databases are case sensitive for table and schema names.
- Runs the import tool to:
- Create or empty the schema as needed
- Build the tables, views, and sequences
- Load the baseline Rice data.
```
EC2_RDS_KFS_ADMIN_USER=${RDS_ADMIN_USER}
EC2_RDS_KFS_ADMIN_PASSWORD=${RDS_ADMIN_PW}
BASEDIR=$WORKSPACE
INSTANCE_NAME=kfs-${INSTANCE}
RICE_DATA_DIR=$WORKSPACE/rice-cfg-dbs

# Generate the impex-build.properties file
EC2_RDS_HOST=`rds-describe-db-instances $INSTANCE_NAME --show-long | grep DBINSTANCE | cut -f9 -d,`
EC2_DB_URL=jdbc:mysql://${EC2_RDS_HOST}:3306
( cat <<-EOF
import.torque.database.user=$SCHEMA_NAME
import.torque.database.schema=$SCHEMA_NAME
import.torque.database.password=$SCHEMA_NAME
torque.project=kfs
torque.schema.dir=$RICE_DATA_DIR
torque.sql.dir=\${torque.schema.dir}/sql
torque.output.dir=\${torque.sql.dir}
import.torque.database=mysql
import.torque.database.driver=com.mysql.jdbc.Driver
import.torque.database.url=${EC2_DB_URL}/$SCHEMA_NAME
import.admin.user=$EC2_RDS_KFS_ADMIN_USER
import.admin.password=$EC2_RDS_KFS_ADMIN_PASSWORD
import.admin.url=$EC2_DB_URL
EOF
) > impex-build.properties

# Fix the case of the table names (since Amazon RDS is case sensitive)
perl -pi -e 's/dbTable="([^"]*)"/dbTable="\U\1"/g' $RICE_DATA_DIR/graphs/*.xml
perl -pi -e 's/viewdefinition="([^"]*)"/viewdefinition="\U\1"/g' $RICE_DATA_DIR/schema.xml
perl -pi -e 's/&#[^;]*;/ /gi' $RICE_DATA_DIR/schema.xml

ant -Duser.home=$WORKSPACE -buildfile kul-cfg-dbs/impex/build.xml create-schema empty-schema import
```
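To see what the case fix does, you can run the same perl substitution over a sample element (the element and table name here are illustrative, not taken from the real graph files):

```
# Demonstrates the dbTable case fix on a made-up element
echo '<data dbTable="krim_role_t"/>' | perl -pe 's/dbTable="([^"]*)"/dbTable="\U\1"/g'
# prints: <data dbTable="KRIM_ROLE_T"/>
```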
Overlay KFS Rice Data
Next, we overlay the Rice data that is part of KFS. This data is added by running Liquibase scripts and the workflow ingester; it adds the KFS Roles, Permissions, System Parameters, and workflow document types. This again uses the Kuali database impex tool to facilitate running the Liquibase and ingester programs.
- Finds the Amazon RDS Host Name (to construct the JDBC URL)
- Builds the impex-build.properties file needed by the import process.
- Runs the impex tool in Liquibase- and workflow-only mode.
```
BASEDIR=$WORKSPACE
INSTANCE_NAME=kfs-${INSTANCE}
IMPEX_TOOL_DIR=$WORKSPACE/kul-cfg-dbs
RICE_DATA_DIR=$WORKSPACE/rice-cfg-dbs
KFS_DIR=$WORKSPACE/kfs
LIQUIBASE_DIR=${KFS_DIR}/work/db/rice-data
WORKFLOW_DIR=${KFS_DIR}/work/workflow

# Generate the impex-build.properties file
EC2_RDS_HOST=`rds-describe-db-instances $INSTANCE_NAME --show-long | grep DBINSTANCE | cut -f9 -d,`
EC2_DB_URL=jdbc:mysql://${EC2_RDS_HOST}:3306
( cat <<-EOF
import.torque.database.user=$SCHEMA_NAME
import.torque.database.schema=$SCHEMA_NAME
import.torque.database.password=$SCHEMA_NAME
torque.project=kfs
torque.schema.dir=$RICE_DATA_DIR
torque.sql.dir=\${torque.schema.dir}/sql
torque.output.dir=\${torque.sql.dir}
post.import.liquibase.project=kfs
post.import.liquibase.xml.directory=$LIQUIBASE_DIR
post.import.workflow.project=kfs
post.import.workflow.xml.directory=$WORKFLOW_DIR
post.import.workflow.ingester.launcher.ant.script=${KFS_DIR}/build.xml
post.import.workflow.ingester.launcher.ant.target=import-workflow-xml
post.import.workflow.ingester.launcher.ant.xml.directory.property=workflow.dir
post.import.workflow.ingester.jdbc.url.property=datasource.url
post.import.workflow.ingester.username.property=datasource.username
post.import.workflow.ingester.password.property=datasource.password
post.import.workflow.ingester.additional.command.line=-Ddatasource.ojb.platform=MySQL -Dbase.directory=$BASEDIR -Dexternal.config.directory=$BASEDIR -Dis.local.build= -Drice.ksb.batch.mode=true -Ddont.filter.project.rice= -Ddont.filter.project.spring.ide= -Ddont.filter.project.schema= -Ddev.mode=
import.torque.database=mysql
import.torque.database.driver=com.mysql.jdbc.Driver
import.torque.database.url=${EC2_DB_URL}/$SCHEMA_NAME
import.admin.user=$EC2_RDS_KFS_ADMIN_USER
import.admin.password=$EC2_RDS_KFS_ADMIN_PASSWORD
import.admin.url=$EC2_DB_URL
EOF
) > impex-build.properties

ant -Duser.home=$WORKSPACE -buildfile kul-cfg-dbs/impex/build.xml run-liquibase-post-import import-workflow
```
Import KFS Database
Finally, we load the KFS database structure and data. This is the same as the Rice data import earlier, except that it uses the KFS SVN repository data as the source.
```
BASEDIR=$WORKSPACE
INSTANCE_NAME=kfs-${INSTANCE}
IMPEX_TOOL_DIR=$BASEDIR/kul-cfg-dbs
KFS_DATA_DIR=$BASEDIR/kfs-cfg-dbs

# Generate the impex-build.properties file
EC2_RDS_HOST=`rds-describe-db-instances $INSTANCE_NAME --show-long | grep DBINSTANCE | cut -f9 -d,`
EC2_DB_URL=jdbc:mysql://${EC2_RDS_HOST}:3306
( cat <<-EOF
import.torque.database.user=$SCHEMA_NAME
import.torque.database.schema=$SCHEMA_NAME
import.torque.database.password=$SCHEMA_NAME
torque.project=kfs
torque.schema.dir=$KFS_DATA_DIR
torque.sql.dir=\${torque.schema.dir}/sql
torque.output.dir=\${torque.sql.dir}
import.torque.database=mysql
import.torque.database.driver=com.mysql.jdbc.Driver
import.torque.database.url=${EC2_DB_URL}/$SCHEMA_NAME
import.admin.user=$EC2_RDS_KFS_ADMIN_USER
import.admin.password=$EC2_RDS_KFS_ADMIN_PASSWORD
import.admin.url=$EC2_DB_URL
EOF
) > impex-build.properties

# Fix the case of the table names (since Amazon RDS is case sensitive)
perl -pi -e 's/dbTable="([^"]*)"/dbTable="\U\1"/g' $KFS_DATA_DIR/graphs/*.xml
perl -pi -e 's/viewdefinition="([^"]*)"/viewdefinition="\U\1"/g' $KFS_DATA_DIR/schema.xml
perl -pi -e 's/&#[^;]*;/ /gi' $KFS_DATA_DIR/schema.xml

ant -Duser.home=$WORKSPACE -buildfile kul-cfg-dbs/impex/build.xml create-schema import
```
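A quick sanity check after the load (a suggestion, not part of the Hudson job) is to connect with the stock mysql client; note that the impex properties above use $SCHEMA_NAME for the user, password, and schema alike:

```
# Hypothetical post-import check: list the first few tables in the new schema
mysql -h $EC2_RDS_HOST -u $SCHEMA_NAME -p$SCHEMA_NAME $SCHEMA_NAME -e 'SHOW TABLES;' | head
```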
Creating Instance From Snapshot
Alternatively, we can create the database in one step from a previously saved snapshot. The script for creating from a snapshot is much the same as the one that creates the instance from scratch. This process is used to restore the test drive instance to a known state on a regular basis.
- Sets a number of variables
- Checks whether an instance of the given name already exists
- Requests that a new instance be allocated with the given parameters and provided snapshot name
- Waits for the instance to finish creation. (This takes 10-15 minutes.)
- Restores some of the instance properties that cannot be set when building from a snapshot.
```
EC2_RDS_INSTANCE_TYPE=${EC2_RDS_INSTANCE_TYPE:-db.m1.small}
EC2_RDS_SECURITY_GROUP=${EC2_RDS_SECURITY_GROUP:-kfs-rds-servers}
EC2_RDS_PARAMETER_GROUP=${EC2_RDS_PARAMETER_GROUP:-kfs-parameter-group}
EC2_AVAILABILITY_ZONE=${EC2_AVAILABILITY_ZONE:-us-east-1b}
INSTANCE_NAME=kfs-${INSTANCE}

echo Checking for an existing instance for this service: $INSTANCE_NAME
OLD_STATUS=`rds-describe-db-instances $INSTANCE_NAME --show-long | grep DBINSTANCE | cut -f 8 -d ,`
if [ "$OLD_STATUS" = "available" ]; then
    echo An Instance for $INSTANCE_NAME is already running:
    rds-describe-db-instances $INSTANCE_NAME
    exit 0;
fi
echo No Existing instances running with this name - proceeding with launch

rds-restore-db-instance-from-db-snapshot $INSTANCE_NAME \
    --db-snapshot-identifier $SNAPSHOT_NAME \
    --db-instance-class ${EC2_RDS_INSTANCE_TYPE} \
    --availability-zone $EC2_AVAILABILITY_ZONE

# Wait until the instance is available
echo Starting instance $INSTANCE_NAME - this takes a few minutes
DONE="false"
while [ "$DONE" = "false" ]
do
    STATUS=`rds-describe-db-instances $INSTANCE_NAME --show-long | grep DBINSTANCE | cut -f 8 -d ,`
    if [ "$STATUS" = "available" ]; then
        DONE="true"
    else
        echo Waiting...$STATUS
        sleep 10
    fi
done
echo $INSTANCE_NAME Is Available

echo Updating Settings After Restore
rds-modify-db-instance $INSTANCE_NAME \
    --apply-immediately \
    --backup-retention-period 0 \
    --db-parameter-group-name $EC2_RDS_PARAMETER_GROUP \
    --db-security-groups $EC2_RDS_SECURITY_GROUP
```
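The $SNAPSHOT_NAME referenced above must already exist; it can be captured once, after a full impex load, with the same generation of RDS command line tools. A sketch (the flag name is assumed by analogy with rds-restore-db-instance-from-db-snapshot; check rds-create-db-snapshot --help before relying on it):

```
# Hypothetical one-time snapshot capture after a complete database load
rds-create-db-snapshot kfs-${INSTANCE} --db-snapshot-identifier $SNAPSHOT_NAME
```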
Building the KFS Application Server
Building the server consists of a few steps:
- Provision a new server in the Amazon EC2 cloud.
- Install some base software on the system.
- Configure that software as needed.
- Point the Dynamic DNS name of the server at the newly created instance.
Creating an EC2 Instance
This step simply builds the instance and then runs a shell script on the new server which loads the needed software. We rely on the "yum" tool to pull in and configure the software in a standard manner. Amazon provides a way to include a script in the user data used to start the instance; that script is automatically executed after the instance boots.
This script assumes the following variables have been set:
```
INSTANCE=SET ME!!!
EC2_INSTANCE_TYPE=
DMEID=SET ME!!!
DYNDNS=true/false
ELASTIC_IP=SET ME IF NEEDED
```
- If needed, set the DNS provider variables
- Create a shell script which will be used to initialize the instance. This script:
  - Installs Tomcat and Lynx (the latter for troubleshooting as needed)
  - Attempts to update the DNS address (this does not always seem to work from here)
  - Creates and applies a patch to the sudoers file that allows sudo commands to be executed remotely (needed for server administration)
- Requests EC2 to provision an instance with the given parameters
- Waits for the instance to finish booting
- Tags the instance so it can be found by other scripts later and identified in the AWS Console
- Waits for the init script created earlier to complete (watches for the "done" file created on its last line)
- Performs the final DNS updates as needed (using either an Elastic IP or Dynamic DNS)
```
# DNS Made Easy Information
DMEUSER=xxxxx
# This is your password
DMEPASS=xxxxx
# This is the unique number for the record that you are updating.
# This number can be obtained by clicking on the DDNS link for the
# record that you wish to update; the number for the record is listed
# on the next page.
#DMEID=

cat > ${INSTANCE}-cloud-init <<EOF
#!/bin/bash
yum -y install tomcat6
yum -y install lynx
if [[ "$DYNDNS" == "true" ]]; then
    IPADDR=\`ec2-metadata | grep public-ipv4 | awk '{ print \$2 };'\`
    RESULT=\`wget -q -O /proc/self/fd/1 http://www.dnsmadeeasy.com/servlet/updateip?username=${DMEUSER}\&password=${DMEPASS}\&id=${DMEID}\&ip=\$IPADDR\`
    echo \$RESULT
    logger -t DNS-Made-Easy -s "DNS Update Result: \$RESULT"
fi
( cat <<-XXXXX
56c56
< Defaults requiretty
---
> #Defaults requiretty
XXXXX
) > /root/etc-sudoers-patch
patch /etc/sudoers /root/etc-sudoers-patch
echo "Done" > /home/ec2-user/done.txt
EOF

EC2_INSTANCE_TYPE=${EC2_INSTANCE_TYPE:-m1.large}
EC2_SECURITY_ZONE=${EC2_SECURITY_ZONE:-kfs-test-servers}
EC2_AVAILABILITY_ZONE=${EC2_AVAILABILITY_ZONE:-us-east-1b}
AMI_ID=${AMI_ID:-ami-38c33651}

mkdir -p $INSTANCE

# make sure an instance is not already running of the given type
echo Checking for an existing instance for this service: $INSTANCE
OLD_STATUS=`ec2-describe-instances --filter tag:Name=kfs-$INSTANCE | grep INSTANCE | cut -f6`
if [ "$OLD_STATUS" = "running" ]; then
    OLD_INSTANCE_ID=`ec2-describe-instances --filter tag:Name=kfs-$INSTANCE | grep INSTANCE | cut -f2`
    echo An Instance for $INSTANCE is already running: $OLD_INSTANCE_ID
    exit 0
fi

# Create the instance - capture the output to get the instance ID
ec2-run-instances $AMI_ID \
    -t ${EC2_INSTANCE_TYPE} \
    -g ${EC2_SECURITY_ZONE} \
    -k kfs-key \
    --availability-zone ${EC2_AVAILABILITY_ZONE} \
    --user-data-file=${INSTANCE}-cloud-init > ${INSTANCE}/instance-info.txt
if [ $? != 0 ]; then
    echo Error starting $INSTANCE instance for image $AMI_ID
    exit 1
fi

# get the instance id
grep INSTANCE ${INSTANCE}/instance-info.txt | cut -f2 > ${INSTANCE}/instance-id.txt
INSTANCE_ID=`cat ${INSTANCE}/instance-id.txt`

# Wait until the instance has started
echo Starting instance $INSTANCE_ID
DONE="false"
while [ "$DONE" = "false" ]
do
    STATUS=`ec2-describe-instances $INSTANCE_ID | grep INSTANCE | cut -f6`
    echo $STATUS
    if [ "$STATUS" = "running" ]; then
        DONE="true"
    else
        echo Waiting...
        sleep 10
    fi
done
echo $INSTANCE_ID Is Running

ec2-create-tags $INSTANCE_ID --tag Name=kfs-${INSTANCE} --tag ServerType=KFS --tag Instance=${INSTANCE}

# Now, re-pull the instance information
ec2-describe-instances $INSTANCE_ID > ${INSTANCE}/instance-info.txt
grep INSTANCE ${INSTANCE}/instance-info.txt | cut -f4 > ${INSTANCE}/instance-host.txt
grep INSTANCE ${INSTANCE}/instance-info.txt | cut -f18 > ${INSTANCE}/instance-ip.txt
SERVER_HOST=`cat ${INSTANCE}/instance-ip.txt`

SSH_KEY_FILE=${HOME}/.ec2/kfs-key.pem
SSH_USER=${SSH_USER:-ec2-user}
SSH_PARM="-i $SSH_KEY_FILE -o BatchMode=yes -o StrictHostKeyChecking=no"
SCP_CMD="scp $SSH_PARM"
SSH_CMD="ssh $SSH_PARM ${SSH_USER}@${SERVER_HOST}"

echo Sleeping to give init scripts a chance to finish
sleep 60
echo Waiting until instance init scripts have completed
DONE="false"
while [ "$DONE" = "false" ]
do
    set +e
    STATUS=`$SSH_CMD cat done.txt`
    set -e
    echo "File Contents: $STATUS"
    if [ "$STATUS" = "Done" ]; then
        DONE="true"
    else
        echo Waiting...
        sleep 10
    fi
done
echo $INSTANCE_ID Is Running

if [[ ! -z "$ELASTIC_IP" ]]; then
    ec2-associate-address $ELASTIC_IP -i $INSTANCE_ID
fi
if [[ "$DYNDNS" == "true" ]]; then
    IPADDR=`grep INSTANCE ${INSTANCE}/instance-info.txt | cut -f 17`
    curl -v http://www.dnsmadeeasy.com/servlet/updateip?username=${DMEUSER}\&password=${DMEPASS}\&id=${DMEID}\&ip=$IPADDR
fi
```
Installing KFS on the EC2 Instance
After the instance is ready, we build the KFS WAR file and do the additional configuration needed for a KFS instance. The scripts below are run with the following properties for Test Drive:
```
ELASTIC_IP=184.73.254.131
EC2_INSTANCE_TYPE=m1.large
DYNDNS=false
INSTANCE=ptd
KFS_SCHEMA_NAME=kfs
SERVER_HOST=testdrive.kfs.kuali.org
SERVER_PORT=80
POOL_SIZE=100
STANDALONE_RICE=false
```
- Find the instance using the tags added during server creation and check if it is running.
- Build a patch file to make it possible to run multiple tomcat instances on the server. (There is a bug in the provided scripts.)
- Based on the properties provided above, set (quite) a number of variables which will be used when configuring the server.
- Find the location of the database instance for the JDBC URLs.
- Build KFS, passing in all the derived properties.
- This will create 4 files: the main WAR file, a skeleton zip with the needed directory paths, a zip containing some external settings, and a zip containing the "secure" information (database passwords and connect strings)
- SCP the resulting binaries to the server.
- Additionally, copy all the files which will need to be added to the tomcat common library directories.
- Build a script on the server which will do the following to prepare the server (it will be running as root):
  - Create the base /opt directories and ensure tomcat is the owner.
  - Unpack the three zip files into the appropriate directories.
  - Copy the WAR file into the Tomcat webapps directory.
  - Copy the appserver library files into the Tomcat lib directory.
  - Update the global Tomcat server settings for Kuali execution:
    - Garbage collection
    - Networking options
    - URL and IP addresses used by Rice/KFS
    - Tomcat/Java configuration options needed for KFS
  - Update the instance-specific Tomcat server settings for Kuali execution:
    - Memory settings
  - Run the patch file against the Tomcat startup script
  - Set up iptables to map port 80 to 8080 (so Tomcat does not have to run as root)
  - Start the service.
- SCP the script and tomcat patch file to the server.
- Make the script executable and run it.
```
BUILD_KFS=${BUILD_KFS:-true}
COPY_BINARIES=${COPY_BINARIES:-true}
PREPARE_SERVER=${PREPARE_SERVER:-true}
KFS_SCHEMA_NAME=${KFS_SCHEMA_NAME:-kfs}
RICE_SCHEMA_NAME=${RICE_SCHEMA_NAME:-$KFS_SCHEMA_NAME}
POOL_SIZE=${POOL_SIZE:-20}
STANDALONE_RICE=${STANDALONE_RICE:-true}
SERVER_HOST=${SERVER_HOST:-${INSTANCE}.kfs.kuali.org}
SERVER_PORT=${SERVER_PORT:-8080}

SERVER_PRIVATE_IP=`ec2-describe-instances --filter tag:Name=kfs-${INSTANCE} --filter instance-state-name=running | grep INSTANCE | cut -f 18`

APPSERVER_URL=http://${SERVER_HOST}
if [[ "$SERVER_PORT" != "80" ]]; then
    APPSERVER_URL=${APPSERVER_URL}:${SERVER_PORT}
fi
if [[ "$STANDALONE_RICE" == "true" ]]; then
    RICE_URL=http://${SERVER_HOST}:8081/kr-$INSTANCE
else
    RICE_URL=$APPSERVER_URL/kfs-${INSTANCE}
fi

DATABASE_HOST=`rds-describe-db-instances kfs-$INSTANCE --show-long | grep DBINSTANCE | cut -f9 -d,`

if [ "$SSH_KEY_FILE" == "" ]; then
    SSH_KEY_FILE=${HOME}/.ec2/kfs-key.pem
fi
SSH_USER=${SSH_USER:-ec2-user}
BASEDIR=$WORKSPACE
KFS_DIR=$BASEDIR/kfs
SSH_PARM="-i $SSH_KEY_FILE -o BatchMode=yes -o StrictHostKeyChecking=no"
SCP_CMD="scp $SSH_PARM"
SSH_CMD="ssh $SSH_PARM ${SSH_USER}@${SERVER_PRIVATE_IP}"
BUILD_VERSION="`svn info kfs | grep URL | grep -o branches.*` (rev:`svn info kfs | grep Revision: | grep -o " .*"`) ${BUILD_ID}"

# build the system binaries
if [ "$BUILD_KFS" == "true" ]; then
    echo '*** Building KFS'
    echo "" > $BASEDIR/kfs-build.properties
    pushd $KFS_DIR
    ant clean-project dist dist-external \
        -Duser.home=$BASEDIR \
        "-Dbuild.version=$BUILD_VERSION" \
        -Dexternal.config.directory=/opt \
        -Dappserver.url=$APPSERVER_URL \
        -Dstandalone.rice=${STANDALONE_RICE} \
        -Drice.url=$RICE_URL \
        -Ddeploy.working.directory=${BASEDIR} \
        -Dbuild.environment=${INSTANCE} \
        -Duse.quartz.jdbc.jobstore=false \
        -Ddatasource.pool.size=${POOL_SIZE} \
        -Ddont.filter.project.spring.ide= \
        -Ddo.filter.project.help= \
        -Drice.thread.pool.size=${POOL_SIZE} \
        -Ddatasource.ojb.platform=MySQL \
        -Dmysql.datasource.url=jdbc:mysql://${DATABASE_HOST}:3306/${KFS_SCHEMA_NAME} \
        -Ddatasource.username=${KFS_SCHEMA_NAME} \
        -Ddatasource.password=${KFS_SCHEMA_NAME} \
        -Drice.server.datasource.url=jdbc:mysql://${DATABASE_HOST}:3306/${RICE_SCHEMA_NAME} \
        -Drice.server.datasource.username=${RICE_SCHEMA_NAME} \
        -Drice.server.datasource.password=${RICE_SCHEMA_NAME}
    popd
fi

if [ "$COPY_BINARIES" == "true" ]; then
    echo '*** Copying Compiled Binaries to Server'
    echo $SERVER_PRIVATE_IP
    pushd $KFS_DIR
    $SCP_CMD kfs-${INSTANCE}.war "${SSH_USER}@${SERVER_PRIVATE_IP}:~"
    popd
    $SCP_CMD settings.zip ${SSH_USER}@${SERVER_PRIVATE_IP}:~
    $SCP_CMD security.zip ${SSH_USER}@${SERVER_PRIVATE_IP}:~
    $SCP_CMD skel.zip ${SSH_USER}@${SERVER_PRIVATE_IP}:~
    $SSH_CMD mkdir appserver-lib
    $SCP_CMD $KFS_DIR/build/external/appserver/carol.properties ${SSH_USER}@${SERVER_PRIVATE_IP}:~/appserver-lib
    $SCP_CMD $KFS_DIR/build/external/appserver/*.jar ${SSH_USER}@${SERVER_PRIVATE_IP}:~/appserver-lib
    $SCP_CMD $KFS_DIR/build/drivers/*.jar ${SSH_USER}@${SERVER_PRIVATE_IP}:~/appserver-lib
fi

if [ "$PREPARE_SERVER" == "true" ]; then
    echo '*** Preparing Files On Server'
    # Build the script for unpacking everything on the server
    # Create /opt structure
    # Unpack files
    # NOTE: this script will be running as root
    ( cat <<-EOF
set -x
mkdir -p /opt/sa_forms/java/${INSTANCE}/kfs
mkdir -p /opt/j2ee/${INSTANCE}/kfs
mkdir -p /opt/logs/${INSTANCE}/kfs
mkdir -p /opt/work/${INSTANCE}/kfs
# changing the owner so that tomcat can read/write these directories
chown -R tomcat /opt/sa_forms
chown -R tomcat /opt/j2ee
chown -R tomcat /opt/logs
chown -R tomcat /opt/work
# Unpacking as tomcat for proper permissions
sudo -u tomcat unzip -o skel.zip -d /opt/work/${INSTANCE}/kfs
sudo -u tomcat unzip -o security.zip -d /opt/sa_forms/java/${INSTANCE}/kfs
sudo -u tomcat unzip -o settings.zip -d /opt/j2ee/${INSTANCE}/kfs
# Copy the war
sudo -u tomcat cp kfs-${INSTANCE}.war /usr/share/tomcat6/webapps/
# Copy the server lib dirs
cp appserver-lib/* /usr/share/tomcat6/lib/
# Prepare the Tomcat Web App
# First - global tomcat server settings
echo 'JAVA_OPTS="\$JAVA_OPTS -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:SurvivorRatio=128 -XX:MaxTenuringThreshold=0 -XX:+UseTLAB"' >> /etc/tomcat6/tomcat6.conf
echo 'JAVA_OPTS="\$JAVA_OPTS -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled"' >> /etc/tomcat6/tomcat6.conf
echo 'JAVA_OPTS="\$JAVA_OPTS -Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false -Djava.awt.headless=true"' >> /etc/tomcat6/tomcat6.conf
echo 'JAVA_OPTS="\$JAVA_OPTS -Dcom.sun.jndi.ldap.read.timeout=60000 -Dcom.sun.jndi.ldap.connect.pool.timeout=30000 -Dcom.sun.jndi.ldap.connect.timeout=10000"' >> /etc/tomcat6/tomcat6.conf
echo 'JAVA_OPTS="\$JAVA_OPTS -Dnetworkaddress.cache.ttl=60 -Doracle.net.CONNECT_TIMEOUT=10000 -Djava.util.prefs.syncInterval=2000000"' >> /etc/tomcat6/tomcat6.conf
echo -n 'JAVA_OPTS="\$JAVA_OPTS ' >> /etc/tomcat6/tomcat6.conf
echo "-Dhttp.url=${SERVER_HOST}:${SERVER_PORT} -Dhost.ip=${SERVER_PRIVATE_IP}\"" >> /etc/tomcat6/tomcat6.conf
# Next - the KFS instance specific settings
echo 'JAVA_OPTS="\$JAVA_OPTS -Xmx4g -Xms2g -XX:MaxPermSize=1g -XX:PermSize=256m -XX:MaxNewSize=256m -XX:NewSize=256m"' >> /etc/sysconfig/tomcat6
#echo "CONNECTOR_PORT=\"${SERVER_PORT}\"" >> /etc/sysconfig/tomcat6
# patch the startup script because it doesn't work
patch -b /usr/sbin/tomcat6 usr-sbin-tomcat6-patch
# fix the connector port because the above is ignored
/sbin/iptables -t nat -I PREROUTING -p tcp --dport ${SERVER_PORT} -j REDIRECT --to-port 8080
/sbin/iptables-save
chkconfig --level 35 iptables on
service iptables start
#perl -pi -e "s/8080/${SERVER_PORT}/gi" /etc/tomcat6/server.xml
# Start the tomcat server
service tomcat6 start
EOF
    ) > kfs-init.sh

    # Copy the completed script to the server
    $SCP_CMD kfs-init.sh ${SSH_USER}@${SERVER_PRIVATE_IP}:~
    # Copy the patch file
    $SCP_CMD usr-sbin-tomcat6-patch ${SSH_USER}@${SERVER_PRIVATE_IP}:~
    # make the script executable
    $SSH_CMD chmod a+rx kfs-init.sh
    # Make the directory readable so tomcat can see it
    $SSH_CMD chmod o+rx .
    # Run the initialization script
    $SSH_CMD -t sudo ./kfs-init.sh
fi

echo Server URL: ${APPSERVER_URL}/kfs-${INSTANCE}
```
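After kfs-init.sh returns, Tomcat still needs time to unpack and start the WAR. A simple smoke test (suggested here; not part of the Hudson job) is to poll the URL printed on the script's last line:

```
# Hypothetical smoke test: loop until the webapp answers over HTTP
until curl -sf ${APPSERVER_URL}/kfs-${INSTANCE}/ > /dev/null; do
    echo Waiting for KFS to start...
    sleep 15
done
echo KFS is up at ${APPSERVER_URL}/kfs-${INSTANCE}
```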
Customizing Test Drive Data
See child page: Customizing KFS Test Drive Data
Troubleshooting
Logging into Test Drive
To log into test drive, you must have the private key used to create the instance. You must obtain this key from another KFS Configuration Manager via a secure mechanism (i.e., DO NOT EMAIL IT!). There is no way to log in with a password. This means your login command must look something like:
```
ssh -i .ec2/kfs-key.pem ec2-user@testdrive.kfs.kuali.org
```
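Because the init script patches sudoers to remove the requiretty default, administrative commands can also be run in one shot without an interactive shell. For example (the log path assumes the stock tomcat6 layout used by the deploy script; it has not been confirmed in the table below):

```
# Run a privileged command remotely; -t allocates the terminal sudo expects
ssh -t -i .ec2/kfs-key.pem ec2-user@testdrive.kfs.kuali.org \
    sudo tail -n 50 /usr/share/tomcat6/logs/catalina.out
```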
Locations of Files
| KFS Logs | |
|---|---|
| Tomcat Logs | |
Starting/Stopping Tomcat
| Start KFS | |
|---|---|
| Stop KFS | |
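The commands for this table have not been filled in yet. Based on the deploy script, which starts the application with service tomcat6 start, they are presumably the stock service commands (confirm on the server):

```
# Presumed start/stop commands, inferred from kfs-init.sh
sudo service tomcat6 start
sudo service tomcat6 stop
```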
TODO
- Customizing Test Drive Data
- "ptds" instance
- creation of snapshot