Deploy and start i2 Analyze
This topic describes how to deploy and start i2 Analyze in a containerized environment.
For an example of the activities described, see the examples/pre-prod/deploy-pre-prod script.
Running Solr and ZooKeeper
The running Solr and ZooKeeper section runs the required containers and creates the Solr cluster and ZooKeeper ensemble.
The deploy_zk_cluster function creates the secure ZooKeeper cluster for the deployment. The function includes a number of calls:
- The run_zk server function runs the ZooKeeper containers that make up the ZooKeeper ensemble. For more information about running a ZooKeeper container, see ZooKeeper. In deploy-pre-prod, 3 ZooKeeper containers are used.
The configure_zk_for_solr_cluster function creates the ZooKeeper configuration for the secure Solr cluster for the deployment. The function includes a number of calls to the run_solr_client_command client function, which is used to complete the following actions:
- Create the znode for the cluster. i2 Analyze uses a ZooKeeper connection string with a chroot. To use a chroot connection string, a znode with that name must exist. For more information, see SolrCloud Mode.
- Set the urlScheme to https.
- Configure the Solr authentication by uploading the security.json file to ZooKeeper.
For more information about the function, see run_solr_client_command.
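For illustration only, the calls inside configure_zk_for_solr_cluster might resemble the following sketch. The znode name, the ZK_MEMBERS variable (the ensemble address without the chroot), the security.json location, and the zkcli.sh path inside the Solr image are assumptions rather than the exact implementation:
# Illustrative sketch only; the znode name, variable names, and paths are assumptions.
run_solr_client_command solr zk mkroot "/is_cluster" -z "${ZK_MEMBERS}"
run_solr_client_command /opt/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost "${ZK_MEMBERS}" -cmd clusterprop -name urlScheme -val https
run_solr_client_command solr zk cp file:/run/secrets/security.json zk:/security.json -z "${ZK_MEMBERS}"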
The deploy_solr_cluster function creates the secure Solr cluster for the deployment. The function includes a number of calls:
- The run_solr server function runs the Solr containers for the Solr cluster. For more information about running a Solr container, see Solr. In deploy-pre-prod, 2 Solr containers are used.
At this point, your ZooKeeper servers are running in an ensemble, and your Solr containers are running in SolrCloud mode managed by ZooKeeper.
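For example, the ZooKeeper connection string with a chroot that the rest of the deployment passes in the ZK_HOST variable might look like the following; the host names and znode name are illustrative:
ZK_HOST="zk1:2181,zk2:2181,zk3:2181/is_cluster"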
Initializing the Information Store database
The initializing the Information Store database section creates a persistent database backup volume, runs the database container, and configures the database management system. You can deploy the Information Store on either Microsoft SQL Server or PostgreSQL.
SQL Server
The deploy_secure_sql_server function creates a persistent database backup volume and runs the database container. The function includes a number of calls that complete the following actions:
The database backup volume is created first with the Docker command:
docker volume create "${SQL_SERVER_BACKUP_VOLUME_NAME}"
The volume is not automatically deleted when the SQL Server container is removed, which preserves any backups created while a SQL Server container is running. For more information about Docker storage, see Docker Storage.
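Because the volume is named, it can be attached to the SQL Server container and survives that container being removed and recreated. For illustration only, mounting it might look like the following sketch; the container name, image variable, and the /backup mount point are assumptions:
# Illustrative only; the container name, image variable, and mount point are assumptions.
docker run -d --name sqlserver \
  -v "${SQL_SERVER_BACKUP_VOLUME_NAME}:/backup" \
  "${SQL_SERVER_IMAGE_NAME}"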
The run_sql_server server function creates the secure SQL Server container for the deployment. For more information about building the SQL Server image and running a container, see Microsoft SQL Server.
Before continuing, deploy-pre-prod uses the wait_for_sql_server_to_be_live common function to ensure that SQL Server is running.
The change_sa_password client function is used to change the sa user's password. For more information, see change_sa_password.
The initialize_istore_database function generates the ISTORE scripts and creates the database roles, logins, and users. The function includes a number of calls that complete the following actions:
- Generate the Information Store scripts. The run_i2_analyze_tool client function is used to run the generateInfoStoreToolScripts.sh tool.
- Generate the static Information Store database scripts. The run_i2_analyze_tool client function is used to run the generateStaticInfoStoreCreationScripts.sh tool.
- Create the Information Store database and schemas. The run_sql_server_command_as_sa client function is used to run the runDatabaseCreationScripts.sh tool.
- Create the database roles, logins, and users. The run_sql_server_command_as_sa client function runs the create_db_roles.sh script, the create_db_login_and_user client function creates the logins and users, and the run_sql_server_command_as_sa client function runs the grant_permissions_to_roles.sh script. For more information about the database users and their permissions, see Database users.
- Grant the dba user the required permissions in the msdb and master databases. These permissions are required for the Deletion by Rule feature of i2 Analyze. The run_sql_server_command_as_sa client function runs the configure_dba_roles_and_permissions.sh script.
- Make the etl user a member of the SQL Server sysadmin role so that this user can perform bulk inserts into the external staging tables. The run_sql_server_command_as_sa client function runs the add_etl_user_to_sys_admin_role.sh script.
- Run the static scripts that create the Information Store database objects. The run_sql_server_command_as_dba client function is used to run the runStaticScripts.sh tool.
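For illustration only, the body of initialize_istore_database for SQL Server broadly follows this sequence; the script paths and the arguments to create_db_login_and_user are assumptions:
# Illustrative ordering only; paths and arguments are assumptions.
run_i2_analyze_tool "/opt/i2-tools/scripts/generateInfoStoreToolScripts.sh"
run_i2_analyze_tool "/opt/i2-tools/scripts/generateStaticInfoStoreCreationScripts.sh"
run_sql_server_command_as_sa "/opt/databaseScripts/generated/runDatabaseCreationScripts.sh"
run_sql_server_command_as_sa "/opt/db-scripts/create_db_roles.sh"
create_db_login_and_user "dba" "DBA_Role"   # repeated for each database user
run_sql_server_command_as_sa "/opt/db-scripts/grant_permissions_to_roles.sh"
run_sql_server_command_as_sa "/opt/db-scripts/configure_dba_roles_and_permissions.sh"
run_sql_server_command_as_sa "/opt/db-scripts/add_etl_user_to_sys_admin_role.sh"
run_sql_server_command_as_dba "/opt/databaseScripts/generated/runStaticScripts.sh"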
PostgreSQL
The run_postgres_server server function creates the secure Postgres container for the deployment. For more information about building the Postgres image and running a container, see PostgreSQL Server.
The wait_for_postgres_server_to_be_live common function is used to ensure that Postgres is running.
The change_postgres_password client function is used to change the postgres user's password. For more information, see Postgres.
You can implement an initialize_istore_database function that generates the ISTORE scripts and creates the database roles, logins, and users. The function must include a number of calls that complete the following actions:
- Generate the Information Store scripts. The run_i2_analyze_tool client function is used to run the generateInfoStoreToolScripts.sh tool.
- Generate the static Information Store database scripts. The run_i2_analyze_tool client function is used to run the generateStaticInfoStoreCreationScripts.sh tool.
- Create the Information Store database and schemas. The run_postgres_server_command_as_postgres client function is used to run the runDatabaseCreationScripts.sh tool.
- Create the database roles. The run_postgres_server_command_as_postgres client function runs the create_db_roles.sh script.
- Create the pg_cron extension. The run_postgres_server_command_as_postgres client function is used to run the create_pg_cron_extension.sh tool.
- Create the database DBA login and user. The create_db_login_and_user client function creates the DBA login and user, and the run_postgres_server_command_as_dba client function runs the grant_permissions_to_roles.sh script. For more information about the database users and their permissions, see Database users.
- Create the database logins and users. The create_db_login_and_user client function creates the logins and roles. For more information about the database users and their permissions, see Database users.
- Run the static scripts that create the Information Store database objects. The run_postgres_server_command_as_dba client function is used to run the runStaticScripts.sh tool.
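For illustration only, a PostgreSQL initialize_istore_database function broadly follows this sequence; the script paths and the arguments to create_db_login_and_user are assumptions:
# Illustrative ordering only; paths and arguments are assumptions.
run_i2_analyze_tool "/opt/i2-tools/scripts/generateInfoStoreToolScripts.sh"
run_i2_analyze_tool "/opt/i2-tools/scripts/generateStaticInfoStoreCreationScripts.sh"
run_postgres_server_command_as_postgres "/opt/databaseScripts/generated/runDatabaseCreationScripts.sh"
run_postgres_server_command_as_postgres "/opt/db-scripts/create_db_roles.sh"
run_postgres_server_command_as_postgres "/opt/db-scripts/create_pg_cron_extension.sh"
create_db_login_and_user "dba" "DBA_Role"   # DBA login and user first
run_postgres_server_command_as_dba "/opt/db-scripts/grant_permissions_to_roles.sh"
create_db_login_and_user "i2analyze" "i2analyze_role"   # repeated for each remaining user
run_postgres_server_command_as_dba "/opt/databaseScripts/generated/runStaticScripts.sh"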
Configuring Solr and ZooKeeper
The configuring Solr and ZooKeeper section creates the Solr configuration, configures the Solr cluster, and creates the Solr collections.
Before continuing, deploy-pre-prod uses the wait_for_solr_to_be_live common function to ensure that Solr is running.
The configure_solr_collections function generates the Solr configuration and uploads it to ZooKeeper.
The generateSolrSchemas.sh i2-tool creates the solr directory in examples/pre-prod/configuration/solr/generated_config. This directory contains the managed-schema, the Solr synonyms file, and the Solr config files for each index.
The run_solr_client_command client function is used to upload the managed-schema, solr.xml, and synonyms file for each collection to ZooKeeper. For example:
run_solr_client_command solr zk upconfig -v -z "${ZK_HOST}" -n daod_index -d /conf/solr_config/daod_index
The create_solr_cluster_policy function uses the run_solr_client_command client function to set a cluster policy. The function uses Solr's built-in replica placement plugin with the default configuration, in which each host has 1 replica of each shard. For example:
run_solr_client_command bash -c "curl -u \"\${SOLR_ADMIN_DIGEST_USERNAME}:\${SOLR_ADMIN_DIGEST_PASSWORD}\" --cacert ${CONTAINER_CERTS_DIR}/CA.cer -X POST -H 'Content-Type: application/json' -d '{\"add\":{ \"name\": \".placement-plugin\", \"class\": \"org.apache.solr.cluster.placement.plugins.AffinityPlacementFactory\"}}' \"${SOLR1_BASE_URL}/api/cluster/plugin\""
For more information about Solr's replica placement plugins, see Replica Placement Plugins.
The create_solr_collections function creates the Solr collections. The run_solr_client_command client function is used to create each Solr collection. For example:
run_solr_client_command bash -c "curl -u \"\${SOLR_ADMIN_DIGEST_USERNAME}:\${SOLR_ADMIN_DIGEST_PASSWORD}\" --cacert /run/secrets/CA.cer \"${SOLR1_BASE_URL}/solr/admin/collections?action=CREATE&name=main_index&collection.configName=main_index&numShards=1&rule=replica:<2,host:*\""
For more information about the Solr collection API call, see CREATE: Create a Collection.
Configuring the Information Store database
The configuring the Information Store database section creates objects within the database.
The configure_istore_database function generates and runs the dynamic database scripts that create the schema-specific database objects within the database.
- The run_i2_analyze_tool client function is used to run the generateDynamicInfoStoreCreationScripts.sh tool.
- The run_sql_server_command_as_dba client function is used to run the runDynamicScripts.sh tool.
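For illustration only, those two calls might look like the following sketch; the script paths are assumptions:
# Illustrative only; paths are assumptions.
run_i2_analyze_tool "/opt/i2-tools/scripts/generateDynamicInfoStoreCreationScripts.sh"
run_sql_server_command_as_dba "/opt/databaseScripts/generated/runDynamicScripts.sh"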
Configuring the Example Connector
The configuring the Example Connector section runs the example connector that is used by the i2 Analyze application.
The configure_example_connector function runs the example connector and waits for it to be live.
- The run_example_connector server function runs the example connector application.
- The wait_for_connector_to_be_live client function checks that the connector is live before allowing the script to proceed.
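For illustration only, the function might be implemented like the following sketch; the connector name passed to the wait function is an assumption:
# Illustrative sketch; the connector name argument is an assumption.
configure_example_connector() {
  run_example_connector
  wait_for_connector_to_be_live "example-connector"
}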
Configuring i2 Analyze
The configuring i2 Analyze section runs the Liberty containers that run the i2 Analyze application.
The build_liberty_configured_image_for_pre_prod server function builds the configured Liberty image. For more information, see Building a configured Liberty image.
The deploy_liberty function runs 2 Liberty containers and the load balancer.
The run_liberty server function runs a Liberty container from the configured image. For more information, see Running a Liberty container.
The run_load_balancer function in server_functions.sh runs HAProxy as a load balancer in a Docker container. The load balancer configuration is in the haproxy.cfg file, and its variables are passed as environment variables to the Docker container. The load balancer routes application requests to both running Liberty servers. The configuration is simplified for example purposes and must not be used in production.
For more information about configuring a load balancer with i2 Analyze, see Load balancer.
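For illustration only, running HAProxy in this way might resemble the following sketch; the container name, network, published port, environment variable names, and image tag are assumptions (the official haproxy image reads its configuration from /usr/local/etc/haproxy/haproxy.cfg):
# Illustrative only; names, network, port, and environment variables are assumptions.
docker run -d --name load-balancer \
  --network "${DOMAIN_NAME}" \
  -p 9046:9046 \
  -v "${LOCAL_LB_CONFIG_DIR}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
  -e LIBERTY1_LB_STANZA="liberty1:9443" \
  -e LIBERTY2_LB_STANZA="liberty2:9443" \
  haproxy:2.8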
Before continuing, deploy-pre-prod uses the wait_for_i2_analyze_service_to_be_live common function to ensure that Liberty is running.
The update_match_rules function updates the system match rules.
- The run_i2_analyze_tool client function is used to run the runIndexCommand.sh tool. The tool is run twice: once to update the match rules file, and once to switch the match indexes. For more information, see Manage Solr indexes tool.
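For illustration only, the two invocations might look like the following sketch; the tool path and the command arguments are assumptions:
# Illustrative only; the path and arguments are assumptions.
run_i2_analyze_tool "/opt/i2-tools/scripts/runIndexCommand.sh" update_match_rules
run_i2_analyze_tool "/opt/i2-tools/scripts/runIndexCommand.sh" switch_standby_match_index_to_live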
Running Prometheus and Grafana
The running Prometheus and Grafana section runs the Prometheus and Grafana containers.
The configure_prometheus_for_pre_prod common function creates the Prometheus configuration.
The run_prometheus server function creates the Prometheus container. For more information about running a Prometheus container, see Prometheus.
Before continuing, deploy-pre-prod uses the wait_for_prometheus_server_to_be_live common function to ensure that Prometheus is running.
The run_grafana server function creates the Grafana container. For more information about running a Grafana container, see Grafana.
Before continuing, deploy-pre-prod uses the wait_for_grafana_server_to_be_live common function to ensure that Grafana is running.
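For illustration only, these calls run as a straightforward sequence; showing them without arguments is an assumption:
# Illustrative ordering only; argument-free calls are an assumption.
configure_prometheus_for_pre_prod
run_prometheus
wait_for_prometheus_server_to_be_live
run_grafana
wait_for_grafana_server_to_be_live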