Deploy and start i2 Analyze
This topic describes how to deploy and start i2 Analyze in a containerized environment.
For an example of the activities described, see the `examples/pre-prod/deploy-pre-prod` script.
Running Solr and ZooKeeper
The running Solr and ZooKeeper section runs the required containers and creates the Solr cluster and ZooKeeper ensemble.
The `deploy_zk_cluster` function creates the secure ZooKeeper cluster for the deployment. The function includes a number of calls:
- The `run_zk` server function runs the ZooKeeper containers that make up the ZooKeeper ensemble. For more information about running a ZooKeeper container, see ZooKeeper. In `deploy-pre-prod`, 3 ZooKeeper containers are used.
The `configure_zk_for_solr_cluster` function creates the ZooKeeper configuration for the secure Solr cluster for the deployment. The function includes a number of calls:
- The `run_solr_client_command` client function is used a number of times to complete the following actions (a sketch of these calls follows the list):
  - Create the znode for the cluster. i2 Analyze uses a ZooKeeper connection string with a chroot. To use a chroot connection string, a znode with that name must exist. For more information, see SolrCloud Mode.
  - Set the `urlScheme` to be `https`.
  - Configure the Solr authentication by uploading the `security.json` file to ZooKeeper.

  For more information about the function, see run_solr_client_command.
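A rough sketch of what these `run_solr_client_command` calls can look like is shown below, using Solr's `bin/solr zk` subcommands. The znode name, file paths, and variable names are assumptions for illustration; the exact calls live in the deployment's server and client function scripts.

```bash
# Hypothetical sketch only - znode name, paths, and variables are assumptions.
# Create the chroot znode that the i2 Analyze ZooKeeper connection string points at.
run_solr_client_command solr zk mkroot /i2analyze -z "${ZK_MEMBERS}"

# Upload the security.json that configures Solr authentication and authorization.
run_solr_client_command solr zk cp /security/security.json zk:/security.json -z "${ZK_HOST}"

# The urlScheme cluster property is also set to https, for example with the
# ZooKeeper CLI's clusterprop command (zkcli.sh -cmd clusterprop -name urlScheme -val https).
```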
The `deploy_solr_cluster` function creates the secure Solr cluster for the deployment. The function includes a number of calls:
- The `run_solr` server function runs the Solr containers for the Solr cluster. For more information about running a Solr container, see Solr. In `deploy-pre-prod`, 2 Solr containers are used.

At this point, your ZooKeepers are running in an ensemble, and your Solr containers are running in SolrCloud mode managed by ZooKeeper.
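In `deploy-pre-prod`, the containers are started by calling the server functions once per node. The following sketch shows the shape of that pattern; the `run_zk` and `run_solr` signatures used here are assumptions, not the real ones.

```bash
# Hypothetical sketch - the real run_zk and run_solr signatures may differ.
for zk_id in 1 2 3; do
  run_zk "${zk_id}"       # starts zk1, zk2, and zk3 to form the ensemble
done

for solr_id in 1 2; do
  run_solr "${solr_id}"   # starts solr1 and solr2 in SolrCloud mode
done
```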
Initializing the Information Store database
The initializing the Information Store database section creates a persistent database backup volume, runs the database container, and configures the database management system. You can deploy the Information Store on either Microsoft SQL Server or PostgreSQL.
SQL Server
The `deploy_secure_sql_server` function creates a persistent database backup volume and runs the database container. The function includes a number of calls that complete the following actions:

The database backup volume is created first with the Docker command:

`docker volume create "${SQL_SERVER_BACKUP_VOLUME_NAME}"`

The volume is not automatically deleted when the SQL Server container is removed. This helps maintain any backups created while running a SQL Server container. For more information about Docker storage, see Docker Storage.
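To illustrate why the volume outlives the container, the following sketch shows how such a backup volume can be mounted when a SQL Server container is run. The image tag, mount point, and environment variables are assumptions here, not the exact arguments that `run_sql_server` uses.

```bash
# Hypothetical sketch - not the exact docker run command used by run_sql_server.
docker run -d --name sqlserver \
  -e "ACCEPT_EULA=Y" \
  -e "MSSQL_SA_PASSWORD=${SA_PASSWORD}" \
  -v "${SQL_SERVER_BACKUP_VOLUME_NAME}:/var/opt/mssql/backup" \
  mcr.microsoft.com/mssql/server:2022-latest

# Removing the container (docker rm -f sqlserver) leaves the named volume,
# and any backups written to /var/opt/mssql/backup, in place.
```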
The `run_sql_server` server function creates the secure SQL Server container for the deployment. For more information about building the SQL Server image and running a container, see Microsoft SQL Server.

Before continuing, `deploy-pre-prod` uses the `wait_for_sql_server_to_be_live` common function to ensure that SQL Server is running.

The `change_sa_password` client function is used to change the `sa` user's password. For more information, see change_sa_password.
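The check that `wait_for_sql_server_to_be_live` performs can be pictured as a polling loop that retries a trivial query until it succeeds. This is a minimal sketch, assuming the `sqlcmd` client from mssql-tools and placeholder connection variables; the real function may differ.

```bash
# Hypothetical sketch of a SQL Server liveness check.
MAX_TRIES=30
for i in $(seq 1 "${MAX_TRIES}"); do
  if /opt/mssql-tools/bin/sqlcmd -S "${SQL_SERVER_FQDN}" -U sa -P "${SA_PASSWORD}" -Q "SELECT 1" > /dev/null 2>&1; then
    echo "SQL Server is live"
    break
  fi
  echo "Waiting for SQL Server (${i}/${MAX_TRIES})..."
  sleep 5
done
```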
The `initialize_istore_database` function generates the ISTORE scripts and creates the database roles, logins, and users. The function includes a number of calls that complete the following actions (a sketch of the call sequence follows the list):
- Generate the Information Store scripts.
  - The `run_i2_analyze_tool` client function is used to run the `generateInfoStoreToolScripts.sh` tool.
- Generate the static Information Store database scripts.
  - The `run_i2_analyze_tool` client function is used to run the `generateStaticInfoStoreCreationScripts.sh` tool.
- Create the Information Store database and schemas.
  - The `run_sql_server_command_as_sa` client function is used to run the `runDatabaseCreationScripts.sh` tool.
- Create the database roles, logins, and users.
  - The `run_sql_server_command_as_sa` client function runs the `create_db_roles.sh` script.
  - The `create_db_login_and_user` client function creates the logins and users.
  - The `run_sql_server_command_as_sa` client function runs the `grant_permissions_to_roles.sh` script. For more information about the database users and their permissions, see Database users.
- Grant the `dba` user the required permissions in the `msdb` and `master` databases. This grants the correct permissions for the Deletion by Rule feature of i2 Analyze.
  - The `run_sql_server_command_as_sa` client function runs the `configure_dba_roles_and_permissions.sh` script.
- Make the `etl` user a member of the SQL Server `sysadmin` group to allow this user to perform bulk inserts into the external staging tables.
  - The `run_sql_server_command_as_sa` client function runs the `add_etl_user_to_sys_admin_role.sh` script.
- Run the static scripts that create the Information Store database objects.
  - The `run_sql_server_command_as_dba` client function is used to run the `runStaticScripts.sh` tool.
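The following outline shows these calls in order as a single function. Only the ordering and the tool and script names come from the steps above; the paths and the arguments to each helper function are assumptions for illustration.

```bash
# Hypothetical outline of initialize_istore_database for SQL Server.
function initialize_istore_database() {
  # Generate the Information Store scripts and the static creation scripts.
  run_i2_analyze_tool "/opt/i2-tools/scripts/generateInfoStoreToolScripts.sh"
  run_i2_analyze_tool "/opt/i2-tools/scripts/generateStaticInfoStoreCreationScripts.sh"

  # Create the database and schemas, then the roles, logins, and users.
  run_sql_server_command_as_sa "/opt/i2-tools/scripts/runDatabaseCreationScripts.sh"
  run_sql_server_command_as_sa "/opt/db-scripts/create_db_roles.sh"
  create_db_login_and_user "dba" "DBA_Role"   # illustrative login/role pairing
  run_sql_server_command_as_sa "/opt/db-scripts/grant_permissions_to_roles.sh"

  # Extra permissions for the dba and etl users.
  run_sql_server_command_as_sa "/opt/db-scripts/configure_dba_roles_and_permissions.sh"
  run_sql_server_command_as_sa "/opt/db-scripts/add_etl_user_to_sys_admin_role.sh"

  # Create the static Information Store database objects as the dba user.
  run_sql_server_command_as_dba "/opt/i2-tools/scripts/runStaticScripts.sh"
}
```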
PostgreSQL
The `run_postgres_server` server function creates the secure Postgres container for the deployment. For more information about building the Postgres image and running a container, see PostgreSQL Server.

The `wait_for_postgres_server_to_be_live` common function is used to ensure that Postgres is running.

The `change_postgres_password` client function is used to change the `postgres` user's password. For more information, see Postgres.
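A common way to implement a check like `wait_for_postgres_server_to_be_live` is to poll with the standard `pg_isready` client until the server reports that it accepts connections. This is a minimal sketch; the host name and port are assumptions.

```bash
# Hypothetical sketch of a PostgreSQL liveness check using pg_isready.
until pg_isready -h "${POSTGRES_SERVER_FQDN}" -p 5432 -U postgres > /dev/null 2>&1; do
  echo "Waiting for PostgreSQL to accept connections..."
  sleep 5
done
echo "PostgreSQL is live"
```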
You can implement an `initialize_istore_database` function that generates the ISTORE scripts and creates the database roles, logins, and users. The function must include a number of calls that complete the following actions:
- Generate the Information Store scripts.
  - The `run_i2_analyze_tool` client function is used to run the `generateInfoStoreToolScripts.sh` tool.
- Generate the static Information Store database scripts.
  - The `run_i2_analyze_tool` client function is used to run the `generateStaticInfoStoreCreationScripts.sh` tool.
- Create the Information Store database and schemas.
  - The `run_postgres_server_command_as_postgres` client function is used to run the `runDatabaseCreationScripts.sh` tool.
- Create the database roles.
  - The `run_postgres_server_command_as_postgres` client function runs the `create_db_roles.sh` script.
- Create the PG Cron extension (a sketch of the underlying SQL follows this list).
  - The `run_postgres_server_command_as_postgres` client function is used to run the `create_pg_cron_extension.sh` tool.
- Create the database DBA login and user.
  - The `create_db_login_and_user` client function creates the DBA login and user.
  - The `run_postgres_server_command_as_dba` client function runs the `grant_permissions_to_roles.sh` script. For more information about the database users and their permissions, see Database users.
- Create the database logins and users.
  - The `create_db_login_and_user` client function creates the logins and roles. For more information about the database users and their permissions, see Database users.
- Run the static scripts that create the Information Store database objects.
  - The `run_postgres_server_command_as_dba` client function is used to run the `runStaticScripts.sh` tool.
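As a rough illustration of what a `create_pg_cron_extension.sh` script might do, the following sketch creates the extension with `psql`. The host variable and database name are assumptions, and pg_cron must already be installed in the image and preloaded through `shared_preload_libraries` for the statement to succeed.

```bash
# Hypothetical sketch - not the script shipped with the deployment.
# Assumes postgresql.conf contains: shared_preload_libraries = 'pg_cron'
psql -h "${POSTGRES_SERVER_FQDN}" -U postgres -d "ISTORE" \
  -c "CREATE EXTENSION IF NOT EXISTS pg_cron;"
```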
Configuring Solr and ZooKeeper
The configuring Solr and ZooKeeper section creates the Solr configuration, and then configures the Solr cluster and creates the Solr collections.
Before continuing, `deploy-pre-prod` uses the `wait_for_solr_to_be_live` common function to ensure that Solr is running.

The `configure_solr_collections` function generates the Solr collection configuration and uploads it to ZooKeeper.

The `generateSolrSchemas.sh` i2 tool creates the generated Solr configuration in `examples/pre-prod/configuration/solr/generated_config`. This directory contains the managed-schema, the Solr synonyms file, and the Solr config files for each index.

The `run_solr_client_command` client function is used to upload the managed-schema, `solr.xml`, and synonyms file for each collection to ZooKeeper. For example:

`run_solr_client_command solr zk upconfig -v -z "${ZK_HOST}" -n daod_index -d /conf/solr_config/daod_index`
The `create_solr_cluster_policy` function uses the `run_solr_client_command` client function to set a cluster policy.

The `create_solr_cluster_policy` function uses Solr's built-in replica placement plugin with the default configuration, which defines that each host has 1 replica of each shard. For example:

`run_solr_client_command bash -c "curl -u \"\${SOLR_ADMIN_DIGEST_USERNAME}:\${SOLR_ADMIN_DIGEST_PASSWORD}\" --cacert ${CONTAINER_CERTS_DIR}/CA.cer -X POST -H 'Content-Type: application/json' -d '{\"add\":{ \"name\": \".placement-plugin\", \"class\": \"org.apache.solr.cluster.placement.plugins.AffinityPlacementFactory\"}}' \"${SOLR1_BASE_URL}/api/cluster/plugin\""`

For more information about Solr's replica placement plugins, see Replica Placement Plugins.
The `create_solr_collections` function creates the Solr collections.

The `run_solr_client_command` client function is used to create each Solr collection. For example:

`run_solr_client_command bash -c "curl -u \"\${SOLR_ADMIN_DIGEST_USERNAME}:\${SOLR_ADMIN_DIGEST_PASSWORD}\" --cacert /run/secrets/CA.cer \"${SOLR1_BASE_URL}/solr/admin/collections?action=CREATE&name=main_index&collection.configName=main_index&numShards=1&rule=replica:<2,host:*\""`

For more information about the Solr Collections API call, see CREATE: Create a Collection.
Configuring the Information Store database
The configuring the Information Store database section creates objects within the database.
The `configure_istore_database` function generates and runs the dynamic database scripts that create the schema-specific database objects within the database.
- The `run_i2_analyze_tool` client function is used to run the `generateDynamicInfoStoreCreationScripts.sh` tool.
- The `run_sql_server_command_as_dba` client function is used to run the `runDynamicScripts.sh` tool.
Configuring the Example Connector
The configuring example connector section runs the example connector used by the i2 Analyze application.
The `configure_example_connector` function runs and waits for the example connector to be live.
- The `run_example_connector` server function runs the example connector application.
- The `wait_for_connector_to_be_live` client function checks that the connector is live before allowing the script to proceed (a sketch of such a check follows this list).
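A check in the spirit of `wait_for_connector_to_be_live` can be sketched as a polling loop against the connector over HTTPS. The URL, port, certificate path, and retry budget below are assumptions for illustration only.

```bash
# Hypothetical sketch of a connector liveness check.
MAX_TRIES=30
for i in $(seq 1 "${MAX_TRIES}"); do
  if curl -s --fail --cacert "${LOCAL_CA_CERT}" "https://exampleconnector:3700/config" > /dev/null; then
    echo "Example connector is live"
    break
  fi
  echo "Waiting for the example connector (${i}/${MAX_TRIES})..."
  sleep 5
done
```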
Configuring i2 Analyze
The configuring i2 Analyze section runs the Liberty containers that run the i2 Analyze application.
The `build_liberty_configured_image_for_pre_prod` server function builds the configured Liberty image. For more information, see Building a configured Liberty image.

The `deploy_liberty` function runs 2 Liberty containers and the load balancer.

The `run_liberty` server function runs a Liberty container from the configured image. For more information, see Running a Liberty container.

The `run_load_balancer` function in `server_functions.sh` runs HAProxy as a load balancer in a Docker container. The load balancer configuration that we use is in the `haproxy.cfg` file, and the variables are passed as environment variables to the Docker container. The load balancer routes application requests to both of the running Liberty servers. The configuration used is simplified for example purposes and is not to be used in production.
For more information about configuring a load balancer with i2 Analyze, see Load balancer.
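To make the shape of `run_load_balancer` concrete, here is a minimal sketch of running HAProxy in a container with a mounted `haproxy.cfg` and variables passed as environment variables. The image tag, ports, variable names, and paths are assumptions; the real function and configuration are part of the deployment environment.

```bash
# Hypothetical sketch - not the exact command used by run_load_balancer.
docker run -d --name load_balancer \
  -v "${LOCAL_CONFIG_DIR}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
  -e "LIBERTY1_LB_STANZA=liberty1:9443" \
  -e "LIBERTY2_LB_STANZA=liberty2:9443" \
  -p 9046:9046 \
  haproxy:latest
```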
Before continuing, `deploy-pre-prod` uses the `wait_for_i2_analyze_service_to_be_live` common function to ensure that Liberty is running.

The `update_match_rules` function updates the system match rules.
- The `run_i2_analyze_tool` client function is used to run the `runAdminCommand.sh` tool. The tool is run twice: once to update the match rules file, and once to switch the match indexes. For more information, see Manage Solr indexes tool.
Running Prometheus and Grafana
The running Prometheus and Grafana section runs the Prometheus and Grafana containers.
The `configure_prometheus_for_pre_prod` common function creates the Prometheus configuration.

The `run_prometheus` server function creates the Prometheus container. For more information about running a Prometheus container, see Prometheus.

Before continuing, `deploy-pre-prod` uses the `wait_for_prometheus_server_to_be_live` common function to ensure that Prometheus is running.

The `run_grafana` server function creates the Grafana container. For more information about running a Grafana container, see Grafana.

Before continuing, `deploy-pre-prod` uses the `wait_for_grafana_server_to_be_live` common function to ensure that Grafana is running.
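The wait functions for Prometheus and Grafana can be pictured as simple polling loops against their health endpoints: Prometheus exposes `/-/ready` and Grafana exposes `/api/health`. The host names, ports, and plain HTTP used in this sketch are assumptions; the pre-prod deployment secures these endpoints.

```bash
# Hypothetical sketch of Prometheus and Grafana liveness checks.
until curl -s --fail "http://prometheus:9090/-/ready" > /dev/null; do
  echo "Waiting for Prometheus to be ready..."
  sleep 5
done

until curl -s --fail "http://grafana:3000/api/health" > /dev/null; do
  echo "Waiting for Grafana to be healthy..."
  sleep 5
done
```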