i2 Analyze Deployment Tooling

    Deploy and start i2 Analyze

    This topic describes how to deploy and start i2 Analyze in a containerized environment.

    For an example of the activities described, see the examples/pre-prod/deploy-pre-prod script.

    Running Solr and ZooKeeper

    The running Solr and ZooKeeper section runs the required containers and creates the Solr cluster and ZooKeeper ensemble.

    1. The deploy_zk_cluster function creates the secure ZooKeeper cluster for the deployment. The function includes a number of calls:

      1. The run_zk server function runs the ZooKeeper containers that make up the ZooKeeper ensemble. For more information about running a ZooKeeper container, see ZooKeeper. In deploy-pre-prod, 3 ZooKeeper containers are used.
    2. The configure_zk_for_solr_cluster function creates the ZooKeeper configuration for the secure Solr cluster for the deployment. The function includes a number of calls:

      1. The run_solr_client_command client function is used a number of times to complete the following actions:

        1. Create the znode for the cluster.
          i2 Analyze uses a ZooKeeper connection string with a chroot. To use a chroot connection string, a znode with that name must exist. For more information, see SolrCloud Mode.
        2. Set the urlScheme to be https.
        3. Configure the Solr authentication by uploading the security.json file to ZooKeeper.

          For more information about the function, see run_solr_client_command.

    3. The deploy_solr_cluster function creates the secure Solr cluster for the deployment. The function includes a number of calls:

      1. The run_solr server function runs the Solr containers for the Solr cluster. For more information about running a Solr container, see Solr.
        In deploy-pre-prod, 2 Solr containers are used.

        At this point, your ZooKeeper servers are running as an ensemble, and your Solr containers are running in SolrCloud Mode, managed by ZooKeeper.
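    As described above, i2 Analyze addresses the ensemble through a single ZooKeeper connection string with a chroot. The following bash sketch shows how such a string is assembled from the ensemble's host:port pairs; the helper name is hypothetical and is not part of the tooling:

```shell
# Hypothetical helper: join the ensemble's host:port pairs with commas and
# append the chroot znode, producing "host1:port,host2:port/chroot".
build_zk_connection_string() {
  local chroot="$1"
  shift
  local hosts="$*"          # remaining arguments, space-separated
  echo "${hosts// /,}/${chroot}"
}

# With the 3-server ensemble used in deploy-pre-prod (names illustrative):
build_zk_connection_string "i2analyze" "zk1:2181" "zk2:2181" "zk3:2181"
# → zk1:2181,zk2:2181,zk3:2181/i2analyze
```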

    Initializing the Information Store database

    The initializing the Information Store database section creates a persistent database backup volume, runs the database container, and configures the database management system. You can deploy the Information Store on either Microsoft SQL Server or PostgreSQL.

    • SQL Server
    • PostgreSQL

    SQL Server

    1. The deploy_secure_sql_server function creates a persistent database backup volume and runs the database container. The function includes a number of calls that complete the following actions:

      1. The database backup volume is created first with the Docker command:

        docker volume create "${SQL_SERVER_BACKUP_VOLUME_NAME}"
        

        The volume is not deleted automatically when the SQL Server container is removed, which preserves any backups created while the container was running. For more information about Docker storage, see Docker Storage.

      2. The run_sql_server server function creates the secure SQL Server container for the deployment. For more information about building the SQL Server image and running a container, see Microsoft SQL Server.

      3. Before continuing, deploy-pre-prod uses the wait_for_sql_server_to_be_live common function to ensure that SQL Server is running.

      4. The change_sa_password client function is used to change the sa user's password. For more information, see change_sa_password.

    2. The initialize_istore_database function generates the ISTORE scripts and creates the database roles, logins, and users. The function includes a number of calls that complete the following actions:

      1. Generate the Information Store scripts.

        • The run_i2_analyze_tool client function is used to run the generateInfoStoreToolScripts.sh tool.
          • run_i2_analyze_tool
          • Generate Information Store scripts
      2. Generate the static Information Store database scripts.

        • The run_i2_analyze_tool client function is used to run the generateStaticInfoStoreCreationScripts.sh tool.
          • run_i2_analyze_tool
          • Generate static database scripts tool
      3. Create the Information Store database and schemas.

        • The run_sql_server_command_as_sa client function is used to run the runDatabaseCreationScripts.sh tool.
          • run_sql_server_command_as_sa
          • Run database creation scripts tool
      4. Create the database roles, logins, and users.

        • The run_sql_server_command_as_sa client function runs the create_db_roles.sh script.
        • The create_db_login_and_user client function creates the logins and users.
        • The run_sql_server_command_as_sa client function runs the grant_permissions_to_roles.sh script. For more information about the database users and their permissions, see Database users.
      5. Grant the dba user the required permissions in the msdb and master databases. This grants the correct permissions for the Deletion by Rule feature of i2 Analyze.

        • The run_sql_server_command_as_sa client function runs the configure_dba_roles_and_permissions.sh script.
      6. Make the etl user a member of the SQL Server sysadmin role to allow this user to perform bulk inserts into the external staging tables.

        • The run_sql_server_command_as_sa client function runs the add_etl_user_to_sys_admin_role.sh script.
      7. Run the static scripts that create the Information Store database objects.

        • The run_sql_server_command_as_dba client function is used to run the runStaticScripts.sh tool.
          • run_sql_server_command_as_dba
          • Run static database scripts tool
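    Several of the steps above gate on wait_for_*_to_be_live common functions, such as wait_for_sql_server_to_be_live. The retry pattern behind them can be sketched as follows; this is an illustrative bash sketch with a hypothetical function name, not the shipped implementation:

```shell
# Illustrative sketch of a "wait until live" helper: retry a probe command
# until it succeeds or the attempts run out. The function name and argument
# shape are assumptions for this example only.
wait_for_service() {
  local max_tries="$1"
  shift
  local try
  for ((try = 1; try <= max_tries; try++)); do
    if "$@" > /dev/null 2>&1; then
      return 0   # probe succeeded: the service is live
    fi
    sleep 1      # back off before the next attempt
  done
  return 1       # the service never came up
}

# Usage: pass the number of attempts followed by any probe command, for
# example a sqlcmd "SELECT 1" run inside the SQL Server container.
```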

    PostgreSQL

    1. The run_postgres_server server function creates the secure Postgres container for the deployment. For more information about building the Postgres image and running a container, see PostgreSQL Server.

    2. The wait_for_postgres_server_to_be_live common function is used to ensure that Postgres is running.

    3. The change_postgres_password client function is used to change the postgres user's password. For more information, see Postgres.

    4. You can implement an initialize_istore_database function that generates the ISTORE scripts and creates the database roles, logins, and users. The function must include a number of calls that complete the following actions:

      1. Generate the Information Store scripts.

        • The run_i2_analyze_tool client function is used to run the generateInfoStoreToolScripts.sh tool.
          • run_i2_analyze_tool
          • Generate Information Store scripts
      2. Generate the static Information Store database scripts.

        • The run_i2_analyze_tool client function is used to run the generateStaticInfoStoreCreationScripts.sh tool.
          • run_i2_analyze_tool
          • Generate static database scripts tool
      3. Create the Information Store database and schemas.

        • The run_postgres_server_command_as_postgres client function is used to run the runDatabaseCreationScripts.sh tool.
          • run_postgres_server_command_as_postgres
          • Run database creation scripts tool
      4. Create the database roles.

        • The run_postgres_server_command_as_postgres client function runs the create_db_roles.sh script.
      5. Create the PG Cron extension.

        • The run_postgres_server_command_as_postgres client function is used to run the create_pg_cron_extension.sh tool.
          • run_postgres_server_command_as_postgres
      6. Create the database DBA login and user.

        • The create_db_login_and_user client function creates the DBA login and user.
        • The run_postgres_server_command_as_dba client function runs the grant_permissions_to_roles.sh script. For more information about the database users and their permissions, see Database users.
      7. Create the database logins and users.

        • The create_db_login_and_user client function creates the logins and roles. For more information about the database users and their permissions, see Database users.
      8. Run the static scripts that create the Information Store database objects.

        • The run_postgres_server_command_as_dba client function is used to run the runStaticScripts.sh tool.
          • run_postgres_server_command_as_dba
          • Run static database scripts tool
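    The ordering of the calls above can be sketched as a bash skeleton. The echo stubs below stand in for the real client functions, and the script arguments are taken from the steps in this section; the stub bodies and the "dba"/"etl" labels are illustrative assumptions:

```shell
# Stub stand-ins for the real client functions, so the ordering is visible.
run_i2_analyze_tool()                     { echo "tool: $1"; }
run_postgres_server_command_as_postgres() { echo "as postgres: $1"; }
run_postgres_server_command_as_dba()      { echo "as dba: $1"; }
create_db_login_and_user()                { echo "login/user: $1"; }

# Skeleton of a custom initialize_istore_database for PostgreSQL, following
# the order of the numbered steps in this section.
initialize_istore_database() {
  run_i2_analyze_tool "generateInfoStoreToolScripts.sh"
  run_i2_analyze_tool "generateStaticInfoStoreCreationScripts.sh"
  run_postgres_server_command_as_postgres "runDatabaseCreationScripts.sh"
  run_postgres_server_command_as_postgres "create_db_roles.sh"
  run_postgres_server_command_as_postgres "create_pg_cron_extension.sh"
  create_db_login_and_user "dba"
  run_postgres_server_command_as_dba "grant_permissions_to_roles.sh"
  create_db_login_and_user "etl"
  run_postgres_server_command_as_dba "runStaticScripts.sh"
}
```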

    Configuring Solr and ZooKeeper

    The configuring Solr and ZooKeeper section creates the Solr configuration, configures the Solr cluster, and creates the Solr collections.

    1. Before continuing, deploy-pre-prod uses the wait_for_solr_to_be_live common function to ensure that Solr is running.

    2. The configure_solr_collections function generates and uploads the Solr collections to ZooKeeper.

      • The generateSolrSchemas.sh i2-tool creates the solr directory in examples/pre-prod/configuration/solr/generated_config. This directory contains the managed-schema, the Solr synonyms file, and the Solr config files for each index.

      • The run_solr_client_command client function is used to upload the managed-schema, solr.xml, and synonyms file for each collection to ZooKeeper.
        For example:

        run_solr_client_command solr zk upconfig -v -z "${ZK_HOST}" -n daod_index -d /conf/solr_config/daod_index
        
    3. The create_solr_cluster_policy function uses the run_solr_client_command client function to set a cluster policy.

      • The create_solr_cluster_policy function uses Solr's built-in replica placement plugin with the default configuration, which ensures that each host has one replica of each shard. For example:

        run_solr_client_command bash -c "curl -u \"\${SOLR_ADMIN_DIGEST_USERNAME}:\${SOLR_ADMIN_DIGEST_PASSWORD}\" --cacert ${CONTAINER_CERTS_DIR}/CA.cer -X POST -H 'Content-Type: application/json' -d '{\"add\":{ \"name\": \".placement-plugin\", \"class\": \"org.apache.solr.cluster.placement.plugins.AffinityPlacementFactory\"}}' \"${SOLR1_BASE_URL}/api/cluster/plugin\""
        

        For more information about Solr's Replica Placement Plugin, see Replica Placement Plugins.

    4. The create_solr_collections function creates the Solr Collections.

      • The run_solr_client_command client function is used to create each Solr collection. For example:

        run_solr_client_command bash -c "curl -u \"\${SOLR_ADMIN_DIGEST_USERNAME}:\${SOLR_ADMIN_DIGEST_PASSWORD}\" \
          --cacert /run/secrets/CA.cer \
          \"${SOLR1_BASE_URL}/solr/admin/collections?action=CREATE&name=main_index&collection.configName=main_index&numShards=1&rule=replica:<2,host:*\""
        

        For more information about the Solr collection API call, see CREATE: Create a Collection.
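    The CREATE call shown above is an HTTP request to the Solr Collections API. The following hypothetical helper (not part of the tooling) assembles the same URL, making visible which query parameters vary per collection:

```shell
# Hypothetical helper that builds a Solr Collections API CREATE URL.
# "action", "name", "collection.configName", and "numShards" are standard
# Collections API parameters; the helper name is an assumption.
build_create_collection_url() {
  local base="$1" name="$2" shards="$3"
  printf '%s/solr/admin/collections?action=CREATE&name=%s&collection.configName=%s&numShards=%s\n' \
    "$base" "$name" "$name" "$shards"
}

# Example with an illustrative base URL and the main_index collection:
build_create_collection_url "https://solr1:8983" "main_index" 1
```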

    Configuring the Information Store database

    The configuring the Information Store database section creates objects within the database.

    1. The configure_istore_database function generates and runs the dynamic database scripts that create the schema specific database objects within the database.
      • The run_i2_analyze_tool client function is used to run the generateDynamicInfoStoreCreationScripts.sh tool.
        • run_i2_analyze_tool
        • Generate dynamic Information Store creation scripts tool
      • The run_sql_server_command_as_dba client function is used to run the runDynamicScripts.sh tool.
        • run_sql_server_command_as_dba
        • Run dynamic Information Store creation scripts tool
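    The two calls above can be sketched as a bash skeleton. The echo stubs stand in for the real client functions and are illustrative only:

```shell
# Stub stand-ins for the real client functions.
run_i2_analyze_tool()           { echo "tool: $1"; }
run_sql_server_command_as_dba() { echo "as dba: $1"; }

# Skeleton of configure_istore_database: generate the dynamic database
# scripts, then run them against the Information Store.
configure_istore_database() {
  run_i2_analyze_tool "generateDynamicInfoStoreCreationScripts.sh"
  run_sql_server_command_as_dba "runDynamicScripts.sh"
}
```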

    Configuring the Example Connector

    The configuring the example connector section runs the example connector used by the i2 Analyze application.

    1. The configure_example_connector function runs and waits for the example connector to be live.
      • The run_example_connector server function runs the example connector application.
      • The wait_for_connector_to_be_live client function checks the connector is live before allowing the script to proceed.

    Configuring i2 Analyze

    The configuring i2 Analyze section runs the Liberty containers that run the i2 Analyze application.

    1. The build_liberty_configured_image_for_pre_prod server function builds the configured Liberty image. For more information, see Building a configured Liberty image.

    2. The deploy_liberty function runs two Liberty containers and the load balancer.

      • The run_liberty server function runs a Liberty container from the configured image.

        For more information, see Running a Liberty container.

      • The run_load_balancer function in server_functions.sh runs HAProxy as a load balancer in a Docker container.

        The load balancer configuration is in the haproxy.cfg file, and its variables are passed as environment variables to the Docker container.

        The load balancer routes application requests to both running Liberty servers. The configuration used here is simplified for example purposes and must not be used in production.

        For more information about configuring a load balancer with i2 Analyze, see Load balancer.

    3. Before continuing, deploy-pre-prod uses the wait_for_i2_analyze_service_to_be_live common function to ensure that Liberty is running.

    4. The update_match_rules function updates the system match rules.

      • The run_i2_analyze_tool client function is used to run the runIndexCommand.sh tool. The tool is run twice, once to update the match rules file and once to switch the match indexes.
        For more information, see Manage Solr indexes tool.
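    The two runIndexCommand.sh invocations can be sketched as follows. The echo stub stands in for run_i2_analyze_tool, and the argument strings are descriptive placeholders, not the tool's actual command line:

```shell
# Stub stand-in for the real client function, so the two runs are visible.
run_i2_analyze_tool() { echo "run: $*"; }

# Sketch of update_match_rules: runIndexCommand.sh is run twice, once to
# update the match rules file and once to switch the match indexes.
# The argument strings below are illustrative placeholders.
update_match_rules() {
  run_i2_analyze_tool "runIndexCommand.sh" "update match rules"
  run_i2_analyze_tool "runIndexCommand.sh" "switch match index"
}
```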

    Running Prometheus and Grafana

    The running Prometheus and Grafana section runs the Prometheus and Grafana containers.

    1. The configure_prometheus_for_pre_prod common function creates the Prometheus configuration.

    2. The run_prometheus server function creates the Prometheus container. For more information about running a Prometheus container, see Prometheus.

    3. Before continuing, deploy-pre-prod uses the wait_for_prometheus_server_to_be_live common function to ensure that Prometheus is running.

    4. The run_grafana server function creates the Grafana container. For more information about running a Grafana container, see Grafana.

    5. Before continuing, deploy-pre-prod uses the wait_for_grafana_server_to_be_live common function to ensure that Grafana is running.

    © N. Harris Computer Corporation