With Boomi’s low-code, high-productivity design, there is less of a clear-cut case for using CI / CD than with a legacy, high-code platform. Many of the benefits CI / CD brings are built into the core Boomi platform, including atomic deployment packages, environment-specific deployment permissions, and full traceability. Some of the common motivations for integrating Boomi with a CI / CD platform include:
- Coordinating deployment of Boomi assets with non-Boomi dependencies (e.g. cloud app config, DB schema changes, etc.).
- Separation of duties between developers and release/deployment operations teams.
- Enhanced auditability for compliance.
- Organisational policies around standardisation and consistency in general.
Boomi is platform agnostic regarding CI / CD, requiring only a platform that can execute shell commands. Platforms such as Azure DevOps, Jenkins and Atlassian Bamboo are often used to implement Continuous Integration for Boomi.
Boomi has a comprehensive platform API and provides a CLI over it, which can be invoked by a CI / CD platform. This CLI supports all the major tasks required in a CI / CD process, including:
- Creating packaged components for a process
- Downloading packaged components
- Deploying and undeploying a package to an environment
- Executing a deployed process
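The CLI wraps the Boomi Platform API, so a CI job can also call the API directly. The sketch below builds the request a job might POST to the API's PackagedComponent resource to package a process. The resource and field names follow Boomi's public Platform API documentation, but the account ID, token handling and component GUID are placeholders; verify the exact shape against your account before use.

```python
import json

# Placeholder account ID -- substitute your own Boomi account ID and
# authenticate with an API token or basic auth (not shown here).
ACCOUNT_ID = "yourAccount-123456"
BASE_URL = f"https://api.boomi.com/api/rest/v1/{ACCOUNT_ID}"

def packaged_component_request(component_id, notes=""):
    """Build the URL and JSON body for creating a packaged component.

    Mirrors the PackagedComponent resource of the Boomi Platform API;
    a CI job would POST this body to the returned URL.
    """
    url = f"{BASE_URL}/PackagedComponent"
    body = {"componentId": component_id, "notes": notes}
    return url, body

url, body = packaged_component_request(
    "456e7890-ab12-34cd-56ef-7890abcd1234",  # example component GUID
    notes="Packaged by CI build #42",
)
print(url)
print(json.dumps(body))
```

A deployment step would follow the same pattern against the DeployedPackage resource, passing the package ID returned by this call.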
DevOps Center Solution Accelerator. Boomi also makes available a packaged solution that uses the Boomi platform API and Boomi Flow to provide a portal with a wide range of capabilities, including workflow, advanced analytics and the ability to develop and manage automated testing of Boomi Integrate processes. Workflow capabilities support collaboration for release management tasks across development, QA, production operations and other teams. Advanced analytics eases change management with version history reports, visual component version comparison and audit log reporting of all operations activities. The test automation capabilities enable managing suites of tests where each test validates a specific use case, edge condition or error processing scenario. All capabilities are configurable and extensible using standard Boomi technologies.
Automated unit and integration testing can be a crucial component of ensuring quality in integration processes, both in the initial development effort and when maintaining the solution. Test processes in Boomi can be executed using the CLI as part of a CI process, allowing test scenarios to run automatically against each generated package.
Unit testing in Boomi. Unit tests are generally built using a standard “set expectations, execute logic under test, validate” approach, with external dependencies mocked to the greatest extent possible. Some core aspects of the platform, such as connector calls in maps, cannot realistically be mocked and, as such, processes that use them are not amenable to unit testing. Processes can be structured to avoid these platform capabilities, at the cost of slower and more convoluted designs.
Naming conventions are often the best way to associate unit tests with the processes to which they belong. The Continuous Integration process can enumerate unit test processes at execution time based on the packaged component under test and execute them using the Boomi CLI.
Having a universally applied set of development standards and practices is of vital importance in large integration portfolios, where you need not only to enforce a minimum quality level but also to be able to understand how an integration works on sight. Automated static analysis of integrations as part of a Continuous Integration process is an excellent way to deliver this, and this can be implemented in Boomi.
Static analysis of code is performed against the actual source files, and the equivalent in the Boomi platform is the XML definitions for Boomi artefacts. To perform static analysis against these definitions, they can be downloaded from the Boomi platform via the CLI.
Once the XML definitions are downloaded, some level of analysis needs to be executed over them. This can be done manually by inspecting the XML files but is generally performed using a specialised static analysis tool like SonarQube. A base set of static analysis rules for SonarQube is available from Boomi, and these can be modified and extended to enforce the correct standards for your organisation.
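To show the shape of such a rule outside SonarQube, here is a minimal check implemented directly in Python. The XML is a simplified stand-in for a downloaded Boomi component definition; real component XML is richer, so the element and attribute names here are assumptions for illustration only.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for a downloaded Boomi component definition.
COMPONENT_XML = """\
<Component name="Order Sync" type="process">
  <description></description>
</Component>
"""

def check_has_description(xml_text):
    """Example rule: every component must carry a non-empty description."""
    root = ET.fromstring(xml_text)
    desc = (root.findtext("description") or "").strip()
    if not desc:
        return [f"{root.get('name')}: missing component description"]
    return []

violations = check_has_description(COMPONENT_XML)
print(violations)
# → ['Order Sync: missing component description']
```

A SonarQube rule set would express the same kind of check declaratively; the value of either approach is that the rule runs on every build rather than relying on manual review.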
A generic static analysis CI workflow for Boomi can look like the following:
- Workflow is triggered (manually or on a schedule)
- Use Boomi CLI to download the packaged component from the Boomi Platform
- Run static analysis over downloaded component files
- Report analysis results
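The steps above can be sketched as a small pipeline. The download step is stubbed out here (a real job would shell out to the Boomi CLI or call the Platform API), and the rule is purely illustrative:

```python
def run_ci_workflow(package_id, download, rules):
    """Download a packaged component, run each static analysis rule
    over its files, and return the combined findings."""
    files = download(package_id)          # step 2: fetch component files
    findings = []
    for path, xml_text in files.items():  # step 3: analyse each file
        for rule in rules:
            findings.extend(rule(path, xml_text))
    return findings                       # step 4: report (here: return)

# Stub standing in for a Boomi CLI download of the packaged component.
def fake_download(package_id):
    return {
        "OrderSync.xml": "<Component name='Order Sync'/>",
        "notes.txt": "stray non-XML file",
    }

def xml_only_rule(path, xml_text):
    # Illustrative rule: flag files that are not component XML.
    return [] if path.endswith(".xml") else [f"{path}: unexpected file type"]

findings = run_ci_workflow("pkg-123", fake_download, [xml_only_rule])
print(findings)
# → ['notes.txt: unexpected file type']
```

Structuring the workflow around an injected `download` callable keeps the analysis logic testable without a live Boomi account.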
Some organisations prefer to store the component files in a Git repository so that they can integrate the repository with the static analysis platform and provide a richer experience; this can be achieved by committing the downloaded packages to the repository after downloading and then running analysis against the repository commit.
Boomi’s native capabilities to deploy to environments consistently using packaged components are such that an external CD process is often not required if Boomi is the only system under management. When Boomi integrations are being deployed as part of a wider system, then a dedicated CD process that can manage all deployments consistently can be beneficial.
By leveraging the platform CLI, Boomi packages can easily be deployed to environments. The infrastructure for a Boomi environment can also be scripted and deployed as part of a deployment process, including deploying an atom, attaching it to an environment and updating environment extensions.
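As a sketch, a CD job deploying via the Platform API would POST to the DeployedPackage resource. The resource and field names below follow Boomi's public API documentation, but the IDs are placeholders and the exact request shape should be verified against your account:

```python
def deploy_package_request(account_id, environment_id, package_id, notes=""):
    """Build the URL and JSON body for deploying a packaged component to an
    environment via the Platform API's DeployedPackage resource (field names
    per the public docs; verify before use)."""
    url = f"https://api.boomi.com/api/rest/v1/{account_id}/DeployedPackage"
    body = {
        "environmentId": environment_id,
        "packageId": package_id,
        "notes": notes,
    }
    return url, body

url, body = deploy_package_request(
    "yourAccount-123456",     # placeholder account ID
    "env-guid-placeholder",   # placeholder environment GUID
    "pkg-guid-placeholder",   # placeholder package GUID
    notes="Deployed by CD pipeline",
)
print(url)
print(body)
```

Environment extensions (connection URLs, credentials and other per-environment values) can be updated through the API in the same style, letting the whole deployment be driven from one pipeline.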
Visibility of what is running in an environment and the health of all processes is key to successfully supporting an integration platform. Boomi provides a single pane of glass view over integration processes in all environments, which is a core aspect of supporting the platform.
Boomi Process Reporting provides a consolidated view of all processes executed in a Boomi tenancy for the last 60 days. Processes can be filtered by the time of execution, the atom on which they are executed, the name of the process and the method of execution. This enables you to go, within the same interface, from a system-wide overview to a single process with the click of a button.
To provide more detailed information on an individual execution, both process logs and an execution trace can be accessed through this interface. This simplifies diagnosing issues, removing the requirement to write every step to a database or log on to a VM to pick up logs from disk.
If you need to find a particular document rather than an execution, you can use the Document view to access a view of all documents received or sent by the process executions. With the ability to filter by both connector metadata (connector type, HTTP URL, database server etc.) and user-defined fields, you can trace a single business document across multiple integrations to provide a rounded view of its integration activity.
If a process has failed, you can also rerun the process directly from Process Reporting using the same input values used in the initial execution. This reduces the number of steps required for a support team to address a failure, providing a more straightforward and less error-prone recovery process.