1. Ploigos - An Opinionated CI/CD Workflow
1.1. Overview
What is Ploigos?
Ploigos is an opinionated workflow used to transform ideas into delivered software in a production environment. Ploigos can be divided into two major components: the Idea Delivery Workflow and the CI/CD Workflow.
The Idea Delivery Workflow is an abstract process workflow that defines a high level view of how an organization can best take ideas that solve business problems and implement the solution in a well defined procedure. This process focuses on the specific actions that need to take place without identifying specific tools for each step. Ploigos provides a general use Idea Delivery Workflow that can be further customized to fit specific projects as needed. A workflow first approach identifies the organizational behaviors and expected outcomes to define what quality is. The workflow codifies and enforces these to ensure all aspects of security, compliance, trust, and privacy are addressed.
The CI/CD Workflow is the implementation of the software development portion of the Idea Delivery Workflow. Ploigos creates a framework for a modular, extensible, and opinionated CI/CD pipeline. Modularity and extensibility are accomplished by defining one or more 'Steps' in the CI/CD workflow and automating each Step with a StepImplementer. To fulfill the opinionated aspect, Ploigos provides a number of predefined Steps, StepImplementers, and several CI/CD Workflows.
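The Step/StepImplementer split described above can be sketched in Python (the language of ploigos-step-runner). This is a conceptual illustration only: the class and method names below are invented for the sketch and are not the real ploigos-step-runner API.

```python
# Conceptual sketch of the Step / StepImplementer split. Class and method
# names here are invented for illustration -- this is NOT the real
# ploigos-step-runner API.
from abc import ABC, abstractmethod

class StepImplementer(ABC):
    """One interchangeable automation of a workflow Step."""
    @abstractmethod
    def run(self, config: dict) -> dict: ...

class MavenPackage(StepImplementer):
    """Hypothetical implementer of the 'package' Step for Maven projects."""
    def run(self, config: dict) -> dict:
        # A real implementer would shell out to `mvn package` here.
        return {"artifact": f"{config['app']}.jar", "success": True}

class Step:
    """A named slot in the workflow, fillable by any StepImplementer."""
    def __init__(self, name: str, implementer: StepImplementer):
        self.name = name
        self.implementer = implementer

    def run(self, config: dict) -> dict:
        return self.implementer.run(config)

# Swapping implementers (e.g. Maven -> npm) changes a Step's behavior
# without changing the pipeline definition itself.
pipeline = [Step("package", MavenPackage())]
results = {step.name: step.run({"app": "demo"}) for step in pipeline}
print(results["package"]["artifact"])  # demo.jar
```

The point of the abstraction is that the pipeline only knows Step names; the implementer behind each name can be swapped per project.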
1.2. Idea Delivery Workflow
Ploigos defines a reference high-level workflow that covers taking an idea from genesis through development, integration, and review. In this high-level rendering of the Ploigos workflow, the majority of steps are Predefined Procedures. These procedures are placeholders for more complex sub-workflows containing multiple Procedures, the details of which are defined in later sections. This simplification allows for a 'one slide' view of the workflow.
1.2.1. Idea Delivery Workflow Steps
Each predefined process represents an encapsulation of steps that are typically necessary in a complete end-to-end pipeline.
- Prioritize Ideas
-
New ideas are placed in a work management tool, refined, and prioritized. The work management tool allows for organization and tracking of ideas as they move through the workflow. Steps will vary based on existing organizational structure and the specific tool chosen, but basic requirements include the ability to: record new ideas, manage the backlog, assign ideas to developers, and track progress and status.
- Development
-
The development procedure in which an idea is implemented in software. This includes an isolated development environment in which the developer can begin coding.
- Create or Update Merge Request to Release Branch
-
The procedure for how merges happen into the release branch in the source code repository. Build artifacts are deployed into a temporary development environment when a merge request is created (from a feature branch to the primary release branch), and for any subsequent changes to the merging feature branch. Acceptance of a merge request is automated, contingent on passing all gates defined in the CI/CD pipeline for the merging branch, and passing all gates of the CI/CD pipeline after the merge into the primary branch is complete.
- Detect Change
-
The procedure for detecting change in the source code repository, and using that event to initiate appropriate action in the CI/CD procedure. The event can be a change in a feature branch, release branch, a new merge request, etc.
- CI/CD
-
Build artifacts are created and tested automatically on a continual basis. This provides developers with the ability to make code changes with instant (real-time) deployment of those changes into a live development runtime environment, along with the ability to bypass the pipeline to reduce friction during development on feature branches. The detailed CI/CD Process Workflow section provides more details on this predefined process.
- Retrospective
-
An introspection of the processes used by the team to deliver the software to evaluate the effectiveness of the overall process itself. Allows for modifying a predefined procedure or even modifying the Idea Delivery Workflow itself based on team feedback.
- Peer Review
-
Collaborate with others to review code, verifying that it works and conforms to organizational coding standards, and improving it where possible. Typical sub-steps are:
-
Mark Merge Request as Work In Progress (WIP)
-
Peer Review
-
Merge to Release Branch
-
Delete MR#-DEV Deployment Environment
-
- Release
-
Predefined process for how software is released.
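The "Detect Change" step above (mapping source-control events to pipeline runs) can be sketched as a small routing function. The event names here are hypothetical, chosen for illustration, not any specific webhook API:

```python
def pipeline_subject(event: str, source_branch: str, release_branch: str = "main") -> str:
    """Return the branch a pipeline run should target for a given
    source-control event (illustrative event names, not a real webhook API)."""
    if event in ("merge_request_opened", "merge_request_updated"):
        # run the pipeline against the merging feature branch
        return source_branch
    if event == "merge_request_merged":
        # re-run the pipeline against the primary release branch
        return release_branch
    raise ValueError(f"unhandled event: {event}")

print(pipeline_subject("merge_request_opened", "feature/login"))  # feature/login
print(pipeline_subject("merge_request_merged", "feature/login"))  # main
```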
1.3. CI/CD Process Workflow
The CI/CD Process Workflow is broken down into 3 broad categories:
- Minimal Workflow
-
Implements the fewest number of tools possible while still being considered a CI/CD pipeline. This workflow is recommended for demos and initial proof-of-concept evaluations.
- Typical Workflow
-
This workflow represents a typical CI/CD pipeline that provides a standard tool suite and features that are commonly found in modern CI/CD pipelines. Most workflows will start with the Typical Workflow as a baseline.
- Everything Workflow
-
A workflow that uses all possible capabilities of currently implemented steps in Ploigos. This workflow is used to showcase all capabilities and for application development that requires stringent security and testing capabilities that are not usually found in the Typical Workflow.
Each workflow type at minimum follows the process below and differs only in the specific steps within the Setup, Continuous Integration (CI), and Continuous Deployment (CD) stages.
- Setup
-
Initial setup of the CI/CD pipeline. This step includes deployment and configuration of a pipeline using Ploigos.
- Continuous Integration
-
Development work that happens on the deployed CI/CD pipeline.
- Continuous Deployment - DEV
-
Deployment of the application into a development environment for the developer to review.
- Continuous Deployment - TEST
-
Deployment of the application into the testing environment for functionality, performance, and acceptance testing.
- Continuous Deployment - PROD
-
Deployment of the application into the production environment.
References to the different workflows are defined in the ploigos-jenkins-library repository. This repository provides reference examples of the Minimal, Typical, and Everything Workflows.
1.3.1. Minimal Workflow
Minimal Workflow Steps
The following steps are part of the Minimal Workflow:
-
Detect Change
-
Setup
-
Continuous Integration
-
Continuous Deployment - DEV (Feature Branch)
-
Continuous Deployment - TEST (Release Branch)
-
Continuous Deployment - PROD (Release Branch)
-
Report
1.3.2. Typical Workflow
The following steps are part of the Typical Workflow:
-
Detect Change
-
Setup
-
Continuous Integration
-
Continuous Deployment - DEV (Feature Branch)
-
Continuous Deployment - TEST (Release Branch)
-
Continuous Deployment - PROD (Release Branch)
-
Report
1.3.3. Everything Workflow
The Everything Workflow uses all implemented steps in Ploigos:
-
Detect Change
-
Setup
-
Continuous Integration
-
Continuous Deployment - DEV (Feature Branch)
-
Continuous Deployment - TEST (Release Branch)
-
Continuous Deployment - PROD (Release Branch)
-
Report
1.3.4. Detailed Workflow Step Descriptions
- Detect Change
-
-
Detect new/changed/merged branches
To bring an idea from development into a release (and ultimately production), a developer creates a merge request from a feature branch to the primary release branch. The merge request should initially be created as WIP, which indicates it is a "work in progress" and not yet ready to be merged. The act of creating the merge request from a feature branch to the release branch should trigger the pipeline to run on the new feature branch.
-
Start CI/CD Workflow for Changed Branch
The CI tool detects actions in the source control tool. For "new merge request" or "changed merge request" actions, the pipeline runs and the subject is the feature branch being merged. For "merge of feature branch to release branch", the pipeline runs and the subject is the primary release branch.
-
- Setup
- Continuous Integration
-
-
The pipeline generates a semantic version based on other metadata to produce a version and image tag that uniquely identify artifacts associated with the pipeline run. This information is applied to runtime artifacts and the container image as labels.
-
This step takes the version created in the "Generate Metadata" step and uses it to tag the source in source control.
-
Validate that each module of the software performs as designed.
-
Build runtime artifacts, distribution archives, and any other artifacts required to run the application.
-
The pipeline performs static analysis on source code to identify defects, vulnerabilities, and programmatic and stylistic problems as early in the development life cycle as possible. For example, static analysis is completed prior to building, scanning, and deploying the image.
-
Push Application Artifact to Repository
Transfer runtime artifacts into a centralized artifact repository for distribution.
-
Assemble the minimal container image that the application needs to run, including the packaged application artifacts. Test the container image, verify its functionality, and validate the structure and content of the image itself.
-
Run Static Image Scan: Compliance
Ensure the container image adheres to the organization's security compliance policy.
-
Run Static Image Scan: Vulnerability
Identify software vulnerabilities in your container image.
-
Push Container Image to Repository
Transfer the verified image to a centralized repository, with metadata applied to the image as labels.
-
Sign the container image to allow validation of the image source and to ensure the image has not been tampered with.
-
Generate, Publish, and Sign Evidence
Generates, publishes, and signs evidence output by all previous steps up to this point. Used later for attestation gates.
-
- Continuous Deployment
-
-
Evaluates the evidence generated up to this point against a given set of attestations.
-
Deploy or Update to DEV Environment
Provide a temporary environment for deployment of code changes associated with a feature. If the environment does not already exist, it will be created. The lifetime of the environment is limited to the time it takes to implement the feature and merge the changes into the release branch of the primary code repository, at which point the development environment is deleted.
-
Deploy or Update to TEST Environment
Deploy image built from the latest release branch to the test environment.
-
Deploy or Update to PROD Environment
Deploy the tested application to the shared production environment, making the latest features available to end users.
-
Validate Environment Configuration
Validate that the environment matches a given baseline of required objects and that the configuration of those objects is correct. Requirements for this step often come from an enterprise security and compliance team.
-
Run User Acceptance Tests(UAT)
Assess if the system can support day-to-day business and user scenarios and ensure the system is sufficient and correct for business usage.
-
Run Performance Tests (limited)
Run a limited performance test suite to catch significant performance regressions early, ahead of full performance testing.
-
Identify and eliminate performance bottlenecks in the application.
-
Allows new code/features to be rolled out to a subset of end users as an initial test.
-
Generate, Publish, and Sign Evidence
Generates, publishes, and signs evidence output by all previous steps up to this point. Used later for attestation gates.
-
- Report
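As a concrete illustration of the "Generate Metadata" step described above, the sketch below shows one plausible way a pipeline might derive a unique version and image tag from branch, build number, and commit metadata. The exact scheme is an assumption for illustration, not necessarily the one Ploigos uses:

```python
def build_version(base: str, branch: str, build_num: int, sha: str) -> str:
    """Derive a unique semantic version from pipeline metadata.
    The scheme here (branch pre-release tag + build metadata) is an
    assumption for illustration, not necessarily Ploigos' exact scheme."""
    short_sha = sha[:7]
    if branch == "main":
        # release-branch builds: base version plus build metadata
        return f"{base}+{build_num}.{short_sha}"
    # feature-branch builds: embed a sanitized branch name as a pre-release tag
    safe_branch = branch.replace("/", "-")
    return f"{base}-{safe_branch}+{build_num}.{short_sha}"

version = build_version("1.2.0", "feature/login", 42, "9fceb02d0ae5")
image_tag = version.replace("+", "_")  # '+' is not valid in container image tags
print(version)    # 1.2.0-feature-login+42.9fceb02
print(image_tag)  # 1.2.0-feature-login_42.9fceb02
```

Applying the same version as both a source tag and an image label is what gives traceability from a running container back to the exact commit and pipeline run.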
1.3.5. Workflow Source Files
While the rendered PNGs here are useful for starting the conversation and clearly stating the opinionated Ploigos workflow, it is recognized that every implementation of Ploigos will be different. This includes the tool-abstracted workflow, as well as the specific tools used to implement it.
To facilitate ease of adoption, consistency, reuse, and contribution back to the community, the workflows are all drawn in the MXGraph format using Draw.io and provided here for consumption, modification, and reuse.
-
Ploigos Workflows
1.4. Ploigos Workflow Tools
Once an Idea Delivery Workflow and CI/CD Process Workflow are established, the next task is to determine all of the types of components needed to implement the workflow. This is done by ensuring each of the required tool categories in the table below is tracked as an item that is either deployed or provided by a third party.
Finally, the actual component implementers (meaning the specific software tools or products) can be chosen. This section lists components that are dictated by the established Idea Delivery Workflow, the CI/CD Process Workflow, and the tool categories. Some of the categories below may be optional depending on the workflow.
Tool Category | Purpose | Component Implementers |
---|---|---|
Container Platform |
Serves as runtime infrastructure for containers. |
Red Hat OpenShift Container Platform (OCP) |
Identity Management |
Allows organization to manage identity and access of users and groups to enterprise resources to keep systems and data secure. |
Red Hat Identity Manager (IdM) |
Authentication |
To verify the identity of a user or process. |
Red Hat Single Sign-On (RH SSO) (KeyCloak) |
Authorization |
The use of access control rules to decide whether access requests from (authenticated) consumers shall be approved or disapproved. |
Red Hat OpenShift Container Platform (OCP) with LDAP Group Sync from Red Hat Identity Manager (IdM) |
Artifact Repository |
Supports pushing, pulling, and storing of software artifacts/packages and accompanying metadata. |
|
Container Image Registry |
Supports pushing, pulling, and storing of OCI compliant images and accompanying metadata. |
|
Source Control |
Responsible for managing changes to source code. |
|
Workflow Runner (Continuous Integration (CI)) |
Automated tasks that verify new code’s correctness before integration. |
|
Static Code Analysis |
The analysis of software to catch defects, bugs, or security issues before actually executing the programs. |
|
Application Language Packager |
Performs the process of binding relevant files and components together to create a consistent, standardized deliverable for a software application. |
|
Application Language Unit Tester |
Tools to support testing the smallest part or individual unit to determine if the component is fit for use. |
|
Binary Artifact Uploader |
Pushes binary artifacts and accompanying metadata via network into a storage repository. |
|
Container Image Builder |
Packages application into a binary OCI compliant image suitable to run inside any OCI compliant container platform. |
|
Container Image Uploader |
Pushes container image and accompanying metadata via network into a storage repository. |
|
Container Image Build Vulnerability Scanner |
Analysis on a container image’s packages and other dependencies to determine if there are any known vulnerabilities in those packages or dependencies. |
|
Container Image Build Compliance Scanner |
Analysis on a container image’s packages and other dependencies to determine if there are any known compliance violations in those packages or dependencies. |
|
Container Image Signer |
Cryptographically sign the container image for systems that require a verified signature on container images for deployment. |
|
Kubernetes Resources Creation Tool |
Creates the necessary Kubernetes resources needed to deploy and run an application inside a container platform. |
|
Continuous Deployer |
Detects code changes and, based on those changes, automatically runs a pipeline to deploy code to the chosen deployment environments. |
|
Environment Configuration Validator |
Tool that validates that the environment matches a given baseline of required objects and that the configuration of those objects is correct. |
|
User Acceptance Tester (UAT) |
Tests to verify user-facing functionality and that the software meets real-world requirements. |
|
Performance Tester |
Tests software for performance issues. |
|
Runtime Application Vulnerability Scanner |
Identifies vulnerabilities and scans containers for malware at runtime (dynamic analysis), as opposed to static analysis. |
|
Canary Tester |
Allows slow roll-out of new code to a small subset of users to reduce the risk of introducing a new software version in production. |
|
Container Image Registry Vulnerability Scanner |
Scans container images in a registry for known vulnerabilities. |
|
Container Image Registry Compliance Scanner |
Scans container images in a registry for violations of organizational policy. |
|
Container Platform Vulnerability Scanner |
Scans the container platform hosts and platform for vulnerabilities. |
|
Container Image Runtime Vulnerability Scanner |
Scans running container for known vulnerabilities. |
|
Container Image Runtime Compliance Scanner |
Scans running container images for violations of organizational policy. |
|
Container Platform Vulnerability Enforcer |
Enforces policies related to vulnerabilities on the container platform.
|
Container Platform Compliance Scanner |
Scans the container platform and hosts for violations of organizational policy. |
|
Container Platform Compliance Enforcer |
Enforces policies related to compliance with organizational policy on the container platform.
|
Compliance and Validation Input Files Repository |
Central repository of compliance and validation configurations |
|
Workflow Results Presentation |
Repository or GUI where results of the workflow are presented |
|
Signed Container Enforcement |
Ensures that only signed container images are permitted to run on the container platform. |
|
Integrated Development Environment (IDE) |
Provides a comprehensive suite of commonly used development functionality into a single application to assist developer’s efficiency and productivity. |
|
Peer Review Tracker |
Facilitates and assists developers with management of the peer code review process. |
|
Discussion |
A forum for debate and discussion to keep developers informed and aware of what is happening. |
|
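One way to use the table above is as a checklist: track every required tool category and verify each is covered by a deployed or third-party component implementer. A minimal sketch (the category names are a representative subset of the table; the chosen implementers are examples, not a Ploigos requirement):

```python
# Tool categories drawn from the table above (a representative subset);
# the chosen implementers are examples, not a Ploigos requirement.
REQUIRED_CATEGORIES = {
    "Container Platform",
    "Source Control",
    "Artifact Repository",
    "Workflow Runner (CI)",
}

chosen_implementers = {
    "Container Platform": "Red Hat OpenShift Container Platform",
    "Source Control": "Gitea",
    "Artifact Repository": "Sonatype Nexus",
}

# Any category with no tracked implementer is a gap in the workflow plan.
missing = sorted(REQUIRED_CATEGORIES - chosen_implementers.keys())
print(missing)  # ['Workflow Runner (CI)']
```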
2. Deploying Ploigos
2.1. Infrastructure Resources
The most basic requirement for the deployment of the Ploigos CI/CD Workflow is an operational OCP 4.x cluster.
2.1.1. OCP Platform Suggested Sizing
Below is the infrastructure used to develop and test the Minimum Viable Product (MVP) of the Ploigos CI/CD Workflow Reference Implementation.
Node | CPUs | Memory (GB) | Disk (GB) | AWS EC2 Instance Type | Sizing Source |
---|---|---|---|---|---|
Control 0 |
8 |
32 |
120 |
m4.2xlarge |
|
Control 1 |
8 |
32 |
120 |
m4.2xlarge |
|
Control 2 |
8 |
32 |
120 |
m4.2xlarge |
|
Compute 0 |
8 |
32 |
120 |
m4.2xlarge |
Based on Containerized Tool Sizing needs |
Compute 1 |
8 |
32 |
120 |
m4.2xlarge |
Based on Containerized Tool Sizing needs |
Compute 2 |
8 |
32 |
120 |
m4.2xlarge |
Based on Containerized Tool Sizing needs |
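The suggested sizing above totals out as follows (simple arithmetic over the table rows):

```python
# (role, node_count, cpus_per_node, memory_gb_per_node) from the sizing table
nodes = [
    ("control", 3, 8, 32),
    ("compute", 3, 8, 32),
]

total_cpus = sum(count * cpus for _, count, cpus, _ in nodes)
total_mem_gb = sum(count * mem for _, count, _, mem in nodes)
print(total_cpus, total_mem_gb)  # 48 192
```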
2.1.2. Containerized Tool Suggested Sizing
Below is the suggested initial sizing for tools commonly deployed with Ploigos.
Tool | CPUs | Memory (GB) | Sizing Source |
---|---|---|---|
Red Hat SSO |
2 |
2 |
|
Sonatype Nexus Repository Manager 3 OSS |
8 |
16-32+ |
|
JFrog Artifactory OSS - DB (PostgreSQL) |
2 |
2 |
WAG since JFrog Artifactory Docs - Recommended Hardware provides no recommendation |
Red Hat Quay - Operator Based |
2 |
4 |
|
Red Hat Quay - Operator Based - DB (Crunchy Data PostgreSQL) |
2 / Operator Governed |
8 / Operator Governed |
|
Red Hat Clair |
2 |
4 |
|
Tekton |
Operator Governed |
Operator Governed |
|
Jenkins Master |
2 |
2 |
WAG based on experience. |
Jenkins Workers |
Variable |
Variable |
|
ArgoCD |
1 |
4 |
Worker images are 50 MB in size; the API server is extremely small. |
SonarQube |
2 |
2 |
WAG since the SonarQube official docs offer no suggestion |
SonarQube DB (PostgreSQL) |
2 |
2 |
WAG since the SonarQube official docs offer no suggestion |
Cucumber |
N/A (embedded in CI container) |
N/A (embedded in CI container) |
2.2. Typical Workflow Deployment
The following guide shows how to deploy the reference implementation of the Ploigos Typical Workflow into an OCP 4.x cluster, with a simple Quarkus application as a demo.
2.2.1. Prerequisites
-
Operational OCP 4.x cluster
-
Non-FIPS
-
Access to Red Hat CDN, quay.io, and the OCP Operator Hub
2.2.3. Deploy the Typical Workflow
Deploy a Workflow with the Ploigos Software Factory Operator
Follow Ploigos Software Factory Operator - Quick Start to deploy the latest Software Factory Operator.
Deploy a Workflow with existing/manually installed Infrastructure
The following section provides examples of how to deploy various components on OCP.
- Deploy IDM
-
NOTE: Edit the following values: Secret.stringData.admin.password, Service.spec.clusterIP, Route.spec.host, Pod.spec.env
-
Install containerized IdM
----
oc new-project idm
oc apply -f - << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: freeipa-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Secret
metadata:
  name: freeipa-server-password
stringData:
  admin.password: Secret123
---
apiVersion: v1
kind: Service
metadata:
  name: freeipa-server-service
spec:
  selector:
    app: freeipa-server
  clusterIP: <idm_service_ip>
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
    - name: dns-udp
      port: 53
      protocol: UDP
      targetPort: 53
    - name: kerberos-tcp
      port: 88
      protocol: TCP
      targetPort: 88
    - name: kerberos-udp
      port: 88
      protocol: UDP
      targetPort: 88
    - name: kerberospw-udp
      port: 464
      protocol: UDP
      targetPort: 464
    - name: kerberospw-tcp
      port: 464
      protocol: TCP
      targetPort: 464
    - name: ldap-tcp
      port: 389
      protocol: TCP
      targetPort: 389
    - name: ldaps-tcp
      port: 636
      protocol: TCP
      targetPort: 636
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: freeipa
spec:
  host: freeipa-idm.apps.<cluster_domain>
  to:
    kind: Service
    name: freeipa-server-service
    weight: 100
  port:
    targetPort: https
  tls:
    termination: passthrough
  wildcardPolicy: None
---
apiVersion: v1
kind: Pod
metadata:
  name: freeipa-server
  labels:
    app: freeipa-server
spec:
  restartPolicy: OnFailure
  containers:
    - name: freeipa-server
      image: quay.io/freeipa/freeipa-server:centos-8-stream
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: freeipa-server-data
          mountPath: /data
      ports:
        - containerPort: 80
          protocol: TCP
        - containerPort: 443
          protocol: TCP
        - containerPort: 53
          protocol: UDP
        - containerPort: 88
          protocol: TCP
        - containerPort: 88
          protocol: UDP
        - containerPort: 464
          protocol: TCP
        - containerPort: 464
          protocol: UDP
        - containerPort: 389
          protocol: TCP
        - containerPort: 636
          protocol: TCP
      env:
        - name: IPA_SERVER_HOSTNAME
          value: freeipa-idm.apps.<cluster_domain>
        - name: IPA_SERVER_IP
          value: <idm_clusterIP>
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: freeipa-server-password
              key: admin.password
        - name: IPA_SERVER_INSTALL_OPTS
          value: "-U -r CLUSTER.LOCAL --setup-dns --no-forwarders --no-ntp"
      readinessProbe:
        exec:
          command: [ "/usr/bin/systemctl", "status", "ipa" ]
        initialDelaySeconds: 60
        timeoutSeconds: 10
        periodSeconds: 10
        successThreshold: 1
        failureThreshold: 3
  volumes:
    - name: freeipa-server-data
      persistentVolumeClaim:
        claimName: freeipa-data-pvc
    - name: cgroups
      hostPath:
        path: /sys/fs/cgroup
EOF
----
-
- Deploy Keycloak
-
-
Install SSO operator to SSO namespace as
keycloak-1
-
Obtain admin creds from secret
credential-keycloak-1
-
Setup SSO configuration for OCP
-
Create new client: Clients → Create
----
clientid: openshift
client protocol: openid-connect
----
-
Edit client:
----
access type: confidential
valid redirect URIs:
  - https://console-openshift-console.apps.<cluster_domain>/*
  - https://oauth-openshift.apps.<cluster_domain>/*
web origins:
  - https://console-openshift-console.apps.<cluster_domain>
  - https://oauth-openshift.apps.<cluster_domain>
----
-
-
Integrate with IDM
-
Configure → User Federation → add provider (ldap)
----
Vendor: Red Hat Directory Server
Connection URL: ldap://<idm_ip>:389
Users DN: cn=users,cn=accounts,dc=cluster,dc=local
Bind DN: cn=Directory Manager
Bind Credential: <bind_secret>
----
-
-
Setup OCP for SSO configuration
-
Administration → Cluster Settings → Global Configuration → OAuth → Add OpenID Connect
----
name: sso
clientid: openshift
client secret: <from SSO>
issuer url: https://keycloak-sso.apps.<cluster_domain>/auth/realms/master
----
-
-
- Deploy Jenkins
-
-
Install Jenkins operator to
jenkins
namespace
-
Create Jenkins instance
-
- Deploy Gitea
-
-
Add HelmChartRepository CR
----
oc apply -f - << EOF
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: gitea-charts
spec:
  # optional name that might be used by console
  name: gitea-charts
  connectionConfig:
    url: https://dl.gitea.io/charts/
EOF
----
-
Switch to 'developer' view, create gitea namespace, install gitea using helm
-
Add 'anyuid' scc for Gitea deploy
----
oc adm policy add-scc-to-user anyuid -z default -n gitea
----
-
Create route for gitea http
-
Obtain admin user/pass from:
Workloads→ statefulsets → gitea → environments → init container (configure-gitea)
-
Connecting Gitea to IdM:
----
# From the Gitea WebUI: Site Administration -> Authentication Sources, add source
Auth Type: LDAP (via BindDN)
Auth Name: idm
Security protocol: unencrypted
Host: <IdM host ip>
Port: 389
BindDN: cn=Directory Manager
Bind Password: <binddn secret>
User Search Base: cn=users,cn=accounts,dc=cluster,dc=local
User Filter: (&(objectClass=posixAccount)(uid=%s))
Email attribute: mail
----
-
Note
|
Exposing SSH functionality requires an external load balancer and networking configuration that is not available in some sandbox environments (e.g., OpenTLC/RHPDS deployments). |
- Deploy Nexus
-
-
Install Nexus Repository Operator
-
Add 'anyuid' scc for Nexus deploy
oc adm policy add-scc-to-user anyuid -z nxrm-operator-certified -n nexus
-
Create route to nexus http server
-
Login to nexus (admin/admin123)
-
Follow configuration wizard
-
Disable anonymous access
-
-
Add docker repository
-
Configure docker repository for port 9001 (http)
-
-
Add OCP service for nexus docker (port 9001)
-
Add OCP route for nexus docker (https/edge → 9001)
-
- Deploy ArgoCD
-
-
Install ArgoCD community operator
-
Create an instance of ArgoCD
-
Create route to argocd-server (passthrough)
-
Get admin password from secret (Secrets → argocd-1-cluster → admin.password)
-
- Deploy SonarQube
-
-
Install sonarqube operator (redhatgov)
-
Create a sonarqube instance
-
Login and change password (default: admin/admin)
-
- Deploy ACS(StackRox)
-
-
Install ACS operator
-
Remove resource limit for stackrox project (if exists)
-
Deploy central
-
Create cluster bundle
-
Stackrox UI →
Platform Configuration → integrations → cluster init bundle → New Integration
-
Generate and download kubernetes yaml
-
Apply yaml to cluster
-
-
Deploy secured cluster
-
3. Customizing the CI/CD Workflow
3.1. Developing Opinions
Ploigos is an opinionated software development and deployment workflow. The Typical CI/CD Workflow, provided as the suggested workflow to implement, presents specific technologies chosen based on industry popularity, feature set, open source involvement, and proficiency of contributing members of Ploigos. These considerations, and many more, may be evaluated by a particular organization when deciding on a specific component implementer within the workflow.
Some other considerations that guide opinions established in this document are:
-
The CI tool should be modular and easily replaceable.
-
Trunk Based Development, instead of:
-
When creating merge requests, create feature branches on forks rather than on the primary repository, to mirror the workflow of large open source projects.
-
The "Release Branch" is the branch that goes into production. The term "main" will be used to refer to the release branch. The name of this branch may differ from team to team; there are many naming conventions for a branch that goes to production (e.g., master, trunk, release).
-
The minimum, and recommended, set of logical environments is:
-
N development environments, one per active feature branch
-
Test
-
Production
-
A "StepRunner" is the implementation of all the necessary work associated with common steps in a CI/CD pipeline. Examples of step runners include: package, tag source, unit test, etc. The implementation of the StepRunner is abstracted away from the user, and available as reusable modules via the ploigos-step-runner library.
-
For UAT test bed infrastructure (i.e., Selenium Grid), such infrastructure, when possible, should be dynamically created on-demand (by infrastructure-as-code) using a StepRunner from the ploigos-step-runner library. There will be cases where UAT infrastructure already exists in-house. In such cases, the use of preexisting/static infrastructure is a supported alternative for UAT testing. For example, if your company already has an established deployment of Selenium Grid, that instance of Selenium Grid can be used rather than have the framework dynamically stand up additional instances.
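The environment opinion above, N ephemeral DEV environments plus fixed Test and Production, can be modeled as a small lifecycle sketch. The class and method names here are illustrative, not part of any Ploigos API:

```python
# Illustrative sketch (invented names) of the opinionated environment model:
# N ephemeral DEV environments, one per active feature branch, plus fixed
# Test and Production environments.
class Environments:
    def __init__(self):
        self.envs = {"TEST", "PROD"}  # fixed environments always exist

    def on_merge_request_opened(self, feature_branch: str) -> None:
        # a DEV environment is created per active feature branch
        self.envs.add(f"DEV-{feature_branch}")

    def on_merged_to_release(self, feature_branch: str) -> None:
        # the ephemeral DEV environment is deleted once the feature merges
        self.envs.discard(f"DEV-{feature_branch}")

envs = Environments()
envs.on_merge_request_opened("feature-login")
print(sorted(envs.envs))  # ['DEV-feature-login', 'PROD', 'TEST']
envs.on_merged_to_release("feature-login")
print(sorted(envs.envs))  # ['PROD', 'TEST']
```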
3.2. Changing the Workflow
- Forking the StepRunner and Jenkins Library
-
In order to customize the CI/CD workflow and how specific steps in that workflow are configured, the first step is to clone and modify the source code repositories that control the workflow operation. With Ploigos this will be the
ploigos-jenkins-library
andploigos-step-runner
repositories.
In the following example, it will be demonstrated how to fork the official
ploigos-step-runner
and ploigos-jenkins-library
repositories to the Gitea
server hosted as part of the CI/CD Workflow, and update the reference
application to use the forked repositories.
-
Create the empty/blank repositories in Gitea with the following names:
ploigos-step-runner
ploigos-jenkins-library
-
Clone the repositories:
git clone https://github.com/ploigos/ploigos-step-runner.git
git clone https://github.com/ploigos/ploigos-jenkins-library
-
Update the cloned repository remote to point to the Gitea server
cd ploigos-jenkins-library
git remote rename origin github
git remote add origin https://<repo_url>/platform/ploigos-jenkins-library.git
git push -u origin main
git push --tags

cd ploigos-step-runner
git remote rename origin github
git remote add origin https://<repo_url>/platform/ploigos-step-runner.git
git push -u origin main
git push --tags
-
Configure the Jenkinsfile for the reference application to point to the new internal
ploigos-step-runner
(if using the Software Factory Operator and the mvn-reference-app application):
//cicd/ploigos-software-factory-operator/Jenkinsfile

// Load the Ploigos Jenkins Library
library identifier: 'ploigos-jenkins-library@v0.21.0',
retriever: modernSCM([
    $class: 'GitSCMSource',
    remote: 'https://<repo_url>/platform/ploigos-jenkins-library.git'
])

// run the pipeline
ploigosWorkflowTypical(
    stepRunnerConfigDir: 'cicd/ploigos-software-factory-operator/ploigos-step-runner-config/',
    pgpKeysSecretName: 'ploigos-gpg-key',
    workflowServiceAccountName: 'jenkins',
    workflowWorkerImageDefault: 'quay.io/ploigos/ploigos-base:v0.21',
    workflowWorkerImageAgent: 'quay.io/ploigos/ploigos-ci-agent-jenkins:v0.21',
    workflowWorkerImageUnitTest: 'quay.io/ploigos/ploigos-tool-maven:v0.21',
    workflowWorkerImagePackage: 'quay.io/ploigos/ploigos-tool-maven:v0.21',
    workflowWorkerImageStaticCodeAnalysis: 'quay.io/ploigos/ploigos-tool-sonar:v0.21',
    workflowWorkerImagePushArtifacts: 'quay.io/ploigos/ploigos-tool-maven:v0.21',
    workflowWorkerImageContainerOperations: 'quay.io/ploigos/ploigos-tool-containers:v0.21',
    workflowWorkerImageContainerImageStaticVulnerabilityScan: 'quay.io/ploigos/ploigos-tool-openscap:v0.21',
    workflowWorkerImageDeploy: 'quay.io/ploigos/ploigos-tool-argocd:v0.21',
    workflowWorkerImageUAT: 'quay.io/ploigos/ploigos-tool-maven:v0.21',
    separatePlatformConfig: true,

    //updated section
    stepRunnerUpdateLibrary: true,
    stepRunnerLibSourceUrl: "git+https://<repo_url>/platform/ploigos-step-runner.git@main"
)
- Creating or Updating the Workflow
-
Changes to the Ploigos CI/CD Workflow start in the ploigos-jenkins-library repository. The repository contains a vars/ directory which defines the 3 common CI/CD workflows:

- ploigosWorkflowMinimal.groovy - Minimal Jenkins workflow
- ploigosWorkflowTypical.groovy - Typical Jenkins workflow (Recommended)
- ploigosWorkflowEverything.groovy - Jenkins workflow with all options enabled

Each groovy file is split into two primary sections: the WorkflowParams section, which declares the variables for the workflow that can be set in the application Jenkinsfile, and the following section, which defines the Jenkins pipeline itself.
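As a structural sketch of this two-section layout (the class shape, parameter names beyond those already mentioned, and stage bodies here are illustrative, not the library's exact source):

```groovy
// Sketch of a vars/ workflow file: WorkflowParams first, pipeline second.
class WorkflowParams implements Serializable {
    // Variables an application Jenkinsfile may override.
    String stepRunnerConfigDir = ''
    String pgpKeysSecretName = ''
    String workflowWorkerImageDefault = 'quay.io/ploigos/ploigos-base:latest'
}

def call(Map paramsMap) {
    // Section 1: bind caller-supplied values onto the WorkflowParams defaults.
    WorkflowParams params = new WorkflowParams(paramsMap)

    // Section 2: the Jenkins pipeline definition itself.
    pipeline {
        agent any
        stages {
            stage('SETUP: Workflow Step Runner') { steps { /* ... */ } }
            stage('CI: Generate Metadata')       { steps { /* ... */ } }
            stage('CI: Tag Source Code')         { steps { /* ... */ } }
        }
        post { always { /* reporting step */ } }
    }
}
```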
When creating new pipelines, ensure that each contains at least the following stages:

- SETUP: Workflow Step Runner: this stage prepares the PGP keys for use across all subsequent steps, and updates the Ploigos Step Runner if needed.
- CI: Generate Metadata: this stage gathers information about the git repository that this workflow is being run against, as well as any metadata relevant for upcoming steps (e.g., information from the POM file if this is a Maven build).
- CI: Tag Source Code: this stage tags the current commit and pushes the tag to the git repository, allowing for traceability between code commits and the CI/CD workflow results.

The reporting step (defined under pipeline.post.always) should be part of every workflow.
For the new workflow, identify which environments will have deployments (DEV/TEST/PROD) and ensure each environment has a stage('<ENV>') entry defined. Substages under the stage('Continuous Integration') definition can be customized to match the planned CI/CD Workflow.
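A per-environment deployment stage skeleton might look like the following (the trigger condition and step invocation are illustrative assumptions, not required syntax):

```groovy
// Illustrative skeleton for one deployment environment; the actual step
// invocations depend on the workflow being built.
stage('DEV') {
    when { expression { env.BRANCH_NAME ==~ 'main' } }  // assumed trigger condition
    steps {
        container('deploy') {
            // run the Ploigos Step Runner deploy step against the DEV environment
            sh 'psr -s deploy -e DEV'
        }
    }
}
```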
Before removing the definition for a stage, take note of all variables that are being used. If these variables are not used anywhere else in the file, the variable declaration at the top of the file should be removed. While all variables should be examined, the following are the main variables to watch out for:

- WORKFLOW_WORKER_*: these variables are used in the container() declarations. Note that if a container is no longer used, its associated entry in spec.containers can be removed as well.
- envName*: if any of the deployment steps are being removed, the associated variable for environment name (e.g., envNameProd) can be removed.
- releaseGitRefPatterns: this variable is used in every deployment step; if every deployment step is removed, this variable can be removed as well.
3.3. Process for Adding/Changing a StepRunner
The processes for replacing an existing step (e.g., changing the package step to use npm) and adding a new step (e.g., an organization-specific compliance scan) are almost identical; the only difference is how the process is started:
To replace the step implementer for an existing step in the process, first locate that step’s configuration in the configuration files (config.yml and config-secrets.yml by default).
Be sure to use sops to edit the secrets file; do not attempt to modify the file directly.
Locate the existing configuration; in the example of changing the package step from Maven to NPM:
Step Implementer - Before
...
package:
- implementer: MavenPackage
Replace the implementer with the new one:
Step Implementer - After
...
package:
- implementer: NpmPackage
If adding a new step implementer, simply add the step name and implementer into the file. While the order in which steps appear in the file does not matter, convention is to place the configuration where it would belong in the actual pipeline (e.g., deploy can be defined before build, but this severely reduces the readability of the file).
Step Implementer - New
...
package:
- implementer: MitgComplianceCheck
The implementer will automatically be picked up when its class is defined under a folder matching the step name in the Ploigos Step Runner module (e.g., the NpmPackage class under the package folder). If the implementation lives under a folder that does not match the step name, it can be referenced using the fully qualified class name (e.g., ploigos_step_runner.step_implementers.shared.NpmGeneric). Once the desired step implementer is defined, review that implementer's documentation to determine the possible configuration options. Review the various "Configuration Key" options:
- Required, no default value: this value must be set under the step implementer config, otherwise the step will fail when the pipeline is run.
- Required, with default value: if the default value is not acceptable/correct, the value must be overridden in the step implementer config.
- Not required: these values are not required for every workflow, and can therefore be left blank; review each one to determine if it is required for your particular workflow.
If the default values are acceptable and no additional configuration is required, the implementer does not need a config section at all. Do not put sensitive information in config.yml; such information should be encrypted using SOPS as part of config-secrets.yml.
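Putting these pieces together, a step entry with overridden configuration might look like the following (the keys under config are placeholders for illustration, not actual NpmPackage options):

```yaml
# Hypothetical step configuration; key names under 'config' are illustrative.
package:
- implementer: NpmPackage
  config:
    npm-run-scripts: build        # overriding an assumed default value
    package-artifact-dir: dist/   # assumed required key with no default
```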
3.4. Creating the Cloud Resources
Currently, Ploigos utilizes a helm chart to set up the Kubernetes resources required to run a pipeline, such as a ServiceAccount, with associated RoleBinding and ClusterRole. As before with the Jenkins Library, this step will involve copying from an existing template. In the case of the cloud resources, however, an entirely new repository is required. The repository to copy from is: ploigos-charts
The simplest way to create a new repository for these cloud resources is:

- Create a new repository in GitHub, naming it appropriately based on the workflow (e.g., cloud-resources_jenkins_workflow_custom).
- Create a new folder locally whose name matches that of the newly-created repository.
- Go into this folder on the command line and initialize the repository locally using git init.
- Create the initial branch (main):
git checkout -b main
- Copy all files and folders except for the .git folder from the reference cloud resources project (linked above).
- Create a remote linking this local repository to the remote repository:
git remote add origin https://<git_url>/<org_name>/cloud-resources_jenkins_workflow_custom.git
- Push the changes to the remote:
git push -u origin main
With this repository created and the root commit pushed, create a branch for defining the workflow – while not required, strongly consider using the pattern feature/<name> for the branch name. On this branch, make the following changes:
Common Changes

- Determine if this workflow has any deployment steps. If not, delete the folder reference-quarkus-mvn-deploy.
- Rename the reference-quarkus-mvn-workflow folder to represent the new workflow created above (e.g., build-only-workflow). Do the same for the deploy folder, if it still exists.
- Update the README to reflect the following:
  - Update the first two lines to reflect the purpose of these cloud resources
  - Replace all instances of charts/reference-quarkus-mvn-workflow with charts/<new-workflow-folder>
  - Add any other notes that may be specific to these cloud resources, if any

Workflow Resources Changes
Perform the following changes under the charts/<workflow-folder> directory:

- Update the applicationName and serviceName values in values.yaml. These values determine the names of various Kubernetes resources that are created, so be cognizant of Kubernetes' 63 character limit for resource names, and try to keep the combined length of these two strings under 32 characters.
- Create a PGP private key for the pipeline and replace the contents of the jenkins.key value in secrets.yaml.
- Update Chart.yaml to update the version of ploigos-workflow-jenkins-resources, if needed.
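The naming guidance above can be sanity-checked with a short script (the function and constant names are ours for illustration, not part of Ploigos):

```python
# Check applicationName/serviceName lengths against Kubernetes' 63-character
# resource-name limit, keeping headroom for generated name suffixes.
K8S_NAME_LIMIT = 63
COMBINED_BUDGET = 32  # recommended combined budget from the guidance above

def names_fit(application_name: str, service_name: str) -> bool:
    """Return True if the combined name length stays within the budget."""
    return len(application_name) + len(service_name) <= COMBINED_BUDGET

print(names_fit("reference-quarkus-mvn", "fruit"))  # 21 + 5 = 26 -> True
print(names_fit("a-very-long-application-name-here", "my-service"))  # 33 + 10 -> False
```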
Deployment Resources Changes

If this workflow contains deployment steps, perform the following changes under the charts/<deploy-folder> directory:

- Update the description in Chart.yaml to reflect the intent of these resources / its supporting workflow library.
- Update values.yaml with any values shared between all environments.
- Delete the values-*.yaml files that are not a deployment target, if any.
If there is only one target deployment environment, all of the values could be put in a single values file, but this is not recommended. Instead, separate values according to whether they would be common or environment-specific if there were a second environment; this makes it easier to grow the workflow later and to use it as a reference for other workflows. Once all of these changes are made: add and commit the changes, push them up to the repository, and open a merge request.
All of these changes could have been done as part of the initial root commit, but this information was omitted in order to simplify the written instructions. It is up to the implementer to determine if this should be one or two commits.
3.5. Encrypting Secrets
Sensitive information, such as passwords and private encryption keys, should always be encrypted at rest. This creates a challenge for automation processes, since the tools involved require a secure means to access required authentication credentials without user intervention.
The Ploigos workflow solves this problem using Mozilla SOPS, a powerful tool with a wide array of functionality for protecting secrets at rest. A full explanation of its inner workings and functionality can be reviewed in their documentation; this section aims to provide a high-level overview of only the SOPS functionality that Ploigos takes advantage of for encrypting sensitive information required for an automated pipeline run.
- Ploigos SOPS Functionality
-
A SOPS-encrypted file takes the form of a key/value(s) pair file (e.g., JSON, INI); at the time of writing, Ploigos uses YAML for storing configuration of reference applications, but there is no reason that JSON could not be used instead.
The encrypted file is then stored in version control. The reference architecture for the Ploigos ecosystem expects this file to reside in the folder cicd/ploigos-step-runner-config (relative to the repository root folder), though this can be modified by overriding the value for stepRunnerConfigDir. The CI/CD workflow tool decrypts the values in this file in order to gain access to authentication secrets (e.g., tokens, passwords, keys) for various tools used across the pipeline. This removes the need for user input, and only authorized users are able to modify this file.
The keys from the configuration key/value pairs are unencrypted, which carries two benefits. First, anyone with access to the file can see what values are stored in the file, without gaining access to the values themselves. Also, it is possible to use version control tooling (e.g., git diff) to individually identify which values have been changed at any point in time.
Note: The file will have multiple PGP keys associated: one for each dev trusted to decrypt/modify values, plus one for each tool that needs to decrypt that secret (e.g., the CI/CD workflow tool).
- First Encryption
-
Before a SOPS-encrypted file can be created, a PGP public/private keypair is required; see the section “Generating PGP Keys” below for how this will work. Once a keypair is available, simply invoke the SOPS tool with the desired filename as the only argument:
sops --pgp <pgp-fingerprint> my-config-secrets.yaml
Note: The PGP fingerprint is explicitly specified to ensure that the correct PGP key is used; if multiple PGP keys exist, SOPS will use the first one it finds.
This will open a text editor with a pre-generated YAML template. Fill in configuration values as required, then save and quit. SOPS will automatically encrypt the document. To see this in action, open the file through a standard text editor (i.e., not through SOPS) and note that the following are true:
-
The configuration keys are unencrypted, but the configuration values are encrypted.
-
There is a sops.pgp entry in the file with a fingerprint that matches the fingerprint of the PGP keypair used to generate this file.
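For orientation, the encrypted file's structure resembles the following sketch (key names are illustrative, and the ciphertext and fingerprint values are truncated placeholders):

```yaml
# Approximate shape of a SOPS-encrypted YAML file.
container-registries:          # configuration keys remain readable
  quay-internal:
    password: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]
sops:
  pgp:
  - fp: 0123456789ABCDEF0123456789ABCDEF01234567   # fingerprint of a trusted key
    created_at: '2021-01-01T00:00:00Z'
    enc: |
      -----BEGIN PGP MESSAGE-----
      ...SNIPPED...
      -----END PGP MESSAGE-----
```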
- Rotating Keys
-
When a SOPS-encrypted file is first created, only the user whose PGP public key was used to encrypt it (i.e., the user who created the file) can decrypt the file. It will be necessary for multiple people to decrypt this file: at a minimum, the originator of the file and the automation tooling.
If values were encrypted with an individual developer’s private key, then only that developer would be able to decrypt those values. Instead, a symmetric encryption key is generated for encrypting configuration values, and that key is then protected with the user’s PGP public key. This way, the key can be encrypted multiple times using different PGP keys, allowing the owner of those keys to also decrypt the key (and thus the configuration values in turn).
As a prerequisite to adding a public key to a SOPS-encrypted file, the user performing the operation must already have:
- Their own public key added to the file.
- The public key of the user to be given permission to decrypt the file.
- The public keys for all users who already have permission to decrypt the file (i.e., whose keys were previously added to the file).
Note: The method for sharing public keys is outside the scope of this document.
Once these prerequisites are met, add the new PGP key using its fingerprint:
sops -r -i --add-pgp <fingerprint> some-secrets-file.yaml
To revoke a user's permissions, use the same command as above, but replace --add-pgp with --rm-pgp.
- Decrypting Configuration in the Pipeline
-
The larger purpose of this exercise has been to create a configuration file that can be stored in version control and read in by a CI/CD workflow tool. The last piece of this puzzle is setting up the workflow tool to be able to decrypt these values.
Generating the Workflow Runner Key
TODO: discuss generating public/private keypair for pipeline
Note: The public key needs to be distributed the same way as all the other keys; DO NOT LOSE IT! DELETE THE PRIVATE KEY FROM THE LOCAL MACHINE ONCE DONE!
Just as various users’ keys are added to the SOPS-encrypted configuration to be able to decrypt these values, the CI/CD workflow runner tool also requires a PGP public/private keypair. A user will need to generate this keypair, but it should only live on their machine temporarily; once this process is complete, the private key must be deleted from the local machine. The public key should be distributed the same way that all the other user’s public keys are.
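One way this key generation could be done with standard GnuPG tooling is sketched below; the user ID, file names, and key parameters are placeholders, not Ploigos conventions:

```shell
# Generate the key in a throwaway keyring so the workflow key never touches
# the user's personal keyring.
export GNUPGHOME="$(mktemp -d)"

# Non-interactive key generation with no passphrase, so the pipeline can
# import the key unattended.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key 'Ploigos Workflow Runner <workflow@example.com>' default default never

# Capture the fingerprint of the newly generated key.
FPR="$(gpg --list-keys --with-colons | awk -F: '/^fpr/ {print $10; exit}')"

# Export the public key (distribute it like any other user's key) and the
# private key (to be placed into the SOPS-encrypted secrets.yaml).
gpg --armor --export "$FPR" > workflow-runner.pub
gpg --batch --pinentry-mode loopback --passphrase '' \
    --armor --export-secret-keys "$FPR" > workflow-runner.key

# Once the private key is stored in the Helm chart, destroy the local copies.
rm -rf "$GNUPGHOME" workflow-runner.key
```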
Adding Key to Helm Chart
There is a Helm chart that will be run later in this process in order to prepare a Ploigos pipeline for a specific application; see the Creating the Cloud Resources section for additional information.
The private key that was generated for the workflow tool needs to be loaded
into a SOPS-encrypted file within this helm chart. The default location for
storing the private key is:
<cloud-resources>/charts/reference-quarkus-mvn-workflow/secrets.yaml
This file should be a SOPS-encrypted file (using PGP public keys from all users who should have access to this file) with the following content, placing the PGP private key inline as appropriate:
global:
  pgpKeys:
    tekton-tssc-references.key: |
      -----BEGIN PGP PRIVATE KEY BLOCK-----
      ...SNIPPED...
      -----END PGP PRIVATE KEY BLOCK-----
Running this Helm chart requires human intervention, so only users' PGP keys should be added here. Once the cloud resources Helm chart is run, the output will contain the name of the secret that was created to hold this PGP key (e.g., PGP Keys Secret: pgp-keys-ploigos-workflow-dms-child-support). The value for pgpKeysSecretName should be overridden in the Jenkinsfile to reference this value. When the pipeline is run, this PGP private key will be injected into the pod and imported, in order to decrypt pipeline configuration values.
Note that this secret is protected so as to only be accessed by a ServiceAccount with specific permissions; the workflow runner pod must be run using the same permissions.
4. Contributing
4.1. Contribution Guide
This document implements the Red Hat Modular Documentation system. Full details of this documentation style can be found HERE. In addition, the chosen format for all documentation within this repository is Asciidoc. An Asciidoc quick reference can be found HERE.
In summary, this modular documentation is based around the concept of dividing documentation content around user stories. Once the user stories are identified, each story is then developed into an Assembly. Each Assembly is made up of one or more of the 3 Module types:
- Concept Modules - Documentation sections for descriptions and explanations
- Procedure Modules - Documentation in the form of procedures or step-by-step instructions
- Reference Modules - Lists, tables, definitions, and other information that users would need to reference
4.1.1. Adding to the Documentation
- Automatic Asciidoc builds with GitHub Actions
-
- Fork the documentation repository
  - Once the repository is forked, ensure GitHub Actions is enabled for the repository to build and publish the documentation.
- Perform edits to the documentation
  - Follow the guidelines established by the Modular Documentation system
    - Refine existing user stories - modify existing documentation assemblies
    - Establish new user stories - add additional assemblies into the documentation
- Review changes in GitHub Pages
- Submit a pull request to the repository
- Manual/Local Build Process
-
Prerequisites:
- Podman installed
- quay.io/hdaems/podman-asciidoctor:1.1 image

- Check out the document repository ploigos-docs
- Perform edits
- Build asciidocs:
#make site directory
mkdir site/
#copy images into site/
cp -R images site/
#run asciidoctor container
podman run --user=<userid> --rm -v <docs_repo>:/documents:Z podman-asciidoctor asciidoctor -D site -o index.html master.adoc
- Open the HTML file site/index.html
4.1.2. Building the Documentation
This document repository is designed to use GitHub Actions to automatically generate a static HTML site using a container image with asciidoctor tools. The generated HTML is pushed back into the repository under the 'gh-pages' branch. Finally, the 'gh-pages' branch is published to GitHub Pages using a 3rd party GitHub Action HERE.
Note
|
The build action will only kick off on a push to the 'main' branch. |
4.2. Reference list of subprojects
- ploigos-step-runner - Python library for the Ploigos 'step-runner'. The step runner abstracts execution of each step in the defined pipeline by utilizing step implementers that define the operation of specific tools.
- ploigos-software-factory-operator - Kubernetes/OCP operator for deploying the Ploigos platform (opinionated CI/CD pipeline).
- ploigos-containers - Repository for build files (podman) to build containers for Ploigos, published to quay.io/ploigos.
- ploigos-charts - Helm Charts for deploying Ploigos components to a Kubernetes cluster.
- ploigos-jenkins-library - Library for Jenkins that defines a domain-specific language for implementing the Ploigos workflow.
Appendix A: Ploigos Terms and Definitions
- Workflow
-
A Ploigos procedure as represented by a drawing.
- Workflow Abstraction
-
A Ploigos Workflow as represented by a drawing with no specified tooling to implement the steps of the workflow.
- Workflow Implementation
-
An implementation of a Ploigos Workflow Abstraction with specific tooling.