Introduction to Pipelines

Pipelines are a core feature in Microtica and the most crucial part of every software delivery automation. Using pipelines, you can define the process your source code goes through from your local machine to production.

A pipeline is a composition of multiple steps executed in sequence and/or in parallel. Each step performs a specific action, such as:

  • Compile and package the code
  • Run unit and integration tests
  • Run code quality checks
  • Build and push Docker images
  • Deploy services on Kubernetes
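As a sketch, a pipeline covering some of these actions could be declared in `microtica.yaml` like this (the step names and commands are illustrative assumptions; the fields follow the `microtica.yaml` format shown later on this page):

```yaml
# Hypothetical microtica.yaml sketch: two steps executed in sequence.
steps:
  Test:
    title: Run unit and integration tests
    image: node:latest        # runtime container for this step
    commands:
      - npm install
      - npm test
  Build:
    title: Compile and package the code
    image: node:latest
    commands:
      - npm run build
```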

#How Microtica pipelines work

Unlike traditional solutions, where you need to spin up and manage dedicated VMs to execute pipeline actions, Microtica takes a cloud-native approach to software delivery automation, using Docker as the runtime for pipeline step execution.

With Microtica pipelines you don’t have to worry about maintaining complex infrastructure for your automation. Because Docker is the runtime, you can define more flexible pipelines that combine different frameworks and tools in a single pipeline.

In one step you can use the node image as the runtime environment to compile your Node.js application, and then in the next step you can use the hashicorp/terraform image, which contains a pre-installed Terraform CLI, to perform Terraform operations.
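A mixed-runtime pipeline like the one just described could be sketched as follows (the step names and commands are illustrative assumptions, using the `microtica.yaml` fields shown later on this page):

```yaml
# Hypothetical sketch: different Docker images per step.
steps:
  BuildApp:
    title: Compile the Node.js application
    image: node:latest                  # Node.js toolchain available here
    commands:
      - npm install
      - npm run build
  ProvisionInfra:
    title: Apply Terraform configuration
    image: hashicorp/terraform:latest   # Terraform CLI pre-installed
    commands:
      - terraform init
      - terraform apply -auto-approve
```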

(Figure: Docker runtime architecture)

Each step spins up a new Docker container using the image specified for that step.

The Docker container lives until all actions within the step are completed. After that, the container is killed and deleted, and everything stored in the container is no longer available.

Microtica provides a Pipeline shared state that preserves the state between steps throughout the pipeline execution.

#Pipeline shared state

To preserve state between steps, Microtica uses Docker volumes.

Sharing state between steps is useful when, for example, you clone the source code from Git in one step and compile and test the code in another step.

The shared state can be accessed from any step within one pipeline at the /microtica/shared path. Anything written to this folder by any step is available to all other steps in that pipeline.
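For example, the following sketch (hypothetical step names, images, and commands) writes a file to the shared state in one step and reads it back in the next:

```yaml
# Hypothetical sketch: passing data between steps via /microtica/shared.
steps:
  Produce:
    title: Write build metadata to the shared state
    image: alpine:latest
    commands:
      - echo "build-123" > /microtica/shared/version.txt
  Consume:
    title: Read the metadata in a later step
    image: alpine:latest
    commands:
      - cat /microtica/shared/version.txt   # the file survives the first container
```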

(Figure: Docker volume architecture)

The diagram shows that if the first step stores a file named index.js, the same file will be available to the second step at the /microtica/shared/index.js path.

#Pipeline artifacts

Artifacts persist the step state even after the step is completed. Pipeline artifacts are typically used as storage for deployment packages.

(Figure: Pipeline artifacts architecture)

Each step can define one or multiple artifact packages within microtica.yaml.

microtica.yaml

steps:
  Clone:
    title: Clone my source code from Git
    type: git-clone
  BuildNodeApp:
    image: node:latest
    commands:
      - npm install
      - npm run build
    artifacts:
      files:
        primary: /dst

With this spec we tell Microtica to package everything within the /dst folder and store it as an artifact named primary. The stored artifact can then be used for deployment or downloaded from the Microtica GUI.
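Since a step can define multiple artifact packages, a sketch with a second artifact might look like this (the reports name, the /coverage path, and the coverage command are hypothetical assumptions for illustration):

```yaml
# Hypothetical sketch: one step producing two artifact packages.
steps:
  BuildNodeApp:
    image: node:latest
    commands:
      - npm install
      - npm run build
      - npm run coverage       # assumed script producing /coverage output
    artifacts:
      files:
        primary: /dst          # deployment package
        reports: /coverage     # hypothetical second artifact
```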

Learn more about artifacts and different output configurations from Pipeline Artifacts.