
Helm

The helm resource allows Helm charts to be provisioned to k8s_cluster resources.

Minimal example

Install Helm from a remote GitHub repository

helm "vault" {  cluster = "k8s_cluster.k3s"  chart = "github.com/hashicorp/vault-helm"
  values_string = {    "server.dataStorage.size" = "128Mb"  }}

Install Helm from a local folder

helm "vault" {  cluster = "k8s_cluster.k3s"  chart   = "./files/helm/vault"
  values_string = {    "server.dataStorage.size" = "128Mb"  }}

Parameters

depends_on

Type: []string
Required: false

depends_on allows you to specify resources which should be created before this one. On destruction, this resource will be destroyed before the resources it depends on.
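
For example, the following sketch installs the Vault chart only after another resource has been created; the k8s_config.consul reference is illustrative and not part of the examples above.

helm "vault" {
  # hypothetical dependency, assumes a k8s_config resource named "consul" exists
  depends_on = ["k8s_config.consul"]

  cluster = "k8s_cluster.k3s"
  chart   = "github.com/hashicorp/vault-helm"
}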

cluster

Type: string
Required: true

The Kubernetes cluster to install the chart to, referenced as a k8s_cluster resource, e.g. k8s_cluster.k3s.

chart_name

Type: string
Required: false

The name to be given to the deployed chart. If chart_name is not specified, the name of the resource is used as the chart name.
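
For example, the following sketch deploys the resource helm "vault" under a different release name; the value vault-dev is illustrative.

helm "vault" {
  cluster = "k8s_cluster.k3s"
  chart   = "github.com/hashicorp/vault-helm"

  # without chart_name the chart would be deployed as "vault"
  chart_name = "vault-dev"
}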

values

Type: string
Required: false

File path resolving to a YAML file containing values for the Helm chart.
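
For example, values that would normally be passed to helm install with -f can be kept in a YAML file and referenced from the resource; the path below is illustrative.

helm "vault" {
  cluster = "k8s_cluster.k3s"
  chart   = "github.com/hashicorp/vault-helm"

  # assumed path to a standard Helm values file, relative to the configuration
  values = "./files/helm/vault-values.yaml"
}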

values_string

Type: map[string]string
Required: false

Map of keys and values to set for the Helm chart. Nested keys in the Helm values YAML are expressed by joining the property names with a dot (.).

For example, given the following YAML values:

server:
  nodes: 1

client:
  child:
    property: "a string"

The following values_string map could be used

values_string = {
  "server.nodes"          = 1
  "client.child.property" = "a string"
}

namespace

Type: string
Required: false
Default: "default"

Kubernetes namespace to install the chart to.

skip_crds

Type: bool
Required: false
Default: "false"

When set to true, Helm will not install any bundled CRDs for the chart.

retry

Type: int
Required: false
Default: "0"

When set, the Helm resource will retry the installation of a chart the specified number of times.
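
For example, the installation options above can be combined in a single resource; the namespace name and retry count below are illustrative.

helm "vault" {
  cluster = "k8s_cluster.k3s"
  chart   = "github.com/hashicorp/vault-helm"

  namespace = "vault" # install into the "vault" namespace instead of "default"
  skip_crds = true    # do not install any CRDs bundled with the chart
  retry     = 3       # retry a failed installation up to 3 times
}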

health_check

Type: HealthCheck
Required: true

Define a health check for the Helm chart; the resource will only be marked as successfully created when the health check passes. Health checks operate on the running state of the containers in the pods returned by the pod selector.

health_check {
  timeout = "120s"
  pods    = ["app.kubernetes.io/name=vault"]
}

Type health_check

A health_check stanza allows the definition of a health check which must pass before the resource is marked as successfully created.

timeout

Type: duration
Required: true

The maximum duration to wait before marking the health check as failed. Expressed as a Go duration, e.g. 1s = 1 second, 100ms = 100 milliseconds.

pods

Type: []string
Required: true

Pod selectors to use for checks; a Pod is marked as healthy when all containers in all pods returned by the selector strings are marked as running.
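
For example, a check can wait on pods matched by more than one selector; the component=server label below is hypothetical and only included for illustration.

health_check {
  timeout = "120s"

  # every pod returned by each selector must be running before the check passes
  pods = [
    "app.kubernetes.io/name=vault",
    "component=server", # hypothetical label
  ]
}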
