# Helm

The `helm` resource allows Helm charts to be provisioned to `k8s_cluster` resources.
## Minimal example

Install a Helm chart from a remote GitHub repository:

```hcl
helm "vault" {
  cluster = "k8s_cluster.k3s"
  chart   = "github.com/hashicorp/vault-helm"

  values_string = {
    "server.dataStorage.size" = "128Mb"
  }
}
```
Install a Helm chart from a local folder:

```hcl
helm "vault" {
  cluster = "k8s_cluster.k3s"
  chart   = "./files/helm/vault"

  values_string = {
    "server.dataStorage.size" = "128Mb"
  }
}
```
## Parameters
### depends_on

Type: `[]string`
Required: false

Allows you to specify resources which should be created before this one. When resources are destroyed, this resource is destroyed before the resources listed in `depends_on`.
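As an illustrative sketch, the chart below is only installed after a hypothetical `k8s_config` resource named `vault-config` has been created:

```hcl
helm "vault" {
  # wait for a hypothetical k8s_config resource before installing the chart
  depends_on = ["k8s_config.vault-config"]

  cluster = "k8s_cluster.k3s"
  chart   = "github.com/hashicorp/vault-helm"
}
```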
### cluster

Type: `string`
Required: true

The `k8s_cluster` resource the chart will be deployed to.
### chart_name

Type: `string`
Required: false

The name to be given to the deployed chart. If `chart_name` is not specified, the name of the resource will be used for the chart name.
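For example, the following sketch deploys the chart under the name `vault-dev` rather than the resource name `vault` (the names are illustrative):

```hcl
helm "vault" {
  cluster = "k8s_cluster.k3s"
  chart   = "github.com/hashicorp/vault-helm"

  # without this attribute the deployed chart would be named "vault"
  chart_name = "vault-dev"
}
```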
### values

Type: `string`
Required: false

File path resolving to a YAML file containing values for the Helm chart.
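A sketch assuming a values file at the hypothetical path `./files/helm/vault-values.yaml`:

```hcl
helm "vault" {
  cluster = "k8s_cluster.k3s"
  chart   = "github.com/hashicorp/vault-helm"

  # path to a standard Helm values YAML file
  values = "./files/helm/vault-values.yaml"
}
```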
### values_string

Type: `map[string]string`
Required: false

Map of keys and values to set for the Helm chart. Hierarchy in the Helm values YAML is expressed by chaining keys with a `.` separator.
For example, given the following YAML values:

```yaml
server:
  nodes: 1
client:
  child:
    property: "a string"
```

The following `values_string` map could be used:

```hcl
values_string = {
  "server.nodes"          = 1
  "client.child.property" = "a string"
}
```
### namespace

Type: `string`
Required: false
Default: "default"

Kubernetes namespace to install the chart to.
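For example (the namespace name is illustrative):

```hcl
helm "vault" {
  cluster = "k8s_cluster.k3s"
  chart   = "github.com/hashicorp/vault-helm"

  # install into the "vault" namespace rather than "default"
  namespace = "vault"
}
```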
### skip_crds

Type: `bool`
Required: false
Default: false

When set to true, Helm will not install any CRDs bundled with the chart.
### retry

Type: `int`
Required: false
Default: 0

When set, the Helm resource will retry the installation of a chart the specified number of times.
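A sketch combining `retry` with the `skip_crds` parameter described above (the values are illustrative):

```hcl
helm "vault" {
  cluster = "k8s_cluster.k3s"
  chart   = "github.com/hashicorp/vault-helm"

  # do not install any CRDs bundled with the chart
  skip_crds = true

  # retry a failed installation up to 3 times
  retry = 3
}
```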
### health_check

Type: `HealthCheck`
Required: true

Defines a health check for the `helm` resource; the resource will only be marked as successfully created when the health check passes. Health checks operate on the running state of containers matched by the pod selector.

```hcl
health_check {
  timeout = "120s"
  pods    = ["app.kubernetes.io/name=vault"]
}
```
## health_check

A `health_check` stanza allows the definition of a health check which must pass before the `helm` resource is marked as successfully created.
### timeout

Type: `duration`
Required: true

The maximum duration to wait before marking the health check as failed. Expressed as a Go duration, e.g. `1s` = 1 second, `100ms` = 100 milliseconds.
### pods

Type: `[]string`
Required: true

Pod selectors to use for the check. A Pod is marked as healthy when all containers in all pods returned by the selector strings are in the running state.