Deploy Consul on Kubernetes
Consul is a service networking solution that enables teams to manage secure network connectivity between services and across on-prem and multi-cloud environments and runtimes. Consul offers service discovery, service mesh, traffic management, and automated updates to network infrastructure devices. Check out the What is Consul? page to learn more.
In this tutorial, you will deploy a Consul datacenter onto a Kubernetes cluster. After deploying Consul, you will interact with Consul using the UI and CLI.
In the following tutorials, you will deploy a demo application, integrate it with Consul service mesh, allow external traffic into the service mesh, and enhance observability into your service mesh.
In this tutorial, you will:
- Deploy an Elastic Kubernetes Service (EKS) cluster with Terraform
- Install Consul using Helm or the Consul K8S CLI
- Configure your terminal to communicate with the Consul datacenter
- View Consul services with the CLI, UI, and/or API
In this tutorial, you will:
- Deploy an Azure Kubernetes Service (AKS) cluster with Terraform
- Install Consul using Helm or the Consul K8S CLI
- Configure your terminal to communicate with the Consul datacenter
- View Consul services with the CLI, UI, and/or API
In this tutorial, you will:
- Create a local Kubernetes cluster using kind
- Install Consul using Helm or the Consul K8S CLI
- Configure your terminal to communicate with the Consul datacenter
- View Consul services with the CLI, UI, and/or API
Prerequisites
For this tutorial, you will need:
Clone GitHub repository
Clone the GitHub repository containing the configuration files and resources.
$ git clone https://github.com/hashicorp-education/learn-consul-get-started-kubernetes
Alternatively, clone the repository over SSH:
$ git clone git@github.com:hashicorp-education/learn-consul-get-started-kubernetes.git
Change into the directory that contains the complete configuration files for this tutorial.
$ cd learn-consul-get-started-kubernetes/self-managed/eks
Clone the GitHub repository containing the configuration files and resources.
$ git clone https://github.com/hashicorp-education/learn-consul-get-started-kubernetes
Alternatively, clone the repository over SSH:
$ git clone git@github.com:hashicorp-education/learn-consul-get-started-kubernetes.git
Change into the directory that contains the complete configuration files for this tutorial.
$ cd learn-consul-get-started-kubernetes/self-managed/aks
Clone the GitHub repository containing the configuration files and resources.
$ git clone https://github.com/hashicorp-education/learn-consul-get-started-kubernetes
Alternatively, clone the repository over SSH:
$ git clone git@github.com:hashicorp-education/learn-consul-get-started-kubernetes.git
Change into the directory that contains the complete configuration files for this tutorial.
$ cd learn-consul-get-started-kubernetes/self-managed/local
Create infrastructure
With these Terraform configuration files, you are ready to deploy your infrastructure.
Issue the terraform init command from your working directory to download the necessary providers and initialize the backend.
$ terraform init
Initializing the backend...
Initializing provider plugins...
...
Terraform has been successfully initialized!
...
Then, deploy the resources. Confirm the run by entering yes.
$ terraform apply
## ...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
## ...
Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
Note: The Terraform deployment could take up to 10 minutes to complete. Feel free to explore the next sections of this tutorial while waiting for the environment to complete initialization or learn more about the Raft protocol in a fun, interactive way.
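The Terraform configuration exposes output values that later steps in this tutorial rely on, such as the EKS cluster name and region. Once the apply completes, you can optionally list them to confirm they are populated; the output names below are the ones referenced in the kubeconfig step later in this tutorial.
$ terraform output
$ terraform output -raw kubernetes_cluster_id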
With these Terraform configuration files, you are ready to deploy your infrastructure.
Issue the terraform init command from your working directory to download the necessary providers and initialize the backend.
$ terraform init
Initializing the backend...
Initializing provider plugins...
...
Terraform has been successfully initialized!
...
Then, deploy the resources. Confirm the run by entering yes.
$ terraform apply
## ...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
## ...
Apply complete! Resources: 17 added, 0 changed, 0 destroyed.
Note: The Terraform deployment could take up to 10 minutes to complete. Feel free to explore the next sections of this tutorial while waiting for the environment to complete initialization or learn more about the Raft protocol in a fun, interactive way.
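Once the apply completes, you can optionally confirm that the Terraform outputs used later in this tutorial are populated, such as the resource group and AKS cluster name referenced in the kubeconfig step.
$ terraform output -raw azure_rg_name
$ terraform output -raw aks_cluster_name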
Create Kubernetes cluster
In this section, you will create a Kubernetes cluster with kind that provides an environment for you to explore Consul service mesh functionality.
Create a new cluster with kind.
$ kind create cluster --config=kind/cluster.yaml
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋
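Optionally, verify that the cluster is running and its node reports Ready before you continue. This is a generic kubectl check and is not required by the tutorial.
$ kubectl get nodes --context kind-kind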
Connect to your infrastructure
Kubernetes stores cluster connection information in a file called kubeconfig. You can retrieve the Kubernetes configuration settings for your EKS cluster and merge them into your local kubeconfig file by issuing the following command:
$ aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw kubernetes_cluster_id)
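Optionally, confirm that kubectl now targets your EKS cluster. This is a generic kubectl check; the context and node names depend on your Terraform configuration.
$ kubectl config current-context
$ kubectl get nodes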
Kubernetes stores cluster connection information in a file called kubeconfig. You can retrieve the Kubernetes configuration settings for your AKS cluster and merge them into your local kubeconfig file by issuing the following command:
$ az aks get-credentials --resource-group $(terraform output -raw azure_rg_name) --name $(terraform output -raw aks_cluster_name)
Kubernetes stores cluster connection information in a file called kubeconfig. When you create the kind cluster, it automatically updates your kubectl context to target the cluster. Verify that your Kubernetes context is set to the kind cluster by issuing the following command:
$ kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:54233
CoreDNS is running at https://127.0.0.1:54233/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Deploy Consul datacenter
Review Consul server configuration
You will now review the Helm values file used to deploy a Consul datacenter in your Kubernetes cluster with either the consul-k8s CLI or Helm installation method.
To deploy Consul on Kubernetes, go directly to Deploy Consul.
Review helm/values-v1.yaml. This file defines the Consul datacenter you will deploy to Kubernetes. Review the comments in the file for an explanation of each parameter.
# Contains values that affect multiple components of the chart.
global:
  # The main enabled/disabled setting.
  # If true, servers, clients, Consul DNS and the Consul UI will be enabled.
  enabled: true
  # The prefix used for all resources created in the Helm chart.
  name: consul
  # The consul image version.
  image: hashicorp/consul:1.16.0
  # The name of the datacenter that the agents should register as.
  datacenter: dc1
  # Enables TLS across the cluster to verify authenticity of the Consul servers and clients.
  tls:
    enabled: true
  # Enables ACLs across the cluster to secure access to data and APIs.
  acls:
    # If true, automatically manage ACL tokens and policies for all Consul components.
    manageSystemACLs: true
# Configures values that configure the Consul server cluster.
server:
  enabled: true
  # The number of server agents to run. This determines the fault tolerance of the cluster.
  replicas: 3
# Contains values that configure the Consul UI.
ui:
  enabled: true
  # Registers a Kubernetes Service for the Consul UI as a LoadBalancer.
  service:
    type: LoadBalancer
# Configures and installs the automatic Consul Connect sidecar injector.
connectInject:
  enabled: true
For a complete list of Helm chart parameters and configuration, refer to the Consul Helm chart documentation.
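If you want to preview the Kubernetes manifests these values generate before installing anything, you can optionally render the chart locally with helm template. This assumes you have already added the hashicorp Helm repository, as shown in the Helm installation instructions below.
$ helm template consul hashicorp/consul --values helm/values-v1.yaml --namespace consul --version "1.2.0"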
Deploy Consul datacenter
Deploy a Consul datacenter to your Kubernetes environment with the Consul K8S CLI or Helm.
Install Consul to your Kubernetes cluster with the Consul K8S CLI. Confirm the run by entering y.
$ consul-k8s install -config-file=helm/values-v1.yaml
## ...
Proceed with installation? (y/N) y
==> Installing Consul
## ...
✓ Consul installed in namespace "consul".
Notice that the Consul K8s CLI installs Consul into the consul namespace.
Refer to the Consul K8S CLI documentation to learn more about additional settings.
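Optionally, verify the installation before continuing. The consul-k8s CLI includes a status command that summarizes the installation, and you can also watch the pods in the consul namespace until they report Running.
$ consul-k8s status
$ kubectl get pods --namespace consul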
Add the HashiCorp Helm Chart repository.
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
Install Consul to your Kubernetes cluster with the Helm chart. Notice that this command installs Consul into the consul namespace.
$ helm install --values helm/values-v1.yaml consul hashicorp/consul --create-namespace --namespace consul --version "1.2.0"
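Optionally, confirm the Helm release and check that the Consul pods in the consul namespace eventually report Running. These are standard Helm and kubectl checks rather than tutorial-specific commands.
$ helm status consul --namespace consul
$ kubectl get pods --namespace consul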
Review Consul server configuration
You will now review the Helm values file used to deploy a Consul datacenter in your Kubernetes cluster with either the consul-k8s CLI or Helm installation method.
To deploy Consul on Kubernetes, go directly to Deploy Consul.
Review helm/values-v1.yaml. This file defines the Consul datacenter you will deploy to Kubernetes. Review the comments in the file for an explanation of each parameter.
# Contains values that affect multiple components of the chart.
global:
  # The main enabled/disabled setting.
  # If true, servers, clients, Consul DNS and the Consul UI will be enabled.
  enabled: true
  # The prefix used for all resources created in the Helm chart.
  name: consul
  # The consul image version.
  image: hashicorp/consul:1.16.0
  # The name of the datacenter that the agents should register as.
  datacenter: dc1
  # Enables TLS across the cluster to verify authenticity of the Consul servers and clients.
  tls:
    enabled: true
  # Enables ACLs across the cluster to secure access to data and APIs.
  acls:
    # If true, automatically manage ACL tokens and policies for all Consul components.
    manageSystemACLs: true
# Configures values that configure the Consul server cluster.
server:
  enabled: true
  # The number of server agents to run. This determines the fault tolerance of the cluster.
  replicas: 3
# Contains values that configure the Consul UI.
ui:
  enabled: true
  # Registers a Kubernetes Service for the Consul UI as a LoadBalancer.
  service:
    type: LoadBalancer
# Configures and installs the automatic Consul Connect sidecar injector.
connectInject:
  enabled: true
For a complete list of Helm chart parameters and configuration, refer to the Consul Helm chart documentation.
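If you want to confirm which chart versions are available before pinning to 1.2.0, you can optionally search the HashiCorp Helm repository. This assumes the hashicorp repository has been added, as shown in the Helm installation instructions below.
$ helm search repo hashicorp/consul --versions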
Deploy Consul datacenter
Deploy a Consul datacenter to your Kubernetes environment with the Consul K8S CLI or Helm.
Install Consul to your Kubernetes cluster with the Consul K8S CLI. Confirm the run by entering y.
$ consul-k8s install -config-file=helm/values-v1.yaml
## ...
Proceed with installation? (y/N) y
==> Installing Consul
## ...
✓ Consul installed in namespace "consul".
Notice that the Consul K8s CLI installs Consul into the consul namespace.
Refer to the Consul K8S CLI documentation to learn more about additional settings.
Add the HashiCorp Helm Chart repository.
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
Install Consul to your Kubernetes cluster with the Helm chart. Notice that this command installs Consul into the consul namespace.
$ helm install --values helm/values-v1.yaml consul hashicorp/consul --create-namespace --namespace consul --version "1.2.0"
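Optionally, list the release and check the pods in the consul namespace before continuing.
$ helm list --namespace consul
$ kubectl get pods --namespace consul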
Review Consul server configuration
You will now review the Helm values file used to deploy a Consul datacenter in your Kubernetes cluster with either the consul-k8s CLI or Helm installation method.
To deploy Consul on Kubernetes, go directly to Deploy Consul.
Review helm/values-v1.yaml. This file defines the Consul datacenter you will deploy to Kubernetes. Review the comments in the file for an explanation of each parameter.
# Contains values that affect multiple components of the chart.
global:
  # The main enabled/disabled setting.
  # If true, servers, clients, Consul DNS and the Consul UI will be enabled.
  enabled: true
  # The prefix used for all resources created in the Helm chart.
  name: consul
  # The consul image version.
  image: hashicorp/consul:1.16.0
  # The name of the datacenter that the agents should register as.
  datacenter: dc1
  # Enables TLS across the cluster to verify authenticity of the Consul servers and clients.
  tls:
    enabled: true
  # Enables ACLs across the cluster to secure access to data and APIs.
  acls:
    # If true, automatically manage ACL tokens and policies for all Consul components.
    manageSystemACLs: true
  # Exposes Prometheus metrics for the Consul service mesh and sidecars.
  metrics:
    enabled: true
    # Enables Consul servers and clients metrics.
    enableAgentMetrics: true
    # Configures the retention time for metrics in Consul servers and clients.
    agentMetricsRetentionTime: "1m"
# Configures values that configure the Consul server cluster.
server:
  enabled: true
  # The number of server agents to run. This determines the fault tolerance of the cluster.
  replicas: 1
# Contains values that configure the Consul UI.
ui:
  enabled: true
  # Defines the type of service created for the Consul UI (e.g. LoadBalancer, ClusterIP, NodePort).
  # NodePort is primarily used for local deployments.
  service:
    type: NodePort
  # Enables displaying metrics in the Consul UI.
  metrics:
    enabled: true
    # The metrics provider specification.
    provider: "prometheus"
    # The URL of the prometheus metrics server.
    baseURL: http://prometheus-server.default.svc.cluster.local
# Configures and installs the automatic Consul Connect sidecar injector.
connectInject:
  enabled: true
  # Enables metrics for Consul Connect sidecars.
  metrics:
    defaultEnabled: true
For a complete list of Helm chart parameters and configuration, refer to the Consul Helm chart documentation.
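To compare these overrides against the chart defaults, you can optionally print the default values for the pinned chart version. This assumes the hashicorp Helm repository has been added, as shown in the Helm installation instructions below.
$ helm show values hashicorp/consul --version "1.2.0"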
Deploy Consul datacenter
Deploy a Consul datacenter to your Kubernetes environment with the Consul K8S CLI or Helm.
Install Consul to your Kubernetes cluster with the Consul K8S CLI. Confirm the run by entering y.
$ consul-k8s install -config-file=helm/values-v1.yaml
## ...
Proceed with installation? (y/N) y
==> Installing Consul
## ...
✓ Consul installed in namespace "consul".
Notice that the Consul K8s CLI installs Consul into the consul namespace.
Refer to the Consul K8S CLI documentation to learn more about additional settings.
Add the HashiCorp Helm Chart repository.
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
Install Consul to your Kubernetes cluster with the Helm chart. Notice that this command installs Consul into the consul namespace.
$ helm install --values helm/values-v1.yaml consul hashicorp/consul --create-namespace --namespace consul --version "1.2.0"
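Optionally, wait for the Consul pods in the consul namespace to report Running before configuring your CLI in the next section.
$ kubectl get pods --namespace consul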
Configure your CLI to interact with Consul datacenter
In this section, you will set environment variables in your terminal so your Consul CLI can interact with your Consul datacenter. The Consul CLI reads these environment variables for behavior defaults and will reference these values when you run consul commands.
Tokens are artifacts in the ACL system used to authenticate users, services, and Consul agents. Since ACLs are enabled in this Consul datacenter, entities requesting access to a resource must include a token that is linked with a policy, service identity, or node identity that grants permission to the resource. The ACL system checks the token and grants or denies access to resources based on the associated permissions. A bootstrap token has unrestricted privileges to all resources and APIs.
Retrieve the ACL bootstrap token from the respective Kubernetes secret and set it as an environment variable.
$ export CONSUL_HTTP_TOKEN=$(kubectl get --namespace consul secrets/consul-bootstrap-acl-token --template={{.data.token}} | base64 -d)
Set the Consul destination address. By default, Consul runs on port 8500 for HTTP and 8501 for HTTPS.
$ export CONSUL_HTTP_ADDR=https://$(kubectl get services/consul-ui --namespace consul -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
Remove SSL verification checks to simplify communication to your Consul datacenter.
$ export CONSUL_HTTP_SSL_VERIFY=false
Note: In a production environment, we recommend keeping SSL verification set to true. Only remove this verification if your Consul datacenter does not have TLS configured, and only for development and demonstration purposes.
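As an optional sanity check, you can confirm that these environment variables point at a reachable Consul API by requesting the status endpoint, which returns the address of the current cluster leader.
$ curl -k --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" $CONSUL_HTTP_ADDR/v1/status/leader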
In this section, you will set environment variables in your terminal so your Consul CLI can interact with your Consul datacenter. The Consul CLI reads these environment variables for behavior defaults and will reference these values when you run consul commands.
Tokens are artifacts in the ACL system used to authenticate users, services, and Consul agents. Since ACLs are enabled in this Consul datacenter, entities requesting access to a resource must include a token that is linked with a policy, service identity, or node identity that grants permission to the resource. The ACL system checks the token and grants or denies access to resources based on the associated permissions. A bootstrap token has unrestricted privileges to all resources and APIs.
Retrieve the ACL bootstrap token from the respective Kubernetes secret and set it as an environment variable.
$ export CONSUL_HTTP_TOKEN=$(kubectl get --namespace consul secrets/consul-bootstrap-acl-token --template={{.data.token}} | base64 -d)
Set the Consul destination address. By default, Consul runs on port 8500 for HTTP and 8501 for HTTPS.
$ export CONSUL_HTTP_ADDR=https://$(kubectl get services/consul-ui --namespace consul -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
Remove SSL verification checks to simplify communication to your Consul datacenter.
$ export CONSUL_HTTP_SSL_VERIFY=false
Note: In a production environment, we recommend keeping SSL verification set to true. Only remove this verification if your Consul datacenter does not have TLS configured, and only for development and demonstration purposes.
In this section, you will set environment variables in your terminal so your Consul CLI can interact with your Consul datacenter. The Consul CLI reads these environment variables for behavior defaults and will reference these values when you run consul commands.
Tokens are artifacts in the ACL system used to authenticate users, services, and Consul agents. Since ACLs are enabled in this Consul datacenter, entities requesting access to a resource must include a token that is linked with a policy, service identity, or node identity that grants permission to the resource. The ACL system checks the token and grants or denies access to resources based on the associated permissions. A bootstrap token has unrestricted privileges to all resources and APIs.
Retrieve the ACL bootstrap token from the respective Kubernetes secret and set it as an environment variable.
$ export CONSUL_HTTP_TOKEN=$(kubectl get --namespace consul secrets/consul-bootstrap-acl-token --template={{.data.token}} | base64 -d)
Set the Consul destination address. By default, Consul runs on port 8500 for HTTP and 8501 for HTTPS.
$ export CONSUL_HTTP_ADDR=https://127.0.0.1:8501
Remove SSL verification checks to simplify communication to your Consul datacenter.
$ export CONSUL_HTTP_SSL_VERIFY=false
Note: In a production environment, we recommend keeping SSL verification set to true. Only remove this verification if your Consul datacenter does not have TLS configured, and only for development and demonstration purposes.
View Consul services
In this section, you will view your Consul services with the CLI, UI, and/or API to explore the details of your service mesh.
In your terminal, run the CLI command consul catalog services to return the list of services registered in Consul. Notice this returns only the consul service since it is the only running service in your Consul datacenter.
$ consul catalog services
consul
Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running.
Run the CLI command consul members to return the list of Consul agents in your environment.
$ consul members
Node Address Status Type Build Protocol DC Partition Segment
consul-server-0 10.0.4.13:8301 alive server 1.16.0 2 dc1 default <all>
consul-server-1 10.0.6.43:8301 alive server 1.16.0 2 dc1 default <all>
consul-server-2 10.0.5.64:8301 alive server 1.16.0 2 dc1 default <all>
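Since this datacenter runs three server agents, you can optionally inspect the Raft peer set as well to confirm that one server is the leader and the others are followers. This command uses the same CONSUL_HTTP_ADDR and CONSUL_HTTP_TOKEN environment variables you exported earlier.
$ consul operator raft list-peers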
Output the Consul UI URL to your terminal and paste it into your browser to open the Consul UI. Since this environment uses a self-signed TLS certificate for its resources, click to proceed through the certificate warnings.
$ echo $CONSUL_HTTP_ADDR
https://a55925452f7214a1cad0a0d564ae1872-646141274.us-west-2.elb.amazonaws.com
Output the token value to your terminal and copy the value to your clipboard. You will use this ACL token to authenticate in the Consul UI.
$ echo $CONSUL_HTTP_TOKEN
fe0dd5c3-f2e1-81e8-cde8-49d26cee5efc
On the left navigation pane, click Services to review your deployed services. At this time, you will only find the consul service.
By default, the anonymous ACL policy allows you to view the contents of Consul services, nodes, and intentions. To make changes and see more details within the Consul UI, click Log In in the top right and insert your bootstrap ACL token.
After successfully authenticating with your ACL token, you are now able to view additional Consul components and make changes in the UI. Notice you can view and manage more options under the Access Controls section on the left navigation pane.
On the left navigation pane, click on Nodes.
Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running.
In your terminal, view the list of services registered in Consul.
$ curl -k \
--header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
$CONSUL_HTTP_ADDR/v1/catalog/services
Sample output:
{"consul":[]}
{"consul":[]}
Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running.
View the list of server and client Consul agents in your environment.
$ curl -k \
--header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
$CONSUL_HTTP_ADDR/v1/agent/members
Sample output:
[{"Name":"consul-server-0","Addr":"10.244.0.24","Port":8301,"Tags":{"acls":"1","bootstrap":"1","build":"1.16.0:c6d0f9ec","dc":"dc1","ft_fs":"1","ft_si":"1","grpc_port":"8503","id":"9da2304b-3829-4af8-7256-bc240d57d42b","port":"8300","raft_vsn":"3","role":"consul","segment":"","use_tls":"1","vsn":"2","vsn_max":"3","vsn_min":"2","wan_join_port":"8302"},"Status":1,"ProtocolMin":1,"ProtocolMax":5,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":5,"DelegateCur":4}]
[{"Name":"consul-server-0","Addr":"10.244.0.24","Port":8301,"Tags":{"acls":"1","bootstrap":"1","build":"1.16.0:c6d0f9ec","dc":"dc1","ft_fs":"1","ft_si":"1","grpc_port":"8503","id":"9da2304b-3829-4af8-7256-bc240d57d42b","port":"8300","raft_vsn":"3","role":"consul","segment":"","use_tls":"1","vsn":"2","vsn_max":"3","vsn_min":"2","wan_join_port":"8302"},"Status":1,"ProtocolMin":1,"ProtocolMax":5,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":5,"DelegateCur":4}]
All services listed in your Consul catalog are empowered with Consul's service discovery capabilities that simplify scalability challenges and improve application resiliency. Review the Service Discovery overview page to learn more.
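As a further example of the catalog API, you can optionally query an individual service entry, which returns the nodes and addresses that provide it. At this point the only registered service is consul.
$ curl -k --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" $CONSUL_HTTP_ADDR/v1/catalog/service/consul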
In this section, you will view your Consul services with the CLI, UI, and/or API to explore the details of your service mesh.
In your terminal, run the CLI command consul catalog services to return the list of services registered in Consul. Notice this returns only the consul service since it is the only running service in your Consul datacenter.
$ consul catalog services
consul
Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running.
Run the CLI command consul members to return the list of Consul agents in your environment.
$ consul members
Node Address Status Type Build Protocol DC Partition Segment
consul-server-0 10.0.4.13:8301 alive server 1.16.0 2 dc1 default <all>
consul-server-1 10.0.6.43:8301 alive server 1.16.0 2 dc1 default <all>
consul-server-2 10.0.5.64:8301 alive server 1.16.0 2 dc1 default <all>
Output the Consul UI URL to your terminal and paste it into your browser to open the Consul UI. Since this environment uses a self-signed TLS certificate for its resources, click to proceed through the certificate warnings.
$ echo $CONSUL_HTTP_ADDR
https://a55925452f7214a1cad0a0d564ae1872-646141274.us-west.azure.com
Output the token value to your terminal and copy the value to your clipboard. You will use this ACL token to authenticate in the Consul UI.
$ echo $CONSUL_HTTP_TOKEN
fe0dd5c3-f2e1-81e8-cde8-49d26cee5efc
On the left navigation pane, click Services to review your deployed services. At this time, you will only find the consul service.
By default, the anonymous ACL policy allows you to view the contents of Consul services, nodes, and intentions. To make changes and see more details within the Consul UI, click Log In in the top right and insert your bootstrap ACL token.
After successfully authenticating with your ACL token, you are now able to view additional Consul components and make changes in the UI. Notice you can view and manage more options under the Access Controls section on the left navigation pane.
On the left navigation pane, click on Nodes.
Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running.
In your terminal, view the list of services registered in Consul.
$ curl -k \
--header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
$CONSUL_HTTP_ADDR/v1/catalog/services
Sample output:
{"consul":[]}
{"consul":[]}
Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running.
View the list of server and client Consul agents in your environment.
$ curl -k \
--header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
$CONSUL_HTTP_ADDR/v1/agent/members
Sample output:
[{"Name":"consul-server-0","Addr":"10.244.0.24","Port":8301,"Tags":{"acls":"1","bootstrap":"1","build":"1.16.0:c6d0f9ec","dc":"dc1","ft_fs":"1","ft_si":"1","grpc_port":"8503","id":"9da2304b-3829-4af8-7256-bc240d57d42b","port":"8300","raft_vsn":"3","role":"consul","segment":"","use_tls":"1","vsn":"2","vsn_max":"3","vsn_min":"2","wan_join_port":"8302"},"Status":1,"ProtocolMin":1,"ProtocolMax":5,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":5,"DelegateCur":4}]
[{"Name":"consul-server-0","Addr":"10.244.0.24","Port":8301,"Tags":{"acls":"1","bootstrap":"1","build":"1.16.0:c6d0f9ec","dc":"dc1","ft_fs":"1","ft_si":"1","grpc_port":"8503","id":"9da2304b-3829-4af8-7256-bc240d57d42b","port":"8300","raft_vsn":"3","role":"consul","segment":"","use_tls":"1","vsn":"2","vsn_max":"3","vsn_min":"2","wan_join_port":"8302"},"Status":1,"ProtocolMin":1,"ProtocolMax":5,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":5,"DelegateCur":4}]
All services listed in your Consul catalog are empowered with Consul's service discovery capabilities that simplify scalability challenges and improve application resiliency. Review the Service Discovery overview page to learn more.
In this section, you will view your Consul services with the CLI, UI, and/or API to explore the details of your service mesh.
Open a separate terminal window and expose the Consul server with kubectl port-forward using the consul-ui service name as the target.
$ kubectl port-forward svc/consul-ui --namespace consul 8501:443
In your original terminal, run the CLI command consul catalog services to return the list of services registered in Consul. Notice this returns only the consul service since it is the only running service in your Consul datacenter.
$ consul catalog services
consul
Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running.
Run the CLI command consul members to return the list of Consul agents in your environment.
$ consul members
Node Address Status Type Build Protocol DC Partition Segment
consul-server-0 10.0.4.13:8301 alive server 1.16.0 2 dc1 default <all>
Output the token value to your terminal and copy the value to your clipboard. You will use this ACL token to authenticate in the Consul UI.
$ echo $CONSUL_HTTP_TOKEN
fe0dd5c3-f2e1-81e8-cde8-49d26cee5efc
Open a separate terminal window and expose the Consul UI with kubectl port-forward using the consul-ui service name as the target.
$ kubectl port-forward svc/consul-ui --namespace consul 8501:443
Open https://localhost:8501 in your browser to find the Consul UI. Since this environment uses a self-signed TLS certificate for its resources, click to proceed through the certificate warnings.
On the left navigation pane, click Services to review your deployed services. At this time, you will only find the consul service.
By default, the anonymous ACL policy allows you to view the contents of Consul services, nodes, and intentions. To make changes and see more details within the Consul UI, click Log In in the top right and insert your bootstrap ACL token.
After successfully authenticating with your ACL token, you are now able to view additional Consul components and make changes in the UI. Notice you can view and manage more options under the Access Controls section on the left navigation pane.
On the left navigation pane, click on Nodes.
Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running.
Open a separate terminal window and expose the Consul server with kubectl port-forward using the consul-ui service name as the target.
$ kubectl port-forward svc/consul-ui --namespace consul 8501:443
In your original terminal, view the list of services registered in Consul.
$ curl -k \
--header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
$CONSUL_HTTP_ADDR/v1/catalog/services
Sample output:
{"consul":[]}
{"consul":[]}
Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running.
View the list of server and client Consul agents in your environment.
$ curl -k \
--header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
$CONSUL_HTTP_ADDR/v1/agent/members
Sample output:
[{"Name":"consul-server-0","Addr":"10.244.0.24","Port":8301,"Tags":{"acls":"1","bootstrap":"1","build":"1.16.0:c6d0f9ec","dc":"dc1","ft_fs":"1","ft_si":"1","grpc_port":"8503","id":"9da2304b-3829-4af8-7256-bc240d57d42b","port":"8300","raft_vsn":"3","role":"consul","segment":"","use_tls":"1","vsn":"2","vsn_max":"3","vsn_min":"2","wan_join_port":"8302"},"Status":1,"ProtocolMin":1,"ProtocolMax":5,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":5,"DelegateCur":4}]
[{"Name":"consul-server-0","Addr":"10.244.0.24","Port":8301,"Tags":{"acls":"1","bootstrap":"1","build":"1.16.0:c6d0f9ec","dc":"dc1","ft_fs":"1","ft_si":"1","grpc_port":"8503","id":"9da2304b-3829-4af8-7256-bc240d57d42b","port":"8300","raft_vsn":"3","role":"consul","segment":"","use_tls":"1","vsn":"2","vsn_max":"3","vsn_min":"2","wan_join_port":"8302"},"Status":1,"ProtocolMin":1,"ProtocolMax":5,"ProtocolCur":2,"DelegateMin":2,"DelegateMax":5,"DelegateCur":4}]
All services listed in your Consul catalog are empowered with Consul's service discovery capabilities that simplify scalability challenges and improve application resiliency. Review the Service Discovery overview page to learn more.
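Optionally, you can also list the catalog nodes over the API; with the single-server kind deployment this should return only consul-server-0.
$ curl -k --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" $CONSUL_HTTP_ADDR/v1/catalog/nodes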
Next steps
In this tutorial, you integrated Consul into your Kubernetes environment. After deploying Consul, you interacted with Consul using the CLI, UI, and API.
In the next tutorial, you will deploy HashiCups, a demo application, onto your Kubernetes cluster to explore how to use Consul service mesh for service-to-service traffic management.
For more information about the topics covered in this tutorial, refer to the following resources: