Creating a Kubernetes cluster on GCP


Did you know that Google starts 2 billion (yes, BILLION) containers a week? Everything at Google runs in a container!
How do they do it? Well, a big part of Google's success with containers is Kubernetes. In this post we will go through the steps for creating a Kubernetes cluster on GCP (Google Cloud Platform).

Kubernetes (K8s) is an open-source project that Google released in June 2014, as part of an effort to share its own infrastructure and technology advantage with the community at large.

We are going to run the installation script from a Linux machine running Ubuntu 18.04. You will also need an account on Google Cloud Platform (GCP).

How to create a Kubernetes cluster on GCP

Updating packages

First, let’s make sure that our environment is properly set up before we install Kubernetes. Start by updating packages:

sudo apt-get update

Install Python and curl if they are not present:

sudo apt-get install python
sudo apt-get install curl

Install the gcloud SDK:

 curl | bash

Configure your Google Cloud Platform (GCP) account information. Run the command below to open a browser where we can log in to our Google Cloud account and authorize the SDK:

 gcloud auth login

If you have problems with the login, or want to use another browser, you can optionally pass the --no-launch-browser flag. Copy and paste the URL into the machine and/or browser of your choice, log in with your Google Cloud credentials, and click Allow on the permissions page. Finally, you will receive an authorization code that you can copy and paste back into the shell where the prompt is waiting.

OK, now that gcloud is set up, we can see our current default project by running:

gcloud config list project

Now that we have our environment set up, we can install the latest Kubernetes version, using GCP as the provider, by running this simple command:

curl -sS | bash

Install the gcloud components called alpha and beta

Then we need to install the gcloud components called alpha and beta with these commands:

gcloud components install alpha
gcloud components install beta

Create a project for our Kubernetes deployment

Now we are going to create a project for our Kubernetes deployment. Project IDs need to be unique across the whole Google Cloud platform, so we recommend something related to your account or something that you know is likely to be unique. We will use ‘kubernetesaustraltech’, so we run the following command:

gcloud projects create kubernetesaustraltech
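Besides being globally unique, project IDs have to follow GCP's documented format rules: 6 to 30 characters, lowercase letters, digits, and hyphens only, starting with a letter and not ending with a hyphen. A quick sketch to sanity-check a candidate ID before calling gcloud (this can only verify the format, not uniqueness):

```python
import re

# GCP project ID format: 6-30 chars, lowercase letters, digits, hyphens,
# must start with a letter and must not end with a hyphen. Uniqueness can
# only be checked by actually running `gcloud projects create`.
PROJECT_ID_RE = re.compile(r"^[a-z][a-z0-9-]{4,28}[a-z0-9]$")

def is_valid_project_id(project_id: str) -> bool:
    return bool(PROJECT_ID_RE.match(project_id))

print(is_valid_project_id("kubernetesaustraltech"))  # True
print(is_valid_project_id("Bad_ID"))                 # False
```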

And then we can set gcloud to use this project with the set command:

gcloud config set project kubernetesaustraltech

Now open a browser, go to the Google Cloud Console, select your project, and link your billing account to the project.


Install Kubernetes using GCP

You can finally launch the kube-up script to install Kubernetes on GCP:


You may get a message like the following during the installation:

API [] not enabled on project [835711932246]. 
 Would you like to enable and retry (this will take a few minutes)? 

If that is the case, just type ‘y’.

The setup script will continue running, configuring everything for you. Let's look at a few lines of the setup process:

… Starting cluster in us-central1-b using provider gce
 … calling verify-prereqs
 … calling verify-kube-binaries
 … calling verify-release-tars
 … calling kube-up
 Project: kubernetesaustraltech
 Network Project: kubernetesaustraltech
 Zone: us-central1-b

The preceding output shows the prerequisite checks, which also make sure that all components are up to date. This part is specific to each provider. In the case of GCE, it checks that the SDK is installed and that all components are up to date; if not, you will see a prompt at this point to install or update.
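Conceptually, a verify-prereqs-style step boils down to checking that the required tools are on the PATH. The sketch below is an illustration of that idea, not the actual kube-up code:

```shell
#!/bin/sh
# Illustration only -- not the real verify-prereqs from kube-up.
# Report any required tool that is not on the PATH.
check_prereqs() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing prerequisite: $tool" >&2
      missing=1
    fi
  done
  return "$missing"
}

# For a GCE deployment the script needs (at least) these:
check_prereqs curl python gcloud || echo "install the missing tools before running kube-up"
```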

Creating gs://kubernetes-staging-605a3e8716
 Creating gs://kubernetes-staging-605a3e8716/…
 +++ Staging tars to Google Storage: gs://kubernetes-staging-605a3e8716/kubernetes-devel
 +++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = b0360428f1ab8fdd8f7e423f280b142b11a748ab)
 +++ kubernetes-manifests.tar.gz uploaded (sha1 = dac5d56c0b3242d264cc315361dc28c20484323c)
 API [] not enabled on project [835711932246]. 

Now the script is bringing up the cluster. Again, this is specific to the provider. For GCE, it first checks to make sure that the SDK is configured with a default project and zone; if they are set, you'll see those in the output.
Next, it uploads the server binaries to Google Cloud Storage, as seen in the Creating gs:// … lines.
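Notice that the staging step prints a sha1 for each uploaded tarball. That digest can be reproduced locally to cross-check an upload; here is a small sketch (the file name in the comment is just the one from the transcript above):

```python
import hashlib

# Compute the sha1 of a file in chunks, so large release tarballs
# (e.g. kubernetes-server-linux-amd64.tar.gz) don't need to fit in memory.
def sha1_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

The hexdigest it returns should match the `sha1 = …` value printed for the same tarball.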

Looking for already existing resources
 Found existing network default in AUTO mode.
 Creating firewall…
 …..Creating firewall…
 …IP aliases are disabled.
 ………Creating firewall…
 ….Found subnet for region us-central1 in network default: default
 Starting master and configuring firewalls
 …………Creating firewall…

Create the cluster

It then checks for any pieces of a cluster that are already running. Then we finally start creating the cluster. In the output we see it creating the master server, its IP address, and the appropriate firewall configurations for the cluster.

Group is stable
 NODE_NAMES=kubernetes-minion-group-9xrz kubernetes-minion-group-ck7h kubernetes-minion-group-hmqb
 Trying to find master named 'kubernetes-master'
 Looking for address 'kubernetes-master-ip'
 Using master: kubernetes-master (external IP:
 Waiting up to 300 seconds for cluster initialization.

This will continually check whether the Kubernetes API is reachable. It may time out if there was some uncaught error during startup.

Kubernetes cluster created.
Cluster "kubernetesaustraltech_kubernetes" set.
User "kubernetesaustraltech_kubernetes" set.
Context "kubernetesaustraltech_kubernetes" created.
Switched to context "kubernetesaustraltech_kubernetes".
User "kubernetesaustraltech_kubernetes-basic-auth" set.
Wrote config for kubernetesaustraltech_kubernetes to /home/ubuntu/.kube/config

Kubernetes cluster is running.  The master is running at:

The user name and password to use is located in /home/ubuntu/.kube/config.
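Those basic-auth credentials sit under a users entry in the kubeconfig file. A small sketch that pulls them out with a plain regex rather than a YAML parser, so it assumes the simple one-credential layout that kube-up writes:

```python
import re

# Extract the basic-auth username/password that kube-up wrote into
# ~/.kube/config. Regex-based on purpose: only handles the simple
# single-credential layout, not arbitrary kubeconfig files.
def basic_auth_from_kubeconfig(text: str):
    user = re.search(r"^\s*username:\s*(\S+)", text, re.MULTILINE)
    pwd = re.search(r"^\s*password:\s*(\S+)", text, re.MULTILINE)
    if user and pwd:
        return user.group(1), pwd.group(1)
    return None

# Usage: basic_auth_from_kubeconfig(open("/home/ubuntu/.kube/config").read())
```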

Create the nodes for our cluster

Finally, it creates the nodes for our cluster. This is where our container workloads will actually run. It will continually loop and wait while all the nodes start up.

... calling validate-cluster
Validating gce cluster, MULTIZONE=
Project: kubernetesaustraltech
Network Project: kubernetesaustraltech
Zone: us-central1-b
No resources found.
Waiting for 4 ready nodes. 0 ready nodes, 0 registered. Retrying.
No resources found.
Waiting for 4 ready nodes. 0 ready nodes, 0 registered. Retrying.
Found 4 node(s).
NAME                           STATUS                     ROLES    AGE   VERSION
kubernetes-master              Ready,SchedulingDisabled   <none>   8s    v1.14.1
kubernetes-minion-group-9xrz   Ready                      <none>   10s   v1.14.1
kubernetes-minion-group-ck7h   Ready                      <none>   10s   v1.14.1
kubernetes-minion-group-hmqb   Ready                      <none>   11s   v1.14.1
Validate output:
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
Cluster validation succeeded

Then, the script validates the cluster. At this point, we are no longer running provider-specific code: the validation step queries the cluster via kubectl, the central tool for managing our cluster. In this case, it checks the number of minions found, registered, and in a ready state, looping to give the cluster up to 10 minutes to finish initialization. After a successful startup, a summary of the minions and the cluster component health is printed to the screen:
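The counting part of that validation is straightforward: parse the tabular node listing and count how many report Ready, remembering that a node's STATUS column can carry extra comma-separated flags such as SchedulingDisabled. A sketch of that logic (an illustration, not the actual validate-cluster code):

```python
# Count Ready nodes in `kubectl get nodes` output. The STATUS column can
# hold several comma-separated flags (e.g. "Ready,SchedulingDisabled"),
# so we split on commas instead of comparing the whole column.
def count_ready_nodes(kubectl_output: str) -> int:
    ready = 0
    for line in kubectl_output.strip().splitlines()[1:]:  # skip header row
        columns = line.split()
        if len(columns) >= 2 and "Ready" in columns[1].split(","):
            ready += 1
    return ready
```

On the listing above, the master (Ready,SchedulingDisabled) and the three minions would all count as ready.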

Done, listing cluster services:

Kubernetes master is running at
GLBCDefaultBackend is running at
Heapster is running at
CoreDNS is running at
kubernetes-dashboard is running at
Metrics-server is running at

Now that everything is created, the cluster is initialized and started. Assuming that everything went well, we get an IP address for the master. Also note that the configuration, along with the cluster management credentials, is stored in /home/ubuntu/.kube/config.

Finally, a kubectl cluster-info command is run, which outputs the URL for the master services as well as DNS, UI, and monitoring.