Google Kubernetes Engine

  • Author: Ronald Fung

  • Creation Date: 8 June 2023

  • Next Modified Date: 8 June 2024


A. Introduction

Google Kubernetes Engine (GKE) is a managed Kubernetes service that you can use to deploy and operate containerized applications at scale on Google’s infrastructure. This page is intended for platform administrators who are looking for a scalable, automated, managed Kubernetes solution and who are already familiar with Kubernetes concepts.

GKE is a Google-managed implementation of the Kubernetes open source container orchestration platform. Kubernetes was developed by Google, drawing on years of experience operating production workloads at scale on Borg, Google’s in-house cluster management system.


B. How it is used at Seagen

Seagen can use Google Kubernetes Engine (GKE) to deploy, manage, and scale their containerized applications on Google’s infrastructure. Here are some steps to get started with GKE:

  1. Create a Google Cloud account: Seagen can create a Google Cloud account in the Google Cloud Console. This will give them access to GKE and other Google Cloud services.

  2. Create a project: Seagen can create a new project in the Google Cloud Console. The project will be associated with a Google Cloud billing account and can be used to manage GKE clusters and other resources.

  3. Create a GKE cluster: Seagen can create a GKE cluster in the Google Cloud Console or with the gcloud CLI (see the sketch after this list). They can specify the machine type, the number of nodes per zone, and a variety of other settings, such as auto-scaling and node pools.

  4. Deploy applications: Seagen can deploy their containerized applications on the GKE cluster. They can use Kubernetes YAML manifests to define their deployments, services, and other resources, or Helm charts to package and manage their applications.

  5. Scale resources: Seagen can scale their resources as needed using GKE’s auto-scaling and load balancing features. This allows them to easily manage their applications and resources, while ensuring high availability and performance.

  6. Monitor performance: Seagen can monitor the performance of their GKE cluster using the Google Cloud Console. They can use the GKE dashboard to view metrics such as CPU utilization, memory usage, and network traffic.
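
As a minimal sketch of steps 3 and 4, assuming the gcloud CLI and kubectl are installed and authenticated, the commands below create a small regional cluster and deploy a sample application. The cluster name, region, machine type, and node counts are illustrative placeholders rather than Seagen standards, and the container image is the sample used in Google’s GKE quickstarts; a real deployment would normally be defined in a YAML manifest and applied with kubectl apply -f.

    # Create a small regional GKE cluster (name, region, and sizes are illustrative).
    gcloud container clusters create demo-cluster \
        --region us-central1 \
        --machine-type e2-standard-4 \
        --num-nodes 1 \
        --enable-autoscaling --min-nodes 1 --max-nodes 3

    # Fetch credentials so kubectl can talk to the new cluster.
    gcloud container clusters get-credentials demo-cluster --region us-central1

    # Deploy two replicas of Google's public hello-app sample image.
    kubectl create deployment hello-app \
        --image=us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0 \
        --replicas=2 --port=8080

    # Expose the deployment through a Google Cloud load balancer.
    kubectl expose deployment hello-app --type=LoadBalancer --port=80 --target-port=8080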

Overall, by using GKE, Seagen can deploy and manage their containerized applications in the cloud with minimal operational overhead. With its scalable infrastructure, high availability, and flexible pricing, GKE is an excellent choice for running containerized workloads.


C. Features

Google Kubernetes Engine (GKE) is a powerful container orchestration platform that allows users to deploy, manage, and scale their containerized applications on Google’s infrastructure. Some of the key features of GKE include:

  1. Scalable infrastructure: GKE provides a scalable infrastructure, letting users add and remove resources as needed and scale clusters up or down to match application demand.

  2. Kubernetes compatibility: GKE is fully compatible with Kubernetes, the popular open-source container orchestration platform. This means that users can use Kubernetes tools and APIs to manage their GKE clusters and applications.

  3. Automated operations: GKE provides users with automated operations, such as automatic upgrades and repairs, that help ensure that their clusters are always running the latest versions of Kubernetes and other components.

  4. High availability: GKE is designed to be highly available, with built-in redundancy and failover capabilities. This ensures that users’ applications and workloads are always available, even in the event of hardware failures or other issues.

  5. Security: GKE provides users with a secure infrastructure, with features such as firewalls, private networking, and encryption. This ensures that users’ data and applications are protected from unauthorized access and other security threats.

  6. Auto-scaling and load balancing: GKE’s auto-scaling and load balancing features make it easy to manage resources while maintaining availability and performance. Clusters can be configured to scale up or down automatically based on demand, and load balancing distributes traffic across multiple instances (see the autoscaling sketch after this list).

  7. Flexible pricing: GKE provides users with flexible pricing options, including per-second billing and sustained use discounts. This allows users to pay only for the resources they use, and to take advantage of cost savings for long-running workloads.
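
As a concrete example of feature 6, the sketch below enables horizontal pod autoscaling for the hypothetical hello-app deployment from section B, scaling between 2 and 10 replicas based on average CPU utilization; the names and thresholds are illustrative.

    # Scale hello-app between 2 and 10 replicas, targeting 60% average CPU.
    kubectl autoscale deployment hello-app --cpu-percent=60 --min=2 --max=10

    # Inspect the resulting HorizontalPodAutoscaler and its current state.
    kubectl get hpa hello-app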

Overall, GKE provides users with a powerful set of features to deploy, manage, and scale their containerized applications efficiently and cost-effectively in the cloud.


D. Where Implemented

LeanIX


E. How it is tested

Testing Google Kubernetes Engine (GKE) involves ensuring that the containers and applications are running correctly and efficiently on the GKE cluster. Here are some steps to test GKE:

  1. Create a test environment: Create a test environment that mimics the production environment as closely as possible. This includes creating test data, configuring GKE clusters, and setting up test infrastructure.

  2. Create a GKE cluster: Create a GKE cluster in the test environment and configure the cluster with the appropriate settings, including node size, number of nodes, and networking.

  3. Deploy applications: Deploy containerized applications on the GKE cluster using Kubernetes YAML files or Helm charts. Ensure that the applications are running correctly and that data is being processed correctly.

  4. Test scalability: Test the scalability of the GKE cluster by simulating high traffic and load on the application. Use GKE’s auto-scaling and load balancing features to scale the resources up or down based on demand (see the sketch after this list).

  5. Test high availability: Test the high availability of the GKE cluster by simulating hardware failures or other issues. Verify that the application continues to run correctly and that users can access the application without interruption.

  6. Monitor performance: Monitor the performance of the GKE cluster using GKE’s monitoring and logging features. Analyze the data to identify any performance issues or bottlenecks and optimize the cluster accordingly.
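
As a rough sketch of steps 4 and 5, assuming the hello-app deployment and service from section B, the commands below generate sustained load and then simulate a node failure by draining a node. Here <node-name> is a placeholder for a node listed by kubectl get nodes, and a dedicated load-testing tool would normally replace the busybox loop.

    # Step 4: generate sustained HTTP load against the service from inside the cluster.
    kubectl run load-generator --image=busybox:1.36 --restart=Never -- \
        /bin/sh -c "while true; do wget -q -O- http://hello-app; done"

    # Watch the autoscaler react to the load.
    kubectl get hpa hello-app --watch

    # Step 5: simulate a node failure by draining one node, then verify that
    # pods are rescheduled and the service stays reachable.
    kubectl get nodes
    kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
    kubectl get pods -o wide

    # Return the node to service afterwards.
    kubectl uncordon <node-name>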

Overall, testing GKE involves creating a test environment, creating a GKE cluster, deploying applications, testing scalability and high availability, and monitoring performance. By thoroughly testing GKE, users can ensure that their applications and workloads are running correctly and efficiently on Google’s infrastructure.


F. 2023 Roadmap

????


G. 2024 Roadmap

????


H. Known Issues

While Google Kubernetes Engine (GKE) is generally a stable and reliable container orchestration platform, there are some known issues that users may encounter. Here are some of the known issues for GKE:

  1. Upgrade issues: Upgrading a GKE cluster to a new Kubernetes version can fail or break workloads, for example when applications depend on deprecated or removed Kubernetes APIs or when there are problems with the underlying infrastructure (the sketch after this list shows a few starting-point diagnostic commands).

  2. Node pool issues: Node pools can fail to scale or be deleted cleanly, typically because the node pool is misconfigured or the underlying infrastructure has problems.

  3. Networking issues: Load balancing or pod-to-pod communication can fail if the network configuration is incorrect or if there are problems in the underlying networking infrastructure.

  4. Storage issues: Persistent volumes can fail to provision or attach if the storage configuration is incorrect or if the underlying storage infrastructure has problems.

  5. Service mesh issues: Service mesh components such as Istio can misbehave if the mesh is configured incorrectly or if there are problems with the underlying infrastructure.
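
When troubleshooting the issues above, a few read-only commands are a useful starting point. This is a minimal sketch; the cluster name and region are the same illustrative placeholders used earlier.

    # Issue 1: check the cluster's current versions and the versions available.
    gcloud container clusters describe demo-cluster --region us-central1 \
        --format="value(currentMasterVersion,currentNodeVersion)"
    gcloud container get-server-config --region us-central1

    # Issues 1-2: list recent cluster operations such as upgrades and repairs.
    gcloud container operations list --region us-central1

    # Issues 2-4: surface scheduling, networking, and storage events and node health.
    kubectl get events --all-namespaces --sort-by=.lastTimestamp
    kubectl describe nodes | grep -A8 Conditions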

Overall, while these issues may affect some users, GKE remains a powerful and flexible container orchestration platform for deploying, managing, and scaling containerized applications in the cloud. By monitoring their GKE clusters and reviewing usage reports and logs, users can keep their GKE resources secure and accessible and pay only for the resources they use. Users can also contact Google Cloud support for help with known issues or other technical challenges.


[x] Reviewed by Enterprise Architecture

[x] Reviewed by Application Development

[x] Reviewed by Data Architecture