Overview

Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. In the ever-evolving landscape of containerized applications, Kubernetes has emerged as a game-changer, providing unparalleled capabilities for managing, scaling, and ensuring resilience in modern IT infrastructures. Rolling updates are one of the deployment strategies in Kubernetes: they gradually replace instances of the old version of your application with the new one, so the application remains available throughout the update process. In this article, we delve into the intricacies of rolling updates within the Kubernetes ecosystem, emphasize the importance of graceful shutdown procedures in ensuring continuous service availability, provide a step-by-step guide on deploying them effectively, discuss their advantages and disadvantages, and conclude with insights into their significance in modern Kubernetes-based environments.

Kubernetes Rolling Updates and Deployment Strategies

Rolling Updates: Rolling updates are a cornerstone of Kubernetes, empowering organizations to update their applications gradually and precisely. They are essential to maintaining a healthy, up-to-date application infrastructure, allowing you to deploy new versions of your application with minimal downtime and minimal impact on user experience. This is particularly crucial in mission-critical environments where uninterrupted service is paramount.

Other Kubernetes Deployment Strategies

  • Recreate Deployment: Unlike rolling updates, a Recreate Deployment strategy involves terminating all existing pods of the previous version before launching pods of the new version. This approach can lead to a brief application downtime during the update but ensures a clean transition. It’s suitable for applications that can tolerate short interruptions and where consistency is crucial.
  • Blue/Green Deployment: Blue/Green Deployment involves running two identical environments, the “Blue” (current) and “Green” (new) environments. The update is performed by routing traffic from the Blue environment to the Green one once it’s deemed stable. This strategy minimizes downtime and enables easy rollbacks if issues arise, making it suitable for critical applications.
  • Canary Deployment: Canary Deployment gradually introduces the new version to a subset of users or traffic, typically starting with a small percentage. This allows for real-world testing of the update’s impact and performance. If the Canary phase goes well, more traffic will gradually be shifted to the new version. This strategy is excellent for risk mitigation and validating updates before full deployment.
  • A/B Testing: A/B Testing is similar to Canary Deployment but focuses on comparing two or more versions of an application with distinct features or changes. It allows for controlled experiments, where different user groups receive different versions, and their performance and user experience are analyzed. Based on user feedback and metrics, this strategy helps make data-driven decisions on which version to deploy.

Features of Rolling Update Strategy

Rolling Updates

Rolling updates, as the name suggests, allow you to update your application in a gradual and controlled manner. This strategy ensures that your application remains available to users throughout the update process. Here’s how rolling updates work:

  • Incremental Updates: Rolling updates in Kubernetes involve gradually replacing old instances of an application with new ones. This incremental approach ensures that only a portion of the instances is updated at any given time, which helps minimize risk by allowing you to monitor and validate the new version’s behavior before fully rolling it out.
  • Health Checks: Kubernetes offers robust health-checking mechanisms to ensure the stability of your application during updates. Before replacing an old pod with a new one, Kubernetes checks the health of the new pod. If it’s healthy, the old pod is terminated, and the process continues. If not, Kubernetes will wait until the new pod is deemed healthy, preventing potential issues.
  • Version Control: Kubernetes enables version control of your application deployments. This means you can define and manage different versions of your application, making it easy to roll back to a previous version in case of issues with the new release. This feature enhances the reliability of your deployment process.
  • Configuration Management: Rolling updates also encompass the management of configuration changes. Kubernetes lets you update configurations like environment variables or secrets alongside your application updates. This ensures consistency and accuracy throughout the deployment process.
  • Customization: You have the flexibility to customize the rolling update strategy to match your application’s specific requirements. You can control parameters like the maximum number of unavailable pods, the maximum surge of new pods, and the update strategy, making it adaptable to various scenarios.

Graceful Shutdown

Graceful shutdown is a critical aspect of rolling updates. When a pod is scheduled for termination, it must be allowed to finish processing in-flight requests and release its resources gracefully. Key features of graceful shutdown include:

  • Connection Draining: In-flight connections are allowed to complete their transactions before the pod terminates. This prevents abrupt service interruptions for clients.
  • Resource Cleanup: Resources such as files, sockets, and database connections are released properly before the pod terminates. This prevents resource leaks and ensures efficient resource utilization.
  • Grace Period: Pods are given a configurable grace period to complete their tasks before being forcefully terminated.

Steps for Deploying Using the Rolling Update Strategy

Now that we understand the features, let’s dive into the steps for deploying an application using the rolling updates strategy on Kubernetes.

Illustrating Rolling Update Deployment:

  • Start with a ReplicaSet for version one (v1), representing the old version, and a ReplicaSet for version two (v2), representing the new update.
  • Deploy the updated application, which creates a new pod under the v2 ReplicaSet.
  • Gradually remove one pod at a time from the old v1 ReplicaSet as each new v2 pod becomes ready.
  • Repeat the process until all new pods are created and all old pods are phased out in a controlled, step-by-step approach.

The diagram (Fig 1) below illustrates the rolling update deployment strategy.

Fig 1: Rolling Updates

Enabling Rolling Update Strategy via YAML File:

To accomplish a flawless upgrade without downtime, let’s implement the rolling update deployment strategy. By gracefully shutting down the services, specifically through an NGINX deployment, we can demonstrate the upgrade process.

  • Define the strategy in the standard Deployment manifest YAML file.
  • Augment the manifest with a rolling update strategy section, including parameters like maxSurge and maxUnavailable.
  • Example:
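A minimal sketch of such a manifest is shown below, assuming an NGINX deployment; the name, image tag, and replica count are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment            # illustrative name
spec:
  replicas: 4                       # illustrative replica count
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                   # at most one extra pod above the desired count during the update
      maxUnavailable: 1             # at most one pod may be unavailable at any point in the rollout
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25         # illustrative image tag; changing it triggers a rolling update
          ports:
            - containerPort: 80
```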

Implementing Readiness Probes:

To facilitate seamless rolling updates, we implement readiness probes. These probes ensure that a container is operational and ready to handle requests. If a readiness probe reports a failed state, Kubernetes removes the container’s IP address from the endpoints of all associated Services.

  • Implement TCP socket checks for health using the readinessProbe with parameters like port, periodSeconds, successThreshold, and failureThreshold.
  • When using TCP socket checks, Kubernetes attempts to open a socket to the container. The container is considered healthy if the check can establish a successful connection.
  • Example:
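A minimal sketch of a TCP-socket readiness probe, assuming the container listens on port 80; the timing and threshold values are illustrative:

```yaml
containers:
  - name: nginx
    image: nginx:1.25
    readinessProbe:
      tcpSocket:
        port: 80              # Kubernetes tries to open a TCP connection to this port
      initialDelaySeconds: 5  # wait five seconds before the first check
      periodSeconds: 5        # probe every five seconds
      successThreshold: 1     # one successful probe marks the container Ready
      failureThreshold: 3     # three consecutive failures mark the container NotReady
```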
  • Use HTTP probes to check the health of an application with parameters like path and port.
  • Example:
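A minimal sketch of an HTTP readiness probe; the /healthz path is an assumption, so substitute your application’s actual health endpoint:

```yaml
readinessProbe:
  httpGet:
    path: /healthz        # illustrative health-check path
    port: 80
  periodSeconds: 5
  successThreshold: 1
  failureThreshold: 3
```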

Enabling Graceful Shutdown of the Application

  • Execute a preStop hook during deployment to shut down the services gracefully.
  • The preStop hook is a special command or HTTP request sent to the pod’s containers before termination.
  • During the deployment process, once an old pod is being deleted, traffic should no longer be routed to it. Adding a preStop hook to the Kubernetes YAML file is recommended to ensure this: it causes the kubelet to pause after receiving the pod-deletion event, giving kube-proxy sufficient time to update the network rules before the container shutdown begins.
  • Use a sleep command within the preStop hook to pause for a specified duration (e.g., 30 seconds).
  • Example:
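A minimal sketch of a preStop hook that sleeps for 30 seconds before the container receives SIGTERM; the duration is illustrative:

```yaml
containers:
  - name: nginx
    image: nginx:1.25
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 30"]   # pause so kube-proxy can remove the pod from routing rules
```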
  • Adjust terminationGracePeriodSeconds so the pod has enough time to run the preStop hook and finish in-flight work before it is forcefully terminated.
  • Example:
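A sketch of the corresponding pod spec setting, assuming the 30-second preStop sleep above plus extra time for in-flight requests:

```yaml
spec:
  terminationGracePeriodSeconds: 45   # total time allowed for the preStop hook and shutdown before the pod is force-killed
  containers:
    - name: nginx
      image: nginx:1.25
```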

Enabling Graceful Shutdown for Spring Boot Applications:

  • Modify the Spring Boot configuration file (the YAML config file or application.properties) to include shutdown settings.
  • Example (in config file):
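A minimal sketch, assuming the configuration file is application.yml; the 30-second timeout is illustrative:

```yaml
server:
  shutdown: graceful                    # stop accepting new requests on SIGTERM and finish in-flight ones
spring:
  lifecycle:
    timeout-per-shutdown-phase: 30s     # how long to wait for in-flight requests before forcing shutdown
```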

Example (in application.properties file):

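The equivalent settings in application.properties; the timeout value is illustrative:

```properties
server.shutdown=graceful
spring.lifecycle.timeout-per-shutdown-phase=30s
```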

Spring Boot Graceful Shutdown Configuration:

  • Spring Boot stops accepting new requests upon receiving SIGTERM.
  • It completes processing of ongoing requests gracefully within the configured timeout.
  • Set the timeout to a duration appropriate for your longest-running requests.

Readiness Probe Configuration:

  • Set the readiness probe’s period to five seconds, so checks run every five seconds.
  • Use a preStop hook to gracefully shut down the component, allowing a 30-second grace period.
  • During this time, the ingress controller deregisters the endpoint before the termination signal is sent to the NGINX process.

Graceful Shutdown Time Frame:

  • The application is allowed a limited time frame for graceful shutdown.
  • In this setup, that period is set to 45 seconds.
  • Customize the duration based on your application’s needs for a smooth shutdown process.

Pod Shutdown During Deployment:

When a pod is still handling ongoing operations in Kubernetes:

  • Allow an appropriate period for the pod to shut down gracefully.
  • Ensure newly created instances take over new tasks and requests while existing instances are terminated cleanly.

Seamless Transition Between Old and New Code:

  • Maintain smooth operation without interrupting ongoing services.
  • Apply the provided manifest to trigger the deployment so the new pods take over the required ports.

Observing Pods and Deployment Status:

  • Observe an NGINX deployment with two pods in the Ready state.
  • Trigger the deployment while sending periodic requests to the target endpoints.
  • Review the access logs in the appropriate window for comprehensive monitoring.

Advantages and Disadvantages of Rolling Update Strategy

Advantages

  • Minimal Downtime: Rolling updates are designed to minimize service disruption. They gradually replace old instances with new ones, ensuring that a portion of your application remains available throughout the update process.
  • Continuous Availability: Since not all instances are updated simultaneously, the application remains available to users during the update. This is critical for services with high availability requirements.
  • Easy Rollback: If issues or errors are detected during the update, rolling back to the previous version is straightforward. You can simply pause or halt the update process.

Disadvantages

  • Slower Updates: Rolling updates are typically slower compared to other strategies like blue-green or canary deployments because they update instances one at a time.
  • Complexity in Stateful Services: For stateful applications, such as databases, rolling updates can be more complex because you need to ensure data consistency and synchronization between old and new instances.
  • Version Compatibility: Compatibility between different versions of your application can be challenging to manage, especially if there are breaking changes in the new version.

Conclusion

Rolling updates and graceful shutdown strategies are essential for maintaining the availability and reliability of containerized applications on Kubernetes. You can achieve seamless updates without downtime by incrementally replacing pods and allowing them to shut down gracefully. While rolling updates offer significant advantages such as zero downtime and fault tolerance, they require careful planning and configuration. As Kubernetes continues to evolve, mastering these techniques will ensure the smooth operation of your containerized applications in production environments.

About The Author

Hari Shankar

Senior Cloud DevOps Engineer | Cloud Control

Cloud DevOps Engineer with more than five years of experience in supporting, automating, and optimizing deployments to hybrid cloud platforms using DevOps processes, tools, CI/CD, Docker containers, and K8s in both Production and Development environments.

About Cloud Control

Cloud Control simplifies cloud management with AppZ, DataZ, and ManageZ, optimizing operations, enhancing security, and accelerating time-to-market. We help businesses achieve cloud goals efficiently and reliably.
