
Containerization and Kubernetes
This course is more than just a collection of knowledge — it’s a springboard to a successful career in Cloud Native and DevOps engineering. It will provide you with a solid foundation for working with microservice and ML systems in a cluster environment and open the door to the world of cutting‑edge technological solutions, positioning you at the forefront of modern software engineering practices.
This course will guide you from the basic principles of container virtualization to the design of full‑fledged production clusters. Step by step, you’ll master key industry technologies — from Docker and Kubernetes to Helm, CI/CD, and GitOps. You’ll learn not just to use tools, but to think like an engineer: to systematically analyze tasks, design reliable architectures, and ensure uninterrupted operation of complex software systems.
A distinctive feature of the course is the integration of large language models (LLMs) into the learning process. AI will become your intelligent assistant, helping you understand complex terms, analyze configurations, verify architectural solutions, and provide valuable recommendations for improvement. You’ll master three modes of working with LLMs: as a reference tool, the LLM enables quick information retrieval; as a personal tutor, it supports interactive learning tailored to your knowledge level; and as an expert auditor, it reviews your solutions and identifies potential risks.
The learning journey follows a clear engineering trajectory: from container to cluster, from scaling to automation, from basic setup to operational maturity. In the first semester, you’ll master the fundamentals of container virtualization and orchestration, gaining a deep understanding of Docker and Kubernetes architecture. You’ll acquire practical skills in containerizing microservices and deploying them in a cluster environment. As you move into the second semester, the focus shifts to production operations. Here, you’ll dive deeper into critical aspects such as security and system observability, explore the principles of continuous integration and delivery, and become proficient in using advanced tools including Helm and GitOps.
A hands‑on approach lies at the core of the course. Through engaging lectures with in‑depth architectural analysis and masterclasses led by practicing engineers, you’ll gain theoretical knowledge and real‑world insights. Practical skills will be honed through lab work with real‑world case studies and project activities that closely simulate actual industry challenges, allowing you to apply what you’ve learned in realistic scenarios.
By the end of the course, you’ll be fully equipped to independently design and maintain production‑ready systems. You’ll know how to effectively apply LLMs to solve engineering tasks and will be able to justify your architectural decisions with solid reasoning, demonstrating a deep understanding of system design principles. The final assessment will put these skills to the test: through architectural case analysis, Kubernetes configuration diagnostics, production incident analysis, and an oral defense of your solutions, you’ll have the opportunity to fully validate your professional expertise.
OBJECTIVES
Developing a systemic understanding of containerization and orchestration;
Mastering Docker and Kubernetes tools for engineering design;
Developing skills in operating and maintaining resilient distributed systems;
Building DevOps engineering competencies;
Developing skills in applying LLMs to analyze, generate, and verify configurations and architectural solutions.
KEY TASKS
Developing an understanding of process isolation mechanisms (namespaces, cgroups);
Mastering the principles of application containerization;
Designing microservice architecture;
Working with Kubernetes resources (Pod, Deployment, Service, StatefulSet);
Implementing CI/CD (continuous integration and delivery);
Mastering Helm (the Kubernetes package manager);
Developing incident diagnostic skills (Root Cause Analysis, RCA);
Securing clusters with RBAC (role-based access control) and Network Policies;
Using LLMs in reference, tutor, and auditor modes.
Main topics of the course:
Semester 1: Containerization and Basic Orchestration
1. Cloud‑Native Architecture, Containerization and Virtualization.
Exploring the differences between virtual machines and containers; studying Linux namespaces and cgroups for resource control. Hands‑on practice with Docker installation and image layer analysis. Using LLM for glossary lookup and verification tasks.
2. Docker Architecture and Dockerfile.
Understanding the concepts of images, layers and containers. Learning to write Dockerfiles and analyze errors. Containerizing backend services and optimizing image layers. LLM assists with syntax reference and Dockerfile audit.
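A minimal Dockerfile for a hypothetical Python backend service might look like the sketch below (the base image tag, file names, and port are illustrative assumptions, not part of the course materials). Note how the dependency manifest is copied before the application code so that the dependency layer stays cached between builds:

```dockerfile
# Base image pinned to a specific tag for reproducible builds
FROM python:3.12-slim

WORKDIR /app

# Copy the dependency manifest first: this layer is only rebuilt
# when requirements.txt changes, not on every code edit
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code last, since it changes most often
COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```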
3. Multi‑stage Build and Image Security.
Working with builder and runtime stages, performing image scanning. Rewriting Dockerfiles to improve efficiency. Minimizing images and analyzing vulnerabilities. LLM provides alternative strategies and structure audit.
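The builder/runtime split can be sketched with a two-stage Dockerfile. This hypothetical example compiles a Go binary in a full toolchain image and ships only the binary in a distroless runtime image, shrinking the attack surface along with the image size (module and output names are assumptions):

```dockerfile
# --- Builder stage: full Go toolchain, discarded after the build ---
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
# Static binary so it can run in a minimal image without libc
RUN CGO_ENABLED=0 go build -o /out/server .

# --- Runtime stage: only the compiled binary is shipped ---
FROM gcr.io/distroless/static-debian12
COPY --from=builder /out/server /server
USER nonroot
ENTRYPOINT ["/server"]
```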
4. Docker Compose, Networking and Volumes.
Orchestrating multi‑container applications with Docker Compose. Setting up networking and storage volumes. Practical work with backend services and PostgreSQL, integrating Redis. LLM supports with Docker Compose reference and configuration checks.
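A Compose file for the lab stack described above might look roughly like this (service names, credentials, and ports are illustrative placeholders). The named volume keeps PostgreSQL data across container restarts, and `depends_on` controls start-up order:

```yaml
# docker-compose.yml: hypothetical backend with PostgreSQL and Redis
services:
  backend:
    build: .
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
      REDIS_URL: redis://cache:6379/0
    depends_on:
      - db
      - cache
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - pgdata:/var/lib/postgresql/data   # named volume: data survives restarts
  cache:
    image: redis:7
volumes:
  pgdata:
```

Containers on the same Compose network reach each other by service name, which is why the backend's connection strings use `db` and `cache` as hostnames.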
5. Microservices Architecture and 12‑Factor App.
Decomposing monolithic applications into microservices. Applying the 12 principles of application design. Designing a 3‑service system. LLM helps with questions about service boundaries and schema audit.
6. Introduction to Kubernetes and Cluster Architecture.
Studying cluster components: Control Plane, Worker Nodes and etcd. Hands‑on experience with Minikube and kubectl. Creating cluster diagrams. LLM provides component role reference and diagram analysis.
7. Pods, ReplicaSets and Deployments.
Deploying applications and managing replicas. Analyzing update strategies for rolling updates. LLM assists with availability analysis and YAML checking.
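A rolling update strategy can be expressed directly in a Deployment manifest. This sketch (image name and registry are assumptions) keeps all three replicas serving traffic while new Pods are brought up one at a time:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: registry.example.com/backend:1.2.0
          ports:
            - containerPort: 8000
```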
8. Services: ClusterIP, NodePort and LoadBalancer.
Publishing services and configuring external access. Working with different service types to manage traffic. LLM offers routing reference and configuration checking.
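The simplest of the three types is ClusterIP, sketched below for the hypothetical backend Deployment; switching `type` to NodePort or LoadBalancer is what exposes the same selector-based routing outside the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP         # internal-only; NodePort/LoadBalancer add external access
  selector:
    app: backend          # traffic goes to Pods carrying this label
  ports:
    - port: 80            # port the Service exposes inside the cluster
      targetPort: 8000    # container port the traffic is forwarded to
```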
9. ConfigMaps and Secrets.
Managing application configuration and sensitive data. Injecting environment variables and extracting configuration. LLM supports threat analysis and security checks.
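A minimal pairing of the two resources might look like this (keys and values are placeholders). `stringData` accepts plain text and Kubernetes stores it base64-encoded; note that base64 is encoding, not encryption, which is exactly the kind of threat-model point this topic examines:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend-config
data:
  LOG_LEVEL: info
---
apiVersion: v1
kind: Secret
metadata:
  name: backend-secrets
stringData:                # plain text here; stored base64-encoded by the API
  DB_PASSWORD: change-me
```

A container spec then injects both as environment variables with `envFrom` referencing `backend-config` and `backend-secrets`.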
10. Persistent Volumes and StorageClasses.
Setting up persistent storage for applications. Connecting databases and modeling node loss scenarios. LLM assists with resilience analysis and YAML configuration checking.
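A PersistentVolumeClaim is the piece an application actually references; the StorageClass named in it (here `standard`, an assumption that varies by cluster) determines how the backing volume is provisioned:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data
spec:
  accessModes:
    - ReadWriteOnce              # mountable read-write by a single node
  storageClassName: standard     # cluster-specific; this name is illustrative
  resources:
    requests:
      storage: 10Gi
```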
11. Horizontal Pod Autoscaler and Scaling.
Implementing CPU‑based autoscaling. Performing load testing to evaluate system performance. LLM predicts behavior and generates reports.
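CPU-based autoscaling is declared with an HPA resource pointing at a Deployment. In this sketch (target name and thresholds are illustrative), the controller adds replicas whenever average CPU utilization across Pods exceeds 70% and scales back down as load drops:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```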
12. Diagnostics and Debugging: CrashLoopBackOff.
Analyzing and resolving Kubernetes incidents. Conducting Root Cause Analysis (RCA). LLM generates incident scenarios and verifies reports.
13. Mini‑project: Containerization of a Microservice System.
Applying learned concepts to containerize a microservice system. Receiving architectural consultation and implementing the project. LLM acts as an architectural reviewer and YAML checker.
14. Deploying a Project to Kubernetes.
Auditing configurations and refining the project before deployment. LLM serves as a controller and error expert.
15. Analysis of Common Mistakes and Exam Preparation.
Reviewing typical errors and preparing for the final assessment.
Semester 2: Production Kubernetes and DevOps
1. StatefulSets, DaemonSets and Jobs.
Deploying stateful applications like databases using StatefulSets. Comparing Deployment and StatefulSet strategies. LLM provides YAML control and reference support.
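The key difference from a Deployment is visible in the manifest itself: a StatefulSet gives each replica a stable identity and, via `volumeClaimTemplates`, its own PersistentVolumeClaim. This PostgreSQL sketch (names and sizes are illustrative) shows the shape:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres          # headless Service providing stable DNS names
  replicas: 2
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one PVC per replica, unlike a Deployment
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```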
2. Ingress and TLS.
Configuring HTTPS and routing incoming traffic. Setting up domain routing. LLM assists with TLS reference and configuration checking.
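Host-based routing with TLS termination can be sketched as an Ingress resource like the following (the hostname and Secret name are assumptions; the Secret must hold the certificate and private key):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls        # TLS certificate and key live in this Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80
```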
3. RBAC (Role‑Based Access Control).
Implementing access restrictions and designing multi‑tenant models. LLM performs rights audit and role control.
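RBAC restrictions take the form of a Role listing allowed verbs on resources, bound to a subject with a RoleBinding. This read-only sketch is scoped to a hypothetical `team-a` namespace and user:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]    # read-only access to Pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
  - kind: User
    name: jane                         # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```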
4. Network Policies and Zero‑Trust Model.
Isolating namespaces and modeling attack scenarios. LLM supports learning and security control.
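A zero-trust posture typically starts from a default-deny policy like the sketch below (namespace name is illustrative): the empty `podSelector` matches every Pod in the namespace, and since no ingress rules are listed, all inbound traffic is blocked until explicit allow policies are added:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}          # empty selector: applies to every Pod in the namespace
  policyTypes:
    - Ingress              # no ingress rules listed, so all inbound traffic is denied
```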
5. Helm and Helm Charts.
Creating and parameterizing Helm charts. Working with values.yaml files. LLM provides template reference and structure checking.
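Parameterization works by exposing defaults in values.yaml that templates then reference. A chart for the course's backend might expose something like this (keys and values are illustrative assumptions):

```yaml
# values.yaml: defaults a chart might expose for overriding at install time
replicaCount: 2
image:
  repository: registry.example.com/backend
  tag: "1.2.0"
```

A template in the chart then substitutes these with expressions such as `{{ .Values.replicaCount }}` and `{{ .Values.image.repository }}`, and users override them per environment with `helm install -f` or `--set`.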
6. CI/CD and Build Pipelines.
Automating Docker builds and designing CI/CD pipelines. LLM acts as an architectural consultant and logic controller.
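The build-and-push stage of such a pipeline can be sketched as a GitHub Actions workflow (an assumption: the course may use a different CI system, and the registry and image names are placeholders):

```yaml
# .github/workflows/build.yml: hypothetical build-and-push pipeline sketch
name: build
on:
  push:
    branches: [main]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image tagged with the commit SHA
        run: docker build -t registry.example.com/backend:${{ github.sha }} .
      - name: Push image to the registry
        run: docker push registry.example.com/backend:${{ github.sha }}
```

Tagging images with the commit SHA rather than `latest` keeps every deployment traceable to an exact source revision, which becomes the foundation for the GitOps workflow in the next topic.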
7. GitOps and Infrastructure as Code.
Implementing automated deployment via Git. Designing GitOps strategies. LLM supports declarativity checking and educational guidance.
8. Observability, Monitoring and Logging.
Setting up monitoring with Prometheus. Analyzing load graphs and logs. LLM assists with metrics analysis and report generation.
9. Distributed Tracing and Latency.
Identifying bottlenecks and analyzing latency. Creating latency reports. LLM supports analysis and training.
10. Production Security, Image Scanning and Secrets Management.
Auditing images and managing secrets securely. Conducting security reviews. LLM performs security audit and recommendation checking.
11. Final Project: Architectural Design of a Production System.
Designing a full‑scale production system architecture.
12. Implementation: Helm + CI/CD + HPA.
Integrating Helm, CI/CD pipelines and Horizontal Pod Autoscaler in a real‑world scenario.
13. Engineering Audit of the System.
Conducting a comprehensive audit of the system architecture. LLM acts as an architectural opponent and controller.
14. Preparing for Defense and Operational Risk Analysis.
Evaluating operational risks and preparing for project defense.