What are the advantages of adopting a GitOps approach for your deployments?
Reduce failed deployments, operational costs, and fragile release processes.
Reduce failed deployments, configuration drift, and fragile release processes.
Reduce failed deployments, operational costs, and learn git.
Reduce failed deployments, configuration drift and improve your reputation.
The correct answer is B: GitOps helps reduce failed deployments, reduce configuration drift, and reduce fragile release processes. GitOps is an operating model where Git is the source of truth for declarative configuration (Kubernetes manifests, Helm releases, Kustomize overlays). A GitOps controller (like Flux or Argo CD) continuously reconciles the cluster’s actual state to match what’s declared in Git. This creates a stable, repeatable deployment pipeline and minimizes "snowflake" environments.
Reducing failed deployments: changes go through pull requests, code review, automated checks, and controlled merges. Deployments become predictable because the controller applies known-good, versioned configuration rather than ad-hoc manual commands. Rollbacks are also simpler—reverting a Git commit returns the cluster to the prior desired state.
Reducing configuration drift: without GitOps, clusters often drift because humans apply hotfixes directly in production or because different environments diverge over time. With GitOps, the controller detects drift and either alerts or automatically corrects it, restoring alignment with Git.
Reducing fragile release processes: releases become standardized and auditable. Git history provides an immutable record of who changed what and when. Promotion between environments becomes systematic (merge/branch/tag), and the same declarative artifacts are used consistently.
The other options include items that are either not the primary GitOps promise (like "learn git") or subjective ("improve your reputation"). Operational cost reduction can happen indirectly through fewer incidents and more automation, but the most canonical and direct GitOps advantages in Kubernetes delivery are reliability and drift control—captured precisely in B.
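As a concrete sketch, an Argo CD Application like the one below declares a Git repository as the desired state; the field names are real Argo CD fields, but the repository URL, path, and namespaces are hypothetical placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/app-config.git   # Git as source of truth (hypothetical repo)
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes (drift) back to the Git state
```

Reverting a commit in the config repository then rolls the cluster back, because the controller always reconciles toward whatever Git currently declares.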
=========
What do Deployments and StatefulSets have in common?
They manage Pods that are based on an identical container spec.
They support the OnDelete update strategy.
They support an ordered, graceful deployment and scaling.
They maintain a sticky identity for each of their Pods.
Both Deployments and StatefulSets are Kubernetes workload controllers that manage a set of Pods created from a Pod template, meaning they manage Pods based on an identical container specification (a shared Pod template). That is why A is correct. In both cases, you declare a desired state (replicas, container images, environment variables, volumes, probes, etc.) in spec.template, and the controller ensures the cluster converges toward that state by creating, updating, or replacing Pods.
The differences are what make the other options incorrect. The OnDelete update strategy is associated with StatefulSets (it’s one of their update strategies), but it is not a shared, defining behavior across both controllers, so B is not "in common." Ordered, graceful deployment and scaling is a hallmark of StatefulSets (ordered pod creation/termination and stable identities) rather than Deployments, so C is not shared. Sticky identity per Pod (stable network identity and stable storage identity per replica, commonly via StatefulSet + headless Service) is specifically a StatefulSet characteristic, not a Deployment feature, so D is not common.
A useful way to think about it is: both controllers manage replicas of a Pod template, but they differ in semantics. Deployments are designed primarily for stateless workloads and typically focus on rolling updates and scalable replicas where any instance is interchangeable. StatefulSets are designed for stateful workloads and add identity and ordering guarantees: each replica gets a stable name (like db-0, db-1) and often stable PersistentVolumeClaims.
So the shared commonality the question is testing is the basic workload-controller pattern: both controllers manage Pods created from a common template (identical container spec). Therefore, A is the verified answer.
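The shared pattern is visible in the manifest shape: both controllers embed a Pod template under spec.template. A minimal sketch (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment      # a StatefulSet uses the same spec.template pattern
metadata:             # (plus extras such as serviceName and volumeClaimTemplates)
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:           # the Pod template: every replica runs this identical spec
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # illustrative image
          ports:
            - containerPort: 80
```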
=========
Which of these events will cause the kube-scheduler to assign a Pod to a node?
When the Pod crashes because of an error.
When a new node is added to the Kubernetes cluster.
When the CPU load on the node becomes too high.
When a new Pod is created and has no assigned node.
The kube-scheduler assigns a node to a Pod when the Pod is unscheduled—meaning it exists in the API server but has no spec.nodeName set. The event that triggers scheduling is therefore: a new Pod is created and has no assigned node, which is option D.
Kubernetes scheduling is declarative and event-driven. The scheduler continuously watches for Pods that are in a "Pending" unscheduled state. When it sees one, it runs a scheduling cycle: filtering nodes that cannot run the Pod (insufficient resources based on requests, taints/tolerations, node selectors/affinity rules, topology spread constraints), then scoring the remaining feasible nodes to pick the best candidate. Once selected, the scheduler "binds" the Pod to that node by updating the Pod’s spec.nodeName. After that, kubelet on the chosen node takes over to pull images and start containers.
Option A (Pod crashes) does not directly cause scheduling. If a container crashes, kubelet may restart it on the same node according to restart policy. If the Pod itself is replaced (e.g., by a controller like a Deployment creating a new Pod), that new Pod will be scheduled because it’s unscheduled—but the crash event itself isn’t the scheduler’s trigger. Option B (new node added) might create more capacity and affect future scheduling decisions, but it does not by itself trigger assigning a particular Pod; scheduling still happens because there are unscheduled Pods. Option C (CPU load high) is not a scheduling trigger; scheduling is based on declared requests and constraints, not instantaneous node CPU load (that’s a common misconception).
So the correct, Kubernetes-architecture answer is D: kube-scheduler assigns nodes to Pods that are newly created (or otherwise pending) and have no assigned node.
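A sketch of the trigger condition: a Pod created without spec.nodeName sits in Pending until the scheduler binds it (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  # No nodeName here: the Pod is "unscheduled", which is exactly what the
  # kube-scheduler watches for. After binding, spec.nodeName is filled in.
  containers:
    - name: app
      image: nginx:1.27
```

You can observe the result with kubectl get pod demo -o jsonpath='{.spec.nodeName}', which is empty before scheduling and set to the chosen node afterwards.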
=========
What is the main purpose of a DaemonSet?
A DaemonSet ensures that all (or certain) nodes run a copy of a Pod.
A DaemonSet ensures that the kubelet is constantly up and running.
A DaemonSet ensures that there are as many pods running as specified in the replicas field.
A DaemonSet ensures that a process (agent) runs on every node.
The correct answer is A. A DaemonSet is a workload controller whose job is to ensure that a specific Pod runs on all nodes (or on a selected subset of nodes) in the cluster. This is fundamentally different from Deployments/ReplicaSets, which aim to maintain a certain replica count regardless of node count. With a DaemonSet, the number of Pods is implicitly tied to the number of eligible nodes: add a node, and the DaemonSet automatically schedules a Pod there; remove a node, and its Pod goes away.
DaemonSets are commonly used for node-level services and background agents: log collectors, node monitoring agents, storage daemons, CNI components, or security agents—anything where you want a presence on each node to interact with node resources. This aligns with option D’s phrasing ("agent on every node"), but option A is the canonical definition and is slightly broader because it covers "all or certain nodes" (via node selectors/affinity/taints-tolerations) and the fact that the unit is a Pod.
Why the other options are wrong: DaemonSets do not "keep kubelet running" (B); kubelet is a node service managed by the OS. DaemonSets do not use a replicas field to maintain a specific count (C); that’s Deployment/ReplicaSet behavior.
Operationally, DaemonSets matter for cluster operations because they provide consistent node coverage and automatically react to node pool scaling. They also require careful scheduling constraints so they land only where intended (e.g., only Linux nodes, only GPU nodes). But the main purpose remains: ensure a copy of a Pod runs on each relevant node—option A.
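A minimal DaemonSet sketch for a hypothetical node-level log agent; note there is no replicas field, and the toleration shown is a common way to also cover control-plane nodes:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent            # hypothetical node agent
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      tolerations:           # allow scheduling onto tainted control-plane nodes
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
      containers:
        - name: agent
          image: fluent/fluent-bit:3.0   # illustrative image
```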
=========
What is the name of the Kubernetes resource used to expose an application?
Port
Service
DNS
Deployment
To expose an application running on Pods so that other components can reliably reach it, Kubernetes uses a Service, making B the correct answer. Pods are ephemeral: they can be recreated, rescheduled, and scaled, which means Pod IPs change. A Service provides a stable endpoint (virtual IP and DNS name) and load-balances traffic across the set of Pods selected by its label selector.
Services come in multiple forms. The default is ClusterIP, which exposes the application inside the cluster. NodePort exposes the Service on a static port on each node, and LoadBalancer (in supported clouds) provisions an external load balancer that routes traffic to the Service. ExternalName maps a Service name to an external DNS name. But across these variants, the abstraction is consistent: a Service defines how to access a logical group of Pods.
Option A (Port) is not a Kubernetes resource type; ports are fields within resources. Option C (DNS) is a supporting mechanism (CoreDNS creates DNS entries for Services), but DNS is not the resource you create to expose the app. Option D (Deployment) manages Pod replicas and rollouts, but it does not directly provide stable networking access; you typically pair a Deployment with a Service to expose it.
This is a core cloud-native pattern: controllers manage compute, Services manage stable connectivity, and higher-level gateways like Ingress provide L7 routing for HTTP/HTTPS. So, the Kubernetes resource used to expose an application is Service (B).
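A minimal ClusterIP Service sketch (names and ports are illustrative); the selector defines the Pod set, and port/targetPort define how traffic reaches it:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP        # default; NodePort/LoadBalancer widen the exposure
  selector:
    app: web             # the logical set of Pods behind this Service
  ports:
    - port: 80           # stable Service port that clients use
      targetPort: 8080   # container port on the selected Pods
```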
=========
Which of the following is a correct definition of a Helm chart?
A Helm chart is a collection of YAML files bundled in a tar.gz file and can be applied without decompressing it.
A Helm chart is a collection of JSON files and contains all the resource definitions to run an application on Kubernetes.
A Helm chart is a collection of YAML files that can be applied on Kubernetes by using the kubectl tool.
A Helm chart is similar to a package and contains all the resource definitions to run an application on Kubernetes.
A Helm chart is best described as a package for Kubernetes applications, containing the resource definitions (as templates) and metadata needed to install and manage an application—so D is correct. Helm is a package manager for Kubernetes; the chart is the packaging format. Charts include a Chart.yaml (metadata), a values.yaml (default configuration values), and a templates/ directory containing Kubernetes manifests written as templates. When you install a chart, Helm renders those templates into concrete Kubernetes YAML manifests by substituting values, then applies them to the cluster.
Option A is misleading and incomplete. While charts are often distributed as a compressed tarball (.tgz), the defining feature is not "YAML bundled in tar.gz" but the packaging and templating model that supports install/upgrade/rollback. Option B is incorrect because Helm charts are not "collections of JSON files" by definition; Kubernetes resources can be expressed as YAML or JSON, but Helm charts overwhelmingly use templated YAML. Option C is incorrect because charts are not simply YAML applied by kubectl; Helm manages releases, tracks installed resources, and supports upgrades and rollbacks. Helm uses Kubernetes APIs under the hood, but the value of Helm is the lifecycle and packaging system, not "kubectl apply."
In cloud-native application delivery, Helm helps standardize deployments across environments (dev/stage/prod) by externalizing configuration through values. It reduces copy/paste and supports reuse via dependencies and subcharts. Helm also supports versioning of application packages, allowing teams to upgrade predictably and roll back if needed—critical for production change management.
So, the correct and verified definition is D: a Helm chart is like a package containing the resource definitions needed to run an application on Kubernetes.
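The package structure looks roughly like this (a typical layout; file contents are illustrative):

```
mychart/
  Chart.yaml          # chart metadata: name, version, appVersion
  values.yaml         # default configuration values
  templates/
    deployment.yaml   # templated manifests, e.g. image: {{ .Values.image }}
    service.yaml
```

Running helm install my-release ./mychart renders the templates with values and records the result as a versioned release that can later be upgraded or rolled back.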
=========
At which layer would distributed tracing be implemented in a cloud native deployment?
Network
Application
Database
Infrastructure
Distributed tracing is implemented primarily at the application layer, so B is correct. The reason is simple: tracing is about capturing the end-to-end path of a request as it traverses services, libraries, queues, and databases. That "request context" (trace ID, span ID, baggage) must be created, propagated, and enriched as code executes. While infrastructure components (proxies, gateways, service meshes) can generate or augment trace spans, the fundamental unit of tracing is still tied to application operations (an HTTP handler, a gRPC call, a database query, a cache lookup).
In Kubernetes-based microservices, distributed tracing typically uses standards like OpenTelemetry for instrumentation and context propagation. Application frameworks emit spans for key operations, attach attributes (route, status code, tenant, retry count), and propagate context via headers (e.g., W3C Trace Context). This is what lets you reconstruct "Service A → Service B → Service C" for one user request and identify the slow or failing hop.
Why other layers are not the best answer:
Network focuses on packets/flows, but tracing is not a packet-capture problem; it’s a causal request-path problem across services.
Database spans are part of traces, but tracing is not "implemented in the database layer" overall; DB spans are one component.
Infrastructure provides the platform and can observe traffic, but without application context it can’t fully represent business operations (and many useful attributes live in app code).
So the correct layer for "where tracing is implemented" is the application layer—even when a mesh or proxy helps, it’s still describing application request execution across components.
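The propagation mechanics can be sketched without any tracing library. The following Python snippet is only an illustration of the W3C traceparent header format (not a substitute for OpenTelemetry); it shows how a service keeps the trace ID but mints a new span ID for each downstream call:

```python
# Minimal sketch of W3C Trace Context propagation. Header format:
#   version-traceid-spanid-flags  (e.g. 00-<32 hex>-<16 hex>-01)
import secrets

def new_traceparent() -> str:
    """Start a new trace at the edge of the system."""
    trace_id = secrets.token_hex(16)  # 32 hex chars, shared by all spans
    span_id = secrets.token_hex(8)    # 16 hex chars, unique per operation
    return f"00-{trace_id}-{span_id}-01"

def propagate(traceparent: str) -> str:
    """Child span for a downstream call: same trace ID, fresh span ID."""
    version, trace_id, _parent_span, flags = traceparent.split("-")
    return f"{version}-{trace_id}-{secrets.token_hex(8)}-{flags}"

header = new_traceparent()
downstream = propagate(header)
# Both headers share the trace ID, so a tracing backend can join the spans
# into one end-to-end request path across services.
assert header.split("-")[1] == downstream.split("-")[1]
```

This is why tracing lives in the application layer: some piece of code in each service has to read, carry, and re-emit this context around its own operations.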
=========
In the Kubernetes platform, which component is responsible for running containers?
etcd
CRI-O
cloud-controller-manager
kube-controller-manager
In Kubernetes, the actual act of running containers on a node is performed by the container runtime. The kubelet instructs the runtime via CRI, and the runtime pulls images, creates containers, and manages their lifecycle. Among the options provided, CRI-O is the only container runtime, so B is correct.
It’s important to be precise: the component that "runs containers" is not the control plane and not etcd. etcd (option A) stores cluster state (API objects) as the backing datastore. It never runs containers. cloud-controller-manager (option C) integrates with cloud APIs for infrastructure like load balancers and nodes. kube-controller-manager (option D) runs controllers that reconcile Kubernetes objects (Deployments, Jobs, Nodes, etc.) but does not execute containers on worker nodes.
CRI-O is a CRI implementation that is optimized for Kubernetes and typically uses an OCI runtime (like runc) under the hood to start containers. Another widely used runtime is containerd. The runtime is installed on nodes and is a prerequisite for kubelet to start Pods. When a Pod is scheduled to a node, kubelet reads the PodSpec and asks the runtime to create a "pod sandbox" and then start the container processes. Runtime behavior also includes pulling images, setting up namespaces/cgroups, and exposing logs/stdout streams back to Kubernetes tooling.
So while "the container runtime" is the most general answer, the question’s option list makes CRI-O the correct selection because it is a container runtime responsible for running containers in Kubernetes.
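The wiring between kubelet and runtime is just configuration: in recent Kubernetes versions, the kubelet is pointed at the runtime's CRI socket. A hedged KubeletConfiguration fragment (the socket path is CRI-O's default and may differ per distribution):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# CRI-O's default CRI socket; a containerd node would instead use
# unix:///run/containerd/containerd.sock
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
```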
=========
Which of the following is a definition of Hybrid Cloud?
A combination of services running in public and private data centers, only including data centers from the same cloud provider.
A cloud native architecture that uses services running in public clouds, excluding data centers in different availability zones.
A cloud native architecture that uses services running in different public and private clouds, including on-premises data centers.
A combination of services running in public and private data centers, excluding serverless functions.
A hybrid cloud architecture combines public cloud and private/on-premises environments, often spanning multiple infrastructure domains while maintaining some level of portability, connectivity, and unified operations. Option C captures the commonly accepted definition: services run across public and private clouds, including on-premises data centers, so C is correct.
Hybrid cloud is not limited to a single cloud provider (which is why A is too restrictive). Many organizations adopt hybrid cloud to meet regulatory requirements, data residency constraints, latency needs, or to preserve existing investments while still using public cloud elasticity. In Kubernetes terms, hybrid strategies often include running clusters both on-prem and in one or more public clouds, then standardizing deployment through Kubernetes APIs, GitOps, and consistent security/observability practices.
Option B is incorrect because excluding data centers in different availability zones is not a defining property; in fact, hybrid deployments commonly use multiple zones/regions for resilience. Option D is a distraction: serverless inclusion or exclusion does not define hybrid cloud. Hybrid is about the combination of infrastructure environments, not a specific compute model.
A practical cloud-native view is that hybrid architectures introduce challenges around identity, networking, policy enforcement, and consistent observability across environments. Kubernetes helps because it provides a consistent control plane API and workload model regardless of where it runs. Tools like service meshes, federated identity, and unified monitoring can further reduce fragmentation.
So, the most accurate definition in the given choices is C: hybrid cloud combines public and private clouds, including on-premises infrastructure, to run services in a coordinated architecture.
=========
Which of the following will view the snapshot of previously terminated ruby container logs from Pod web-1?
kubectl logs -p -c ruby web-1
kubectl logs -c ruby web-1
kubectl logs -p ruby web-1
kubectl logs -p -c web-1 ruby
To view logs from the previously terminated instance of a container, you use kubectl logs -p. To select a specific container in a multi-container Pod, you use -c. Combining both, kubectl logs -p -c ruby web-1 shows the snapshot of the previously terminated ruby container in Pod web-1, so A is correct.
The -p (or --previous) flag instructs kubectl to fetch logs for the prior container instance. This is most useful when the container has restarted due to a crash (CrashLoopBackOff) or was terminated and restarted. Without -p, kubectl logs shows logs for the currently running container instance (or the most recent if it’s completed, depending on state).
Option B is close but wrong for the question: it selects the ruby container (-c ruby) but does not request the previous instance snapshot, so it returns current logs, not the prior-terminated logs. Option C omits the -c container selector entirely, so ruby would be read as a positional argument rather than a container flag and the command does not correctly target the ruby container in Pod web-1. Option D swaps the names: it passes the Pod name (web-1) to -c and the container name (ruby) where the Pod name belongs.
Operationally, this is a common Kubernetes troubleshooting workflow: if a container restarts quickly, current logs may be short or empty, and the actionable crash output is in the previous instance logs. Using kubectl logs -p often reveals stack traces, fatal errors, or misconfiguration messages. In multi-container Pods, always pair -p with -c to ensure you’re looking at the right container.
Therefore, the verified correct answer is A.
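For reference, the working command and a couple of nearby variants (all flags shown are standard kubectl flags; web-1 and ruby come from the question):

```
kubectl logs -p -c ruby web-1             # previous instance of the ruby container
kubectl logs -c ruby web-1                # current instance (option B)
kubectl logs -p -c ruby web-1 --tail=50   # previous instance, last 50 lines only
```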
=========
What is the main purpose of etcd in Kubernetes?
etcd stores all cluster data in a key value store.
etcd stores the containers running in the cluster for disaster recovery.
etcd stores copies of the Kubernetes config files that live in /etc/.
etcd stores the YAML definitions for all the cluster components.
The main purpose of etcd in Kubernetes is to store the cluster’s state as a distributed key-value store, so A is correct. Kubernetes is API-driven: objects like Pods, Deployments, Services, ConfigMaps, Secrets, Nodes, and RBAC rules are persisted by the API server into etcd. Controllers, schedulers, and other components then watch the API for changes and reconcile the cluster accordingly. This makes etcd the "source of truth" for desired and observed cluster state.
Options B, C, and D are misconceptions. etcd does not store the running containers; that’s the job of the kubelet/container runtime on each node, and container state is ephemeral. etcd does not store /etc configuration file copies. And while you may author objects as YAML manifests, Kubernetes stores them internally as API objects (serialized) in etcd—not as "YAML definitions for all components." The data is structured key/value entries representing Kubernetes resources and metadata.
Because etcd is so critical, its performance and reliability directly affect the cluster. Slow disk I/O or poor network latency increases API request latency and can delay controller reconciliation, leading to cascading operational problems (slow rollouts, delayed scheduling, timeouts). That’s why etcd is typically run on fast, reliable storage and in an HA configuration (often 3 or 5 members) to maintain quorum and tolerate failures. Backups (snapshots) and restore procedures are also central to disaster recovery: if etcd is lost, the cluster loses its state.
Security is also important: etcd can contain sensitive information (especially Secrets unless encrypted at rest). Proper TLS, restricted access, and encryption-at-rest configuration are standard best practices.
So, the verified correct answer is A: etcd stores all cluster data/state in a key-value store.
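Because etcd holds all cluster state, snapshot backups are the standard disaster-recovery step. A hedged example (the endpoint and certificate paths are illustrative kubeadm defaults and will differ per cluster):

```
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-snapshot.db
```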
=========
Why do administrators need a container orchestration tool?
To manage the lifecycle of an elevated number of containers.
To assess the security risks of the container images used in production.
To learn how to transform monolithic applications into microservices.
Container orchestration tools such as Kubernetes are the future.
The correct answer is A. Container orchestration exists because running containers at scale is hard: you need to schedule workloads onto machines, keep them healthy, scale them up and down, roll out updates safely, and recover from failures automatically. Administrators (and platform teams) use orchestration tools like Kubernetes to manage the lifecycle of many containers across many nodes—handling placement, restart, rescheduling, networking/service discovery, and desired-state reconciliation.
At small scale, you can run containers manually or with basic scripts. But at "elevated" scale (many services, many replicas, many nodes), manual management becomes unreliable and brittle. Orchestration provides primitives and controllers that continuously converge actual state toward desired state: if a container crashes, it is restarted; if a node dies, replacement Pods are scheduled; if traffic increases, replicas can be increased via autoscaling; if configuration changes, rolling updates can be coordinated with readiness checks.
Option B (security risk assessment) is important, but it’s not why orchestration tools exist. Image scanning and supply-chain security are typically handled by CI/CD tooling and registries, not by orchestration as the primary purpose. Option C is a separate architectural modernization effort; orchestration can support microservices, but it isn’t required "to learn transformation." Option D is an opinion statement rather than a functional need.
So the core administrator need is lifecycle management at scale: ensuring workloads run reliably, predictably, and efficiently across a fleet. That is exactly what option A states.
=========
What is the order of 4C’s in Cloud Native Security, starting with the layer that a user has the most control over?
Cloud -> Container -> Cluster -> Code
Container -> Cluster -> Code -> Cloud
Cluster -> Container -> Code -> Cloud
Code -> Container -> Cluster -> Cloud
The Cloud Native Security "4C’s" model is commonly presented as Code, Container, Cluster, Cloud, ordered from the layer you control most directly to the one you control least—therefore D is correct. The idea is defense-in-depth across layers, recognizing that responsibilities are shared between developers, platform teams, and cloud providers.
Code is where users have the most direct control: application logic, dependencies, secure coding practices, secrets handling patterns, and testing. This includes validating inputs, avoiding vulnerabilities, and scanning dependencies. Next is the Container layer: building secure images, minimizing image size/attack surface, using non-root users, setting file permissions, and scanning images for known CVEs. Container security is about ensuring the artifact you run is trustworthy and hardened.
Then comes the Cluster layer: Kubernetes configuration and runtime controls, including RBAC, admission policies (OPA/Gatekeeper), Pod Security standards, network policies, runtime security, audit logging, and node hardening practices. Cluster controls determine what can run and how workloads interact. Finally, the Cloud layer includes the infrastructure and provider controls—IAM, VPC/networking, KMS, managed control plane protections, and physical security—which users influence through configuration but do not fully own.
The model’s value is prioritization: start with what you control most (code), then harden the container artifact, then enforce cluster policy and runtime protections, and finally ensure cloud controls are configured properly. This layered approach aligns well with Kubernetes security guidance and modern shared-responsibility models.
=========
Which storage operator in Kubernetes can help the system to self-scale, self-heal, etc?
Rook
Kubernetes
Helm
Container Storage Interface (CSI)
Rook is a Kubernetes storage operator that helps manage and automate storage systems in a Kubernetes-native way, so A is correct. The key phrase in the question is "storage operator … self-scale, self-heal." Operators extend Kubernetes by using controllers to reconcile a desired state. Rook applies that model to storage, commonly by managing storage backends like Ceph (and other systems depending on configuration).
With an operator approach, you declare how you want storage to look (cluster size, pools, replication, placement, failure domains), and the operator works continuously to maintain that state. That includes operational behaviors that feel "self-healing" such as reacting to failed storage Pods, rebalancing, or restoring desired replication counts (the exact behavior depends on the backend and configuration). The important KCNA-level idea is that Rook uses Kubernetes controllers to automate day-2 operations for storage in a way consistent with Kubernetes’ reconciliation loops.
The other options do not match the question: "Kubernetes" is the orchestrator itself, not a storage operator. "Helm" is a package manager for Kubernetes apps—it can install storage software, but it is not an operator that continuously reconciles and self-manages. "CSI" (Container Storage Interface) is an interface specification that enables pluggable storage drivers; CSI drivers provision and attach volumes, but CSI itself is not a "storage operator" with the broader self-managing operator semantics described here.
So, for a "storage operator that can help with self-* behaviors," Rook is the correct choice.
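A hedged sketch of the declarative model with Rook's Ceph operator (field values are illustrative): you declare the cluster shape, and the operator reconciles toward it, replacing failed components to keep the declared state:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3            # the operator maintains this monitor quorum,
                        # replacing failed monitors automatically
  storage:
    useAllNodes: true   # consume storage on every eligible node
    useAllDevices: true
```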
=========
The IPv4/IPv6 dual stack in Kubernetes:
Translates an IPv4 request from a Service to an IPv6 Service.
Allows you to access the IPv4 address by using the IPv6 address.
Requires NetworkPolicies to prevent Services from mixing requests.
Allows you to create IPv4 and IPv6 dual stack Services.
The correct answer is D: Kubernetes dual-stack support allows you to create Services (and Pods, depending on configuration) that use both IPv4 and IPv6 addressing. Dual-stack means the cluster is configured to allocate and route traffic for both IP families. For Services, this can mean assigning both an IPv4 ClusterIP and an IPv6 ClusterIP so clients can connect using either family, depending on their network stack and DNS resolution.
Option A is incorrect because dual-stack is not about protocol translation (that would be NAT64/other gateway mechanisms, not the core Kubernetes dual-stack feature). Option B is also a form of translation/aliasing that isn’t what Kubernetes dual-stack implies; having both addresses available is different from "access IPv4 via IPv6." Option C is incorrect: dual-stack does not inherently require NetworkPolicies to "prevent mixing requests." NetworkPolicies are about traffic control, not IP family separation.
In Kubernetes, dual-stack requires support across components: the network plugin (CNI) must support IPv4/IPv6, the cluster must be configured with both Pod CIDRs and Service CIDRs, and DNS should return appropriate A and AAAA records for Service names. Once configured, you can specify preferences such as ipFamilyPolicy (e.g., PreferDualStack) and ipFamilies (IPv4, IPv6 order) for Services to influence allocation behavior.
Operationally, dual-stack is useful for environments transitioning to IPv6, supporting IPv6-only clients, or running in mixed networks. But it adds complexity: address planning, firewalling, and troubleshooting need to consider two IP families. Still, the definition in the question is straightforward: Kubernetes dual-stack enables dual-stack Services, which is option D.
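A dual-stack Service sketch using the real ipFamilyPolicy and ipFamilies fields (names and ports are illustrative; the cluster and its CNI must be configured for dual-stack):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  ipFamilyPolicy: PreferDualStack   # request both families when available
  ipFamilies:
    - IPv4                          # preference order for address allocation
    - IPv6
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```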
=========
What methods can you use to scale a Deployment?
With kubectl edit deployment exclusively.
With kubectl scale-up deployment exclusively.
With kubectl scale deployment and kubectl edit deployment.
With kubectl scale deployment exclusively.
A Deployment’s replica count is controlled by spec.replicas. You can scale a Deployment by changing that field—either directly editing the object or using kubectl’s scaling helper. Therefore C is correct: you can scale using kubectl scale and also via kubectl edit.
kubectl scale deployment <name> --replicas=3
kubectl edit deployment <name>   (then change spec.replicas in the editor)
Option B is invalid because kubectl scale-up deployment is not a standard kubectl command. Option A is incorrect because kubectl edit is not the only method; scaling is commonly done with kubectl scale. Option D is also incorrect because while kubectl scale is a primary method, kubectl edit is also a valid method to change replicas.
In production, you often scale with autoscalers (HPA/VPA), but the question is asking about kubectl methods. The key Kubernetes concept is that scaling is achieved by updating desired state (spec.replicas), and controllers reconcile Pods to match.
=========
A Kubernetes _____ is an abstraction that defines a logical set of Pods and a policy by which to access them.
Selector
Controller
Service
Job
A Kubernetes Service is the abstraction that defines a logical set of Pods and the policy for accessing them, so C is correct. Pods are ephemeral: their IPs change as they are recreated, rescheduled, or scaled. A Service solves this by providing a stable endpoint (DNS name and virtual IP) and routing rules that send traffic to the current healthy Pods backing the Service.
A Service typically uses a label selector to identify which Pods belong to it. Kubernetes then maintains endpoint data (Endpoints/EndpointSlice) for those Pods and uses the cluster dataplane (kube-proxy or eBPF-based implementations) to forward traffic from the Service IP/port to one of the backend Pod IPs. This is what the question means by "logical set of Pods" and "policy by which to access them" (for example, round-robin-like distribution depending on dataplane, session affinity options, and how ports map via targetPort).
Option A (Selector) is only the query mechanism used by Services and controllers; it is not itself the access abstraction. Option B (Controller) is too generic; controllers reconcile desired state but do not provide stable network access policies. Option D (Job) manages run-to-completion tasks and is unrelated to network access abstraction.
Services can be exposed in different ways: ClusterIP (internal), NodePort, LoadBalancer, and ExternalName. Regardless of type, the core Service concept remains: stable access to a dynamic set of Pods. This is foundational to Kubernetes networking and microservice communication, and it is why Service discovery via DNS works effectively across rolling updates and scaling events.
Thus, the correct answer is Service (C).
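A minimal sketch of such a Service (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # the "logical set of Pods": all Ready Pods labeled app=web
  ports:
    - port: 80        # stable port on the Service's ClusterIP
      targetPort: 8080  # port the backing containers actually listen on
```

Clients inside the cluster reach it by a stable DNS name regardless of which Pods currently back it.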
=========
In which framework do the developers no longer have to deal with capacity, deployments, scaling and fault tolerance, and OS?
Docker Swarm
Kubernetes
Mesos
Serverless
Serverless is the model where developers most directly avoid managing server capacity, OS operations, and much of the deployment/scaling/fault-tolerance mechanics, which is why D is correct. In serverless computing (commonly Function-as-a-Service, FaaS, and managed serverless container platforms), the provider abstracts away the underlying servers. You typically deploy code (functions) or a container image, define triggers (HTTP events, queues, schedules), and the platform automatically provisions the required compute, scales it based on demand, and handles much of the availability and fault tolerance behind the scenes.
It’s important to compare this to Kubernetes: Kubernetes does automate scheduling, self-healing, rolling updates, and scaling, but it still requires you (or your platform team) to design and operate cluster capacity, node pools, upgrades, runtime configuration, networking, and baseline reliability controls. Even in managed Kubernetes services, you still choose node sizes, scale policies, and operational configuration. Kubernetes reduces toil, but it does not eliminate infrastructure concerns in the same way serverless does.
Docker Swarm and Mesos are orchestration platforms that schedule workloads, but they also require managing the underlying capacity and OS-level aspects. They do not free developers from dealing with capacity and the operating system.
From a cloud native viewpoint, serverless is about consuming compute as an on-demand utility. Kubernetes can be a foundation for a serverless experience (for example, with event-driven autoscaling or serverless frameworks), but the framework that removes the most operational burden from developers is serverless.
What is Serverless computing?
A computing method of providing backend services on an as-used basis.
A computing method of providing services for AI and ML operating systems.
A computing method of providing services for quantum computing operating systems.
A computing method of providing services for cloud computing operating systems.
Serverless computing is a cloud execution model where the provider manages infrastructure concerns and you consume compute as a service, typically billed based on actual usage (requests, execution time, memory), which matches A. In other words, you deploy code (functions) or sometimes containers, configure triggers (HTTP events, queues, schedules), and the platform automatically provisions capacity, scales it up/down, and handles much of availability and fault tolerance behind the scenes.
From a cloud-native architecture standpoint, “serverless” doesn’t mean there are no servers; it means developers don’t manage servers. The platform abstracts away node provisioning, OS patching, and much of runtime scaling logic. This aligns with the “as-used basis” phrasing: you pay for what you run rather than maintaining always-on capacity.
It’s also useful to distinguish serverless from Kubernetes. Kubernetes automates orchestration (scheduling, self-healing, scaling), but operating Kubernetes still involves cluster-level capacity decisions, node pools, upgrades, networking baseline, and policy. With serverless, those responsibilities are pushed further toward the provider/platform. Kubernetes can enable serverless experiences (for example, event-driven autoscaling frameworks), but serverless as a model is about a higher level of abstraction than “orchestrate containers yourself.”
Options B, C, and D are incorrect because they describe specialized or vague “operating system” services rather than the commonly accepted definition. Serverless is not specifically about AI/ML OSs or quantum OSs; it’s a general compute delivery model that can host many kinds of workloads.
Therefore, the correct definition in this question is A: providing backend services on an as-used basis.
=========
How do you deploy a workload to Kubernetes without additional tools?
Create a Bash script and run it on a worker node.
Create a Helm Chart and install it with helm.
Create a manifest and apply it with kubectl.
Create a Python script and run it with kubectl.
The standard way to deploy workloads to Kubernetes using only built-in tooling is to create Kubernetes manifests (YAML/JSON definitions of API objects) and apply them with kubectl, so C is correct. Kubernetes is a declarative system: you describe the desired state of resources (e.g., a Deployment, Service, ConfigMap, Ingress) in a manifest file, then submit that desired state to the API server. Controllers reconcile the actual cluster state to match what you declared.
A manifest typically includes mandatory fields like apiVersion, kind, and metadata, and then a spec describing desired behavior. For example, a Deployment manifest declares replicas and the Pod template (containers, images, ports, probes, resources). Applying the manifest with kubectl apply -f submits that desired state to the API server, which stores it and lets controllers reconcile toward it.
Option B (Helm) is indeed a popular deployment tool, but Helm is explicitly an “additional tool” beyond kubectl and the Kubernetes API. The question asks “without additional tools,” so Helm is excluded by definition. Option A (running Bash scripts on worker nodes) bypasses Kubernetes’ desired-state control and is not how Kubernetes workload deployment is intended; it also breaks portability and operational safety. Option D is not a standard Kubernetes deployment mechanism; kubectl does not “run Python scripts” to deploy workloads (though scripts can automate kubectl, that’s still not the primary mechanism).
From a cloud native delivery standpoint, manifests support GitOps, reviewable changes, and repeatable deployments across environments. The Kubernetes-native approach is: declare resources in manifests and apply them to the cluster. Therefore, C is the verified correct answer.
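For example, a minimal Deployment manifest (image and names are illustrative) deployed with only built-in tooling:

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25   # illustrative image
          ports:
            - containerPort: 80
```

Running kubectl apply -f deployment.yaml submits this desired state; the Deployment controller then creates and maintains the two replicas.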
What Kubernetes component handles network communications inside and outside of a cluster, using operating system packet filtering if available?
kube-proxy
kubelet
etcd
kube-controller-manager
kube-proxy is the Kubernetes component responsible for implementing Service networking on nodes, commonly by programming operating system packet filtering / forwarding rules (like iptables or IPVS), which makes A correct.
Kubernetes Services provide stable virtual IPs and ports that route traffic to a dynamic set of Pod endpoints. kube-proxy watches the API server for Service and EndpointSlice/Endpoints updates and then configures the node’s networking so that traffic to a Service is correctly forwarded to one of the backend Pods. In iptables mode, kube-proxy installs NAT and forwarding rules; in IPVS mode, it programs kernel load-balancing tables. In both cases, it leverages OS-level packet handling to efficiently steer traffic. This is the “packet filtering if available” concept referenced in the question.
kube-proxy’s work affects both “inside” and “outside” paths in typical setups. Internal cluster clients reach Services via ClusterIP and DNS, and kube-proxy rules forward that traffic to Pods. For external traffic, paths often involve NodePort or LoadBalancer Services or Ingress controllers that ultimately forward into Services/Pods—again relying on node-level service rules. While some modern CNI/eBPF dataplanes can replace or bypass kube-proxy, the classic Kubernetes architecture still defines kube-proxy as the component implementing Service connectivity.
The other options are not networking dataplane components: kubelet runs Pods and reports status; etcd stores cluster state; kube-controller-manager runs control loops for API objects. None of these handle node-level packet routing for Services. Therefore, the correct verified answer is A: kube-proxy.
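If you want to see this in practice on a cluster using iptables mode, the rules kube-proxy programs are visible on the node (a sketch; the chain name follows kube-proxy's naming convention, and output will vary by cluster):

```shell
# On a node: list the NAT chain kube-proxy maintains for Services (iptables mode)
sudo iptables -t nat -L KUBE-SERVICES -n | head

# In many distributions kube-proxy runs as a DaemonSet in kube-system
kubectl -n kube-system get pods -l k8s-app=kube-proxy
```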
In CNCF, who develops specifications for industry standards around container formats and runtimes?
Open Container Initiative (OCI)
Linux Foundation Certification Group (LFCG)
Container Network Interface (CNI)
Container Runtime Interface (CRI)
The organization responsible for defining widely adopted standards around container formats and runtime specifications is the Open Container Initiative (OCI), so A is correct. OCI defines the image specification (how container images are structured and stored) and the runtime specification (how to run a container), enabling interoperability across tooling and vendors. This is foundational to the cloud-native ecosystem because it allows different build tools, registries, runtimes, and orchestration platforms to work together reliably.
Within Kubernetes and CNCF-adjacent ecosystems, OCI standards are the reason an image built by one tool can be pushed to a registry and pulled/run by many different runtimes. For example, a Kubernetes node running containerd or CRI-O can run OCI-compliant images consistently. OCI standardization reduces fragmentation and vendor lock-in, which is a core motivation in open source cloud-native architecture.
The other options are not correct for this question. CNI (Container Network Interface) is a standard for configuring container networking, not container image formats and runtimes. CRI (Container Runtime Interface) is a Kubernetes-specific interface between kubelet and container runtimes—it enables pluggable runtimes for Kubernetes, but it is not the industry standards body for container format/runtime specifications. “LFCG” is not a recognized standards body here.
In short: OCI defines the “language” for container images and runtime behavior, which is why the same image can be executed across environments. Kubernetes relies on those standards indirectly through runtimes and tooling, but the specification work is owned by OCI. Therefore, the verified correct answer is A.
=========
What is a cloud native application?
It is a monolithic application that has been containerized and is running now on the cloud.
It is an application designed to be scalable and take advantage of services running on the cloud.
It is an application designed to run all its functions in separate containers.
It is any application that runs in a cloud provider and uses its services.
B is correct. A cloud native application is designed to be scalable, resilient, and adaptable, and to leverage cloud/platform capabilities rather than merely being “hosted” on a cloud VM. Cloud-native design emphasizes principles like elasticity (scale up/down), automation, fault tolerance, and rapid, reliable delivery. While containers and Kubernetes are common enablers, the key is the architectural intent: build applications that embrace distributed systems patterns and cloud-managed primitives.
Option A is not enough. Simply containerizing a monolith and running it in the cloud does not automatically make it cloud native; that may be “lift-and-shift” packaging. The application might still be tightly coupled, hard to scale, and operationally fragile. Option C is too narrow and prescriptive; cloud native does not require “all functions in separate containers” (microservices are common but not mandatory). Many cloud-native apps use a mix of services, and even monoliths can be made more cloud native by adopting statelessness, externalized state, and automated delivery. Option D is too broad; “any app running in a cloud provider” includes legacy apps that don’t benefit from elasticity or cloud-native operational models.
Cloud-native applications typically align with patterns: stateless service tiers, declarative configuration, health endpoints, horizontal scaling, graceful shutdown, and reliance on managed backing services (databases, queues, identity, observability). They are built to run reliably in dynamic environments where instances are replaced routinely—an assumption that matches Kubernetes’ reconciliation and self-healing model.
So, the best verified definition among these options is B.
=========
How long should a stable API element in Kubernetes be supported (at minimum) after deprecation?
9 months
24 months
12 months
6 months
Kubernetes has a formal API deprecation policy to balance stability for users with the ability to evolve the platform. For a stable (GA) API element, Kubernetes commits to supporting that API for a minimum period after it is deprecated. The correct minimum in this question is 12 months, which corresponds to option C.
In practice, Kubernetes releases occur roughly every three to four months, and the deprecation policy is commonly communicated in terms of “releases” as well as time. A GA API that is deprecated in one release is typically kept available for multiple subsequent releases, giving cluster operators and application teams time to migrate manifests, client libraries, controllers, and automation. This matters because Kubernetes is often at the center of production delivery pipelines; abrupt API removals would break deployments, upgrades, and tooling. By guaranteeing a minimum support window, Kubernetes enables predictable upgrades and safer lifecycle management.
This policy also encourages teams to track API versions and plan migrations. For example, workloads might start on a beta API (which can change), but once an API reaches stable, users can expect a stronger compatibility promise. Deprecation warnings help surface risk early. In many clusters, you’ll see API server warnings and tooling hints when manifests use deprecated fields/versions, allowing proactive remediation before the removal release.
Options 6 or 9 months would be too short for many enterprises to coordinate changes across multiple teams and environments. 24 months may be true for some ecosystems, but the Kubernetes stated minimum in this exam-style framing is 12 months. The key operational takeaway is: don’t ignore deprecation notices—they’re your clock for migration planning. Treat API version upgrades as part of routine cluster lifecycle hygiene to avoid being blocked during Kubernetes version upgrades when deprecated APIs are finally removed.
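One hedged way to surface deprecated API usage before an upgrade: the API server emits client-side warnings, and recent versions expose a metric that flags deprecated APIs still being requested. A sketch, assuming access to the API server metrics endpoint (the manifest filename is hypothetical):

```shell
# Applying a manifest with a deprecated apiVersion prints a warning client-side
kubectl apply -f old-manifest.yaml
# Warning: ... is deprecated in v1.x+, unavailable in v1.y+ ...

# Check which deprecated APIs have been requested recently
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis
```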
=========
What kubectl command is used to retrieve the resource consumption (CPU and memory) for nodes or Pods?
kubectl cluster-info
kubectl version
kubectl top
kubectl api-resources
To retrieve CPU and memory consumption for nodes or Pods, you use kubectl top, so C is correct. kubectl top nodes shows per-node resource usage, and kubectl top pods shows per-Pod (and optionally per-container) usage. This data comes from the Kubernetes resource metrics pipeline, most commonly metrics-server, which scrapes kubelet/cAdvisor stats and exposes them via the metrics.k8s.io API.
It’s important to recognize that kubectl top provides current resource usage snapshots, not long-term historical trending. For long-term metrics and alerting, clusters typically use Prometheus and related tooling. But for quick operational checks (“Is this Pod CPU-bound?” “Are nodes near memory saturation?”), kubectl top is the built-in day-to-day tool.
Option A (kubectl cluster-info) shows general cluster endpoints and info about control plane services, not resource usage. Option B (kubectl version) prints client/server version info. Option D (kubectl api-resources) lists resource types available in the cluster. None of those report CPU/memory usage.
In observability practice, kubectl top is often used during incidents to correlate symptoms with resource pressure. For example, if a node is high on memory, you might see Pods being OOMKilled or the kubelet evicting Pods under pressure. Similarly, sustained high CPU utilization might explain latency spikes or throttling if limits are set. Note that kubectl top requires metrics-server (or an equivalent provider) to be installed and functioning; otherwise it may return errors like “metrics not available.”
So, the correct command for retrieving node/Pod CPU and memory usage is kubectl top.
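Typical invocations (the namespace is a hypothetical example; all of these require metrics-server or an equivalent metrics.k8s.io provider):

```shell
# Per-node CPU and memory usage
kubectl top nodes

# Per-Pod usage in a namespace, broken down per container
kubectl top pods -n production --containers

# Find the heaviest consumers quickly during an incident
kubectl top pods --all-namespaces --sort-by=memory
```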
=========
What is the default eviction timeout when the Ready condition of a node is Unknown or False?
Thirty seconds.
Thirty minutes.
One minute.
Five minutes.
The verified correct answer is D (Five minutes). In Kubernetes, node health is continuously monitored. When a node stops reporting status (heartbeats from the kubelet) or is otherwise considered unreachable, the Node controller updates the Node’s Ready condition to Unknown (or it can become False). From that point, Kubernetes has to balance two risks: acting too quickly might cause unnecessary disruption (e.g., transient network hiccups), but acting too slowly prolongs outage for workloads that were running on the failed node.
The “default eviction timeout†refers to the control plane behavior that determines how long Kubernetes waits before evicting Pods from a node that appears unhealthy/unreachable. After this timeout elapses, Kubernetes begins eviction of Pods so controllers (like Deployments) can recreate them on healthy nodes, restoring the desired replica count and availability.
This is tightly connected to high availability and self-healing: Kubernetes does not “move” Pods from a dead node; it replaces them. The eviction timeout gives the cluster time to confirm the node is truly unavailable, avoiding flapping in unstable networks. Once eviction begins, replacement Pods can be scheduled elsewhere (assuming capacity exists), which is the normal recovery path for stateless workloads.
It’s also worth noting that graceful operational handling can be influenced by PodDisruptionBudgets (for voluntary disruptions) and by workload design (replicas across nodes/zones). But the question is testing the default timer value, which is five minutes in this context.
Therefore, among the choices provided, the correct answer isD.
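In clusters using taint-based eviction, this five-minute window surfaces as default tolerations automatically added to Pod specs by the DefaultTolerationSeconds admission plugin; a sketch of what appears in a Pod:

```yaml
tolerations:
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300   # 5 minutes before eviction from a NotReady node
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300   # same 5-minute window for an unreachable node
```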
=========
What's the most adopted way of conflict resolution and decision-making for the open-source projects under the CNCF umbrella?
Financial Analysis
Discussion and Voting
Flipism Technique
Project Founder Say
B (Discussion and Voting) is correct. CNCF-hosted open-source projects generally operate with open governance practices that emphasize transparency, community participation, and documented decision-making. While each project can have its own governance model (maintainers, technical steering committees, SIGs, TOC interactions, etc.), a very common and widely adopted approach to resolving disagreements and making decisions is to first pursue discussion (often on GitHub issues/PRs, mailing lists, or community meetings) and then use voting/consensus mechanisms when needed.
This approach is important because open-source communities are made up of diverse contributors across companies and geographies. “Project Founder Say” (D) is not a sustainable or typical CNCF governance norm for mature projects; CNCF explicitly encourages neutral, community-led governance rather than single-person control. “Financial Analysis” (A) is not a conflict resolution mechanism for technical decisions, and “Flipism Technique” (C) is not a real governance practice.
In Kubernetes specifically, community decisions are often made within structured groups (e.g., SIGs) using discussion and consensus-building, sometimes followed by formal votes where governance requires it. The goal is to ensure decisions are fair, recorded, and aligned with the project’s mission and contributor expectations. This also reduces risk of vendor capture and builds trust: anyone can review the rationale in meeting notes, issues, or PR threads, and decisions can be revisited with new evidence.
Therefore, the most adopted conflict resolution and decision-making method across CNCF open-source projects is discussion and voting, making B the verified correct answer.
=========
What is a Kubernetes Service Endpoint?
It is the API endpoint of our Kubernetes cluster.
It is a name of special Pod in kube-system namespace.
It is an IP address that we can access from the Internet.
It is an object that gets IP addresses of individual Pods assigned to it.
A Kubernetes Service routes traffic to a dynamic set of backends (usually Pods). The set of backend IPs and ports is represented by endpoint-tracking resources. Historically this was the Endpoints object; today Kubernetes commonly uses EndpointSlice for scalability, but the concept remains the same: endpoints represent the concrete network destinations behind a Service. That’s why D is correct: a Service endpoint is an object that contains the IP addresses (and ports) of the individual Pods (or other backends) associated with that Service.
When a Service has a selector, Kubernetes automatically maintains endpoints by watching which Pods match the selector and are Ready, then publishing those Pod IPs into Endpoints/EndpointSlices. Consumers don’t usually use endpoints directly; instead they call the Service DNS name, and kube-proxy (or an alternate dataplane) forwards traffic to one of the endpoints. Still, endpoints are critical because they are what make Service routing accurate and up to date during scaling events, rolling updates, and failures.
Option A confuses this with the Kubernetes API server endpoint (the cluster API URL). Option B is incorrect; there’s no special “Service Endpoint Pod.” Option C describes an external/public IP concept, which may exist for LoadBalancer Services, but “Service endpoint” in Kubernetes vocabulary is about the backend destinations, not the public entrypoint.
Operationally, endpoints are useful for debugging: if a Service isn’t routing traffic, checking Endpoints/EndpointSlices shows whether the Service actually has backends and whether readiness is excluding Pods. This ties directly into Kubernetes service discovery and load balancing: the Service is the stable front door; endpoints are the actual backends.
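A debugging sketch along those lines (the Service name web and label app=web are hypothetical):

```shell
# Does the Service have any backends? An empty list usually means
# no Ready Pods match the selector.
kubectl get endpoints web
kubectl get endpointslices -l kubernetes.io/service-name=web

# Compare the Service selector against the Pods' labels and readiness
kubectl describe service web
kubectl get pods -l app=web -o wide
```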
=========
CI/CD stands for:
Continuous Information / Continuous Development
Continuous Integration / Continuous Development
Cloud Integration / Cloud Development
Continuous Integration / Continuous Deployment
CI/CD is a foundational practice for delivering software rapidly and reliably, and it maps strongly to cloud native delivery workflows commonly used with Kubernetes. CI stands for Continuous Integration: developers merge code changes frequently into a shared repository, and automated systems build and test those changes to detect issues early. CD is commonly used to mean Continuous Delivery or Continuous Deployment depending on how far automation goes. In many certification contexts and simplified definitions like this question, CD is interpreted as Continuous Deployment, meaning every change that passes the automated pipeline is automatically released to production. That matches option D.
In a Kubernetes context, CI typically produces artifacts such as container images (built from Dockerfiles or similar build definitions), runs unit/integration tests, scans dependencies, and pushes images to a registry. CD then promotes those images into environments by updating Kubernetes manifests (Deployments, Helm charts, Kustomize overlays, etc.). Progressive delivery patterns (rolling updates, canary, blue/green) often use Kubernetes-native controllers and Service routing to reduce risk.
Why the other options are incorrect: “Continuous Development” isn’t the standard “D” term; it’s ambiguous and not the established acronym expansion. “Cloud Integration/Cloud Development” is unrelated. Continuous Delivery (in the stricter sense) means changes are always in a deployable state and releases may still require a manual approval step, while Continuous Deployment removes that final manual gate. But because the option set explicitly includes “Continuous Deployment,” and that is one of the accepted canonical expansions for CD, D is the correct selection here.
Practically, CI/CD complements Kubernetes’ declarative model: pipelines update desired state (Git or manifests), and Kubernetes reconciles it. This combination enables frequent releases, repeatability, reduced human error, and faster recovery through automated rollbacks and controlled rollout strategies.
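As a sketch of the final CD step in such a pipeline (registry, image tag, and Deployment names are hypothetical), a pipeline stage updates desired state and Kubernetes performs the rollout:

```shell
# Point the Deployment at the image CI just built and pushed
kubectl set image deployment/web web=registry.example.com/web:v1.4.2

# Watch the rolling update complete (exits non-zero if it fails)
kubectl rollout status deployment/web

# Automated recovery path if post-deploy checks fail
kubectl rollout undo deployment/web
```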
=========
What is the main role of the Kubernetes DNS within a cluster?
Acts as a DNS server for virtual machines that are running outside the cluster.
Provides a DNS as a Service, allowing users to create zones and registries for domains that they own.
Allows Pods running in dual stack to convert IPv6 calls into IPv4 calls.
Provides consistent DNS names for Pods and Services for workloads that need to communicate with each other.
Kubernetes DNS (commonly implemented by CoreDNS) provides service discovery inside the cluster by assigning stable, consistent DNS names to Services and (optionally) Pods, which makes D correct. In a Kubernetes environment, Pods are ephemeral—IP addresses can change when Pods restart or move between nodes. DNS-based discovery allows applications to communicate using stable names rather than hardcoded IPs.
For Services, Kubernetes creates DNS records like service-name.namespace.svc.cluster.local, which resolve to the Service’s virtual IP (ClusterIP) or, for headless Services, to the set of Pod endpoints. This supports both load-balanced communication (standard Service) and per-Pod addressing (headless Service, commonly used with StatefulSets). Kubernetes DNS is therefore a core building block that enables microservices to locate each other reliably.
Option A is not Kubernetes DNS’s purpose; it serves cluster workloads rather than external VMs. Option B describes a managed DNS hosting product (creating zones/registries), which is outside the scope of cluster DNS. Option C describes protocol translation, which is not the role of DNS. Dual-stack support relates to IP families and networking configuration, not DNS translating IPv6 to IPv4.
In day-to-day Kubernetes operations, DNS reliability impacts everything: if DNS is unhealthy, Pods may fail to resolve Services, causing cascading outages. That’s why CoreDNS is typically deployed as a highly available add-on in kube-system, and why DNS caching and scaling are important for large clusters.
So the correct statement is D: Kubernetes DNS provides consistent DNS names so workloads can communicate reliably.
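A quick sketch of what this looks like from inside the cluster (Service name and namespace are illustrative):

```shell
# Resolve a Service by its full cluster DNS name from a throwaway Pod
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup web.default.svc.cluster.local

# Thanks to the Pod's DNS search domains, short forms also resolve:
#   web          (same namespace)
#   web.default  (namespace-qualified)
```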
=========
In the DevOps framework and culture, who builds, automates, and offers continuous delivery tools for developer teams?
Application Users
Application Developers
Platform Engineers
Cluster Operators
The correct answer is C (Platform Engineers). In modern DevOps and platform operating models, platform engineering teams build and maintain the shared delivery capabilities that product/application teams use to ship software safely and quickly. This includes CI/CD pipeline templates, standardized build and test automation, artifact management (registries), deployment tooling (Helm/Kustomize/GitOps), secrets management patterns, policy guardrails, and paved-road workflows that reduce cognitive load for developers.
While application developers (B) write the application code and often contribute pipeline steps for their service, the “build, automate, and offer tooling for developer teams” responsibility maps directly to platform engineering: they provide the internal platform that turns Kubernetes and cloud services into a consumable product. This is especially common in Kubernetes-based organizations where you want consistent deployment standards, repeatable security checks, and uniform observability.
Cluster operators (D) typically focus on the health and lifecycle of the Kubernetes clusters themselves: upgrades, node pools, networking, storage, cluster security posture, and control plane reliability. They may work closely with platform engineers, but “continuous delivery tools for developer teams” is broader than cluster operations. Application users (A) are consumers of the software, not builders of delivery tooling.
In cloud-native application delivery, this division of labor is important: platform engineers enable higher velocity with safety by automating the software supply chain—builds, tests, scans, deploys, progressive delivery, and rollback. Kubernetes provides the runtime substrate, but the platform team makes it easy and safe for developers to use it repeatedly and consistently across many services.
Therefore, Platform Engineers (C) is the verified correct choice.
=========
Imagine there is a requirement to run a database backup every day. Which Kubernetes resource could be used to achieve that?
kube-scheduler
CronJob
Task
Job
To run a workload on a repeating schedule (like “every day”), Kubernetes provides CronJob, making B correct. A CronJob creates Jobs according to a cron-formatted schedule, and then each Job creates one or more Pods that run to completion. This is the Kubernetes-native replacement for traditional cron scheduling, but implemented as a declarative resource managed by controllers in the cluster.
For a daily database backup, you’d define a CronJob with a schedule (e.g., "0 2 * * *" for 2:00 AM daily), and specify the Pod template that performs the backup (invokes backup scripts/tools, writes output to durable storage, uploads to object storage, etc.). Kubernetes will then create a Job at each scheduled time. CronJobs also support operational controls like concurrencyPolicy (Allow/Forbid/Replace) to decide what happens if a previous backup is still running, startingDeadlineSeconds to handle missed schedules, and history limits to retain recent successful/failed Job records for debugging.
Option D (Job) is close but not sufficient for “every day.” A Job runs a workload until completion once; you would need an external scheduler to create a Job every day. Option A (kube-scheduler) is a control plane component responsible for placing Pods onto nodes and does not schedule recurring tasks. Option C (“Task”) is not a standard Kubernetes workload resource.
This question is fundamentally about mapping a recurring operational requirement (backup cadence) to Kubernetes primitives. The correct design is: a CronJob triggers Job creation on a schedule; each Job runs Pods to completion. Therefore, the correct answer is B.
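A sketch of such a CronJob (the image and backup flags are hypothetical placeholders for your actual backup tooling):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  concurrencyPolicy: Forbid      # skip a run if the previous backup is still going
  successfulJobsHistoryLimit: 3  # keep recent Job records for debugging
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: registry.example.com/db-backup:1.0      # hypothetical image
              args: ["--destination", "s3://backups/daily"]  # hypothetical flags
```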
=========
What is the main purpose of the Ingress in Kubernetes?
Access HTTP and HTTPS services running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their path.
Access HTTP and HTTPS services running in the cluster based on their path.
D is correct. Ingress is a Kubernetes API object that defines rules for external access to HTTP/HTTPS services in a cluster. The defining capability is Layer 7 routing—commonly host-based and path-based routing—so you can route requests like example.com/app1 to one Service and example.com/app2 to another. While the question mentions “based on their path,” that’s a classic and correct Ingress use case (and host routing is also common).
Ingress itself is only the specification of routing rules. An Ingress controller (e.g., NGINX Ingress Controller, HAProxy, Traefik, cloud-provider controllers) is what actually implements those rules by configuring a reverse proxy/load balancer. Ingress typically terminates TLS (HTTPS) and forwards traffic to internal Services, giving a more expressive alternative to exposing every service via NodePort/LoadBalancer.
Why the other options are wrong:
A suggests routing by IP address; Ingress is fundamentally about HTTP(S) routing rules (host/path), not direct Service IP access.
B and C describe non-HTTP protocols; Ingress is specifically for HTTP/HTTPS. For TCP/UDP or other protocols, you generally use Services of type LoadBalancer/NodePort, Gateway API implementations, or controller-specific TCP/UDP configuration.
Ingress is a foundational building block for cloud-native application delivery because it centralizes edge routing, enables TLS management, and supports gradual adoption patterns (multiple services under one domain). Therefore, the main purpose described here matchesD.
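A minimal path-routing sketch (host, paths, and Service names are illustrative; an Ingress controller must be installed for the rules to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1        # example.com/app1 -> Service app1
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2        # example.com/app2 -> Service app2
                port:
                  number: 80
```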
=========
What is ephemeral storage?
Storage space that need not persist across restarts.
Storage that may grow dynamically.
Storage used by multiple consumers (e.g., multiple Pods).
Storage that is always provisioned locally.
The correct answer is A: ephemeral storage is non-persistent storage whose data does not need to survive Pod restarts or rescheduling. In Kubernetes, ephemeral storage typically refers to storage tied to the Pod’s lifetime—such as the container writable layer, emptyDir volumes, and other temporary storage types. When a Pod is deleted or moved to a different node, that data is generally lost.
This is different from persistent storage, which is backed by PersistentVolumes and PersistentVolumeClaims and is designed to outlive individual Pod instances. Ephemeral storage is commonly used for caches, scratch space, temporary files, and intermediate build artifacts—data that can be recreated and is not the authoritative system of record.
Option B is incorrect because “may grow dynamically” describes an allocation behavior, not the defining characteristic of ephemeral storage. Option C is incorrect because sharing among multiple consumers is about access semantics (ReadWriteMany, etc.) and shared volumes, not ephemerality. Option D is incorrect because ephemeral storage is not “always provisioned locally” in a strict sense; while many ephemeral forms are local to the node, the definition is about lifecycle and persistence guarantees, not necessarily physical locality.
Operationally, ephemeral storage is an important scheduling and reliability consideration. Pods can request/limit ephemeral storage similarly to CPU/memory, and nodes can evict Pods under disk pressure. Mismanaged ephemeral storage (logs written to the container filesystem, runaway temp files) can cause node disk exhaustion and cascading failures. Best practices include shipping logs off-node, using emptyDir intentionally with size limits where supported, and using persistent volumes for state that must survive restarts.
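A minimal sketch of those practices, assuming a throwaway busybox workload: the Pod requests and limits ephemeral storage and caps an emptyDir scratch volume.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests:
        ephemeral-storage: "1Gi"   # considered at scheduling time
      limits:
        ephemeral-storage: "2Gi"   # exceeding this can get the Pod evicted
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 1Gi               # cap for this temporary volume
```

Everything written under /tmp/scratch (and to the container’s writable layer) disappears when the Pod is deleted or rescheduled.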
So, ephemeral storage is best defined as storage that does not need to persist across restarts/rescheduling, matching option A.
=========
Which control plane component is responsible for updating the node Ready condition if a node becomes unreachable?
The kube-proxy
The node controller
The kubectl
The kube-apiserver
The correct answer is B: the node controller. In Kubernetes, node health is monitored and reflected through Node conditions such as Ready. The node controller (a controller that runs as part of the control plane, within the kube-controller-manager) is responsible for monitoring node heartbeats and updating node status when a node becomes unreachable or unhealthy.
Nodes periodically report status (including kubelet heartbeats) to the API server. The node controller watches these updates. If it detects that a node has stopped reporting within expected time windows, it marks the node’s Ready condition as Unknown (or otherwise updates conditions) to indicate the control plane can’t confirm node health. This status change then influences higher-level behaviors such as Pod eviction and rescheduling: after grace periods and eviction timeouts, Pods on an unhealthy node may be evicted so the workload can be recreated on healthy nodes (assuming a controller manages replicas).
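For illustration, an unreachable node’s status (visible via kubectl get node -o yaml) typically carries a Ready condition like this excerpt; timestamps are representative values:

```yaml
status:
  conditions:
  - type: Ready
    status: "Unknown"                 # control plane can't confirm health
    reason: NodeStatusUnknown
    message: Kubelet stopped posting node status.
    lastHeartbeatTime: "2024-01-01T12:00:00Z"
    lastTransitionTime: "2024-01-01T12:01:40Z"
```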
Option A (kube-proxy) is a node component for Service traffic routing and does not manage node health conditions. Option C (kubectl) is a CLI client; it does not participate in control plane health monitoring. Option D (kube-apiserver) stores and serves Node status, but it doesn’t decide when a node is unreachable; it persists what controllers and kubelets report. The “decision logic” for updating the Ready condition in response to missing heartbeats is the node controller’s job.
So, the component that updates the Node Ready condition when a node becomes unreachable is the node controller, which is option B.
=========
What is the main purpose of the Open Container Initiative (OCI)?
Accelerating the adoption of containers and Kubernetes in the industry.
Creating open industry standards around container formats and runtimes.
Creating industry standards around container formats and runtimes for private purposes.
Improving the security of standards around container formats and runtimes.
B is correct: the OCI’s main purpose is to create open, vendor-neutral industry standards for container image formats and container runtimes. Standardization is critical in container orchestration because portability is a core promise: you should be able to build an image once and run it across different environments and runtimes without rewriting packaging or execution logic.
OCI defines (at a high level) two foundational specs:
Image specification: how container images are packaged (layers, metadata, manifests).
Runtime specification: how to run a container (filesystem setup, namespaces/cgroups behavior, lifecycle).
These standards enable interoperability across tooling. For example, higher-level runtimes (like containerd or CRI-O) rely on OCI-compliant components (often runc or equivalents) to execute containers consistently.
Why the other options are not the best answer:
A (accelerating adoption) might be an indirect outcome, but it’s not the OCI’s core charter.
C is contradictory (“industry standards” but “for private purposes”)—OCI is explicitly about open standards.
D (improving security) can be helped by standardization and best practices, but OCI is not primarily a security standards body; its central function is format and runtime interoperability.
In Kubernetes specifically, OCI is part of the “plumbing” that makes runtimes replaceable. Kubernetes talks to runtimes via CRI; runtimes execute containers via OCI. This layering helps Kubernetes remain runtime-agnostic while still benefiting from consistent container behavior everywhere.
Therefore, the correct choice is B: OCI creates open standards around container formats and runtimes.
=========
What is the default value for authorization-mode in Kubernetes API server?
--authorization-mode=RBAC
--authorization-mode=AlwaysAllow
--authorization-mode=AlwaysDeny
--authorization-mode=ABAC
The Kubernetes API server supports multiple authorization modes that determine whether an authenticated request is allowed to perform an action (verb) on a resource. Historically, the API server’s default authorization mode was AlwaysAllow, meaning that once a request was authenticated, it would be authorized without further checks. That is why the correct answer here is B.
However, it’s crucial to distinguish “default flag value” from “recommended configuration.” In production clusters, running with AlwaysAllow is insecure because it effectively removes authorization controls—any authenticated user (or component credential) could do anything the API permits. Modern Kubernetes best practices strongly recommend enabling RBAC (Role-Based Access Control), often alongside Node and Webhook authorization, so that permissions are granted explicitly using Roles/ClusterRoles and RoleBindings/ClusterRoleBindings. Many managed Kubernetes distributions and kubeadm-based setups commonly enable RBAC by default as part of cluster bootstrap profiles, even if the API server’s historical default flag value is AlwaysAllow.
So, the exam-style interpretation of this question is about the API server flag default, not what most real clusters should run. With RBAC enabled, authorization becomes granular: you can control who can read Secrets, who can create Deployments, who can exec into Pods, and so on, scoped to namespaces or cluster-wide. ABAC (Attribute-Based Access Control) exists but is generally discouraged compared to RBAC because it relies on policy files and is less ergonomic and less commonly used. AlwaysDeny is useful for hard lockdown testing but not for normal clusters.
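As a sketch, in a kubeadm-style cluster the configured mode is visible in the API server’s static Pod manifest; surrounding flags are omitted here for brevity:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (abbreviated excerpt)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC      # explicit, secure configuration
    # --authorization-mode=AlwaysAllow    # the historical flag default
```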
In short: AlwaysAllow is the API server’s default mode (answer B), but RBAC is the secure, recommended choice you should expect to see enabled in almost any serious Kubernetes environment.
=========
What is the Kubernetes object used for running a recurring workload?
Job
Batch
DaemonSet
CronJob
A recurring workload in Kubernetes is implemented with a CronJob, so the correct choice is D. A CronJob is a controller that creates Jobs on a schedule defined in standard cron format (minute, hour, day of month, month, day of week). This makes CronJobs ideal for periodic tasks like backups, report generation, log rotation, and cleanup tasks.
A Job (option A) is run-to-completion but is typically a one-time execution; it ensures that a specified number of Pods successfully terminate. You can use a Job repeatedly, but something else must create it each time—CronJob is that built-in scheduler. Option B (“Batch”) is not a standard workload resource type (batch is an API group, not the object name used here). Option C (DaemonSet) ensures one Pod runs on every node (or selected nodes), which is not “recurring,” it’s “always present per node.”
CronJobs include operational controls that matter in real clusters. For example, concurrencyPolicy controls what happens if a scheduled run overlaps with a previous run (Allow, Forbid, Replace). startingDeadlineSeconds can handle missed schedules (e.g., if the controller was down). History limits (successfulJobsHistoryLimit, failedJobsHistoryLimit) help manage cleanup and troubleshooting. Each scheduled execution results in a Job with its own Pods, which can be inspected with kubectl get jobs and kubectl logs.
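The controls above can be sketched in a single manifest; the image and arguments are hypothetical placeholders for a real backup tool:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"          # 02:00 every day, standard cron format
  concurrencyPolicy: Forbid      # skip a run if the previous one is still going
  startingDeadlineSeconds: 300   # tolerate up to 5 minutes of missed schedule
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: backup-tool:latest          # hypothetical image
            args: ["--target", "s3://backups"] # hypothetical arguments
```

Each firing of the schedule creates a Job object, which in turn creates the Pod that does the work.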
So the correct Kubernetes object for a recurring workload is CronJob (D): it provides native scheduling and creates Jobs automatically according to the defined cadence.
=========
What are the 3 pillars of Observability?
Metrics, Logs, and Traces
Metrics, Logs, and Spans
Metrics, Data, and Traces
Resources, Logs, and Tracing
The correct answer is A: Metrics, Logs, and Traces. These are widely recognized as the “three pillars” because together they provide complementary views into system behavior:
Metrics are numeric time series collected over time (CPU usage, request rate, error rate, latency percentiles). They are best for dashboards, alerting, and capacity planning because they are structured and aggregatable. In Kubernetes, metrics underpin autoscaling and operational visibility (node/pod resource usage, cluster health signals).
Logs are discrete event records (often text) emitted by applications and infrastructure components. Logs provide detailed context for debugging: error messages, stack traces, warnings, and business events. In Kubernetes, logs are commonly collected from container stdout/stderr and aggregated centrally for search and correlation.
Traces capture the end-to-end journey of a request through a distributed system, breaking it into spans. Tracing is crucial in microservices because a single user request may cross many services; traces show where latency accumulates and which dependency fails. Tracing also enables root cause analysis when metrics indicate degradation but don’t pinpoint the culprit.
Why the other options are wrong: a span is a component within tracing, not a top-level pillar; “data” is too generic; and “resources” are not an observability signal category. The pillars are defined by signal type and how they’re used operationally.
In cloud-native practice, these pillars are often unified via correlation IDs and shared context: metrics alerts link to logs and traces for the same timeframe/request. Tooling like Prometheus (metrics), log aggregators (e.g., Loki/Elastic), and tracing systems (Jaeger/Tempo/OpenTelemetry) work together to provide a complete observability story.
Therefore, the verified correct answer is A.
=========
Which of the following is a valid PromQL query?
SELECT * from http_requests_total WHERE job=apiserver
http_requests_total WHERE (job="apiserver")
SELECT * from http_requests_total
http_requests_total(job="apiserver")
Prometheus Query Language (PromQL) uses a function-and-selector syntax, not SQL. A valid query typically starts with a metric name and optionally includes label matchers in curly braces. In the simplified quiz syntax given, the valid PromQL-style selector is best represented by D: http_requests_total(job="apiserver"), so D is correct.
Conceptually, what this query means is “select time series for the metric http_requests_total where the job label equals apiserver.” In standard PromQL formatting you most often see this as: http_requests_total{job="apiserver"}. Many training questions abbreviate braces and focus on the idea of filtering by labels; the key is that PromQL uses label matchers rather than SQL WHERE clauses.
Options A and C are invalid because they use SQL (SELECT * FROM ...) which is not PromQL. Option B is also invalid because PromQL does not use the keyword WHERE. PromQL filtering is done by applying label matchers directly to the metric selector.
In Kubernetes observability, PromQL is central to building dashboards and alerts from cluster metrics. For example, you might compute rates from counters: rate(http_requests_total{job="apiserver"}[5m]), aggregate by labels: sum by (code) (...), or alert on error ratios. Understanding the selector and label-matcher model is foundational because Prometheus metrics are multi-dimensional—labels define the slices you can filter and aggregate on.
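To tie the selector model to practice, here is a sketch of a Prometheus alerting rule using the canonical brace syntax; the group name, threshold, and labels are illustrative choices, not standard values:

```yaml
groups:
- name: apiserver-alerts
  rules:
  - alert: HighRequestErrorRatio
    expr: |
      sum(rate(http_requests_total{job="apiserver", code=~"5.."}[5m]))
        /
      sum(rate(http_requests_total{job="apiserver"}[5m])) > 0.05
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: More than 5% of apiserver requests are failing.
```

Note how every filter is a label matcher attached directly to the metric selector; there is no WHERE clause anywhere.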
So, within the provided options, D is the only one that follows PromQL’s metric+label-filter style and therefore is the verified correct answer.
=========
If a Pod was waiting for container images to download on the scheduled node, what state would it be in?
Failed
Succeeded
Unknown
Pending
If a Pod is waiting for its container images to be pulled to the node, it remains in the Pending phase, so D is correct. Kubernetes Pod “phase” is a high-level summary of where the Pod is in its lifecycle. Pending means the Pod has been accepted by the cluster but one or more of its containers has not started yet. That can occur because the Pod is waiting to be scheduled, waiting on volume attachment/mount, or—very commonly—waiting for the container runtime to pull the image.
When image pulling is the blocker, kubectl describe pod <pod-name> typically shows image pull events (Pulling, Pulled) or errors such as ErrImagePull and ImagePullBackOff in its Events section, pinpointing the cause while the phase remains Pending.
Why the other phases don’t apply:
Succeeded is for run-to-completion Pods that have finished successfully (typical for Jobs).
Failed means the Pod terminated and at least one container terminated in failure (and won’t be restarted, depending on restartPolicy).
Unknown is used when the node can’t be contacted and the Pod’s state can’t be reliably determined (rare in healthy clusters).
A subtle but important Kubernetes detail: status “Waiting” reasons like ImagePullBackOff are container states inside .status.containerStatuses, while the Pod phase can still be Pending. So, “waiting for images to download” maps to Pod Pending, with container waiting reasons providing the deeper diagnosis.
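That phase-versus-container-state distinction looks roughly like this in a Pod’s status (a representative excerpt of kubectl get pod -o yaml output; the message text is illustrative):

```yaml
status:
  phase: Pending                    # Pod-level lifecycle summary
  containerStatuses:
  - name: app
    ready: false
    state:
      waiting:
        reason: ImagePullBackOff    # container-level waiting reason
        message: Back-off pulling image "example/app:latest"
```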
Therefore, the verified correct answer is D: Pending.
=========
Which of the following is the correct command to run an nginx deployment with 2 replicas?
kubectl run deploy nginx --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --replicas=2
kubectl create nginx deployment --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --count=2
The correct answer is B: kubectl create deploy nginx --image=nginx --replicas=2. This uses kubectl create deployment (shorthand create deploy) to generate a Deployment resource named nginx with the specified container image. The --replicas=2 flag sets the desired replica count, so Kubernetes will create two Pod replicas (via a ReplicaSet) and keep that number stable.
Option A is incorrect because kubectl run is primarily intended to run a Pod (and in older versions could generate other resources, but it’s not the recommended/consistent way to create a Deployment in modern kubectl usage). Option C is invalid syntax: kubectl subcommand order is incorrect; you don’t say kubectl create nginx deployment. Option D uses a non-existent --count flag for Deployment replicas.
From a Kubernetes fundamentals perspective, this question tests two ideas: (1) Deployments are the standard controller for running stateless workloads with a desired number of replicas, and (2) kubectl create deployment is a common imperative shortcut for generating that resource. After running the command, you can confirm with kubectl get deploy nginx, kubectl get rs, and kubectl get pods -l app=nginx (label may vary depending on kubectl version). You’ll see a ReplicaSet created and two Pods brought up.
In production, teams typically use declarative manifests (kubectl apply -f) or GitOps, but knowing the imperative command is useful for quick labs and validation. The key is that replicas are managed by the controller, not by manually starting containers—Kubernetes reconciles the state continuously.
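For reference, a declarative equivalent of that imperative command might look like the following; the app: nginx labels mirror what recent kubectl versions typically generate, though the exact labels can vary:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2            # same effect as --replicas=2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx     # same as --image=nginx
```

Applying this with kubectl apply -f produces the same Deployment, ReplicaSet, and two Pods, but with the desired state captured in a reviewable file.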
Therefore, B is the verified correct command.
=========
Which Kubernetes-native deployment strategy supports zero-downtime updates of a workload?
Canary
Recreate
BlueGreen
RollingUpdate
D (RollingUpdate) is correct. In Kubernetes, the Deployment resource’s default update strategy is RollingUpdate, which replaces Pods gradually rather than all at once. This supports zero-downtime updates when the workload is properly configured (sufficient replicas, correct readiness probes, and appropriate maxUnavailable / maxSurge settings). As new Pods come up and become Ready, old Pods are terminated in a controlled way, keeping the service available throughout the rollout.
RollingUpdate’s “zero downtime” is achieved by maintaining capacity while transitioning between versions. For example, with multiple replicas, Kubernetes can create new Pods, wait for readiness, then scale down old Pods, ensuring traffic continues to flow to healthy instances. Readiness probes are critical: they prevent traffic from being routed to a Pod until it’s actually ready to serve.
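A hedged sketch of such a configuration, assuming a hypothetical web:v2 image with a /healthz endpoint on port 8080:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most one extra Pod during the rollout
      maxUnavailable: 0      # never drop below the desired capacity
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: web:v2              # hypothetical image/tag
        readinessProbe:            # gates traffic to new Pods
          httpGet:
            path: /healthz
            port: 8080
```

With maxUnavailable: 0, the rollout only removes an old Pod after its replacement has passed the readiness probe.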
Why other options are not the Kubernetes-native “strategy†answer here:
Recreate (B) explicitly stops old Pods before starting new ones, causing downtime for most services.
Canary (A) and BlueGreen (C) are real deployment patterns, but in “Kubernetes-native deployment strategy” terms, the built-in Deployment strategies are RollingUpdate and Recreate. Canary/BlueGreen typically require additional tooling/controllers (service mesh, ingress controller features, or progressive delivery operators) to manage traffic shifting between versions.
So, for a Kubernetes-native strategy that supports zero-downtime updates, the correct and verified choice is RollingUpdate (D).
=========
Which of the following would fall under the responsibilities of an SRE?
Developing a new application feature.
Creating a monitoring baseline for an application.
Submitting a budget for running an application in a cloud.
Writing policy on how to submit a code change.
Site Reliability Engineering (SRE) focuses on reliability, availability, performance, and operational excellence using engineering approaches. Among the options, creating a monitoring baseline for an application is a classic SRE responsibility, so B is correct. A monitoring baseline typically includes defining key service-level signals (latency, traffic, errors, saturation), establishing dashboards, setting sensible alert thresholds, and ensuring telemetry is complete enough to support incident response and capacity planning.
In Kubernetes environments, SRE work often involves ensuring that workloads expose health endpoints for probes, that resource requests/limits are set to allow stable scheduling and autoscaling, and that observability pipelines (metrics, logs, traces) are consistent. Building a monitoring baseline also ties into SLO/SLI practices: SREs define what “good” looks like, measure it continuously, and create alerts that notify teams when the system deviates from those expectations.
Option A is primarily an application developer task—SREs may contribute to reliability features, but core product feature development is usually owned by engineering teams. Option C is more aligned with finance, FinOps, or management responsibilities, though SRE data can inform costs. Option D is closer to governance, platform policy, or developer experience/process ownership; SREs might influence processes, but writing policy on how to submit a code change is not the defining SRE duty compared to monitoring and reliability engineering.
Therefore, the best verified choice is B, because establishing monitoring baselines is central to operating reliable services on Kubernetes.
=========
Which one of the following is an open source runtime security tool?
lxd
containerd
falco
gVisor
The correct answer is C: Falco. Falco is a widely used open-source runtime security tool (originally created by Sysdig and now a CNCF project) designed to detect suspicious behavior at runtime by monitoring system calls and other kernel-level signals. In Kubernetes environments, Falco helps identify threats such as unexpected shell access in containers, privilege escalation attempts, access to sensitive files, anomalous network tooling, crypto-mining patterns, and other behaviors that indicate compromise or policy violations.
The other options are not primarily “runtime security tools” in the detection/alerting sense:
containerd is a container runtime responsible for executing containers; it’s not a security detection tool.
lxd is a system container and VM manager; again, not a runtime threat detection tool.
gVisor is a sandboxed container runtime that improves isolation by interposing a user-space kernel; it’s a security mechanism, but the question asks for a runtime security tool (monitoring/detection). Falco fits that definition best.
In cloud-native security practice, Falco typically runs as a DaemonSet so it can observe activity on every node. It uses rules to define what “bad” looks like and can emit alerts to SIEM systems, logging backends, or incident response workflows. This complements preventative controls like RBAC, Pod Security Admission, seccomp, and least-privilege configurations. Preventative controls reduce risk; Falco provides visibility and detection when something slips through.
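As an illustrative sketch of the rule model (simplified, not copied from any shipped ruleset), a Falco rule pairs a condition on syscall events with an alert template:

```yaml
- rule: Shell Spawned in Container
  desc: Detect an interactive shell started inside a container.
  condition: >
    spawned_process and container and proc.name in (bash, sh, zsh)
  output: >
    Shell spawned in container (user=%user.name container=%container.name
    command=%proc.cmdline)
  priority: WARNING
```

When the condition matches an observed event, Falco renders the output template and forwards the alert to its configured outputs.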
Therefore, among the provided choices, the verified runtime security tool is Falco (C).
=========
What function does kube-proxy provide to a cluster?
Implementing the Ingress resource type for application traffic.
Forwarding data to the correct endpoints for Services.
Managing data egress from the cluster nodes to the network.
Managing access to the Kubernetes API.
kube-proxy is a node-level networking component that helps implement the Kubernetes Service abstraction. Services provide a stable virtual IP and DNS name that route traffic to a set of Pods (endpoints). kube-proxy watches the API for Service and EndpointSlice/Endpoints changes and then programs the node’s networking rules so that traffic sent to a Service is forwarded (load-balanced) to one of the correct backend Pod IPs. This is why B is correct.
Conceptually, kube-proxy turns the declarative Service configuration into concrete dataplane behavior. Depending on the mode, it may use iptables rules, IPVS, or integrate with eBPF-capable networking stacks (sometimes kube-proxy is replaced or bypassed by CNI implementations, but the classic kube-proxy role remains the canonical answer). In iptables mode, kube-proxy creates NAT rules that rewrite traffic from the Service virtual IP to one of the Pod endpoints. In IPVS mode, it programs kernel load-balancing tables for more scalable service routing. In all cases, the job is to connect “Service IP/port” to “Pod IP/port endpoints.”
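For context, here is a minimal example of the abstraction kube-proxy implements; the name, label, and ports are illustrative. Traffic to this Service’s cluster IP on port 80 is forwarded to port 8080 on any Pod matching the selector:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # endpoints are the Pods carrying this label
  ports:
  - port: 80          # Service (virtual IP) port
    targetPort: 8080  # container port on the backend Pods
```

kube-proxy’s job is precisely to keep every node’s dataplane rules consistent with this declaration as Pods come and go.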
Option A is incorrect because Ingress is a separate API resource and requires an Ingress controller (like NGINX Ingress, HAProxy, Traefik, etc.) to implement HTTP routing, TLS termination, and host/path rules. kube-proxy is not an Ingress controller. Option C is incorrect because general node egress management is not kube-proxy’s responsibility; egress behavior typically depends on the CNI plugin, NAT configuration, and network policies. Option D is incorrect because API access control is handled by the API server’s authentication/authorization layers (RBAC, webhooks, etc.), not kube-proxy.
So kube-proxy’s essential function is: keep node networking rules in sync so that Service traffic reaches the right Pods. It is one of the key components that makes Services “just work” across nodes without clients needing to know individual Pod IPs.
=========
What does vertical scaling an application deployment describe best?
Adding/removing applications to meet demand.
Adding/removing node instances to the cluster to meet demand.
Adding/removing resources to applications to meet demand.
Adding/removing application instances of the same application to meet demand.
Vertical scaling means changing the resources allocated to a single instance of an application (more or less CPU/memory), which is why C is correct. In Kubernetes terms, this corresponds to adjusting container resource requests and limits (for CPU and memory). Increasing resources can help a workload handle more load per Pod by giving it more compute or memory headroom; decreasing can reduce cost and improve cluster packing efficiency.
This differs from horizontal scaling, which changes the number of instances (replicas). Option D describes horizontal scaling: adding/removing replicas of the same workload, typically managed by a Deployment and often automated via the Horizontal Pod Autoscaler (HPA). Option B describes scaling the infrastructure layer (nodes), which is cluster/node autoscaling (Cluster Autoscaler in cloud environments). Option A is not a standard scaling definition.
In practice, vertical scaling in Kubernetes can be manual (edit the Deployment resource requests/limits) or automated using the Vertical Pod Autoscaler (VPA), which can recommend or apply new requests based on observed usage. A key nuance is that changing requests/limits often requires Pod restarts to take effect, so vertical scaling is less “instant” than HPA and can disrupt workloads if not planned. That’s why many production teams prefer horizontal scaling for traffic-driven workloads and use vertical scaling to right-size baseline resources or address memory-bound/CPU-bound behavior.
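Concretely, vertical scaling means editing these fields in a workload’s Pod template (an excerpt with an illustrative image and values):

```yaml
# Pod template excerpt: vertical scaling = changing these values
containers:
- name: app
  image: app:1.0          # hypothetical image
  resources:
    requests:
      cpu: "500m"         # guaranteed at scheduling time
      memory: "256Mi"
    limits:
      cpu: "1"            # CPU is throttled beyond this
      memory: "512Mi"     # exceeding this gets the container OOM-killed
```

Doubling the CPU request/limit here is vertical scaling; changing replicas from 2 to 4 would be horizontal scaling.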
From a cloud-native architecture standpoint, understanding vertical vs horizontal scaling helps you design for elasticity: use vertical scaling to tune per-instance capacity; use horizontal scaling for resilience and throughput; and combine with node autoscaling to ensure the cluster has sufficient capacity. The definition the question is testing is simple: vertical scaling = change resources per application instance, which is option C.
=========
Which Prometheus metric represents a single value that can go up and down?
Counter
Gauge
Summary
Histogram
In Prometheus, a Gauge is the metric type used to represent a value that can increase and decrease over time, so B is correct. Gauges are suited for “current state” measurements such as current memory usage, number of active sessions, queue depth, temperature, or CPU usage—anything that can move up and down as the system changes.
This contrasts with a Counter (A), which is monotonically increasing (it only goes up, except when a process restarts and the counter resets to zero). Counters are ideal for totals like total HTTP requests served, total errors, or bytes sent, and you typically use rate()/irate() in PromQL to convert counters into per-second rates.
A Summary (C) and a Histogram (D) are used for distributions, commonly request latency. Histograms record observations into buckets and can produce percentiles using functions like histogram_quantile(). Summaries compute quantiles on the client side and expose them directly, along with counts and sums. Neither of these is the simplest “single value that goes up and down” type.
In Kubernetes observability, Prometheus is often used to scrape metrics from cluster components (API server, kubelet) and applications. Choosing the right metric type matters operationally: use gauges for instantaneous measurements, counters for event totals, and histograms/summaries for latency distributions. That’s why Prometheus documentation and best practices emphasize understanding metric semantics—because misusing types leads to incorrect alerts and dashboards.
So for a single numeric value that can go up and down, the correct metric type is Gauge, option B.
=========
How do you perform a command in a running container of a Pod?
kubectl exec
docker exec
kubectl run
kubectl attach
In Kubernetes, the standard way to execute a command inside a running container is kubectl exec, which is why A is correct. kubectl exec calls the Kubernetes API (API server), which then coordinates with the kubelet on the target node to run the requested command inside the container using the container runtime’s exec mechanism. The -- separator is important: it tells kubectl that everything after -- is the command to run in the container rather than flags for kubectl itself.
This is fundamentally different from docker exec. In Kubernetes, you don’t normally target containers through Docker/CRI tools directly because Kubernetes abstracts the runtime behind CRI. Also, “Docker” might not even be installed on nodes in modern clusters (containerd/CRI-O are common). So option B is not the Kubernetes-native approach and often won’t work.
kubectl run (option C) is for creating a new Pod (or generating workload resources), not for executing a command in an existing container. kubectl attach (option D) attaches your terminal to a running container’s process streams (stdin/stdout/stderr), which is useful for interactive sessions, but it does not execute an arbitrary new command like exec does.
In real usage, you often specify the container when a Pod has multiple containers: kubectl exec -it <pod-name> -c <container-name> -- sh opens an interactive shell in that specific container.
=========
Which API object is the recommended way to run a scalable, stateless application on your cluster?
ReplicaSet
Deployment
DaemonSet
Pod
For a scalable, stateless application, Kubernetes recommends using a Deployment because it provides a higher-level, declarative management layer over Pods. A Deployment doesn’t just “run replicas”; it manages the entire lifecycle of rolling out new versions, scaling up/down, and recovering from failures by continuously reconciling the current cluster state to the desired state you define. Under the hood, a Deployment typically creates and manages a ReplicaSet, and that ReplicaSet ensures a specified number of Pod replicas are running at all times. This layering is the key: you get ReplicaSet’s self-healing replica maintenance plus Deployment’s rollout/rollback strategies and revision history.
Why not the other options? A Pod is the smallest deployable unit, but it’s not a scalable controller—if a Pod dies, nothing automatically replaces it unless a controller owns it. A ReplicaSet can maintain N replicas, but it does not provide the full rollout orchestration (rolling updates, pause/resume, rollbacks, and revision tracking) that you typically want for stateless apps that ship frequent releases. A DaemonSet is for node-scoped workloads (one Pod per node or subset of nodes), like log shippers or node agents, not for “scale by replicas.”
For stateless applications, the Deployment model is especially appropriate because individual replicas are interchangeable; the application does not require stable network identities or persistent storage per replica. Kubernetes can freely replace or reschedule Pods to maintain availability. Deployment strategies (like RollingUpdate) allow you to upgrade without downtime by gradually replacing old replicas with new ones while keeping the Service endpoints healthy. That combination—declarative desired state, self-healing, and controlled updates—makes Deployment the recommended object for scalable stateless workloads.
=========
How can you monitor the progress for an updated Deployment/DaemonSets/StatefulSets?
kubectl rollout watch
kubectl rollout progress
kubectl rollout state
kubectl rollout status
To monitor rollout progress for Kubernetes workload updates (most commonly Deployments, and also StatefulSets and DaemonSets where applicable), the standard kubectl command is kubectl rollout status, which makes D correct.
Kubernetes manages updates declaratively through controllers. For a Deployment, an update typically creates a new ReplicaSet and gradually shifts replicas from the old to the new according to the strategy (e.g., RollingUpdate with maxUnavailable and maxSurge). For StatefulSets, updates may be ordered and respect stable identities, and for DaemonSets, an update replaces node-level Pods according to update strategy. In all cases, you often want a single command that tells you whether the controller has completed the update and whether the new replicas are available. kubectl rollout status queries the resource status and prints a progress view until completion or timeout.
The other commands listed are not the canonical kubectl subcommands. kubectl rollout watch, kubectl rollout progress, and kubectl rollout state are not standard rollout verbs in kubectl. The supported rollout verbs typically include status, history, undo, pause, and resume (depending on kubectl version and resource type).
Operationally, kubectl rollout status deployment/<name> blocks until the rollout completes or its progress deadline is exceeded, which also makes it a convenient gate in CI/CD pipelines after applying a change.
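As an illustrative sketch (the resource names are placeholders, and these commands require a live cluster), the rollout subcommands look like this in practice:

```
# Names below are hypothetical; substitute your own resources.
kubectl rollout status deployment/web         # block until rollout completes or fails
kubectl rollout status statefulset/db         # also works for StatefulSets
kubectl rollout status daemonset/node-agent   # ...and DaemonSets
kubectl rollout history deployment/web        # inspect revision history
kubectl rollout undo deployment/web           # roll back to the previous revision
```

The same status command is what distinguishes the correct answer: watch, progress, and state are not rollout verbs kubectl accepts.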
=========
What is Flux constructed with?
GitLab Environment Toolkit
GitOps Toolkit
Helm Toolkit
GitHub Actions Toolkit
The correct answer is B: GitOps Toolkit. Flux is a GitOps solution for Kubernetes, and in Flux v2 the project is built as a set of Kubernetes controllers and supporting components collectively referred to as the GitOps Toolkit. This toolkit provides the building blocks for implementing GitOps reconciliation: sourcing artifacts (Git repositories, Helm repositories, OCI artifacts), applying manifests (Kustomize/Helm), and continuously reconciling cluster state to match the desired state declared in Git.
This construction matters because it reflects Flux's modular architecture. Instead of being a single monolithic daemon, Flux is composed of controllers that each handle a part of the GitOps workflow: fetching sources, rendering configuration, and applying changes. This makes it more Kubernetes-native: everything is declarative, runs in the cluster, and can be managed like other workloads (RBAC, namespaces, upgrades, observability).
Why the other options are wrong:
"GitLab Environment Toolkit" and "GitHub Actions Toolkit" are not what Flux is built from. Flux can integrate with many SCM providers and CI systems, but it is not "constructed with" those.
"Helm Toolkit" is not the named foundational set Flux is built upon. Flux can deploy Helm charts, but that's a capability, not its underlying construction.
In cloud-native delivery, Flux implements the key GitOps control loop: detect changes in Git (or other declared sources), compute desired Kubernetes state, and apply it while continuously checking for drift. The GitOps Toolkit is the set of controllers enabling that loop.
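As an illustrative sketch (the repository URL, names, and path are assumptions), two GitOps Toolkit custom resources wire a Git source to a reconciled Kustomization; note that the API groups literally carry the toolkit name:

```yaml
# Illustrative only: URL, names, namespaces, and path are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-repo
  namespace: flux-system
spec:
  interval: 1m                 # how often the source-controller polls Git
  url: https://example.com/org/app-config
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m                # how often drift is re-checked and corrected
  sourceRef:
    kind: GitRepository
    name: app-repo
  path: ./deploy
  prune: true                  # delete cluster objects removed from Git
```

The source-controller fetches the repository and the kustomize-controller applies it, which is exactly the modular control loop the answer describes.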
Therefore, the verified correct answer is B.
=========
Which of the following options include resources cleaned by the Kubernetes garbage collection mechanism?
Stale or expired CertificateSigningRequests (CSRs) and old deployments.
Nodes deleted by a cloud controller manager and obsolete logs from the kubelet.
Unused container and container images, and obsolete logs from the kubelet.
Terminated pods, completed jobs, and objects without owner references.
Kubernetes garbage collection (GC) is about cleaning up API objects and related resources that are no longer needed, so the correct answer is D. Two big categories it targets are (1) objects that have finished their lifecycle (like terminated Pods and completed Jobs, depending on controllers and TTL policies), and (2) "dangling" objects that are no longer referenced properly, often described as objects without owner references (or whose owners are gone), which can happen when a higher-level controller is deleted or when dependent resources are left behind.
A key Kubernetes concept here is OwnerReferences: many resources are created "owned" by a controller (e.g., a ReplicaSet owned by a Deployment, Pods owned by a ReplicaSet). When an owning object is deleted, Kubernetes' garbage collector can remove dependent objects based on deletion propagation policies (foreground/background/orphan). This prevents resource leaks and keeps the cluster tidy and performant.
The other options are incorrect because they refer to cleanup tasks outside Kubernetes GC's scope. Kubelet logs (B/C) are node-level files, and log rotation is handled by node/runtime configuration, not the Kubernetes garbage collector. Unused container images (C) are managed by the container runtime's image GC and kubelet disk-pressure management, not the Kubernetes API GC. Nodes deleted by a cloud controller (B) aren't "garbage collected" in the same sense; node lifecycle is handled by controllers and cloud integrations, not as a generic GC cleanup category like ownerRef-based object deletion.
So, when the question asks specifically about "resources cleaned by Kubernetes garbage collection," it's pointing to Kubernetes object lifecycle cleanup: terminated Pods, completed Jobs, and orphaned objects, exactly what option D states.
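To make the OwnerReferences idea concrete, here is an illustrative metadata fragment (names and UID are placeholders) for a Pod owned by a ReplicaSet; if the ReplicaSet is deleted without orphaning, the garbage collector removes the Pod:

```yaml
# Illustrative only: names and uid are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: web-7d4b9cf8d-abcde
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: web-7d4b9cf8d
    uid: 11111111-2222-3333-4444-555555555555
    controller: true
    blockOwnerDeletion: true
```

Deletion propagation can be chosen per request, e.g. kubectl delete replicaset web-7d4b9cf8d --cascade=foreground (or background, or orphan to leave dependents behind).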
=========
What is an ephemeral container?
A specialized container that runs as root for infosec applications.
A specialized container that runs temporarily in an existing Pod.
A specialized container that extends and enhances the main container in a Pod.
A specialized container that runs before the app container in a Pod.
B is correct: an ephemeral container is a temporary container you can add to an existing Pod for troubleshooting and debugging without restarting the Pod. This capability is especially useful when a running container image is minimal (distroless) and lacks debugging tools like sh, curl, or ps. Instead of rebuilding the workload image or disrupting the Pod, you attach an ephemeral container that includes the tools you need, then inspect processes, networking, filesystem mounts, and runtime behavior.
Ephemeral containers are not part of the original Pod spec the same way normal containers are. They are added via a dedicated subresource and are generally not restarted automatically like regular containers. They are meant for interactive investigation, not for ongoing workload functionality.
Why the other options are incorrect:
D describes init containers, which run before app containers start and are used for setup tasks.
C resembles the "sidecar" concept (a supporting container that runs alongside the main container), but sidecars are normal containers defined in the Pod spec, not ephemeral containers.
A is not a definition; ephemeral containers are not "root by design" (they can run with various security contexts depending on policy), and they aren't limited to infosec use cases.
In Kubernetes operations, ephemeral containers complement kubectl exec and logs. If the target container is crash-looping or lacks a shell, exec may not help; adding an ephemeral container provides a safe and Kubernetes-native debugging path. So, the accurate definition is B.
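A minimal illustrative workflow (the Pod and container names are placeholders, and this requires a live cluster) uses kubectl debug, which adds the ephemeral container via the Pod's ephemeralcontainers subresource:

```
# Attach an interactive ephemeral debug container to a running Pod.
# --target shares the process namespace with the named app container.
kubectl debug -it pod/myapp --image=busybox:1.36 --target=app -- sh

# List ephemeral containers that have been added to the Pod:
kubectl get pod myapp -o jsonpath='{.spec.ephemeralContainers[*].name}'
```

Unlike a regular container, the ephemeral container is not restarted and does not change the workload's own spec or rollout state.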
=========
What is CRD?
Custom Resource Definition
Custom Restricted Definition
Customized RUST Definition
Custom RUST Definition
A CRD is a CustomResourceDefinition, making A correct. Kubernetes is built around an API-driven model: resources like Pods, Services, and Deployments are all objects served by the Kubernetes API. CRDs allow you to extend the Kubernetes API by defining your own resource types. Once a CRD is installed, the API server can store and serve custom objects (Custom Resources) of that new type, and Kubernetes tooling (kubectl, RBAC, admission, watch mechanisms) can interact with them just like built-in resources.
CRDs are a core building block of the Kubernetes ecosystem because they enable operators and platform extensions. A typical pattern is: define a CRD that represents the desired state of some higher-level concept (for example, a database cluster, a certificate request, an application release), and then run a controller (often called an "operator") that watches those custom resources and reconciles the cluster to match. That controller may create Deployments, StatefulSets, Services, Secrets, or cloud resources to implement the desired state encoded in the custom resource.
The incorrect answers are made-up expansions. CRDs are not related to Rust in Kubernetes terminology, and "custom restricted definition" is not the standard meaning.
So the verified meaning is: CRD = CustomResourceDefinition, used to extend Kubernetes APIs and enable Kubernetes-native automation via controllers/operators.
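As a minimal illustrative sketch (the group, kind, and schema are invented for this example), a CRD is itself just another API object:

```yaml
# Illustrative only: example.com, Widget, and the schema are made up.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com      # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:           # validation for the custom objects
        type: object
        properties:
          spec:
            type: object
            properties:
              size:
                type: integer
```

Once this is applied, kubectl get widgets works like any built-in resource, and a controller can watch Widget objects to reconcile them.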
=========
What does CNCF stand for?
Cloud Native Community Foundation
Cloud Native Computing Foundation
Cloud Neutral Computing Foundation
Cloud Neutral Community Foundation
CNCF stands for the Cloud Native Computing Foundation, making B correct. CNCF is the foundation that hosts and sustains many cloud-native open source projects, including Kubernetes, and provides governance, neutral stewardship, and community infrastructure to help projects grow and remain vendor-neutral.
CNCF's scope includes not only Kubernetes but also a broad ecosystem of projects across observability, networking, service meshes, runtime security, CI/CD, and application delivery. The foundation defines processes for project incubation and graduation, promotes best practices, organizes community events, and supports interoperability and adoption through reference architectures and education.
In the Kubernetes context, CNCF's role matters because Kubernetes is a massive multi-vendor project. Neutral governance reduces the risk that any single company can unilaterally control direction. This fosters broad contribution and adoption across cloud providers and enterprises. CNCF also supports the broader "cloud native" definition, often associated with containerization, microservices, declarative APIs, automation, and resilience principles.
The incorrect options are close-sounding but not accurate expansions. "Cloud Native Community Foundation" and the "Cloud Neutral …" variants are not the recognized meaning. The correct official name is Cloud Native Computing Foundation.
So, the verified answer is B, and understanding CNCF helps connect Kubernetes to its broader ecosystem of standardized, interoperable cloud-native tooling.
=========
TESTED 02 Jan 2026