What Does It Mean To Have a Cloud Native Application?
Building or converting an application to cloud native is not always easy, and it includes a lot of moving pieces. In this article, we cover the most important parts you need to be aware of and explain the difference between a cloud native application and any app that merely runs in the cloud.
When folks try out Kubefirst and join our community for the first time, they tend to fall into two categories. First, there are those inside of established organizations, which are balancing their older technology with their ambition to adopt the cloud native landscape. Second, we have technical co-founders and early engineers at youthful startups, who just need to make that leap from “my application runs on localhost” to having development/staging/production Kubernetes environments.
What they share is that they’re looking for ways to make that transition as seamless as possible. Whether you fit into one of those groups or are coming from a different perspective, you’re probably looking for those same tips.
So, while this isn’t meant to be a checklist in the strictest sense, if your application checks these boxes, you can move forward with confidence that your application is ready to stand up on a fully functional Kubernetes platform in just hours with Kubefirst.
To help illustrate our Kubernetes-specific points, we’ll often reference our example application, metaphor, which serves as a demonstration for how applications hook into your Kubefirst-built infrastructure and tooling.
What all applications and cloud native ones need
All apps: Source code
Whether you’re thinking about “lifting and shifting” a legacy application to a cloud native environment, or your app is just a dream in a private GitHub repository, you’re always starting from source code that will, eventually, be a tool for your end user.
All: A command that builds the application
Source code doesn’t work in isolation, so you need to transition it from being a few folders of configuration files and imported packages into the user-facing product.
Your application’s language/framework might offer a helper command that builds/compiles/packages your application, like go build, python3 -m build, or npm run build. If you’re already thinking about containers, you might use docker build to create a Docker image that contains not only your application but also its dependencies, in a single distributable package.
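For example, building and tagging a container image from a Dockerfile in the current directory looks something like this (the registry and image name are placeholders):

# Build the image from ./Dockerfile and tag it for a hypothetical registry
docker build -t registry.example.com/hello-world:v1 .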
All: A command that starts the application
Building your application generates new files or a binary, but that alone doesn’t mean it’s ready to go.
You might build a binary that runs directly on your target system, like ./hello-world. Or your package manager might give you another helper, like npm run serve, to run the compiled version from a build directory that isn’t source-controlled. And again, if you’re already working in Docker, you’ll run something like docker run --name hello-world -it ubuntu bash to spin up a container.
All: A port that the application listens on if exposed
Most applications need to be reachable by their users, whether from the public internet or an internal network.
Web applications, for example, might listen directly on ports 80/443 for HTTP(S) traffic. Or, your application might listen on an unassigned port on an internal network, with a service like Nginx acting as a reverse proxy to route traffic from app.example.com directly to your app via that port. When you’re using Docker or another containerized service, the idea is more or less the same. In a Kubernetes environment, the reverse proxy is typically managed at the cluster level with an ingress controller like ingress-nginx or an API gateway like Envoy.
In the cloud native world, this port must be exposed by your Dockerfile. Speaking of which—
Cloud native apps: A Dockerfile
We mentioned Docker a few times already, but it’s a must-have for cloud native applications. Instead of deploying a directory or a binary to the cloud, you deploy a previously-built and immutable image, which includes a thin operating system layer and all the dependencies your application requires, in a single package. Docker then populates an isolated environment, also known as a container, to run a “copy” of the image.
Dockerfiles are the “recipes” Docker uses to build your image. Take this example, which packages source code, commands to build/run the app, a port to listen on, and could easily be extended to include environment configurations/secrets.
Our metaphor
application has a perfect example of a Dockerfile with what we consider to be the bare minimum of cloud native:
- A multi-stage process that first builds the application using a larger base image and all its dependencies, followed by a run stage that uses a “thinner” base image and additional environment configurations.
- A non-root user for running the application.
Most organizations put Dockerfiles into the root directory of their version-controlled repository, which makes them easy to add to a legacy app.
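To make that concrete, here is a rough sketch of such a Dockerfile (not metaphor’s actual one): a multi-stage build for a Go service that compiles in a larger base image, runs in a thinner one, drops to a non-root user, and exposes its port. The base images, module layout, and port are illustrative assumptions.

# Build stage: larger base image with the full Go toolchain and dependencies
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Run stage: thinner base image containing only the compiled binary
FROM gcr.io/distroless/base-debian12
COPY --from=build /bin/app /app
# Run as a non-root user and document the port the app listens on
USER 65532:65532
EXPOSE 3000
ENTRYPOINT ["/app"]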
Cloud native: A place to store your container image
A Kubernetes cluster automatically pulls the Docker images to create resources and ultimately establish your desired state, which means your images need to be accessible from anywhere your cluster might be running.
You can use DockerHub or another public repository, but chances are you’ll actually want a service like Amazon Elastic Container Registry (ECR) to securely push, manage, and pull your images from a trusted source.
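As a hedged sketch of what that looks like with ECR (the account ID, region, and repository are placeholders):

# Authenticate Docker against your private ECR registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com

# Tag the locally built image and push it to the registry
docker tag hello-world:v1 111122223333.dkr.ecr.us-east-1.amazonaws.com/hello-world:v1
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/hello-world:v1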
Cloud native: A place to store your Helm chart (if applicable)
While you can run a single Docker image directly on a Kubernetes cluster with kubectl run your-app --image=YOUR_IMAGE, you’ll more likely adopt tooling that helps you manage your applications, like Helm. Helm is a package manager for Kubernetes, helping you to define and install complex Kubernetes applications, even across multiple environments.
As with Docker, you can store your Helm charts in a public repository, like Artifact Hub, but you’ll most likely want to store at least some of your Helm charts in a private repository. You can roll your own or set up a Helm v3 chart repository in Amazon S3, among other options. We’re also big fans of ChartMuseum, an open-source Helm Chart Repository server in Go with support for Amazon S3.
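As one hedged example, packaging a chart and uploading it to a ChartMuseum instance can be as simple as the following (the chart path, version, and server URL are illustrative):

# Package the chart directory into a versioned .tgz archive
helm package charts/hello-world

# Upload the archive to a ChartMuseum server through its HTTP API
curl --data-binary "@hello-world-0.1.0.tgz" https://chartmuseum.example.com/api/charts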
All: A load balancer
It’s generally not a good idea to have a single point of ingress that leads to a single instance of your application running as a process on a single machine. That final part, whether it’s a virtual machine (VM) or a bare-metal system, is a single point of failure that can’t scale based on demand.
Instead, you should employ a load balancer, which uses a single IP to distribute traffic among a “pool” of servers, each of which runs an instance of your application. Your load balancer must also be tied to a routable hostname to associate it with your DNS record, which sends traffic to the app and determines the TLS certificate you need to secure traffic.
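In Kubernetes, that cloud load balancer is typically provisioned for you by a Service of type LoadBalancer, usually fronting your ingress controller rather than the application itself. A minimal sketch, with illustrative names and ports:

apiVersion: v1
kind: Service
metadata:
  name: hello-world-lb
spec:
  # Asks the cloud provider to provision a load balancer with a single external IP/hostname
  type: LoadBalancer
  selector:
    app: hello-world
  ports:
    - port: 80
      targetPort: 3000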
All: A route to tell the load balancer that the application is healthy
All load balancers work by consistently pinging an application’s route, like /health, to determine whether it’s healthy. If this route returns an unexpected response, or doesn’t respond at all, the load balancer won’t send any traffic to that application.
Kubernetes has since expanded on this single route for health with separate routes for liveness and readiness.
In a Kubernetes cluster, the kubelet uses readiness probes to know when a pod is fully launched and ready to accept traffic, which means all its containers are also ready to accept traffic. Pods that haven’t passed their readiness probe are never added to the load balancer in the first place; without a passing readiness check, your app simply won’t receive traffic.
This route is allowed to check core dependencies to prove full functionality, like selecting a single row from the user database to prove it has a healthy connection.
Add a readiness probe to any container you’ve defined with the readinessProbe field, pointed at some means of your application signaling that it’s launched, like a 200 response on the /ready route.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: my-test-app
  template:
    metadata:
      labels:
        app: my-test-app
    spec:
      containers:
        - name: my-test-app
          image: nginx:1.14.2
          readinessProbe:
            httpGet:
              path: /ready
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
Cloud native: Another route / mechanism to tell Kubernetes if the application is alive
Your Kubernetes cluster also wants to verify that a given Pod, and its containers, aren't stuck in a broken state. When the liveness probe fails, the kubelet kills the container and restarts it, hopefully restoring service.
Unlike a readiness check, the liveness check is supposed to happen ASAP, without checking dependencies or querying databases.
Adding a liveness probe is almost identical to the readiness probe:
…
livenessProbe:
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
All: Environment configurations
Each environment your application runs in—development, staging, and production—will likely require different means of, for example, connecting to its underlying database.
There are many ways to codify and deploy these configurations, like creating .env files for each environment in your codebase or adding variables directly to your command-line tooling, like docker run … --env DATABASE_NAME=XYZ --env DATABASE_PORT=12345.
In Kubernetes environments, environment variables are most easily established by binding them to ConfigMaps or Secrets in the deployment specification.
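For instance, a container in your Deployment can pull all of its environment variables from a ConfigMap and a Secret; the names app-config and app-secrets below are illustrative:

# Inside the pod template of a Deployment
containers:
  - name: my-test-app
    image: nginx:1.14.2
    envFrom:
      # Every key/value pair in these objects becomes an environment variable
      - configMapRef:
          name: app-config
      - secretRef:
          name: app-secrets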
Cloud native applications also require different settings based on whether it’s a staging or production environment, for example. But, instead of manually setting these every time you run helm install, you should codify configuration directly into your Helm chart by adding a values.yaml file alongside your template, which allows you to set specific values, such as:
replicaCount: 1
You can now reference Values in your template:
apiVersion: apps/v1
kind: Deployment
...
spec:
  replicas: {{ .Values.replicaCount }}
The metaphor project has live examples of both the values.yaml and deployment.yaml files.
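At install time, you can then layer environment-specific values on top of the chart’s defaults. As a hedged example (the chart path and file name are illustrative):

# The chart's values.yaml applies by default; -f layers staging-specific overrides on top
helm install metaphor ./charts/metaphor -f values-staging.yaml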
All: Environment secrets
As with the configurations, you’ll also need different secrets for different deployments (development, staging, production), which include things like API credentials and passwords to read/write from databases.
Unlike configurations, these often can’t be stored within the same repository as the rest of your source code, which means you’ll need to fashion a different strategy, whether it’s a human process or a secrets management platform.
For cloud native applications, secrets can be mapped to properties, not too dissimilar to environment variables. Your secrets should point to an external reference, like HashiCorp Vault, rather than a file, to ensure your tokens, passwords, certificates, and encryption keys are safe.
We rely on Vault in our metaphor app. For an example, check the external-secrets.yaml file.
spec:
  target:
    name: {{ template "metaphor.fullname" . }}
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-secrets-backend
  refreshInterval: "10s"
  data:
    - remoteRef:
        key: {{ .Values.vaultSecretPath }}
        property: SECRET_ONE
      secretKey: SECRET_ONE
    - remoteRef:
        key: {{ .Values.vaultSecretPath }}
        property: SECRET_TWO
      secretKey: SECRET_TWO
All: Runtime output and logs
Once your application is running in the production environment, you want to know how it’s performing and behaving. Maybe even more importantly, you’ll need resources to investigate what went wrong in the event of a crash or show-stopping bug. Logging helps you determine which code is causing issues by showing you each step in a chain of executions. Every language and framework comes with some built-in method of producing messages, which can be piped into logfiles and rotated as your application fills them.
You’ll probably also want to establish other layers of monitoring and/or observability, such as system metrics, application performance monitoring, tracing, and more.
Just like the legacy or localhost versions of your apps, you can only troubleshoot cloud native applications if you can peek into what’s happening inside of the isolated container.
From a Kubernetes perspective, you’re ideally printing to stdout in a structured log format, such as JSON, so that you can collate output and logs from many distributed pods into a single resource for further investigation when that time comes.
We encourage Kubefirst users to use a solid SaaS logging/monitoring/APM solution, like Datadog, over trying to self-host their entire observability setup from day one. There will come a time in your Kubernetes journey when you’ll want the customization and cost control of self-hosting, but you can cross that bridge a few years down the road.
Cloud native: Definite resource constraints
While Kubernetes doesn’t require you to set them, it’s a best practice to establish, in your Helm charts or manifests, the request and limit for CPU and memory, which help you:
- Prevent single pods from overloading your cluster and creating Out Of Memory (OOM) situations.
- Ensure pods don’t get stuck in a Pending state because their requests are set artificially high and no node can allocate the resources as configured.
- Strike the right balance between performance/stability and resource allocation/utilization costs from your cloud provider.
You can think of the request as a baseline: the container is guaranteed that much of each resource and is allowed to use more if the node/cluster can afford it, while the limit is a hard stop. You must define these constraints directly in your Kubernetes configurations, like so:
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
    - name: hello-world
      image: images.example.com/hello-world:v1
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
Remember that the memory limit is the only one that’s destructive: if your application exceeds it, Kubernetes kills the container with an Out Of Memory error and restarts it, whereas exceeding the CPU limit only gets the container throttled.
Cloud native: Kubernetes resources
Every cloud native application must define a few objects in either plain manifests or via Helm/Kustomize templates.
These resources are defined in manifests. You can use plain YAML manifests, which use folders and tokens to drive the delivery of environment-specific configuration, but most people building cloud native applications will quickly switch to either a Helm chart, where the values.yaml file drives configuration, or Kustomize, which overlays YAML to override your default settings.
At Kubefirst, we think Charts are an excellent way to professionally productize your resources.
Service accounts
Your application needs a service account, which gives its pods an identity inside the cluster and which, on Amazon EKS, can be bound to an IAM role that grants the privileges it needs to take action on cloud resources.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metaphor-go
automountServiceAccountToken: true
Deployment and application state definitions
Your deployment.yaml file must define the following:
- Replica count
- The image to be deployed
- Environment configurations from ConfigMaps, Helm values, or secrets
- Resource constraints
- Liveness probe
- Readiness probe
Here is a fully-fledged example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metaphor-go
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: metaphor-go
      app.kubernetes.io/instance: metaphor-go
  template:
    metadata:
      labels:
        app.kubernetes.io/name: metaphor-go
        app.kubernetes.io/instance: metaphor-go
    spec:
      serviceAccountName: metaphor-go-sa
      securityContext: {}
      imagePullSecrets:
        - name: metaphor-gh
      containers:
        - name: metaphor-go
          securityContext: {}
          image: "ghcr.io/your-company-io/metaphor-go:8d1446fd6e8414a140b8c1cee7f5693f59eda2eb"
          imagePullPolicy: IfNotPresent
          envFrom:
            - configMapRef:
                name: metaphor-go
            - secretRef:
                name: metaphor-go
          env:
            - name: CHART_VERSION
              value: "0.4.0"
            - name: DOCKER_TAG
              value: "8d1446fd6e8414a140b8c1cee7f5693f59eda2eb"
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 1
            timeoutSeconds: 30
          readinessProbe:
            httpGet:
              path: /healthz
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 3
            timeoutSeconds: 30
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 40m
              memory: 64Mi
Service
Your service.yaml file needs to define the service type and the port on which it should accept traffic from within the Kubernetes cluster.
apiVersion: v1
kind: Service
metadata:
  name: metaphor-go
spec:
  type: ClusterIP
  ports:
    - port: 443
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: metaphor-go
    app.kubernetes.io/instance: metaphor-go
Ingress
The ingress for your application, which you define in ingress.yaml, specifies your hostnames, paths, TLS certificates, and more.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: metaphor-go
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: "metaphor-go-development.your-company.io"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: metaphor-go
                port:
                  number: 443
  tls:
    - hosts:
        - "metaphor-go-development.your-company.io"
      secretName: metaphor-go-tls
ConfigMap
These configurations, stored in cm.yaml, hold environment-specific, non-confidential data as key-value pairs. Your pods can consume ConfigMaps as environment variables, helping you decouple configuration from containers, which keeps them portable across development, staging, and production clusters.
apiVersion: v1
kind: ConfigMap
metadata:
  name: metaphor-go
data:
  CONFIG_ONE: your-first-config
  CONFIG_TWO: your-second-config
Secrets & external secrets
As referenced back in the environment secrets section, you should use an external-secrets.yaml file to interact with external resources for tokens, passwords, and more. You might even extend this to a complete identity provider (IDP), which manages authentication for users and applications.
Cloud native: Identity management
In AWS/EKS, the only cloud platform that Kubefirst works with natively, the access control service is called Identity and Access Management (IAM). IAM is what grants your service account access to the cloud resources your Kubernetes cluster needs to operate, such as S3, RDS, queues, and more.
For example, you can create a unique service account for a single service within your cluster, and bind that service account to a specific IAM role. You could then give your development cluster very specific privileges, like only reading from your RDS database.
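With IAM Roles for Service Accounts (IRSA), that binding is typically expressed as an annotation on the service account; the role ARN below is a placeholder:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: metaphor-go
  annotations:
    # Placeholder ARN: bind this service account to a narrowly scoped IAM role
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/metaphor-go-rds-readonly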
Wrapping up
Now that you know about all the pieces of a cloud native application, you might think you’re ready to spin up a Kubernetes cluster in a matter of minutes.
But the difficult truth of the cloud native landscape is that while all the tools are available to you, getting all the pieces to work in sync can take months of initial investment and years of refinement to establish good patterns.
Enter Kubefirst, an opinionated application delivery and infrastructure management open source platform that deploys some of the most popular cloud native tools to support your new application with a single command.
And once you’ve stood up a cluster in minutes, we’d love to hear about your Kubefirst experience in our Slack community—what else, in your opinion, is necessary to have a cloud native application?