What is the difference between cloud and cloud native?
Well, the definition of "cloud" is both simple and complex. According to the National Institute of Standards and Technology (NIST), cloud computing is:
A model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
It's important to mention that the definition of cloud computing is still evolving, and different standards organizations are working on their own versions as we speak. But for now, the NIST definition seems pretty solid, and most people agree on it in principle. However, when we start digging deeper into the topic, you'll see that there are a lot of different opinions on what it means to be cloud native.
Some people say that cloud native is the next evolution of cloud computing, while others think they're two separate things. To make sense of this, let's first break the definition down into parts.
The first component of the definition given above is "shared pool of configurable computing resources" and many people agree that this applies to both traditional cloud computing as well as cloud-native applications. So if we are to take away one thing from NIST's definition, it's that cloud native means applications built on a shared pool of configurable computing resources.
However, the next part of the definition says that these resources "can be rapidly provisioned and released with minimal management effort or service provider interaction". So if this is cloud native then it means that there's a higher level of abstraction between you and your infrastructure provider such as Amazon Web Services (AWS), Google Cloud Platform (GCP) or Microsoft Azure. This means that you don't have to contact the provider every time you want to provision a new resource.
Now, it gets more interesting because many people also agree that cloud native applications are designed to run in modern (cloud) environments, as opposed to traditional on-premises infrastructure. By this definition, cloud native applications are focused on running in a distributed environment and are a natural fit for microservices. Instead of building large monolithic applications, these projects aim to deliver small units of functionality and leverage continuous integration/continuous deployment (CI/CD) patterns.
In addition, cloud native applications should also be built with standardized communication protocols such as HTTP/HTTPS and be able to run anywhere from the cloud all the way down to IoT devices.
Some organizations and people also say that cloud native applications should be built using open-source technologies so that they can take advantage of massive ecosystems and vibrant communities, such as the Linux, Apache, MySQL and PHP (LAMP) stack, MongoDB, Node.js and so on.
But the most interesting part of this definition is that cloud native applications are deployable anywhere, from the cloud all the way down to IoT devices. The key component here is the shared pool of configurable computing resources, which involves technologies like containers and microservices. Containers allow you to run your applications in isolation, while microservices are small, independently deployable services that communicate with each other over network protocols like HTTP.
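To make the microservice idea concrete, here's a minimal sketch of a single-purpose service exposed over plain HTTP, using only Python's standard library. The `/greet` endpoint, the response payload and the `run` helper are illustrative inventions, not part of any real framework:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingHandler(BaseHTTPRequestHandler):
    """A deliberately tiny service: one endpoint, plain HTTP, JSON out."""

    def do_GET(self):
        if self.path == "/greet":
            body = json.dumps({"message": "hello from a tiny service"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging for the example

def run(port: int = 0) -> HTTPServer:
    """Serve in a background thread; port 0 lets the OS pick a free port."""
    server = HTTPServer(("127.0.0.1", port), GreetingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Another service would simply issue an HTTP GET against this endpoint; there is no shared memory or shared database between the two, which is exactly the isolation property the paragraph above describes.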
In addition, cloud native applications should be able to scale out easily as a workload increases or decreases in size. This means they should scale automatically and not require manual resource provisioning from the administrator. Most importantly, these applications have to be secure by design rather than secured as an afterthought.
The security component is very important for a number of reasons, including reducing the attack surface and eliminating single points of failure (SPOFs). Eliminating SPOFs is particularly important because once an attacker gains access to one part of your infrastructure, they will keep trying to get into other applications and services as well.
Besides that, cloud native applications have to be resilient to distributed denial of service (DDoS) attacks, and their performance must be monitored at scale. Performance monitoring is very important because it includes being able to manage resource allocation between multiple services and apps on demand. In addition, it allows you to watch metrics like latency, throughput and CPU utilization.
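As a minimal sketch of the bookkeeping behind those metrics, here's a toy latency/throughput tracker built on the standard library. A real deployment would use a dedicated metrics system rather than this hypothetical class:

```python
import statistics

class RequestMetrics:
    """Collect per-request latencies and derive the numbers you'd watch."""

    def __init__(self):
        self.latencies_ms = []

    def record(self, latency_ms: float):
        self.latencies_ms.append(latency_ms)

    def p95_ms(self) -> float:
        # quantiles(n=20) returns the 5th..95th percentiles; take the last.
        return statistics.quantiles(self.latencies_ms, n=20)[-1]

    def throughput_rps(self, window_s: float) -> float:
        # Requests observed divided by the observation window, in seconds.
        return len(self.latencies_ms) / window_s
```

Watching a tail percentile like p95 rather than the average matters here: a few slow requests can hide behind a healthy-looking mean.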
The second part of the definition says that these applications have to be easy to deploy. While this sounds simple, it's actually quite difficult, because reliable deployment automation has historically been a hard problem. Here are a few of the practices and tools that cloud native deployments lean on:
* Chaos Monkey - A resilience-testing tool from Netflix that randomly terminates instances of a service in a cluster. Surviving it is strong evidence that your application is genuinely highly available.
* Immutable Infrastructure - Servers are never modified after they're deployed; to change something, you build a new image and replace the old instance. This pairs naturally with infrastructure as code, where your infrastructure is described in files kept under version control systems like Git or SVN.
* Microservices - You need to be able to deploy microservices independently and cannot rely on shared resources or data stores. If you use common resources, they will eventually become a bottleneck that results in performance degradation.
* Rolling Updates - Moving the application from one version to another should take only minutes, with instances replaced a few at a time so the application as a whole keeps running without any downtime.
* Configuration as Code - The system you're using for CI/CD has to support configuration as code, which means that you can view and change configurations without recompiling or redeploying the application. If configuration changes are stored in a database, your application has to be able to pick them up automatically.
* Fast Recovery - When an instance fails, you have to be able to roll back the changes that were applied and replace it with a new instance in order to recover quickly. For example, if your application is defined in version control like Git or SVN, creating a replacement from a known-good version should take only seconds.
* Telemetry - This is one of the most important components for cloud native applications because it provides insight into how an application functions at various levels, including when it's being used and under what circumstances. The insights obtained from telemetry will help you make better decisions about things like scaling up or down, moving the application to different locations or decommissioning an instance.
* Metrics - It's important to have a system that allows you to collect metrics, because these help you understand how your cloud native applications are functioning at every layer and which parts of the stack need improvement. If something goes wrong, there's no need to fall back on guesswork or adding ever more logging code, because you can monitor performance in real time.
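The rolling-update bullet above comes down to a simple mechanic: replace instances one at a time, and stop at the first failed health check so the remaining old instances keep serving traffic. Here's a sketch of that loop; the `Instance` type and the `start_instance` callback are hypothetical stand-ins, not a real orchestrator API:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    version: str
    healthy: bool = True

def rolling_update(fleet, new_version, start_instance):
    """Replace instances one by one, health-checking each replacement.

    Aborts at the first unhealthy replacement; the untouched tail of the
    old fleet keeps serving, so the application never goes fully down.
    """
    updated = []
    for _old in fleet:
        replacement = start_instance(new_version)
        if not replacement.healthy:
            # Stop rolling forward; mix of new + remaining old instances.
            return updated + fleet[len(updated):]
        updated.append(replacement)
    return updated
```

A real orchestrator adds batching, timeouts and automatic rollback on top, but the invariant is the same: at every step, some healthy instances are serving.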
To make sure that the servers are able to run things as expected, you also need an automated system for infrastructure as code. It gives you a single source of truth for managing resources, so if someone misconfigures a server, the automation can detect the drift and restore the intended state.
One of the most important parts of a cloud native environment is configuration management. This means that you should be able to configure every aspect of your application from the development phase all the way to production. In addition, this gives you a single source of truth for all configurations and changes made to your application, and it helps with deployment automation. To make matters even easier, you should ideally have a configuration management system that supports configuration as code.
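As a minimal illustration of configuration as code with a single source of truth, here's a hypothetical loader that layers defaults, an optional JSON file, and environment-variable overrides. The `APP_` prefix and the keys below are made up for the example:

```python
import json
import os

# The defaults double as the schema: every valid key, with its type.
DEFAULTS = {"db_host": "localhost", "db_port": 5432, "debug": False}

def load_config(path=None, env=os.environ):
    """Merge defaults <- optional JSON file <- environment variables.

    The same code then yields the right configuration in development,
    staging and production without rebuilding the application.
    """
    config = dict(DEFAULTS)
    if path:
        with open(path) as f:
            config.update(json.load(f))
    for key, default in DEFAULTS.items():
        env_key = "APP_" + key.upper()
        if env_key in env:
            raw = env[env_key]
            if isinstance(default, bool):
                config[key] = raw.lower() in ("1", "true", "yes")
            elif isinstance(default, int):
                config[key] = int(raw)
            else:
                config[key] = raw
    return config
```

Because the defaults, the file and the override rules all live in versioned code, every environment's effective configuration can be reproduced and reviewed like any other change.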
Because cloud native applications are intended to run on multiple platforms and in different scenarios, they need to be highly scalable through technologies like containers and microservices. This also means that availability is paramount, because a service can fail and your application needs to be able to recover from the incident.
While it's important to be able to scale applications up, you must also be able to scale down, which is just as fundamental but frequently overlooked by companies that aren't considered cloud native yet. If you're using microservices or container technology, you will need a tool that provides autoscaling so your fleet of instances can shrink as demand decreases.
The deployment automation part of cloud native applications is one of their most useful features because it eliminates multiple steps that might be required with traditional application development. For example, instead of running scripts by hand or cloning code onto the production server, you can automate the process with tools like Packer and Terraform. The goal is to make deployment as simple as possible, which helps eliminate potential issues that could arise during manual deployments.
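Declarative tools like Terraform broadly work by diffing the desired state against the actual state and computing a plan of actions. This toy version captures just that idea; the resource names and attributes in the usage example are illustrative:

```python
def plan(desired: dict, actual: dict):
    """Compute the actions needed to move `actual` toward `desired`.

    Returns (to_create, to_update, to_delete): resources missing from
    the real environment, resources whose attributes drifted, and
    resources that exist but are no longer declared anywhere.
    """
    to_create = {k: v for k, v in desired.items() if k not in actual}
    to_update = {k: v for k, v in desired.items()
                 if k in actual and actual[k] != v}
    to_delete = [k for k in actual if k not in desired]
    return to_create, to_update, to_delete
```

The payoff of this shape is idempotence: running the plan twice against an already-converged environment produces no actions, which is what makes automated deployments safe to repeat.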
The ability to go from source to a running application without manual build and deploy steps will also improve your development speed by making it faster to test new features and version changes. One strategy for this is to use Docker containers, which are lightweight and offer reproducible builds, so you know exactly what's running on your server or instance from a snapshot of the container environment.
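One way to see what reproducible builds buy you: if the artifact's tag is derived deterministically from the build inputs, identical sources always map to the identical artifact, and any drift is immediately visible. Here's a toy fingerprint function; the scheme is illustrative and is not how Docker computes its image digests:

```python
import hashlib

def build_fingerprint(files: dict) -> str:
    """Derive a deterministic tag from build inputs (filename -> bytes).

    Sorting the filenames makes the result independent of insertion
    order, and the NUL separators keep distinct inputs from colliding.
    """
    digest = hashlib.sha256()
    for name in sorted(files):
        digest.update(name.encode())
        digest.update(b"\0")
        digest.update(files[name])
        digest.update(b"\0")
    return digest.hexdigest()[:12]
```

Tagging images this way means "what exactly is running on that server?" has a checkable answer: recompute the fingerprint from source and compare.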
It might seem odd, but it's also important to think about how your cloud native environments will handle garbage collection and deletion. If you're not able to get rid of old environments, they will keep consuming cloud resources, and you could run into issues with service-level agreements and pricing.
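A simple form of environment garbage collection is a time-to-live sweep: anything older than a cutoff gets flagged for teardown. A sketch, where the environment names and the seven-day TTL are arbitrary examples:

```python
from datetime import datetime, timedelta

def stale_environments(envs, now, ttl=timedelta(days=7)):
    """Pick the ephemeral environments old enough to tear down.

    `envs` maps an environment name to its creation timestamp; the
    sorted result makes the sweep's output stable and easy to audit.
    """
    return sorted(name for name, created in envs.items()
                  if now - created > ttl)
```

Running a sweep like this on a schedule keeps short-lived review or test environments from quietly accumulating cost.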