The rise of Kubernetes from an overly complicated open source project into the industry standard for next-generation applications is nothing short of astonishing. Last year, as it became clear that Kubernetes would become the cloud-native and large enterprise workload orchestrator of choice, Sera4 chose it to expand its secure access and control services and solutions into new industries. The reward for making the switch was remarkable: the rollout of an entire data centre now takes between two and four hours, instead of our previous six-week best.
The road to a containerised future is full of potholes, so here are five lessons that can help you reboot your infrastructure with a container/microservice-based architecture.
Lesson one: scalability
Any time you get a group of developers together for Kubernetes design and development, you’ll get multiple solutions, but only one will scale well. Scaling is a very real concern, especially for businesses with large enterprise customers, where you don’t have the luxury of canary-testing solutions that might work. Projects need to be rolled out rapidly, so you need to find a way to scale and manage Kubernetes workloads much faster than you did previously.
Failure is a very real factor when transforming from a virtual and bare-metal server farm to a distributed cluster, so determine how your services will scale and communicate if you’re geographically separating your data and customers. Clusters operate differently at scale than your traditional server farms, and containers have a completely different security paradigm than your average virtualised application stack.
Be prepared to tweak your cluster layouts and namespaces as you begin your designs and trials. Become agile with Infrastructure as Code (IaC), and be willing to build multiple proofs of concept when deploying. Tests can take hours, and teardown and standup can be painful when making micro-tweaks along the way. Do this work early, though, and you’ll settle the larger scaling questions up front, leaving a good base for faster, larger-scale rollouts. My advice is to keep your core components close and design for relay points or services when porting into containers, or into multi-cluster designs.
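Even something as small as a namespace is worth declaring as code, so that teardown and standup stay repeatable between trials. A minimal sketch, assuming a hypothetical layout that keeps core components in one namespace and relay services in another:

```yaml
# A minimal sketch: namespaces declared as code, so proof-of-concept
# teardown and standup stay repeatable. Names (core-services, relay-eu)
# are hypothetical, not a prescribed layout.
apiVersion: v1
kind: Namespace
metadata:
  name: core-services
  labels:
    tier: core          # keep core components close together
---
apiVersion: v1
kind: Namespace
metadata:
  name: relay-eu
  labels:
    tier: relay         # relay point for geographically separated customers
    region: eu
```

Because the layout lives in version control, a failed trial is a delete-and-reapply rather than a manual rebuild.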
Lesson two: transitioning skills
Reskilling is all about adaptability. If your team doesn’t have container DevOps expertise, their experience managing VMs with similar tooling can help. Lean on the tooling built into Kubernetes management platforms, as well as support from the Kubernetes community.
Our thought process for this is simple. If a mechanic knows how to work on a sedan, give them a pickup truck and they’ll quickly figure out how to transfer their experience. DevOps is similar. Once you dissect what the components are and where to find them, the “how” will fall into place with equivalent tooling. You’d be surprised at what your team can do with containers in a short time, without formal retraining. We looked immediately to Rancher for Kubernetes management, with Helm, Terraform and GitOps workflows rounding out our foundational tooling.
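To give a flavour of the GitOps piece, here is a sketch using an Argo CD Application manifest, one of several tools that fit that workflow (the repository URL, chart path and names are hypothetical):

```yaml
# A sketch of the GitOps idea using an Argo CD Application.
# Argo CD is one possible tool; repoURL, path and names are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: access-control-api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/deployments.git
    targetRevision: main
    path: charts/access-control-api   # a Helm chart kept in Git
  destination:
    server: https://kubernetes.default.svc
    namespace: core-services
  syncPolicy:
    automated:
      prune: true      # Git is the source of truth
      selfHeal: true   # drift in the cluster is reverted automatically
```

The point for a reskilling team is that the “how” is declarative: the cluster converges on what Git says, much as a VM fleet converges on a configuration-management manifest.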
Lesson three: affinities, reservations, taints and tolerations
The notions of affinity/anti-affinity, resource reservations, taints and tolerations are Day 0 critical requirements in a Kubernetes deployment. Don’t let containers act like spoiled children that demand too much CPU or memory, because they will if you let them. Use these guards to protect your workloads and prevent them from negatively impacting other containers and applications.
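As a minimal sketch of how these guards fit together on one pod spec (the taint key, labels, image and resource figures are hypothetical rather than prescriptive):

```yaml
# A minimal sketch of the Day 0 guards on a single pod.
# Taint key, labels, image and resource figures are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: telemetry-worker
  labels:
    app: telemetry-worker
spec:
  affinity:
    podAntiAffinity:            # spread replicas across nodes
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: telemetry-worker
          topologyKey: kubernetes.io/hostname
  tolerations:                  # only tolerate the taint we expect
    - key: workload-class
      operator: Equal
      value: batch
      effect: NoSchedule
  containers:
    - name: worker
      image: registry.example.com/telemetry-worker:1.0
      resources:
        requests:               # what the scheduler reserves
          cpu: 250m
          memory: 256Mi
        limits:                 # the spoiled-child ceiling
          cpu: 500m
          memory: 512Mi
```

Requests tell the scheduler what to reserve; limits are the ceiling that stops one container starving everything else on the node.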
Lesson four: sidecars
Sidecar design patterns, although wonderful conceptually, can go either incredibly right or horribly wrong. Kubernetes sidecars provide non-intrusive capabilities, such as reacting to Kubernetes API calls, setting up config files or filtering data from the main containers. Sidecars are deployed with the main container, scale with it and are useful for sharing resources, all of which is key to scaling and maximising resources.
Understanding your core services’ requirements allows you to apply sidecars to complement the functionality each service needs and keep containers true to their core jobs. Scaling microservices takes discipline in how responsibilities are assigned to containers, and sidecars are a great way to keep those responsibilities cleanly separated.
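As a sketch of the pattern, assume a hypothetical API container whose logs are shipped by a sidecar through a shared volume; each container keeps a single job:

```yaml
# A sketch of the sidecar pattern: the main container keeps its core job,
# while a sidecar ships its logs. Container names and images are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: api-with-log-shipper
spec:
  volumes:
    - name: logs
      emptyDir: {}              # shared between the two containers
  containers:
    - name: api                 # the main container, true to its core job
      image: registry.example.com/api:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/api
    - name: log-shipper         # the sidecar: deployed and scaled with the pod
      image: registry.example.com/log-shipper:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/api
          readOnly: true
```

Because both containers live in one pod, they are deployed, scaled and torn down together, which is exactly the “go right” version of the pattern.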
Lesson five: container security
Securing containers can be a hard and time-consuming lesson. With open source containers, you can find almost any type of application packaged in a small container. And while you can get started quickly, designing basic prototypes in hours and launching them, you’ll often find the container breaks almost every fundamental security rule of a basic Linux machine.
When you finish prototyping, harden your image and deploy your pod security policies; these are the best hours you’ll ever invest in security. Analysing which users your processes run as, understanding how credentials are set up or injected, and ensuring a container is only exposed to what it needs are three basic steps to take before moving any container into production. Also, be willing to cut your production images down to barebones, and think about how to complement your images with repeatable configurations and secrets, network namespaces and options, security constraints, and liveness and readiness checks that make sense.
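As a sketch of what those steps look like on one container, here is a pod spec hardened with a pod-level security context rather than a full policy (the image, user ID, secret name and probe endpoints are hypothetical, and the right values depend on your service):

```yaml
# A sketch of basic hardening on a single container.
# Image, user ID, secret name and endpoints are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-api
spec:
  securityContext:
    runAsNonRoot: true          # know which user your process runs as
    runAsUser: 10001
  containers:
    - name: api
      image: registry.example.com/api:1.0-slim   # image cut down to barebones
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]         # expose only what the container needs
      envFrom:
        - secretRef:
            name: api-credentials   # credentials injected, not baked in
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
```

None of this takes long to write, but it closes most of the “fundamental Linux rules” a quick prototype tends to break.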
In summary, while container technology was developed and initially deployed by cloud providers and cloud-native startups, it has matured to the point that it’s within the reach of any technically competent IT organisation. Available software, cloud products and support services allow your teams to “fail fast” and tweak your stack accordingly. The quicker you fail, the faster you can rebuild.