Orchestration, Containerization and Securing it all

INTRODUCTION
Containers have broad appeal because they allow users to easily package an application, and all its dependencies, into a single image that can be promoted from development, to test, and to production—without change. Containers make it easy to ensure consistency across environments and multiple deployment targets like physical servers, virtual machines (VMs), and private or public clouds. This helps teams more easily develop and manage the applications that deliver business value.

So what are containers? Well, it depends on who you ask.




There are many aspects to "containerization." Each plays a role in facilitating container deployment, management, and orchestration. I will provide a quick overview of the main parts and players in the containerization space. But aren't containers just next-generation virtualization? Yes and no.

Here’s an easy way to think about the two:

1. Virtualization lets many operating systems run simultaneously on a single system.
2. Containers share the same operating system kernel and isolate the application processes from the rest of the system.

What does this mean? For starters, having multiple operating systems running on a hypervisor, the software that makes virtualization work, isn’t as lightweight as using containers. When you have finite resources with finite capabilities, you need lightweight apps that can be densely deployed. Linux containers run from that single operating system, sharing it across all of your containers, so your apps and services stay lightweight and run swiftly in parallel.
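Because container isolation is implemented by the kernel itself, you can observe it from any ordinary process. The short Python sketch below (Linux only; output values are illustrative) simply lists the namespaces a process belongs to by reading /proc/<pid>/ns. Run it on the host and again inside a container, and compare the namespace IDs: same kernel, different namespaces.

    # Minimal sketch: inspect the kernel namespaces a process belongs to.
    # On Linux, /proc/<pid>/ns/* are symlinks naming each namespace instance;
    # processes in the same container share the same IDs here.
    import os

    def list_namespaces(pid="self"):
        ns_dir = f"/proc/{pid}/ns"
        for entry in sorted(os.listdir(ns_dir)):
            # Each symlink target looks like "pid:[4026531836]"
            target = os.readlink(os.path.join(ns_dir, entry))
            print(f"{entry:8} -> {target}")

    if __name__ == "__main__":
        list_namespaces()   # compare output inside and outside a container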


OPENSHIFT - Red Hat
OpenShift v3 is a layered system designed to expose underlying Docker and Kubernetes concepts as accurately as possible, with a focus on easy composition of applications by a developer. For example, install Ruby, push code, and add MySQL.

OpenShift has a microservices-based architecture of smaller, decoupled units that work together. It can run on top of (or alongside) a Kubernetes cluster, with data about the objects stored in etcd, a reliable clustered key-value store.


KUBERNETES - Google
Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

Kubernetes provides a container-centric management environment. It orchestrates computing, networking, and storage infrastructure on behalf of user workloads. This provides much of the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS), and enables portability across infrastructure providers.


DOCKER - Docker
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly.

Docker streamlines the development lifecycle by allowing developers to work in standardized environments using local containers which provide your applications and services. Containers are great for continuous integration and continuous delivery (CI/CD) workflows.

SECURING IT ALL
Securing containers is a lot like securing any running process. You need to think about security throughout the layers of the solution stack before you deploy and run your container. You also need to think about security throughout the application and container life cycle.

Containers are not the only part of this equation. There needs to be a "traffic cop" to manage all of the moving parts in a successful and secure deployment. OpenShift provides much-needed security to the orchestration and control of containers and how they interact in the environment.

1. Container Host OS and Multitenancy

2. Container Content

3. Container Registries

4. Container Orchestration

5. Network Isolation

6. Storage

7. Application Programming Interface (API) Management

8. Federated Clusters

9. Tools


1. Container Host OS and Multitenancy

Containers make it easier for developers to build and promote an application and its dependencies as a unit. Containers also make it easy to get the most use of your servers by enabling multitenant application deployments on a shared host. You can easily deploy multiple applications on a single host, spinning up and shutting down individual containers as needed. And, unlike traditional virtualization, you do not need a hypervisor or to manage guest operating systems on each VM. Containers virtualize your application processes, not your hardware.

To take full advantage of this packaging and deployment technology, the operations team needs the right environment for running containers. Operations needs an operating system (OS) that can secure containers at the boundaries: securing the host kernel from container escapes and securing containers from each other.

Containers are Linux processes with isolation and resource confinement that enable you to run sandboxed applications on a shared host kernel. Your approach to securing containers should be the same as your approach to securing any running process on Linux. Dropping privileges is important and still the best practice. Even better is to create containers with the least privilege possible. Containers should run as a non-root user, not as root. Next, make use of the multiple levels of security available in Linux. Linux namespaces, Security-Enhanced Linux (SELinux), Cgroups, capabilities, and secure computing mode (seccomp) are five of the security features available for securing containers running on Linux; a short example of combining them follows the list below.

• Linux namespaces provide the fundamentals of container isolation. A namespace makes it appear to the processes within the namespace that they have their own instance of global resources. Namespaces provide the abstraction that gives the impression you are running on your own operating system when you are inside a container.

• SELinux provides an additional layer of security to keep containers isolated from each other and from the host. SELinux allows administrators to enforce mandatory access controls (MAC) for every user, application, process, and file. SELinux is like a brick wall that will stop you if you manage to break out of (accidentally or on purpose) the namespace abstraction.

• Cgroups (control groups) limit, account for, and isolate the resource usage (e.g., CPU, memory, disk I/O, network) of a collection of processes. Use Cgroups to ensure your container will not be stomped on by another container on the same host. Cgroups can also be used to control pseudodevices—a popular attack vector.

• Linux capabilities can be used to lock down root in a container. Capabilities are distinct units of privilege that can be independently enabled or disabled. Capabilities allow you to do things such as send raw IP packets or bind to ports below 1024. When running containers, you can drop multiple capabilities without impacting the vast majority of containerized applications.

• Finally, a secure computing mode (seccomp) profile can be associated with a container to restrict available system calls.
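Several of these controls can be applied directly when a container is launched. Below is a minimal sketch using the Docker SDK for Python (the docker package); the image name, user ID, capability list, and seccomp profile file are illustrative placeholders, and a running Docker daemon is assumed.

    # Sketch: launching a locked-down container with the Docker SDK for Python.
    # Assumes the "docker" Python package and a local Docker daemon; image name
    # and seccomp profile path are placeholders.
    import docker

    client = docker.from_env()

    with open("seccomp-profile.json") as f:              # hypothetical profile file
        seccomp_profile = f.read()

    container = client.containers.run(
        "registry.example.com/myapp:1.0",                # placeholder image
        user="1001",                                     # run as a non-root user
        cap_drop=["ALL"],                                # drop all capabilities...
        cap_add=["NET_BIND_SERVICE"],                    # ...re-add only what is needed
        read_only=True,                                  # read-only root filesystem
        security_opt=[f"seccomp={seccomp_profile}"],     # restrict available syscalls
        mem_limit="256m",                                # cgroup memory limit
        detach=True,
    )
    print(container.id)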

You can further enhance security for your applications and infrastructure by deploying your containers to a lightweight operating system optimized to run Linux containers, like Red Hat Enterprise Linux Atomic Host. Atomic Host reduces the attack surface by minimizing the host environment and tuning it for containers.

Traditional virtualization also enables multitenancy—but in a very different way. Virtualization relies on a hypervisor initializing guest VMs, each of which has its own OS, as well as the running application and its dependencies. With VMs, the hypervisor isolates the guests from each other and from the host. Fewer individuals and processes have access to the hypervisor, reducing the attack surface on the physical server. However, security must still be monitored for threats; for example, one guest VM may be able to use hypervisor bugs to gain access to another VM or the host kernel. And, when the OS needs patching, it must be patched on all guest VMs using that OS.

Containers can be run inside guest VMs and there may be use cases where this is desirable. For example, if you are deploying a traditional application in a container, perhaps in order to lift and shift an application to the cloud, you may wish to place the container inside a guest VM. However, container multitenancy on a single host provides a more lightweight, flexible, and easier-to-scale deployment solution. This deployment model is particularly appropriate for cloud-native applications.


2. Container Content (use trusted sources)

When it comes to security, what’s inside your container matters. For some time now, applications and infrastructures have been composed from readily available components. Many of these are open source packages, such as the Linux operating system, Apache Web Server, Red Hat JBoss® Enterprise Application Platform, PostgreSQL, and Node.js. Containerized versions of these packages are now also readily available so that you do not have to build your own. But, as with any code you download from an external source, you need to know where the packages originally came from, who built them, and whether there’s any malicious code inside them. Ask yourself:

• Will what’s inside the containers compromise my infrastructure?

• Are there known vulnerabilities in the application layer?

• Are the runtime and OS layers up to date?

• How frequently will the container be updated and how will I know when it’s updated?


With its new Container Health Index, Red Hat exposes the “grade” of each container image, detailing how container images should be curated, consumed, and evaluated to meet the needs of production systems. Containers are graded based in part on the age and impact of unapplied security errata to all components of a container, providing an aggregate rating of just how safe a container is that can be understood by security experts and non-experts alike. When Red Hat releases security updates—such as fixes to glibc, Drown, or Dirty Cow—we also rebuild our container images and push them to our public registry. Red Hat Security Advisories alert you to any newly discovered issues in certified container images and direct you to the updated image so that you can, in turn, update any applications that use the image.

Of course, there will be times when you need content that Red Hat does not provide. We recommend using container scanning tools that use continuously updated vulnerability databases to be sure you always have the latest information on known vulnerabilities when using container images from other sources. Because the list of known vulnerabilities is constantly evolving, you need to check the contents of your container images when you first download them and continue to track vulnerability status over time for all your approved and deployed images.

Red Hat provides a pluggable API in Red Hat Enterprise Linux to support multiple scanners such as OpenSCAP, Black Duck Hub, JFrog Xray, and Twistlock. Red Hat CloudForms can also be used with OpenSCAP to scan container images for security issues. Also, Red Hat OpenShift gives you the ability to use scanners with your continuous integration and continuous delivery (CI/CD) process. This is covered in more detail below.
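In a pipeline, the scan is typically just a gating step before an image is promoted. The sketch below is deliberately generic: the image-scanner command and its flags are placeholders standing in for whichever scanner CLI you adopt (OpenSCAP, Black Duck, JFrog Xray, Twistlock, and so on), most of which signal findings through a non-zero exit code.

    # Sketch: gating a CI step on an image scan. The scanner command below is a
    # placeholder; substitute the CLI of the scanner you standardize on.
    import subprocess
    import sys

    IMAGE = "registry.example.com/myapp:1.0"   # placeholder image reference

    def scan_image(image: str) -> int:
        # Hypothetical scanner invocation; many scanners return a non-zero
        # exit code when vulnerabilities above a threshold are found.
        result = subprocess.run(["image-scanner", "--fail-on", "high", image])
        return result.returncode

    if __name__ == "__main__":
        rc = scan_image(IMAGE)
        if rc != 0:
            print(f"Blocking promotion of {IMAGE}: scanner reported findings")
            sys.exit(rc)
        print(f"{IMAGE} passed the scan; promoting to the registry")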

3. Container Registries (secure access to container images)

Of course, your teams are building containers that layer content on top of the public container images you download. You need to manage access to, and promotion of, the downloaded container images and the internally built images in the same way you manage other types of binaries. There are a number of private registries that support storage of container images. We recommend selecting a private registry that helps you automate policies for the use of container images stored in the registry.

OpenShift includes a private registry that can be used to manage your container images. The OpenShift registry provides role-based access controls that allow you to manage who can pull and push specific container images. OpenShift also supports integration with other private registries you may already be using, such as JFrog’s Artifactory and Docker Trusted Registry. The list of known vulnerabilities is constantly evolving, so you need to track the contents of your deployed container images, as well as newly downloaded images, over time. Your registry should include features that help you manage content based on metadata about the container, including known vulnerabilities. For example, you can use Red Hat CloudForms SmartState analysis to flag vulnerable images in your registry. Once flagged, OpenShift will prevent that image from being run going forward.
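As an illustration of treating images like any other binary artifact, the following sketch tags a locally built image and pushes it to a private registry using the Docker SDK for Python; the registry hostname, repository path, and credentials are placeholders.

    # Sketch: tagging and pushing an internally built image to a private registry
    # with credentials. Assumes the "docker" Python package and a local daemon.
    import docker

    client = docker.from_env()

    image = client.images.get("myapp:1.0")                       # locally built image
    image.tag("registry.example.com/team-a/myapp", tag="1.0")    # retag for the registry

    client.login(username="ci-bot",
                 password="not-a-real-secret",                   # placeholder credential
                 registry="registry.example.com")

    for line in client.images.push("registry.example.com/team-a/myapp",
                                   tag="1.0", stream=True, decode=True):
        print(line)   # progress / error messages from the registry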

4. Container Orchestration: Securing the container platform

Of course, applications are rarely delivered in a single container. Even simple applications typically have a front end, a back end, and a database.

When managing container deployment at scale, you need to consider:

• Which containers should be deployed to which hosts.

• Which host has more capacity.

• Which containers need access to each other, and how will they discover each other?

• How you control access to—and management of—shared resources, like network and storage.

• How you monitor container health.

• How you automatically scale application capacity to meet demand.

• How to enable developer self-service while also meeting security requirements.

Red Hat OpenShift Container Platform delivers container orchestration, automation of scheduling, and running application containers on clusters of physical or virtual machines through inclusion and extension of the open source Kubernetes project. Kubernetes, an open source project started by Google, uses “masters” to manage the complexity of container cluster orchestration. OpenShift also comes with Red Hat CloudForms which, among other things, can be used to monitor the health of containers in your private registry and prevent deployment of containers with newly detected vulnerabilities.

API access control (authentication and authorization) is critical for securing your container platform. The OpenShift master includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to authenticate using the identity provider of your choice, including Lightweight Directory Access Protocol (LDAP) directories.
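For example, a script or service can present such a token to the cluster API the same way a user does. The sketch below uses the Kubernetes Python client; the API endpoint, token, CA path, and project name are placeholders (on OpenShift, a user token can be obtained with oc whoami -t).

    # Sketch: calling the cluster API with a bearer token obtained from the
    # platform's OAuth server. Assumes the "kubernetes" Python package.
    from kubernetes import client

    configuration = client.Configuration()
    configuration.host = "https://api.cluster.example.com:6443"      # placeholder API endpoint
    configuration.api_key = {"authorization": "Bearer " + "REDACTED-TOKEN"}
    configuration.ssl_ca_cert = "/etc/pki/tls/certs/cluster-ca.crt"   # verify the API over TLS

    api = client.CoreV1Api(client.ApiClient(configuration))
    for pod in api.list_namespaced_pod(namespace="my-project").items:
        print(pod.metadata.name)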

One of the key values of a container platform is the ability to enable developer self-service, making it easier and faster for your development teams to deliver applications built on approved layers. Multitenancy security is a must for the platform itself to make sure teams do not access each other’s environments without authorization. You need a self-service portal that gives enough control to teams to foster collaboration while still providing security.

OpenShift adds several components to Kubernetes to maintain a secure multitenant master, ensuring that:

• All access to the master is over transport layer security (TLS).

• Access to the API server is X.509 certificate- or token-based.

• Project quota is used to limit how much damage a rogue token could do (a short sketch follows this list).

• Etcd is not exposed directly to the cluster.
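The quota item above, for instance, is just a namespaced API object. Here is a minimal sketch using the Kubernetes Python client, with placeholder namespace and limits; a configured kubeconfig (or in-cluster credentials) is assumed.

    # Sketch: applying a project/namespace quota so a compromised token in that
    # project cannot exhaust cluster resources. Values are illustrative.
    from kubernetes import client, config

    config.load_kube_config()                      # or in-cluster configuration

    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-a-quota"),
        spec=client.V1ResourceQuotaSpec(hard={
            "pods": "20",                          # cap the number of pods
            "requests.cpu": "4",                   # total CPU that can be requested
            "requests.memory": "8Gi",              # total memory that can be requested
        }),
    )

    client.CoreV1Api().create_namespaced_resource_quota(
        namespace="team-a", body=quota)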

5. Network Isolation


Deploying modern microservices-based applications in containers often means deploying multiple containers distributed across multiple nodes. With network defense in mind, you need a way to isolate applications from one another within a cluster.

Typical public cloud container services, such as Google Container Engine (GKE), Azure Container Service, and Amazon Web Services (AWS) Container Service, are single-tenant services. They let you run your containers on the VM cluster that you initiate. For secure container multitenancy, you want a container platform that allows you to take a single cluster and segment the traffic to isolate different users, teams, applications, and environments within that cluster.

With network namespaces, each collection of containers (known as a “pod”) gets its own IP and port range to bind to, thereby isolating pod networks from each other on the node. Pods from different namespaces (projects) cannot send packets to or receive packets from pods and services of a different project by default, with exception options noted below. You can use these features to isolate developer, test, and production environments within a cluster.

However, this proliferation of IP addresses and ports makes networking more complicated. In addition, containers are designed to come and go. We recommend investing in tools that handle this complexity for you. A container platform that uses software defined networking (SDN) to provide a unified cluster network that enables communication between containers across the cluster is preferred.

Also preferred is a container platform that provides the ability to control egress traffic using either a router or firewall method so that you can use IP white-listing to control, for example, database access.
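In plain Kubernetes terms, both the project-level isolation and the egress whitelisting described above can be expressed as NetworkPolicy objects. The sketch below, using the Kubernetes Python client, creates a default-deny ingress policy for one namespace and an egress rule allowing one set of pods to reach a database subnet; all names and the CIDR are placeholders, and OpenShift's own SDN and egress firewall features may implement this differently.

    # Sketch: default-deny ingress for a namespace, plus a narrow egress rule.
    # Assumes the "kubernetes" Python package and a configured client.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.NetworkingV1Api()

    deny_all_ingress = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-ingress"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),   # empty selector = every pod in the namespace
            policy_types=["Ingress"],                # no ingress rules listed, so all ingress is denied
        ),
    )

    egress_to_db = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="allow-egress-to-db"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
            policy_types=["Egress"],
            egress=[client.V1NetworkPolicyEgressRule(
                to=[client.V1NetworkPolicyPeer(
                    ip_block=client.V1IPBlock(cidr="10.20.30.0/24"))])],  # placeholder DB subnet
        ),
    )

    api.create_namespaced_network_policy(namespace="team-a-prod", body=deny_all_ingress)
    api.create_namespaced_network_policy(namespace="team-a-prod", body=egress_to_db)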

6. Storage


Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Red Hat OpenShift Container Platform provides plug-ins for multiple flavors of storage, including network file systems (NFS), AWS Elastic Block Stores (EBS), GCE Persistent Disks, GlusterFS, iSCSI, RADOS (Ceph), and Cinder.

A persistent volume (PV) can be mounted on a host in any way supported by the resource provider. Providers will have different capabilities, and each PV’s access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read only. Each PV gets its own set of access modes describing that specific PV’s capabilities, such as ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.
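For instance, a workload requests storage with a particular access mode through a persistent volume claim. A minimal sketch with the Kubernetes Python client, using placeholder names and sizes (the cluster must have a provisioner or pre-created PV that supports the requested mode):

    # Sketch: requesting persistent storage with a specific access mode.
    from kubernetes import client, config

    config.load_kube_config()

    claim = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],          # single-node read/write
            resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="team-a", body=claim)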

For shared storage (NFS, Ceph, Gluster, etc.), the trick is to have the shared storage PV register its group ID (gid) as an annotation on the PV resource. When the PV is claimed by the pod, the annotated gid will be added to the supplemental groups of the pod and give that pod access to the contents of the shared storage.
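Continuing the sketch above, the pod that consumes the shared volume would declare the volume's group ID in its supplemental groups; the gid, image, and claim name below are placeholders.

    # Sketch: giving a pod access to shared storage by adding the volume's group
    # ID to the pod's supplemental groups.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="reports"),
        spec=client.V1PodSpec(
            security_context=client.V1PodSecurityContext(
                supplemental_groups=[5555],          # gid advertised by the shared PV
            ),
            containers=[client.V1Container(
                name="reports",
                image="registry.example.com/reports:1.0",     # placeholder image
                volume_mounts=[client.V1VolumeMount(
                    name="shared-data", mount_path="/data")],
            )],
            volumes=[client.V1Volume(
                name="shared-data",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="app-data"))],        # the claim created above
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="team-a", body=pod)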

For block storage (EBS, GCE Persistent Disks, iSCSI, etc.), container platforms can use SELinux capabilities to secure the root of the mounted volume for nonprivileged pods, making the mounted volume owned by, and only visible to, the container it is associated with. Data in transit should be encrypted via HTTPS for all container platform components communicating with each other.

7. API Management / End Point Security and Single Sign-On (SSO)

Securing your applications includes managing application and API authentication and authorization.

Web SSO capabilities are a key part of modern applications. Container platforms can come with a number of containerized services for developers to use when building their applications, such as Red Hat SSO (RH-SSO), a fully supported, out-of-the-box SAML 2.0 or OpenID Connect-based authentication, web single sign-on, and federation service based on the upstream Keycloak project. RH-SSO 7.1 features client adapters for Red Hat JBoss Fuse and Red Hat JBoss Enterprise Application Platform (JBoss EAP). RH-SSO 7.1 includes a new Node.js client adapter, which enables authentication and web single sign-on for Node.js applications. RH-SSO can be integrated with LDAP-based directory services, including Microsoft Active Directory and Red Hat Enterprise Linux Identity Management. RH-SSO also integrates with social login providers such as Facebook, Google, and Twitter.
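As a hedged illustration, a backend service might obtain a token from an OpenID Connect provider such as RH-SSO and pass it as a bearer token to a protected API. The hostname, realm, client ID, and secret below are placeholders; the /auth prefix reflects the default context root of RH-SSO 7.x.

    # Sketch: a service obtaining an access token from an OpenID Connect provider
    # (such as RH-SSO/Keycloak) via the standard token endpoint, then calling a
    # protected API with it. All names and secrets are placeholders.
    import requests

    SSO_TOKEN_URL = ("https://sso.example.com/auth/realms/myrealm"
                     "/protocol/openid-connect/token")

    resp = requests.post(SSO_TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": "reports-service",
        "client_secret": "not-a-real-secret",
    })
    resp.raise_for_status()
    access_token = resp.json()["access_token"]

    api_resp = requests.get(
        "https://api.example.com/v1/reports",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    print(api_resp.status_code)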

8. Roles and Access Management in a Cluster Federation

In July of 2016, Kubernetes 1.3 introduced Kubernetes Federated Clusters for the first time. This is one of the exciting new features evolving in the Kubernetes upstream, currently in beta in Kubernetes 1.6. Federation is useful for deploying and accessing application services that span multiple clusters, running in the public cloud or enterprise datacenters. Multiple clusters can be useful to enable application high availability across multiple availability zones or to enable common management of deployments or migrations across multiple cloud providers, such as AWS, Google Cloud, and Azure.

When managing federated clusters, you will now need to be sure that your orchestration tools provide the security you need across the different deployment platform instances. As always, authentication and authorization are key—as well as the ability to securely pass data to your applications, wherever they run, and manage application multitenancy across clusters. Kubernetes is extending Cluster Federation to include support for Federated Secrets, Federated Namespaces and Ingress objects.

The federated secrets feature automatically creates and manages secrets across all clusters in a federation, ensuring that these are kept globally consistent and up to date, even if some clusters are offline when the original updates are applied. Federated namespaces are similar to the traditional Kubernetes namespaces and provide the same functionality. Creating namespaces in the federation control plane ensures that they are synchronized across all the clusters in the federation.

9. Tools

Runtime Security

Qualys Container Security (currently in beta). On-demand container scanning and evaluation presented in a single pane of glass. https://www.qualys.com/apps/container-security/

Use Cases: On-demand scanning, evaluation, and action suite. Integrates into the cloud platform.

Sysdig Secure. Sysdig Secure protects your entire infrastructure: containers and hosts, as well as the logical services that run on top of them. Sysdig Secure also provides full-stack forensics capabilities for pre- and post-attack investigation. https://www.sysdig.com/product/secure

Use Cases: Runtime security, forensics and audit, hybrid environments (containers and traditional deployment), performance monitoring & troubleshooting, available both as SaaS and on-prem.

Audit

BMC Security Operations Policy Service. Provides constant review of containers at rest benchmarked against the CIS standards for containerization security parameters. http://www.bmc.com/it-solutions/secops-policy-service.html

Use Cases: Pre-production analysis, post-audit, reporting

Anchore Navigator. Scans container images and enforces security policies for container platforms. Integrates with CI/CD workflows using Jenkins. https://anchore.com/

Use Cases: Pre-production analysis, vulnerability newsfeed.



Red Hat. (2017, December). Ten layers of container security. Retrieved from http://www.redhat.com


