Deploying and Upgrading a Kubernetes Application with Diamanti: Real-World Example

Diamanti’s plug-and-play, high-performance bare-metal platform makes it seamless to deploy and upgrade your containerized applications on a Kubernetes cluster. This article shows how quickly you can use Diamanti to deploy a WordPress application powered by MariaDB and Kubernetes.

The Hardware

To start, let’s go over the hardware we’ll be using.

The Diamanti cluster appliance starts with a minimum of three nodes. Each node in this hyperconverged infrastructure ships with 10Gb Ethernet network interfaces and NVMe block storage volumes that can deliver latency as low as 100 microseconds. With this bare-metal infrastructure, Diamanti can dedicate up to 95% of the underlying hardware capacity to the applications running in containers. When deploying applications at scale, the performance gains add up quickly and provide real, long-lasting cost savings.

What really sets the Diamanti cluster apart is its ease of deployment and guaranteed service levels for each container. By leveraging custom hardware interfaces, Diamanti can prioritize real-time traffic across nodes based on specific bandwidth metrics. As a software developer or application owner, you can rest assured that your applications will perform as designed and in line with the service levels you have selected. In addition, the self-service infrastructure allows you to deploy and upgrade your container-based applications automatically, as outlined below.

Over the past few years, Diamanti has made significant contributions to the Kubernetes open-source project. In fact, its engineering team has contributed to the development of the Container Storage Interface (CSI) framework for persistent volumes and continues to do so. Until recently, containers were mainly used for stateless applications, due primarily to the ephemeral nature of their storage. With the introduction of persistent volumes, coupled with Diamanti's internal data mirroring capability, stateful applications such as databases can now run on this high-performance cluster with no data loss.

The Toolset

As a long-time Oracle professional with only recent experience with Docker containers and Kubernetes clusters, I welcomed the Diamanti online dashboards, command line interface (CLI), documentation, and self-service application deployments. While only a few commands are necessary to get started, the ability to review all cluster information (e.g. nodes, pods, network interfaces, persistent volumes, endpoints) at a glance made it easy to get familiar with the entire stack quickly. Here is a view of one of the dashboards showing my database pod after my first deployment.

To get started with a new cluster setup, only four commands are required. These cover the tasks of creating the cluster, setting up the network, provisioning a persistent volume, and deploying the application. Cluster administrators can create a cluster using Diamanti's command line interface "dctl", and add and remove nodes as needed. Once a cluster is formed, Diamanti pools all the resources available across the nodes to enable Kubernetes to efficiently schedule containers in the cluster based on the service levels selected.

Diamanti’s “dctl” command line tool is available for Linux and macOS, and dctl.exe is available for Windows. The “kubectl” command is Kubernetes’ command line tool and is also available for Linux, macOS, and Windows. In order to deploy applications composed of multiple microservices, Helm (Kubernetes’ package manager) can be used to manage the complete deployment stream. The “helm” command is also available for all platforms. These tools can be downloaded to your local machine directly from a link provided within Diamanti’s dashboard. It was just what I needed to get started.

Managing your Cluster

First off, let’s look at Diamanti’s cluster setup and the nodes available using the command line interface. You will need to log into the cluster with “dctl login” and provide your credentials (cluster user management and security are beyond the scope of this article). Once logged into the cluster, you will be able to perform all other operations. Note that your cluster session will expire after one hour.

Once logged in, use “dctl --help” for a list of all cluster options available (or use the dashboard). Use “dctl cluster status” to monitor the current cluster status and get a list of all running nodes. In our case, the cluster is composed of three nodes with solserv4 as the master node.

While the minimum Diamanti cluster configuration is composed of three nodes, you can add more nodes using the following command (assuming your cluster has additional physical D10 nodes):

dctl cluster add host-4,host-5

And to remove a node from the cluster, simply use:

dctl cluster remove host-5

Adding Service Levels

Applications running in a cluster with other transient concurrent applications can suffer when resources are not readily available (known as noisy neighbor syndrome). As mentioned earlier, the cluster appliance allows you to choose from different service levels in order to enforce a minimum network throughput and storage IOPS for all your containers. Your application will be isolated from others and guaranteed a certain level of performance. There are three default built-in service levels to choose from as follows:

  • high: Provides 20,000 IOPS and 500 Mbps of network bandwidth
  • medium: Provides 5,000 IOPS and 125 Mbps of network bandwidth
  • best-effort: There are no minimums when using this configuration

As a cluster administrator, you can create up to five additional custom service levels to match your own application requirements by simply using the command:

dctl perf-tier create <perf-tier-name> -i <storage-iops> -b <network-bandwidth>

To assign a service level for your network interface, include a pod annotation in your Helm chart values.yaml configuration file.

podAnnotations: '{"network":"stef-wp-net","perfTier":"high"}'

In order to assign a service level for your persistent storage using one of Diamanti’s default options, set the storageClass in the WordPress values.yaml file:

persistence:
  enabled: true
  storageClass: "high"

Check out all the current service levels on your cluster with “dctl perf-tier list”.

Managing the Network

Your application containers can connect to internal network subnets (defined objects) using endpoints. The endpoints determine the service level tier associated with the network object. You can create a network object using the Diamanti dashboard or by issuing the following command:

dctl network create <network_name> --subnet <subnet/class> --start <ip_range_start> --end <ip_range_end> --gateway <gateway> --vlan <vlan_id>

You can then simply list all the networks configured for your cluster with:

dctl network list

You can specify the network to use in your pod definition file (Helm chart):

annotations: '{"name":"0","network":"stef-wp-net","perf-tier":"high"}'

Note that when defining a pod deployment file, all objects (network and storage) must reference the same performance tier (i.e. “high”). If no performance tier is explicitly specified, the “best-effort” setting will be used.
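To illustrate this rule, here is a minimal, hypothetical pod-definition fragment in which the network endpoint and the storage volume both reference the "high" tier. The annotation key (diamanti.com/endpoint0) is my assumption and may differ on your cluster; only the network name, volume name, and tier come from the examples in this article:

```yaml
# Hypothetical fragment: network and storage must be pinned to the SAME tier.
# The annotation key "diamanti.com/endpoint0" is an assumption.
metadata:
  annotations:
    diamanti.com/endpoint0: '{"name":"0","network":"stef-wp-net","perf-tier":"high"}'
spec:
  volumes:
    - name: stef-vol
      flexVolume:
        options:
          name: stef-vol
          perfTier: high   # must match the perf tier of the network endpoint
```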

Managing Storage

Physical files in a container are ephemeral in nature. They are lost when a container either stops or crashes. To solve this problem, Kubernetes introduced the notion of persistent volumes and persistent volume claims. A persistent volume (PV) is a resource created in the cluster. It is provisioned by the cluster administrator to reserve physical storage with a lifecycle independent from any pod. A persistent volume claim (PVC) is a request for pre-defined storage by a user or container at run-time.
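As a concrete sketch of a claim, the manifest below requests storage from a storage class named after the "high" service level; the claim name and size are illustrative, not taken from the article:

```yaml
# Illustrative PVC: requests 10Gi from the "high" storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: stef-wp-pvc        # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: high   # Diamanti default service level used as storage class
  resources:
    requests:
      storage: 10Gi
```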

A persistent volume can be created using the user interface (dashboard) or command line. Use the command below to create a new storage volume. Use the -m option to create volume mirrors. Diamanti will automatically spread the volume mirrors on different nodes within the cluster. You can configure up to a three-way mirror.

dctl volume create <volume-name> -s <size> -m <mirror-count>

For example:

dctl volume create stef-vol -s 10G -m 1

Once the volume has been created (with one synced copy), you can modify the pod definition file to include the new volume. Diamanti uses its own custom drivers, already installed on the cluster. Use the flexVolume plugin configuration item to enable these custom volumes:

- name: stef-vol
  flexVolume:
    fsType: xfs
    options:
      name: stef-vol
      perfTier: high
      detachPolicy: auto

Use dctl volume describe <volume> to get detailed information about the volume(s) created or available in the cluster.
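To make the volume usable inside a container, the pod's container spec also needs to mount it. A minimal sketch, assuming the MariaDB container from this walkthrough (the mount path is the conventional MariaDB data directory; the image tag is omitted):

```yaml
# Sketch: mounting the stef-vol volume into a MariaDB container.
containers:
  - name: mariadb
    image: mariadb          # pin a specific version in practice
    volumeMounts:
      - name: stef-vol      # must match the volume name in the pod spec
        mountPath: /var/lib/mysql
```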

Final Step: Deploying the Application

For my tests, I’ve used a Helm chart to deploy the WordPress application running on MariaDB with minimal changes. The Helm chart directs Kubernetes to deploy all pre-configured components as a single step; however, each container and associated resource (network and storage) can be provisioned separately as well using Diamanti’s dctl utility as presented above.
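For reference, the Diamanti-specific overrides used throughout this walkthrough can be collected into a single values.yaml passed to helm. This is a sketch assuming standard WordPress chart keys; verify the key names against the chart version you deploy:

```yaml
# Illustrative values.yaml overrides for the WordPress Helm chart.
podAnnotations: '{"network":"stef-wp-net","perfTier":"high"}'
persistence:
  enabled: true
  storageClass: "high"
```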

In just a few minutes, our entire application is deployed and running on the cluster. Using Diamanti’s dashboard, you can monitor your application and all associated elements.


With Diamanti, you can deploy a containerized application quickly. Diamanti provides the hardware and software you need to host your containers so that you can focus on the application, and not where it lives. As we’ve seen in this article, even a complex Kubernetes-based application that requires persistent storage is simple to deploy with Diamanti.
