Deploying Hyperledger Fabric on an IBM Bluemix Kubernetes Cluster with Cloudsoft AMP

Editor’s Note: This post was first published on April 18 and subsequently updated on April 30, when we added to and expanded the Conclusion & Next Steps section.

At this year’s Open Technology Summit at IBM InterConnect (#IBMOTS), Jason McGee (IBM Fellow; VP and CTO, IBM Cloud Platform) gave us a sneak preview of the beta release of the IBM Bluemix Container Service, which was formally announced the following day during the IBM InterConnect Day 1 keynote.

Jason’s key message was that IBM was doubling down on Kubernetes as the leading container orchestration platform by offering fully managed Kubernetes clusters on Bluemix. This message was reinforced in the post Kubernetes now available on IBM Bluemix Container Service by Chris Rosen (Senior Offering Manager, IBM Containers).

Naturally this piqued my interest, as Google and Microsoft offer similar capabilities, but it wasn’t until the holiday weekend that I had time to delve into IBM’s offering in more detail.

I started by revisiting Chris’ blog post and signing up to the IBM Container Service Slack channel. My objective was to see how far I could get standing up a Kubernetes cluster and then deploying the Hyperledger blockchain fabric on it using a Cloudsoft AMP blueprint developed by our Hyperledger expert and resident comedian, Mike Zaccardo.

Mike has blogged extensively about his experience with Hyperledger, including targeting Kubernetes as a Cloudsoft AMP location. In fact, when I re-read his post Deploying Hyperledger Fabric on Kubernetes with Cloudsoft AMP and, true to form, clicked through on his link to alternative means of standing up a Kubernetes cluster, I was well and truly trolled. I love a challenge, so, armed with Mike’s and Chris’s posts, I figured I’d see whether he was right!

Step 1: Creating a Kubernetes Cluster

Chris’ blog wraps up with a pointer to a Getting Started page, so I visited what turned out to be the IBM Bluemix Container Service landing page. This invites you to Create your first cluster, so I did. (By the way, I love their Distraction-free containers pitch!)

What followed was an impressively smooth experience once I had upgraded my Bluemix account to Pay-as-you-go. (There is a free tier, but it provides only a single worker node, whereas I needed a cluster with six worker nodes to deploy Hyperledger Fabric.)


Below is the time-lapse photography of the resulting cluster creation, which is well documented if you have any queries about specific parameters. In reality, the upgrade and the back-office association of my IBM Bluemix account with our corporate IBM Bluemix Cloud Infrastructure account took a little while, but what impressed me was that the behind-the-scenes workflow to sort this out worked at all! Likewise, when one of the worker nodes got stuck Bootstrapping, the fault was cleared automatically.
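As an aside, the same cluster can be stood up from the command line with the Bluemix CLI’s container-service plugin. A minimal sketch follows; treat the flags as indicative rather than definitive, since the plugin was in beta and its options may differ by version:

# Log in to Bluemix, then create a six-worker cluster with the
# container-service (cs) plugin; other options are left at their defaults.
bx login
bx cs cluster-create --name hyperledger_test_cluster --workers 6

# Poll the workers until none remain in the Bootstrapping state
bx cs workers hyperledger_test_cluster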

Once my cluster was deployed and all the worker nodes were running, I explored it using the IBM Bluemix GUI. However, what I was really looking for were the instructions on how to connect to my cluster programmatically, since closed-loop runtime management is the focus of our work in the open source community on Apache Brooklyn, which is, in turn, the foundation of our Cloudsoft Application Management Platform (AMP).

Step 2: Creating a Kubernetes Location

I found the answer when I selected the Access tab: as well as providing instructions on how to gain access to your cluster via a combination of CLI tools, it includes a download link to a ZIP file containing both a standard Kubernetes kubectl config file and the associated security certificates.
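Before wiring this into AMP, it is worth sanity-checking the download with kubectl itself. A minimal sketch, assuming file names that follow the pattern of the certificates shown below (yours will vary with your cluster name and location):

# Unpack the ZIP from the Access tab and point kubectl at the config file.
# The file names here are assumptions; check the actual contents of your ZIP.
unzip kubeConfig.zip -d ~/.bluemix
export KUBECONFIG=~/.bluemix/kube-config-prod-dal10-hyperledger_test_cluster.yml
kubectl get nodes    # all six worker nodes should report Ready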

At this point I got stuck, as our documentation (cough, cough) didn’t help. Luckily I was able to figure out who wrote our Kubernetes integration (thank you, GitHub!), so I reached out to one of our senior engineers, Duncan Grant. Once I explained what I was trying to do, Duncan quickly figured out the mapping between the Kubernetes kubectl config:

apiVersion: v1
clusters:
- name: hyperledger_test_cluster
  cluster:
    certificate-authority: ca-prod-dal10-hyperledger_test_cluster.pem  # maps to caCertFile below
    server: https://169.47.234.18:7080  # maps to endpoint below
contexts:
- name: hyperledger_test_cluster
  context:
    cluster: hyperledger_test_cluster
    user: admin
    namespace: default
current-context: hyperledger_test_cluster
kind: Config
users:
- name: admin
  user:
    client-certificate: admin.pem  # maps to clientCertFile below
    client-key: admin-key.pem  # maps to clientKeyFile below

And a Kubernetes location as defined in Apache Brooklyn:

brooklyn.catalog:
  id: hyperledger_test_cluster
  name: "hyperledger_test_cluster"
  itemType: location
  item:
    type: kubernetes
    brooklyn.config:
      endpoint: https://169.47.234.18:7080
      caCertFile: ${SOME_PATH}/ca-prod-dal10-hyperledger_test_cluster.pem
      clientCertFile: ${SOME_PATH}/admin.pem
      clientKeyFile: ${SOME_PATH}/admin-key.pem
      image: "cloudsoft/centos:7"
      loginUser.password: $brooklyn:external("credentials", "image-login-user-password")

Note that, as a general rule, passwords and other secrets are not embedded in blueprints. As the external reference above shows, our recommended approach is to use something like Vault to manage secrets. This approach is covered in detail in the section on Externalized Configuration in the Cloudsoft AMP documentation.
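For the curious, the credentials provider referenced above is defined in brooklyn.properties. Here is a sketch assuming Brooklyn’s Vault token supplier; the endpoint, path and token are placeholders for your own Vault deployment:

# Illustrative brooklyn.properties entries backing the "credentials" provider;
# the endpoint, path and token values are placeholders, not real settings.
brooklyn.external.credentials = org.apache.brooklyn.core.config.external.vault.VaultTokenExternalConfigSupplier
brooklyn.external.credentials.endpoint = https://vault.example.com:8200
brooklyn.external.credentials.path = secret/amp
brooklyn.external.credentials.token = <your-vault-token>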

Step 3: Setting up the hyperledger_test_cluster Location

I took Duncan’s template and substituted ${SOME_PATH} to match where I had stored the certificates on my machine, since I was testing with Cloudsoft AMP running on my laptop. Then, using Cloudsoft AMP’s Blueprint Importer, it was easy to create the desired location.

Step 4: Deploying Hyperledger Fabric

As I had downloaded a vanilla copy of the latest Cloudsoft AMP release just prior to this experiment, I followed Mike’s instructions to ensure that the latest release of his Hyperledger blueprint was loaded and its catalog entry added. (From AMP 4.5 onwards these Hyperledger blueprints will ship with the product, saving this step.)

I then used the Quick Launch feature to deploy a Hyperledger Fabric instance using my newly minted hyperledger_test_cluster location.
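For context, Quick Launch is a convenience over deploying a blueprint that simply names the location at its top level. A minimal sketch, with the service type shown as a placeholder for whichever catalog id Mike’s blueprint registers:

# Placeholder blueprint: "hyperledger-fabric" stands in for the actual
# catalog id registered by the Hyperledger blueprint.
location: hyperledger_test_cluster
services:
- type: hyperledger-fabric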

Below is the time-lapse photography of the resulting fabric deployment, which I verified by running kubectl proxy with KUBECONFIG pointing at the config file downloaded earlier.
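Concretely, the verification boiled down to something like the following (again, the config file name depends on your cluster):

# Point kubectl at the downloaded config, proxy the API server locally,
# then inspect what the deployment created.
export KUBECONFIG=~/.bluemix/kube-config-prod-dal10-hyperledger_test_cluster.yml
kubectl proxy &        # serves the cluster API on http://127.0.0.1:8001
kubectl get pods       # the Hyperledger membership and peer nodes run as pods
kubectl get services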

Conclusion & Next Steps

Manually creating a Kubernetes cluster using the IBM Bluemix Container Service UI, then treating it as a Cloudsoft AMP location and deploying blueprints to it, was very straightforward. But this is only half the battle, as our focus is on end-to-end closed-loop automation.

Taking a step back, the Cloudsoft Container Service consists of a set of open source blueprints designed to deploy and manage cloud native infrastructure (Docker Swarm, Kubernetes and OpenShift), plus the ability of Cloudsoft AMP to target the resulting environments as first-class locations.

Therefore the ideal would be to automate the deployment and management of an IBM Bluemix Kubernetes cluster and add this capability to our container service. The only snafu is that this would require an API. I floated this on Slack with IBM Bluemix Container Service STSM Jake Kitchener as a bucket-list (well, wish-list!) item, at which point he promptly revealed that underpinning everything I’d used was an IBM Bluemix Container Service API.

Incredible but true!

Consequently we are now focused on two concrete next steps:

  1. We plan to extend the Cloudsoft Container Service so that Cloudsoft AMP users can deploy, manage and target IBM Bluemix Kubernetes clusters using this API.

  2. We aim to give end users the ability to use Cloudsoft AMP to target IBM Bluemix Kubernetes clusters created independently of this. This means expanding the Access tab to provide instructions on how to do so, ideally without anyone having to handcraft anything.

These steps will benefit IBM’s customers by facilitating the deployment and management of a wide variety of applications and services on their Kubernetes-as-a-Service, the Hyperledger Fabric being just one example. Furthermore, it will enable them to incorporate IBM Bluemix Kubernetes clusters into a hybrid (multi-location or multi-cloud) world, where our blueprints really come into their own.