OpenShift and Cloudsoft AMP

Introduction

Cloudsoft AMP makes it easy to deploy and manage applications on OpenShift and Kubernetes. This blog post shows how this can be done simply, reliably and repeatably. We will demonstrate the deployment of applications to an OpenShift Origin cluster, using blueprints to describe the containers and services to be created. The blueprints describe the individual components, and can reference either Docker images or services defined in the AMP catalogue. There is also a link to an Early Access version of AMP so you can get started yourself.

OpenShift and Kubernetes

Red Hat's OpenShift container platform has gone through several iterations to reach its current, Kubernetes-based version. OpenShift 2 was based on a containerisation approach known as cartridges and, although fairly easy to use, was not very extensible. With OpenShift 3 the application delivery technology switched to Docker images and containers, built on the open source Docker engine and the Kubernetes container cluster manager. The platform is available as both a hosted and an on-premises solution, with commercial support available through the OpenShift Enterprise product. Here we will be using the open source OpenShift Origin edition, which is available on GitHub.

OpenShift enables rapid application development, and long-term lifecycle maintenance for small and large teams and applications. Cloudsoft AMP can help by providing a deployment mechanism using YAML blueprints that are the same across all of its supported cloud and container platforms, with a library of applications and services provided by Apache Brooklyn. AMP can orchestrate the deployment of applications to OpenShift and also provides runtime management using autonomic policies. This helps developers using OpenShift to focus on the important task of designing and building their applications.

To illustrate this we will deploy some multi-container applications, consisting of both Docker images and Brooklyn entities, to an OpenShift Origin cluster using Cloudsoft AMP. The application management capability of AMP lets us deploy and manage Pods using the OpenShift location provided by the Cloudsoft Container Service. As you will see, this process is simple, reliable and repeatable, and gives immediately usable applications on your OpenShift environments.

Getting Started

To follow along with the examples you will need access to an OpenShift environment, which can be as simple as a local MiniShift development environment using a single VirtualBox VM on your laptop, or a large cluster hosted on a public or private cloud. You will need credentials for a user that has sufficient permissions to deploy applications to an OpenShift project. For some of the examples we will be creating privileged containers, and will also require the ability to run processes as root inside the container. This can be achieved by ensuring that the security context constraints applied to the user are configured correctly, as described in this blog post from Red Hat. Alternatively, if you are simply testing things in a development environment, you could edit the default constraints to allow root users and privileged containers. Finally, you will need to install an Early Access release of Cloudsoft AMP that contains the experimental OpenShift location; this can be downloaded from our Artifactory repository.
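
For example, in a disposable development cluster, one way to do this (shown here as a rough sketch, and not something you should do in production) is to grant the anyuid and privileged security context constraints to the default service account in your project using the oc adm policy commands. The project name myproject below is just a placeholder for your own project name.

% oc login -u system:admin
% oc adm policy add-scc-to-user anyuid -z default -n myproject
% oc adm policy add-scc-to-user privileged -z default -n myproject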

% wget --quiet https://artifactory.cloudsoftcorp.com/artifactory/libs-release-local/io/cloudsoft/amp/cloudsoft-amp-karaf/4.2.0-20161208.1801/cloudsoft-amp-karaf-4.2.0-20161208.1801.tar.gz
% tar zxf cloudsoft-amp-karaf-4.2.0-20161208.1801.tar.gz
% cd cloudsoft-amp-karaf-4.2.0-20161208.1801/
% ./bin/start
% tail -f log/amp.info.log | grep "access to web console"
2016-12-07 19:33:17,367 INFO  127 o.a.b.r.s.p.BrooklynUserWithRandomPasswordSecurityProvider [FelixStartLevel] Allowing access to web console from localhost or with brooklyn:CJZNcsfGhZ

Now you can connect to the AMP console, which should be running on port 8081, using the username brooklyn and the password that was output to the logs. You may also download the br utility, which allows you to access the AMP API from the command line, by following the links provided on the console home page.
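
For example, assuming AMP is running locally on the default port, you can log in with br as follows, substituting the generated password from your own logs for the one shown in the log excerpt above:

% br login http://localhost:8081 brooklyn CJZNcsfGhZ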

Deploying Applications

We can run applications on the OpenShift cluster using the oc command-line tool to start a pod directly. However, the Cloudsoft Container Service includes code for AMP which allows Docker, Swarm, Kubernetes and OpenShift to be used as deployment locations, giving applications the ability to run on these platforms through AMP. This means that the pods and containers are created as managed Brooklyn entities, and can have policies, effectors and sensors attached to them.

Cloudsoft AMP uses locations to describe where a blueprint will be deployed. For applications that will run in the cloud, these are usually Apache jclouds targets. These are specified by using the name of the jclouds provider in the location section of the blueprint, and configured to describe the cloud provider, endpoint and credentials to use. 

To use a target infrastructure like Docker Swarm, Kubernetes or OpenShift, the location used will instead come from the Cloudsoft Container Service. Here we will use an OpenShiftLocation, which is specified using the location name openshift. The configuration of the location itself is very simple, requiring only the endpoint of the OpenShift API and your credentials or TLS certificate details.

Wordpress Blueprint

The demonstration blueprint from the Container Service repository will serve us well here. This is a simple Wordpress blog application, using a MySQL database to store its data. The links between the Wordpress HTTP service and the database are defined by setting some environment variables on the containers, giving the database hostname and password. The Brooklyn entities are defined using a parent service of type KubernetesPod which contains two children of type DockerContainer representing the Wordpress and MySQL containers. The full YAML blueprint is available at wordpress.yaml with sample configuration that can be edited to suit your environment. The excerpt below shows the services that are deployed, namely a pod and the two containers running the wordpress and mysql Docker images.

services:
  - type: io.cloudsoft.amp.containerservice.kubernetes.entity.KubernetesPod
    brooklyn.children:
      - type: io.cloudsoft.amp.containerservice.dockercontainer.DockerContainer
        id: wordpress-mysql
        name: "MySQL"
        brooklyn.config:
          docker.container.imageName: mysql:5.6
          docker.container.inboundPorts: [ "3306" ]
          provisioning.properties:
            deployment: wordpress-mysql
            env:
              MYSQL_ROOT_PASSWORD: "password"
      - type: io.cloudsoft.amp.containerservice.dockercontainer.DockerContainer
        id: wordpress
        name: "Wordpress"
        brooklyn.config:
          docker.container.imageName: wordpress:4.4-apache
          docker.container.inboundPorts: [ "80" ]
          provisioning.properties:
            deployment: wordpress
            env:
              WORDPRESS_DB_HOST: "wordpress-mysql"
              WORDPRESS_DB_PASSWORD: "password"

The blueprint must also have a location section, describing the target OpenShift environment. The location type is openshift, as explained above, and the target is expected to be an accessible OpenShift cluster, with the endpoint field giving the root URL used to access the API. This is usually the same as the root of the console URL; the client will append the oapi path component automatically if it is not present. In the example below certificates are used to authenticate the user, giving a client certificate and key file as parameters. These must be available as readable files on the AMP server, and their full pathnames should be provided. The certificate data can also be supplied directly in the blueprint, along with more detailed configuration such as the algorithm used or the passphrase required. If AMP has an external data store configured, such as Vault, this can be used to inject the certificate information securely.

location:
  openshift:
    endpoint: "https://192.168.99.100:8443/"
    identity: "admin"
    caCertFile: "~/certs/ca.crt"
    clientCertFile: "~/certs/admin.crt"
    clientKeyFile: "~/certs/admin.key"

Alternative methods of authentication include simple username and password, and OAuth tokens. To obtain a token, use the oc command-line tool: first log in to the OpenShift server, then display the token value using the whoami command.

% oc login https://192.168.99.100:8443/
Authentication required for https://192.168.99.100:8443 (openshift)
Username: admin
Password:
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default (current)
  * kube-system
  * openshift
  * openshift-infra

Using project "default".
% oc whoami -t
mzUTj0JmWDYLSspumvW5B74rn8geKd6Qll11IPkaqeE

This is then set as the oauthToken field in the location.

location:
  openshift:
    endpoint: "https://192.168.99.100:8443/"
    identity: "admin"
    oauthToken: "mzUTj0JmWDYLSspumvW5B74rn8geKd6Qll11IPkaqeE"

Additional customisation of the location is possible, using a number of other configuration options. These include: setting up secrets and persistent volumes that can be attached to containers; defining the namespace, deployment and service names to use; setting the number of replicas to create; or enforcing container resource limits. The use of these options will be described in further blog posts in this series. Also note that in general any settings for an OpenShift location will also apply to a Kubernetes location, with small differences such as the ability to create projects in OpenShift in addition to Kubernetes namespaces.

To deploy this example, open the downloaded wordpress.yaml blueprint and change the endpoint and credentials given in the location section to point at your target cluster. Once you have updated the blueprint file, it can be deployed using the br command on the command line, or by navigating to the AMP Blueprint Composer and selecting the YAML Editor view. Here you may either load the blueprint from the saved file or paste its contents into the editor, then use the Deploy button to deploy the blueprint to the cluster.

% br deploy wordpress.yaml
Id:       vszeqxnjxk
Name:     wordpress-application
Status:   In progress

Now the Container Service location will create the required Pods and containers, and report success by setting the status of the entities to RUNNING. The target endpoint of the deployed Wordpress application can then be found using the cluster dashboard, or by examining the docker.port.80.mapped.public sensor on the Wordpress container entity. You should be able to navigate there and configure your blog!
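
You can also confirm from the OpenShift side that the resources have been created, for example with the oc tool (the exact names and output will depend on your cluster and project):

% oc get pods
% oc get svc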

Three Tier Application Blueprint

This blueprint is different to the previous example, and is not tailored to OpenShift or Kubernetes in any way, apart from specifying the location to target. The application uses built-in definitions for its components, taken from the AMP entity catalogue. It uses a cluster of TomcatServer entities running a WAR file containing the Java servlet application, a MySqlNode entity as the database, and an NginxController entity to load-balance the Tomcat web servers and expose the application to the public Internet. This blueprint could be deployed to any location supported by AMP, such as a public cloud or a group of virtual machines in a data centre. The entities used here include the commands needed to install, customise and launch their services, as well as mechanisms to retrieve data from the running services; this data is reported back to AMP and can be used by policies or other entities.

The full YAML blueprint is available at three-tier-application.yaml and again requires the location to be customised, as before, with your OpenShift environment details and credentials.

services:
  - type: org.apache.brooklyn.entity.group.DynamicCluster
    id: cluster
    name: "Webapp Cluster"
    brooklyn.config:
      initialSize: 2
      memberSpec:
        $brooklyn:entitySpec:
          type: org.apache.brooklyn.entity.webapp.tomcat.TomcatServer
          id: tomcat
          name: "Tomcat Server"
          brooklyn.config:
            jmx.enabled: false
            wars.root: "https://repo.maven.apache.org/maven2/org/apache/brooklyn/example/brooklyn-example-hello-world-sql-webapp/0.9.0/brooklyn-example-hello-world-sql-webapp-0.9.0.war"
            java.sysprops:
              brooklyn.example.db.url:
                $brooklyn:formatString:
                  - "jdbc:mysql://%s:%d/%s?user=%s&password=%s"
                  - $brooklyn:entity("db").attributeWhenReady("host.subnet.address")
                  - $brooklyn:entity("db").attributeWhenReady("mysql.port")
                  - "visitors"
                  - "brooklyn"
                  - "br00k11n"
  - type: org.apache.brooklyn.entity.proxy.nginx.NginxController
    id: nginx
    name: "Load Balancer (nginx)"
    brooklyn.config:
      loadbalancer.serverpool: $brooklyn:sibling("cluster")
      nginx.sticky: false
    brooklyn.enrichers:
      - type: org.apache.brooklyn.core.network.OnPublicNetworkEnricher
        brooklyn.config:
          sensors:
            - main.uri
  - type: org.apache.brooklyn.entity.database.mysql.MySqlNode
    id: db
    name: "Database (MySQL)"
    brooklyn.config:
      datastore.creation.script.url: "https://raw.githubusercontent.com/apache/brooklyn-library/master/examples/simple-web-cluster/src/main/resources/visitors-creation-script.sql"

As before, the saved blueprint file should be updated with your location and credentials, and then either deployed from the AMP console or by using the br command line tool.

% br deploy three-tier-application.yaml
Id:       raiwkyxxom
Name:     three-tier-application
Status:   In progress

Once the services have been created you may view the running application by accessing the main.uri.mapped.public sensor on the Nginx load balancer entity. This URL uses the mapped Nginx port, forwarded by the service that was created, to point at the running container.
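
As a quick sketch, the sensor value can also be read from the command line with the br tool, using the application name from the deploy output and the entity name from the blueprint above (adjust these if yours differ):

% br app three-tier-application ent "Load Balancer (nginx)" sensor main.uri.mapped.public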

Conclusion

We have seen how Cloudsoft AMP and the Cloudsoft Container Service can be used to deploy blueprints to OpenShift, creating an application that can be monitored, scaled and managed just like any cloud-based application deployment made using AMP. The applications we deployed only scratch the surface of what is possible with this set of tools, though. There is a rich set of policies and autonomic runtime management features available in Brooklyn, and applications are not restricted to a single target location but can be deployed as a mix of container- and cloud-based microservices. Future posts will look at some of these areas.


To find out more about Apache Brooklyn, documentation is available on the Cloudsoft Docs or Apache websites and you can view the code in the GitHub repository. The Clocker project also has documentation and links to blueprint files for deploying Kubernetes, and we welcome contributions and/or comments.

For production deployments, Cloudsoft AMP provides all the features of Brooklyn and also includes Clocker and the Cloudsoft Container Service. You can also follow Cloudsoft on Twitter and our YouTube channel for demonstrations and the latest release information.