Hacking a Controller for OpenShift/Kubernetes, Pt. 3

Part 1: Introduction to the OpenShift client
Part 2: Coding for Kubernetes

In the previous post, I went more in-depth into how Kubernetes serves up resources behind command-line calls like oc get. For this post, let’s step back to the code from Part 1 of this series and modify it to run continuously, showing the list of pods currently in the cluster and updating whenever a pod is added, modified, or deleted.

The Watch Interface

Kubernetes’ Watch interface provides the ability to listen for several different types of events in the cluster. Combined with Go’s built-in channels, it’s perfect for setting up an asynchronous controller that runs continuously. First we need to update our code to take advantage of channels, so go back to the code we had in Part 1 and change the main() function in your cmd/controller/cmd.go file to look like this:

func main() {
        config, err := clientcmd.DefaultClientConfig(pflag.NewFlagSet("empty", pflag.ContinueOnError)).ClientConfig()
        if err != nil {
                log.Printf("Error creating cluster config: %s", err)
                os.Exit(1)
        }
        kubeClient, err := kclient.New(config)
        if err != nil {
                log.Printf("Error creating Kubernetes client: %s", err)
                os.Exit(1)
        }
        openshiftClient, err := osclient.New(config)
        if err != nil {
                log.Printf("Error creating OpenShift client: %s", err)
                os.Exit(2)
        }

        c := controller.NewController(openshiftClient, kubeClient)
        stopChan := make(chan struct{})
        c.Run(stopChan)
        <-stopChan
}

What we’ve done is create a channel that will be used to safely send a “stop” signal to our goroutines when the program ends, and pass that channel to our Run() function. Now, update your pkg/controller/controller.go file to look like this:

package controller

import (
        "fmt"
        "time" // New import

        osclient "github.com/openshift/origin/pkg/client"
        "github.com/openshift/origin/pkg/cmd/util/clientcmd"

        "github.com/spf13/pflag"
        kapi "k8s.io/kubernetes/pkg/api"
        "k8s.io/kubernetes/pkg/api/meta"
        kclient "k8s.io/kubernetes/pkg/client/unversioned"
        "k8s.io/kubernetes/pkg/runtime"
        "k8s.io/kubernetes/pkg/util/wait" // New import
        "k8s.io/kubernetes/pkg/watch"  // New import
)

type Controller struct {
        openshiftClient *osclient.Client
        kubeClient      *kclient.Client
        mapper          meta.RESTMapper
        typer           runtime.ObjectTyper
        f               *clientcmd.Factory
}

func NewController(os *osclient.Client, kc *kclient.Client) *Controller {

        f := clientcmd.New(pflag.NewFlagSet("empty", pflag.ContinueOnError))
        mapper, typer := f.Object()

        return &Controller{
                openshiftClient: os,
                kubeClient:      kc,
                mapper:          mapper,
                typer:           typer,
                f:               f,
        }
}

func (c *Controller) Run(stopChan <-chan struct{}) {
        // Run asynchronously until we receive a stop signal
        go wait.Until(func() {
                // Create a Watch Interface for Kubernetes Pods
                w, err := c.kubeClient.Pods(kapi.NamespaceAll).Watch(kapi.ListOptions{})
                if err != nil {
                        fmt.Println(err)
                }
                if w == nil {
                        return
                }

                // Listen for events on the Watch Interface's result channel
                for {
                        select {
                        case event, ok := <-w.ResultChan():
                                c.ProcessEvent(event, ok)
                                if !ok {
                                        // Channel closed; return so wait.Until re-establishes the watch
                                        return
                                }
                        case <-stopChan:
                                return
                        }
                }
        }, 1*time.Millisecond, stopChan)
}

// Function to handle incoming events
func (c *Controller) ProcessEvent(event watch.Event, ok bool) {
        if !ok {
                fmt.Println("Error received from watch channel")
        }
        if event.Type == watch.Error {
                fmt.Println("Watch channel error")
        }

        // Type switch, to handle different events
        switch t := event.Object.(type) {
        case *kapi.Pod:
                fmt.Printf("%s pod %s in namespace %s\n", event.Type, t.ObjectMeta.Name, t.ObjectMeta.Namespace)
        default:
                fmt.Printf("Unknown type\n")
        }
}

What we’ve done here is change our Run() function to:

  • Accept our stop channel as a parameter
  • Spawn a goroutine that will run continuously using Kubernetes’ wait.Until() function
  • Create a Watch Interface using the Kubernetes client to listen for pods in all namespaces
  • Listen for events over the Watch Interface’s result channel inside a select statement
  • Process the event for the object type expected (in this case “pod”, but we could handle events for multiple types of objects) and print metadata about that event and the relevant resource object
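
As written, nothing ever closes stopChan, so the controller simply runs until you kill the process. If you want a cleaner shutdown, one option (just a sketch, relying only on the standard os/signal and syscall packages, which you’d need to add to cmd.go’s imports) is to close the channel when the process receives an interrupt; both our select statement and wait.Until() will react to that:

// Hypothetical graceful-shutdown variant of the end of main()
stopChan := make(chan struct{})
c.Run(stopChan)

sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
<-sigChan       // block until Ctrl-C or a TERM signal arrives
close(stopChan) // tell the watch loop and wait.Until to stop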

Deploy any simple app on your running and configured OpenShift cluster, then build and run your controller. You should see output similar to this:

ADDED pod ruby-hello-world-1-build in namespace test
ADDED pod docker-registry-1-deploy in namespace default

(Your output will obviously vary based on the app you use. I just used the sample Ruby hello world app.)

These events show up because, when starting up, a Watch Interface will receive ADDED events for all currently running pods. If I leave the controller running and, in another terminal, delete my ruby-hello-world pod, I see this output added:

MODIFIED pod ruby-hello-world-1-build in namespace test
MODIFIED pod ruby-hello-world-1-build in namespace test
DELETED pod ruby-hello-world-1-build in namespace test

So you can see how different interactions on your cluster can trigger different types of events.
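
If you want to react differently to each of those event types, you can also switch on event.Type inside ProcessEvent(), using the event type constants from the watch package. A quick sketch, replacing the *kapi.Pod case:

        case *kapi.Pod:
                // event.Type is one of the watch package's constants
                switch event.Type {
                case watch.Added:
                        fmt.Printf("New pod %s in namespace %s\n", t.ObjectMeta.Name, t.ObjectMeta.Namespace)
                case watch.Modified:
                        fmt.Printf("Pod %s in namespace %s changed\n", t.ObjectMeta.Name, t.ObjectMeta.Namespace)
                case watch.Deleted:
                        fmt.Printf("Pod %s in namespace %s went away\n", t.ObjectMeta.Name, t.ObjectMeta.Namespace)
                }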

Note that the OpenShift 3.3 client package includes easy access to Watch Interfaces for several additional types, such as Projects.
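
For example, a project watch looks almost identical to the pod watch above. A sketch (I’m assuming the ProjectInterface in your vendored client exposes a Watch() method with this shape; projectapi here is the github.com/openshift/origin/pkg/project/api package used again in Part 2):

w, err := c.openshiftClient.Projects().Watch(kapi.ListOptions{})
if err != nil {
        fmt.Println(err)
        return
}
for event := range w.ResultChan() {
        if project, ok := event.Object.(*projectapi.Project); ok {
                fmt.Printf("%s project %s\n", event.Type, project.ObjectMeta.Name)
        }
}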

Fun: Cumulative Runtimes

As an exercise, let’s modify our controller to keep track of the cumulative runtime of all the pods for each namespace. Update your pkg/controller/controller.go file to change your ProcessEvent() function and add a new function, TimeSince(), like so:

func (c *Controller) ProcessEvent(event watch.Event, ok bool) {
        if !ok {
                fmt.Println("Error received from watch channel")
        }
        if event.Type == watch.Error {
                fmt.Println("Watch channel error")
        }

        var namespace string
        var totalMinutes float64
        switch t := event.Object.(type) {
        case *kapi.Pod:
                podList, err := c.kubeClient.Pods(t.ObjectMeta.Namespace).List(kapi.ListOptions{})
                if err != nil {
                        fmt.Println(err)
                        return
                }
                // Sum the runtime of every pod currently in this namespace
                for _, pod := range podList.Items {
                        totalMinutes += c.TimeSince(pod.ObjectMeta.CreationTimestamp.String())
                }
                namespace = t.ObjectMeta.Namespace
        default:
                fmt.Printf("Unknown type\n")
                return
        }
        fmt.Printf("Pods in namespace %v have been running for %v minutes.\n", namespace, totalMinutes)
}

func (c *Controller) TimeSince(t string) float64 {
        // "MST" is the layout placeholder for the timezone abbreviation, so this
        // parses the timestamp no matter which zone it was printed in
        startTime, err := time.Parse("2006-01-02 15:04:05 -0700 MST", t)
        if err != nil {
                fmt.Println(err)
        }
        duration := time.Since(startTime)
        return duration.Minutes()
}

Now, whenever a pod event is received (such as an add or a delete), we trigger a client call to gather a list of all the running pods in the relevant namespace. From each pod’s CreationTimestamp, we use the time.Since() method to calculate how long it’s been running in minutes. From there it’s just a matter of summing up all the runtimes we’ve calculated. When you run it, the output should be similar to this:

Pods in namespace default have been running for 1112.2476349382832 minutes.
Pods in namespace test have been running for 3.097702110216667 minutes.
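
As an aside, CreationTimestamp just wraps a plain time.Time, so you could skip the string round-trip through TimeSince() entirely and sum the durations directly. A sketch of that variant of the loop:

for _, pod := range podList.Items {
        // CreationTimestamp embeds a time.Time we can hand straight to time.Since()
        totalMinutes += time.Since(pod.ObjectMeta.CreationTimestamp.Time).Minutes()
}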

Try scaling the pods in a project up or down, and see how it triggers a new calculation each time. This is a very simple example, but hopefully it’s enough to get you on your way writing your own controllers for OpenShift!

Hacking a Controller for OpenShift/Kubernetes, Pt. 2

Part 1: Introduction to the OpenShift client
Part 3: Writing a controller

In my last post, I went over how to set up an OpenShift environment for developing. That tutorial used the OpenShift Client API to make function calls which interacted with our cluster. However, there may be a situation where you need more direct interaction with your cluster’s resources (or perhaps you are interested in contributing to the open-source OpenShift Origin repository), and even the provided function calls aren’t enough to satiate your needs. In this case, it’s good to know how OpenShift interacts with Kubernetes directly to serve your cluster resources hot-and-ready.

Kubernetes Resource Objects

Going back to the code we used in the last post, you can recall that we used an OpenShift Project Interface to get a list of the projects in our cluster. If you examine the code for the Project Interface, you can see that it uses a REST Client to get the requested information. However, certain commands such as oc get (which is really just a wrapper for Kubernetes’ kubectl get command) rely on the Kubernetes Client API to request the necessary resource objects. How exactly Kubernetes achieves this can be a bit confusing, so let’s modify our code from the last blog post to use a Kubernetes client (as opposed to the OpenShift client) and walk through it as an example of the advantages using OpenShift can give you as a developer.

Update your pkg/controller/controller.go file to look like this:

package controller

import (
        "fmt"

        osclient "github.com/openshift/origin/pkg/client"
        "github.com/openshift/origin/pkg/cmd/util/clientcmd"

        "github.com/spf13/pflag"
        kapi "k8s.io/kubernetes/pkg/api"
        "k8s.io/kubernetes/pkg/api/meta"
        "k8s.io/kubernetes/pkg/kubectl/resource"
        kclient "k8s.io/kubernetes/pkg/client/unversioned"
        "k8s.io/kubernetes/pkg/runtime"
)

type Controller struct {
        openshiftClient *osclient.Client
        kubeClient      *kclient.Client
        mapper          meta.RESTMapper
        typer           runtime.ObjectTyper
        f               *clientcmd.Factory
}

func NewController(os *osclient.Client, kc *kclient.Client) *Controller {

        // Create mapper and typer objects, for use in call to Resource Builder
        f := clientcmd.New(pflag.NewFlagSet("empty", pflag.ContinueOnError))
        mapper, typer := f.Object()

        return &Controller{
                openshiftClient: os,
                kubeClient:      kc,
                mapper:          mapper,
                typer:           typer,
                f:               f,
        }
}

func (c *Controller) Run() {
        /*
                // Old code from last post using OpenShift client
                projects, err := c.openshiftClient.Projects().List(kapi.ListOptions{})
                if err != nil {
                        fmt.Println(err)
                }
                for _, project := range projects.Items {
                        fmt.Printf("%s\n", project.ObjectMeta.Name)
                }
        */

        // Resource Builder function call, to get Result object
        r := resource.NewBuilder(c.mapper, c.typer, resource.ClientMapperFunc(c.f.ClientForMapping), kapi.Codecs.UniversalDecoder()).
                ResourceTypeOrNameArgs(true, "projects").
                Flatten().
                Do()

        // Use Visitor interface to iterate over Infos in previous Result
        err := r.Visit(func(info *resource.Info, err error) error {
                fmt.Printf("%s\n", info.Name)
                return nil
        })
        if err != nil {
                fmt.Println(err)
        }
}

Build and run and you should see the same output as you did before. So what did we change here?

Some new imports

We added the clientcmd and pflag packages so we can use them to create a Factory object, which gives us our mapper and typer (more on that in a bit). This part could have been done in our main cmd/controller/cmd.go file, with the Factory object passed to the new controller as a parameter, but for brevity I just added it here. meta and runtime are also for the mapper and typer, respectively. Finally, resource allows us to interact with the Kubernetes Resource Builder client functions.
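
If you’d rather do it that way, the cmd.go side would look something like this (hypothetical; it also means giving NewController() a third parameter for the Factory):

// Build the Factory once in cmd/controller/cmd.go and hand it to the controller
f := clientcmd.New(pflag.NewFlagSet("empty", pflag.ContinueOnError))
c := controller.NewController(openshiftClient, kubeClient, f)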

Resource Builder

The resource package provides us with client functions of the Builder type. The call to NewBuilder() takes four arguments: a RESTMapper, an ObjectTyper, a ClientMapper, and a Decoder (the names give a pretty good idea of what each object does, but I’ve linked to their docs pages if you want to know more). The Builder type provides numerous functions which serve as parameters in a request for resources. In this case, I call ResourceTypeOrNameArgs(true, “projects”) and Flatten() on my Builder. The ResourceTypeOrNameArgs() call lets me specify which type of resource I’d like, and optionally request specific objects of that type by name. Since I just want all of the projects in the cluster, I set the first parameter to “true”, which selects every resource of that type without naming any. The Flatten() call makes sure the results come back as an iterable list of individual Info objects (but that’s getting a little ahead of ourselves). Finally, Do() returns a Result object.
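
To give a sense of how these Builder parameters compose, here’s a variant (just a sketch) of the same call that asks for one named resource in one namespace instead of every project; the namespace and pod name are placeholders, and NamespaceParam() is another method the resource package’s Builder provides:

r := resource.NewBuilder(c.mapper, c.typer, resource.ClientMapperFunc(c.f.ClientForMapping), kapi.Codecs.UniversalDecoder()).
        NamespaceParam("default").                       // only look in the "default" namespace
        ResourceTypeOrNameArgs(false, "pods", "my-pod"). // one specific pod, by name
        Flatten().
        Do()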

The “Result”

In my opinion, this is kind of a semantic misnomer. A developer new to Kubernetes might assume that the “result” is the data originally requested. In reality, it’s an object containing metadata about the result, plus a way to access the actual data through structures called Infos. There are a few ways to get at these Info objects: one is simply to call .Infos() on the Result object to return a list of Infos. Another, slightly more elegant method is to use the Visitor interface.
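
For completeness, the .Infos() route looks like this (a sketch, reusing the Result object r from above):

infos, err := r.Infos()
if err != nil {
        fmt.Println(err)
        return
}
for _, info := range infos {
        fmt.Printf("%s\n", info.Name)
}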

Visitor Function

Calling .Visit() on a Result object lets you provide a function that will be called for each Info object in the Result. Infos themselves provide some helpful metadata about the resource they describe, such as Name and Namespace, but they also give you access to the full generic runtime.Object representation of the resource. By casting these objects to their actual types, you can access the fields and methods specific to each type. As an example, let’s update our Visit() function like so:

err := r.Visit(func(info *resource.Info, err error) error {
        switch t := info.Object.(type) {
        case *projectapi.Project:
                fmt.Printf("%s is currently %s\n", t.ObjectMeta.Name, t.Status.Phase)
        default:
                return fmt.Errorf("Unknown type")
        }
        return nil
})

And also add the following line to your imports: projectapi “github.com/openshift/origin/pkg/project/api”. Save, build, and run and you’ll see output like this:

default is currently Active
openshift is currently Active
openshift-infra is currently Active

Now we’re casting the runtime.Object to a Project type and, using the OpenShift Project API, getting information about its name and status. As a side-note, this makes use of Go’s type switching which is very cool.

Summary

To summarize, Kubernetes’ method for retrieving your objects goes through several different types and interfaces: Builder -> Result -> Visitor -> Info -> Object -> Typecast. Normally, this approach is more appropriate if you’re writing your own command-line tooling. It’s very helpful to understand how Kubernetes interacts with your cluster at a low level, but as you can see, it’s much simpler to use the OpenShift client function calls to get the information you want. Our example here is a bit acrobatic, but it still demonstrates the flexibility that working with OpenShift and Kubernetes provides.

In the next post, I’ll go over how to actually make your controller run like a controller (asynchronously, listening for updates) using the Watch package.

Click for Part 3: Writing a controller

Hacking a Controller for OpenShift/Kubernetes

Part 2: Coding for Kubernetes
Part 3: Writing a controller

For OpenShift Online, we run several controllers in our cluster which serve functions such as provisioning persistent volumes and providing user analytics. But let’s say you have your own OpenShift cluster, upon which you’d like to run a controller that interacts with the resources in that cluster. I’m going to run you through setting up OpenShift and Kubernetes in a way that allows you to develop your own controller. By the end of this guide, we’ll have a simple controller that shows the cumulative running time for all pods in a namespace.

Setting Up Your Environment

The prerequisites for developing against your OpenShift setup are a working Go toolchain, a configured GOPATH, and a clone of the OpenShift Origin repository (the Origin project’s development documentation walks through that setup).

Assuming you’ve followed those instructions to set up your GOPATH and have Origin cloned, the next step is to download the source code dependencies for OpenShift and Kubernetes. Do this with the following commands:

cd $GOPATH/src/github.com/openshift/origin
git checkout release-1.2

git clone git://github.com/kubernetes/kubernetes $GOPATH/src/k8s.io/kubernetes
cd $GOPATH/src/k8s.io/kubernetes
git remote add openshift git://github.com/openshift/kubernetes
git fetch openshift
git checkout v1.2.0-36-g4a3f9c5

git clone https://github.com/go-inf/inf.git $GOPATH/src/speter.net/go/exp/math/dec/inf

cd $GOPATH/src/github.com/openshift/origin
godep restore

What we’re doing here is:

  1. Checking out the most recent release branch of OpenShift
  2. Cloning the Kubernetes repository
  3. Adding OpenShift’s vendored version of Kubernetes as a remote to our Kubernetes repository
  4. Checking out the required release of OpenShift’s Kubernetes
    1. This can be found by opening origin/Godeps/Godeps.json, searching for “Kubernetes”, and copying the version number specified in the “Comment” field
  5. Cloning another dependency
  6. And finally running godep restore to download the source for all the dependencies needed

At this point, we’re ready to start coding!

Creating Your Project

In this post, we’re going to make a simple run-once program that lists the namespaces (projects) in a cluster. That means this program won’t run continuously the way you’d normally expect a controller to (we’ll add that in a later post); it’s more of a basic introduction to the client tools used to write one.

First, create a GitHub repo with the following file structure:

controller/
- cmd/
-- controller/
- pkg/
-- controller/

  • cmd/controller/ will contain the main package file for your controller
  • pkg/controller/ will contain source files for your controller package

Now create a file called cmd/controller/cmd.go with the following contents:

package main

import (
        "log"
        "os"

        "github.com/damemi/controller/pkg/controller"
        _ "github.com/openshift/origin/pkg/api/install"
        osclient "github.com/openshift/origin/pkg/client"
        "github.com/openshift/origin/pkg/cmd/util/clientcmd"

        kclient "k8s.io/kubernetes/pkg/client/unversioned"
        "github.com/spf13/pflag"
)

func main() {
        config, err := clientcmd.DefaultClientConfig(pflag.NewFlagSet("empty", pflag.ContinueOnError)).ClientConfig()
        if err != nil {
                log.Printf("Error creating cluster config: %s", err)
                os.Exit(1)
        }
        kubeClient, err := kclient.New(config)
        if err != nil {
                log.Printf("Error creating Kubernetes client: %s", err)
                os.Exit(1)
        }
        openshiftClient, err := osclient.New(config)
        if err != nil {
                log.Printf("Error creating OpenShift client: %s", err)
                os.Exit(2)
        }
}

Save the file, close it, and run godep save ./... in your directory. You should see that your file structure has changed to:

controller/
- cmd/
-- controller/
- pkg/
-- controller/
- Godeps/
-- Godeps.json
-- Readme
- vendor/
-- github.com/
--- [...]
-- golang.org/
--- [...]
-- [...]

I’ve excluded some of the files because they’re just dependency source files. These are now included in your project, so feel free to commit and push this to your repo. One of the cool things about Godep vendoring in this way is that you can now share your codebase and let someone else build it without worrying about submodules or other dependency issues.

Note: We won’t be able to build our controller yet because this code has some declared-and-unused variables, but it is enough to run Godep. For when we do start building our code, I’ll be using a Makefile with the following:

all:
        go install github.com/damemi/controller/cmd/controller

Just because it’s easier to type “make” each time.

Adding Some Functionality

As fun as setting up a project is, it’s even more fun to make it do things. Create a file pkg/controller/controller.go with the following contents:

package controller

import (
        "fmt"

        osclient "github.com/openshift/origin/pkg/client"

        kclient "k8s.io/kubernetes/pkg/client/unversioned"
        kapi "k8s.io/kubernetes/pkg/api"
)

// Define an object for our controller to hold references to
// our OpenShift and Kubernetes clients
type Controller struct {
        openshiftClient *osclient.Client
        kubeClient      *kclient.Client
}

// Function to instantiate a controller
func NewController(os *osclient.Client, kc *kclient.Client) *Controller {
        return &Controller{
                openshiftClient: os,
                kubeClient:      kc,
        }
}

// Our main function call
func (c *Controller) Run() {
        // Get a list of all the projects (namespaces) in the cluster
        // using the OpenShift client
        projects, err := c.openshiftClient.Projects().List(kapi.ListOptions{})
        if err != nil {
                fmt.Println(err)
        }

        // Iterate through the list of projects
        for _, project := range projects.Items {
                fmt.Printf("%s\n", project.ObjectMeta.Name)
        }
}

As you can see, we’re using the OpenShift API to request a Project Interface, which provides plenty of helper functions to interact with the projects in our cluster (in this case, we’re using List()). The Project API is also what lets us interact with the metadata of each project object through the kapi.ObjectMeta field. I highly recommend reading through the OpenShift and Kubernetes APIs to get an idea of what’s really available to you.
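
For instance, that same ObjectMeta carries more than just the name. A quick sketch printing a couple of its other standard fields:

// Each project's ObjectMeta also exposes fields like CreationTimestamp and Labels
for _, project := range projects.Items {
        fmt.Printf("%s (created %v, labels: %v)\n",
                project.ObjectMeta.Name,
                project.ObjectMeta.CreationTimestamp,
                project.ObjectMeta.Labels)
}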

Now let’s also add the following lines to the main() function in our cmd/controller/cmd.go file:

c := controller.NewController(openshiftClient, kubeClient)
c.Run()

Making that entire function look like:

func main() {
        config, err := clientcmd.DefaultClientConfig(pflag.NewFlagSet("empty", pflag.ContinueOnError)).ClientConfig()
        if err != nil {
                log.Printf("Error creating cluster config: %s", err)
                os.Exit(1)
        }
        kubeClient, err := kclient.New(config)
        if err != nil {
                log.Printf("Error creating Kubernetes client: %s", err)
                os.Exit(1)
        }
        openshiftClient, err := osclient.New(config)
        if err != nil {
                log.Printf("Error creating OpenShift client: %s", err)
                os.Exit(2)
        }

        c := controller.NewController(openshiftClient, kubeClient)
        c.Run()
}

Now save, close, and run "make". You should then be able to run "controller" from your command line and, assuming you already have OpenShift running and are logged in as system:admin, see some output like so:

default
openshift
openshift-infra

Hooray! We can connect to OpenShift and get information about our cluster. In the next post, we’ll go into more detail about how OpenShift uses Kubernetes’ Resource API to interact on a lower level with the cluster, as well as how to use the Watch API to make our controller run asynchronously.

Click for Part 2: Kubernetes Resource Objects

Running an IRC Bot in Ruby on OpenShift V3

Note: This post also appears on the Red Hat OpenShift Blog, along with many other cool posts by cool people.

At Red Hat, all of our instant internal messaging is done through IRC. Because of this, many of our channels have a couple bots in them that do things like process links, report new pull requests, and keep track of users’ karma. What’s cool is that a lot of these bots are actually developed and running on OpenShift, so let’s look at how you could get your own IRC bot running in OpenShift Online V3.

For this project, we’ll be using the Cinch IRC Bot Framework, which is written in Ruby. I chose it because it’s a popular framework with lots of open-sourced plugins already created and because I’ve never used Ruby before. So now that we have our platform and framework, let’s get started! For the purpose of this post, I’ll assume you’ve never used OpenShift or Cinch, but have a basic understanding of Git and Ruby (or, in my case, Google).


Step 0. Create your Git repo

The first step in any great project is to create the Git repo. So make a new repo on GitHub and call it whatever you want your bot to be named. Then clone it to your local environment.

Step 1. Install the Cinch gems

Create a Gemfile with the following:

source 'https://rubygems.org'
gem 'rack'
gem 'cinch'

Then run bundle install in the source directory to create a Gemfile.lock file that will tell OpenShift which gems to use.

Step 2. Create config.ru

This is the file that OpenShift will actually try to run to start your application, so create a file called config.ru with the following contents:

require 'cinch'

bot = Cinch::Bot.new do
  configure do |c|
    c.server = "irc.freenode.org"
    c.nick = "OpenShiftBot"
    c.channels = ["#openshiftbot"]
  end

  on :message, "hello" do |m|
    m.reply "Hello, #{m.user.nick}"
  end
end

bot.start

(This is just the main Cinch example slightly modified)

Let’s look at what we’re doing in this code (or skip to Step 3 if you don’t care):

require 'cinch'

This is going to include the Cinch framework

bot = Cinch::Bot.new do

Creates our new bot object

  configure do |c|
    c.server = "irc.freenode.org"
    c.nick = "OpenShiftBot"
    c.channels = ["#openshiftbot"]
  end

This configures the settings for our bot, and is pretty self-explanatory. We’ll be joining the #openshiftbot channel on irc.freenode.org, with the nick “OpenShiftBot”. We’ll go over more settings that are available here in later posts.

  on :message, "hello" do |m|
    m.reply "Hello, #{m.user.nick}"
  end

Here’s the fun part: actually making our bot do stuff! on :message, “hello” do |m|  listens for a user to say “hello”, then uses the resulting Message object to reply to whomever initiated the greeting.

end

bot.start

Ends the bot declaration and starts running it.

Now that we have our basic code, let’s commit and push it to our GitHub repo:

git add .
git commit -m "Basic code" 
git push


Step 3. Start your OpenShift project

Now for the main purpose of this post, getting our bot to run on OpenShift! To do this, log in to your OpenShift Online Developer Preview account and create a new project. For a template, choose ruby:latest.

Choose the ruby:latest template

Now give your bot a name and paste the link to your GitHub repository.


Continuing back to the Overview page, you should see that a build of your project has started:


When the build finishes, a new deployment will start. If everything goes smoothly, you should see this once it’s all done:


And in #openshiftbot on Freenode, you should see something like this:

Success!

Congratulations! You’re now running an IRC bot on OpenShift V3. But OpenShift is meant to do a lot cooler stuff than just host a running service forever. For example, we can…

Step 4. Add a build hook

Build hooks are a sweet feature of OpenShift that makes rapid development a breeze. If you were running this bot on a basic server, for example, every time you wanted to make a change you would have to push your code to GitHub, pull the changes to your server, and stop and restart the service yourself. With a build hook on OpenShift, all you need to do is push your code and OpenShift will pull the changes, build a new image, shut down your old service, restart your new service, and do it all with minimal downtime. Here’s how we can do that.

First, choose the build you just created:


Then go to the “Configuration” tab:


And on the right, under “Triggers” click to show the GitHub webhook URL and copy it to your clipboard.


Now go back to your project page on GitHub and go to Settings > Webhooks and Services. Click "Add Webhook", paste the link you just copied into the "Payload URL" field, and finally click "Add Webhook".


Now when you push your code to GitHub, your project will automatically update. Nifty!

Step 5. Try it out!

Let’s update our config.ru file to make our bot a little more friendly. Edit the lines where we defined our message listener like so:

  on :message, "hello" do |m|
    m.reply "HOWDY, #{m.user.nick}!!!"
  end

Commit and push to GitHub, go back to the OpenShift Web Console and you’ll notice that a new build has automatically started:


And when the build and deploy are done, you’ll see that a new “deployment #2” has been created, your old pod has been scaled down, and your new commit message shows up:


During this process, OpenShiftBot will temporarily disconnect from IRC as the old pod is scaled down, but it will automatically reconnect as the new pod is scaled up. Now, in our IRC channel, our bot is much more excited to see us:


Woohoo! Aren’t build hooks cool? Next week, I’ll go more into depth with the Cinch framework and show you how to extend your bot to use plugins.