Metsi TechTips Knowledge Base


There are three principal means of acquiring knowledge... observation of nature, reflection, and experimentation. Observation collects facts; reflection combines them; experimentation verifies the result of that combination.

Denis Diderot

Our people are our greatest asset – we say this often and with good reason. It is only with the determination, dedication and subject matter expertise of our people that we can serve and generate long-term partnerships with our customers and partners.

Our consultants are constantly discovering gems of wisdom and technical capabilities in the products and solutions they work with on a day-to-day basis. This knowledge base is a means of sharing that knowledge with you.

Looking to Build your own Service Catalog?  In this post we look at how a Service is defined and the steps we take in setting up a Service Catalog from scratch.

As technology has developed, most people now use it every single day of their lives, and with the rise of smartphone technology, using web applications in a browser or via a phone app has become second nature.  Companies therefore look to these technologies to solve many of the common requirements they face daily – their employees already know how to use them.  Meeting those requirements manually can demand a lot of manpower that automation tools could replace.

We at Metsi have found that many companies are looking for a Service Catalog solution to allow their employees and/or customers to request services from a single pane of glass.  These services may be simple to define, but may require some complex processes in order to achieve the result.  A simple “I need a new laptop” request would only be fulfilled after the following questions were answered:

  • How much RAM do you need?
  • What processing power do you need?
  • How much Hard disk space?
  • What size screen?
  • Who in the business will pay for it?
  • Where should it be delivered?

There could be additional requirements attached to this request:

  • Migration of data – who does it and when?
  • What happens to the old laptop – does it get wiped and recycled?

These additional parts of the request may be served by other systems via a ticket.  The complexity of the request can therefore grow to reach multiple systems, each with a Create, Read, Update, Delete (CRUD) requirement.  Owners of those systems need to see where the request is at each stage of the process and act on it accordingly.  The requestor also needs to see the progress of the request without having to call or contact someone else.

Service Definition


Where do we start with the Service Catalog definition?  Firstly, we would understand how the service is currently delivered.  Perhaps a web form is completed, or an Excel spreadsheet is filled in and sent via email.

What happens next? Are additional systems involved in the fulfilment process?  If so what is the process?

These processes would be mapped into a flow chart, with each end point defined, to understand the data that needs to be collected.

Understanding and Automating the process

Once we have understood the requirements, we can build the service into our Service Catalog using automation.

For our new laptop service, there may still need to be some manual intervention in delivering the laptop and performing a migration of the data.  But perhaps not!

Imagine that an agreement has been set up with your laptop supplier to take an order placed through an API – you will send a request containing the answers given by a web form:


When the user submits this form, their username is captured from their “domain name” login along with their answers (all fields are mandatory).  A script builds a payload from the answers, producing the following JSON:

{"companyname": "yourcompanyname", "user": "probins", "ram": "4gb", "cpu": "i5", "hdd": "100gb", "screen": "15inch"}

The laptop supplier receives this order in their ordering system via this API call from your system.  They have already agreed with you the makes and models of laptop to supply based on the values received via the system.
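A minimal sketch of what the ordering script could look like. The endpoint URL and function names here are hypothetical, not part of any real supplier API; the field names are taken from the payload above:

```python
import json
import urllib.request

SUPPLIER_URL = "https://supplier.example.com/orders"  # hypothetical endpoint

def build_order_payload(company, user, ram, cpu, hdd, screen):
    """Assemble the order payload from the (mandatory) form answers."""
    answers = {"companyname": company, "user": user,
               "ram": ram, "cpu": cpu, "hdd": hdd, "screen": screen}
    # All fields are mandatory, so reject empty answers up front.
    missing = [k for k, v in answers.items() if not v]
    if missing:
        raise ValueError("missing mandatory fields: " + ", ".join(missing))
    return answers

def place_order(payload):
    """POST the order to the supplier's ordering system."""
    req = urllib.request.Request(
        SUPPLIER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

payload = build_order_payload("yourcompanyname", "probins",
                              "4gb", "i5", "100gb", "15inch")
```

In practice the script would also record the supplier's order reference against the originating service request, so the requestor can track progress.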

The supplier preloads your operating system image onto the laptop and ships it to your IT department.

The IT department receives a new “laptop supply ticket” for the requesting user (“probins”), set to status = pending delivery.  This ticket was generated automatically when the form was submitted, and tells IT to issue the laptop when it arrives.

On arrival the IT department updates the ticket from status = pending to status = received.
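The ticket lifecycle above can be sketched as a minimal state machine. The statuses come from the example; the class itself is illustrative, not a real helpdesk API:

```python
class Ticket:
    """Minimal 'laptop supply ticket' with a constrained status lifecycle."""

    # Each status maps to the statuses it may legally move to.
    TRANSITIONS = {"pending delivery": {"received"},
                   "received": {"closed"}}

    def __init__(self, requester):
        self.requester = requester
        self.status = "pending delivery"   # set when the form is submitted

    def update_status(self, new_status):
        # Owners of the system act on the ticket at each stage; any
        # out-of-order update is rejected.
        if new_status not in self.TRANSITIONS.get(self.status, set()):
            raise ValueError(f"cannot move from {self.status!r} to {new_status!r}")
        self.status = new_status

ticket = Ticket("probins")
ticket.update_status("received")   # IT department logs the arrival
```

Because every status change goes through one method, both the system owners and the requestor can see exactly where the request is at each stage.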

This demonstrates the automation process in a simple way – a process that can be triggered using a simple Service Catalog.

Additional Service Catalog Features

The list of additional Service Catalog features is almost endless; almost anything can be done through automation.  One subject that I have not touched on is adding authorisation to the process.

Services can be offered to all users of a system, to a group of users, or to only one user.

Services can also require authorisation from another user or a group of users.

In our example, this would be used to prevent users from requesting new laptops every two weeks, by putting a Request/Authorise process in place in which accountable staff are placed into the authorise cycle.

Service Catalog Options

The list of options available to companies for Service Catalogs is quite small.  As Cisco partners, we can offer the powerful Cisco Prime Service Catalog to large Enterprise customers who are looking for a large implementation of this type of system.

Cisco Prime Service Catalog

We have also written our own lightweight Service Catalog, called Simpla.



Simpla is a lightweight, easy-to-use Service Catalog that is simple to configure.  The system can be hosted by us or on your site, and we can have you running services for your end users within a couple of days.  No single service should take more than a day to create, using simple drag-and-drop form creation and interfaces that anyone can use.

Get in touch with us at Metsi for a full demo.

Build your own service catalog

Although UCS Director does not have a native, built-in capability for calling external REST APIs from workflows, creating that functionality through a UCS Director Custom Workflow Task is quite simple.

We at Metsi have used this method many times to connect to external REST API based systems from within a workflow, creating seamless interaction with external systems.
This opens up UCS Director to become the central orchestrator within your infrastructure, able to interact with file or server based systems.
Metsi Datapump is a solution we built to interact with any system and return its data as JSON – this way we can get JSON into a Cloupia (UCS Director code) workflow.
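The data-pump idea can be illustrated in a few lines of Python: pull rows out of some backing system and hand them back as JSON that a workflow can consume. Here an in-memory SQLite table stands in for the external system, and the table and column names are purely illustrative:

```python
import json
import sqlite3

def pump_to_json(conn, table):
    """Return every row of `table` as a JSON array of objects."""
    cur = conn.execute(f"SELECT * FROM {table}")
    cols = [c[0] for c in cur.description]          # column names from the cursor
    return json.dumps([dict(zip(cols, row)) for row in cur.fetchall()])

# Stand-in for whatever external system the pump would normally query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vms (name TEXT, owner TEXT)")
conn.execute("INSERT INTO vms VALUES ('web01', 'probins')")

payload = pump_to_json(conn, "vms")   # JSON string, ready for a workflow task
```

The point of normalising everything to JSON is that the consuming workflow never needs to know whether the source was a database, a file or another API.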


This diagram shows how we can get data into and out of Cisco UCS Director using its REST API capabilities and/or file based data.

Making UCS Director do more for you is a key advantage to the product and we have a long track record of doing this for many companies.




Having an end-to-end, fully automated process driven by a simple spreadsheet is a great way to build out environments.  This diagram shows how we can automate Cisco UCS Director's REST API capabilities, driven by a spreadsheet, to:

  • Read the Excel Sheet
  • Build a VM using the data in the sheet
  • Send a data payload to Puppet to get software installed onto the VM
  • Add the data for the VM into a MySQL database after it has been provisioned
  • Add data into a SQL Server Database table after the VM has been provisioned
  • Add the VM to a helpdesk based system (e.g. OSTicket) so that issues with it can be logged
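The step sequence above can be sketched as a driver loop: read each row of the sheet and hand it to the provisioning steps in order. CSV stands in for Excel here to keep the sketch self-contained, and each step is a hypothetical stub rather than the real Puppet/MySQL/OSTicket integration:

```python
import csv
import io

def provision_from_sheet(sheet_text, steps):
    """Run every provisioning step, in order, for each row of the sheet."""
    provisioned = []
    for row in csv.DictReader(io.StringIO(sheet_text)):
        for step in steps:          # build VM, notify Puppet, record in DBs, ...
            step(row)
        provisioned.append(row["vm_name"])
    return provisioned

# Hypothetical stubs standing in for the real integrations.
log = []
steps = [lambda row: log.append(("build_vm", row["vm_name"])),
         lambda row: log.append(("puppet", row["role"]))]

sheet = "vm_name,role\nweb01,apache\ndb01,mysql\n"
built = provision_from_sheet(sheet, steps)
```

Each real step would map onto one UCS Director workflow task, which is what makes the task-by-task model a natural fit for this kind of spreadsheet-driven build-out.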


This is just one example of a complex process, achievable using tried and tested methods to build it up task by task in Cisco UCS Director.



With built-in PowerShell and SSH capability, the combinations and possibilities for automating any provisioning process are almost endless.


Get in touch with us to find out more or to arrange a demo.

Cisco UCS Director REST API Capabilities


Puppet : dynamic Filebucket servers using DALEN-PUPPETDBQUERY…

What if you could make a quick db query to see what the puppetmaster’s fqdn is?!…. 

Ok, so site.pp has some static stuff in it about your puppet master and where your agent should stuff its files…

# Define filebucket 'main':
filebucket { 'main':
  server => "puppetmaster.local", # <-- static server name :(
  path   => false,
}

When the agent runs and fails to find its filebucket (usually set to your puppet master), you get an error like this:

Error: /Stage[main]/Puppet_enterprise::Mcollective::Server/File[/etc/puppetlabs/mcollective/server.cfg]/content: change from {md5}73e68cfd79153a49de6f5721ab60657b to {md5}4a934b6c050e5196eaf71d87c81899a6 failed: Could not back up /etc/puppetlabs/mcollective/server.cfg: getaddrinfo: Name or service not known

Puppetlabs kindly documented this error here.

A quick db query to find the puppetmaster’s fqdn would be great, because your site.pp wouldn’t need to be updated per site!

So I grab the amazing dalen-puppetdbquery module from the forge and use it to locate the master, and now my site.pp becomes…

# Define filebucket 'main':
$puppetmaster = query_nodes('Class["puppet_enterprise::profile::master"]')[0]
filebucket { 'main':
  server => "${puppetmaster}",
  path   => false,
}

The first element in the array is at least sure to be your puppet master (a node that has been assigned the "puppet_enterprise::profile::master" class).

If you’re scaling multiple masters then perhaps tag one?

What is the Modular Layer 2 (ML2) Plugin?
It’s a new Neutron core plugin in Havana. The Modular Layer 2 (ML2) plugin was designed to solve many of the limitations of the monolithic plugins.

  1.  Modular
    1. Type drivers for layer 2 network types, and mechanism drivers that interface with agents, hardware, and controllers.
    2. Service plugins and their drivers for layer 3+
  2. Works with existing L2 agents
    1. Openvswitch
    2. Linuxbridge
    3. Hyperv
  3. Deprecates existing monolithic plugins
    1. Openvswitch
    2. Linuxbridge

Setting up Neutron with ML2 plugin
You can set up the ML2 plugin in your Neutron service using DevStack.

Architecture of ML2 plugin
Looking from the top, it is just a plugin that serves REST API requests and has methods registered in the form of (create/delete/update/get)_(network/networks/subnet/port).
ML2 provides you with 6 types of TypeDriver:

  1. LocalTypeDriver – local
  2. VlanTypeDriver – vlan
  3. FlatTypeDriver – flat
  4. GreTypeDriver – gre
  5. VxlanTypeDriver – vxlan
  6. TunnelTypeDriver – tunnel

self.type_manager = managers.TypeManager()

The classes for these drivers are defined in the neutron/plugins/ml2/drivers/type_*.py files. The TypeDrivers you want are set in the /etc/neutron/plugins/ml2/ml2_conf.ini file under

tenant_network_types = local,vlan(example)

Hence, self.type_manager will now contain the classes of the TypeDrivers you have chosen.

self.mechanism_manager = managers.MechanismManager()

The way neutron-server selects the plugin you want to run is precisely the way the ML2 plugin selects the mechanism driver. The choice of MechanismDriver is set in the /etc/neutron/plugins/ml2/ml2_conf.ini file under

mechanism_drivers = mydriver (example)

You have specified which mechanism driver you want to run, but how will ML2 understand what this “mydriver” means? “mydriver” needs an entry point. This is ideally present in the neutron/setup.cfg file under

mydriver = neutron.plugins.ml2.drivers.my_mech.my_mechanism:MyDriver

Now when we run “python setup.py install” in neutron, this entry point for the mechanism driver goes into the egg-info of neutron, e.g. /usr/lib/python2.7/site-packages/neutron-2015.4.3-py2.7.egg-info/entry_points.txt. (Note: your Python version and the egg-info version could be different, so look for entry_points.txt on your machine.) There you’ll see how ML2 translates your TypeDriver and MechanismDriver aliases into the complete Python paths to the classes.

Here neutron.plugins.ml2.drivers.my_mech.my_mechanism is the path to neutron/plugins/ml2/drivers/my_mech/my_mechanism.py, the file that contains the class MyDriver.
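The alias-to-class translation can be illustrated by resolving the entry-point string yourself. This is a simplified sketch of what the entry-point machinery does for you (the real loading goes through the packaging libraries, not hand-rolled code like this):

```python
import importlib

def resolve_entry_point(target):
    """Resolve a 'package.module:ClassName' entry-point string to the class."""
    module_path, _, class_name = target.partition(":")
    module = importlib.import_module(module_path)   # import the module half
    return getattr(module, class_name)              # then look up the attribute

# A stdlib example of the same "module:Class" shape; ML2 does the equivalent
# for "neutron.plugins.ml2.drivers.my_mech.my_mechanism:MyDriver".
decoder_cls = resolve_entry_point("json:JSONDecoder")
```

The alias on the left-hand side of the setup.cfg line ("mydriver") is what you put in ml2_conf.ini; the right-hand side is the string resolved as above.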


An initialization method is required in your class, where all the initialization is done. This method is not the __init__ method of the class; it contains all the operations that are performed after the driver’s __init__ method and before any operations reach your driver.
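A minimal sketch of that shape, without importing Neutron itself. The method names mirror the ML2 mechanism driver interface, but the bodies are illustrative only:

```python
class MyDriver:
    """Skeleton mechanism driver: __init__ stays cheap, initialize() does the work."""

    def __init__(self):
        # Called when ML2 loads the driver; keep this lightweight.
        self.ready = False

    def initialize(self):
        # Called once after loading and before any operations reach the
        # driver: open backend connections, read config, etc.
        self.ready = True

    def create_network_postcommit(self, context):
        # One of the (create/delete/update)_(network/subnet/port) hooks.
        if not self.ready:
            raise RuntimeError("initialize() has not been called yet")
        # ... push the new network to the backend controller ...
```

Keeping the expensive setup in initialize() rather than __init__ means a mis-configured driver fails fast at service start-up rather than on the first tenant request.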

Cisco Intercloud Fabric is an amazing new product aimed at customers looking to expand their infrastructure into the Public Cloud.


The Cisco Intercloud Fabric system looks and feels very much like Cisco’s UCS Director product and is built upon the same Linux Virtual Appliance.


The product can connect to Amazon’s AWS EC2 service, Microsoft’s Azure, and the growing list of Cisco Cloud Partners who will host your public cloud VMs – all of these will appear as just part of your company’s Layer 2 network.

Once the connection is made and you have extended your infrastructure into the public space, you can then move VMs to and from your local infrastructure using the Intercloud Fabric platform.


This will move the VM from your local vCenter to be hosted in Amazon’s AWS, Microsoft Azure or a Cisco Partner system, but the IP address will remain the same – to your customers, nothing has changed.

Check out our live demo below, where we move a VM from private to public infrastructure while maintaining connectivity between two VMs hosted in each of the environments.

Cisco Intercloud Fabric – When to use it?

Decisions will need to be made as to when and how this technology should be used.  Perhaps your business experiences peaks and troughs in resource demands – buying more compute, storage and network resources to deal with the peaks can become expensive if they are then under-used during the troughs, and expanding this infrastructure also places demands on power, cooling and real estate.

Temporarily extending into the Public Cloud can be an excellent resource that can make a big difference to your business.

Hybrid Cloud

The Hybrid Cloud offers broader options than just Amazon or Azure, and more specific SLAs can be agreed with Cisco partners – BT and Dimension Data both offer these services to businesses and can be a better alternative to Amazon’s or Azure’s offerings.

We will be adding to these pages in the coming weeks and months to fully explore the capability of Cisco Intercloud Fabric.

Cisco Intercloud Fabric Implementation

We at Metsi Technologies offer the service of implementing Cisco’s Intercloud Fabric at both the business end (customer) and the Service Provider.  Get in touch if you would like to discuss this service with us.

Cisco Intercloud Fabric

UCS Director Virtual Data Center (vDC) is the term used to denote a collection of policies grouped together for a UCS Director Group.



The above graphic shows a Virtual Data Center called WordPress vDC T1, defined for the Group T1 – it pulls in the following policies:

  • System Policy (SCCM_Build)
  • Compute Policy (vcenter – WordPress Computing Policy)
  • Network Policy (Network Pol)
  • Storage Policy (Laptop Storage Pol)

(ignore the names of the policies – these are just those used in our lab)

The name of this section within UCSD, the Virtual Data Center (vDC), is not particularly well chosen – it should (in my opinion) be called a Group Service Delivery Policy, as it groups UCS Director policies together so that they can then be used in the Catalog, which is presented to the end user in the end user view.

UCS Director Catalog End User View



The above shows the End User view, with the created Catalog button showing the WordPress vCenter Catalog Item which we created from the vDC entry.

UCS Director Implementation Specialists

We at Metsi have a long track record of Cisco UCS Director implementations – get in touch if you would like to discuss the product.

What is a UCS Director Virtual Data Center vDC ?

UCS Director Cloud Orchestration is a powerful platform, capable of managing your entire  Converged Infrastructure.



UCS Director Converged View of POD

The above graphic shows a UCS Director connected to VMware’s vCenter, Cisco’s UCS platform, a number of Nexus 7k switches, a Nexus 1000v switch and 2x NetApp storage controllers.  Once these connections are made (a successful connection is identified by the green lights), these devices and platforms can be directly managed from the UCS Director single pane of glass.

The platform is delivered as a virtual machine appliance and is capable of building out a virtualised infrastructure either through manual point-and-click or, more commonly, through automation using the UCS Director orchestration engine.

UCS Director Orchestrator

The UCS Director orchestration engine allows for a scriptable solution to virtually any eventuality. The platform’s workflow scripting is built on JavaScript (Cloupia script runs on Rhino, a server-side JavaScript engine implemented in Java). We at Metsi can create almost any integration with any other system using this code. The product also has SSH and PowerShell integration built in, giving the system a huge scope for automation. We can make it work with anything else.


UCS Director Reporting

Although not intended to be used as a pure reporting tool, UCS Director’s reporting capabilities are impressive. (Note: the interface is also customisable to match your company’s branding.)



UCS Director and Policy Based Provisioning

UCS Director has the capability to provide a customer-facing front end where Service Requests for provisioning tasks can be made, and the functions behind those requests can be automated. So provisioning a virtual machine can be done by the end user, while the decisions on how big it is, where to put it, who owns it and what it connects to are all pre-defined in policies by the IT architect teams. This takes away the mundane tasks of actually creating machines and storage/network infrastructure, and puts control in the hands of the end user. It can all be managed through budgetary control, giving the IT management team a much easier job of managing the infrastructure.

UCS Director Implementation

If you are looking for a team to help with a UCS Director implementation project, then Metsi Technologies have a long track record of success.  Get in touch with us to discuss your requirements.

UCS Director Cloud Orchestration

The Agent/Master Architecture

Puppet usually runs in an agent/master architecture, where a Puppet master server controls important configuration info and managed agent nodes request only their own configuration catalogs.


In this architecture, managed nodes run the Puppet agent application, usually as a background service. One or more servers run the Puppet master application, usually as a Rack application managed by a web server (like Apache with Passenger).

Periodically, Puppet agent will send facts to the Puppet master and request a catalog. The master will compile and return that node’s catalog, using several sources of information it has access to.

Once it receives a catalog, Puppet agent will apply it by checking each resource the catalog describes. If it finds any resources that are not in their desired state, it will make any changes necessary to correct them. (Or, in no-op mode, it will report on what changes would have been needed.)
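The apply step can be pictured as a convergence loop over resources: compare each resource's actual state to its desired state and correct any drift, or merely report it in no-op mode. This is an illustration of the idea, not Puppet's implementation:

```python
def apply_catalog(catalog, actual, noop=False):
    """Bring `actual` state in line with the desired `catalog`.

    Returns the resources that were (or, in no-op mode, would be) changed."""
    changed = []
    for resource, desired_state in catalog.items():
        if actual.get(resource) != desired_state:
            changed.append(resource)
            if not noop:                  # no-op mode only reports the drift
                actual[resource] = desired_state
    return changed

# Desired state (catalog) vs. the node's current state (illustrative names).
catalog = {"ntp.conf": "server pool.ntp.org", "ntp.service": "running"}
actual = {"ntp.conf": "server pool.ntp.org", "ntp.service": "stopped"}

would_change = apply_catalog(catalog, dict(actual), noop=True)  # report only
apply_catalog(catalog, actual)                                  # actually fix
```

Running the same loop repeatedly is safe: once the actual state matches the catalog, it reports no changes, which is the idempotency Puppet relies on.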

After applying the catalog, the agent will submit a report to the Puppet master.

About the Puppet Services

Puppet Master