Metsi TechTips Knowledge Base


There are three principal means of acquiring knowledge... observation of nature, reflection, and experimentation. Observation collects facts; reflection combines them; experimentation verifies the result of that combination.

Denis Diderot

Our people are our greatest asset – we say this often and with good reason. It is only with the determination, dedication and subject matter expertise of our people that we can serve and generate long-term partnerships with our customers and partners.

Our consultants are constantly discovering gems of wisdom in, and technical capabilities of, the products and solutions they work with on a day-to-day basis. This knowledge base is a means of sharing that knowledge with you.

So you are building an Infrastructure as a Service (IaaS) platform and you want to offer network storage to your customer base?

One of the primary issues with provisioning storage connected to virtual and physical servers is IOPS (Input/Output Operations Per Second).

Your network storage is shared across your customer base, and you therefore want to provide a service that is equal for all. How do you restrict or prevent one customer from using all of the resources on the storage device to the detriment of the other customers?

The scenario: imagine a customer wants to host a database on a network share. He has a table with 10 million rows and wants to perform complex joins and lookups on the data constantly throughout the working day.

[Figure: network IOPS]

His data takes up 50GB of space, but his application needs 2500 IOPS for its queries to return results quickly enough for the application to work.

He only wants to buy a 70GB disk for his 50GB space requirement, but drawing 2500 IOPS against a 70GB allocation (roughly 36 IOPS per GB) means his workload would steal a disproportionate share of the storage unit's performance.

If our IaaS hosting platform allows all customers to do this, we will have some unhappy customers.

We at Metsi have worked with customers to build a platform that offers a sliding-scale selection of disk size and IOPS, creating a level playing field, one that we are confident will not compromise our customers' storage systems.

[Figure: Simpla IOPS/disk size selection form]

Here we can see a performance class selection of Gold, Silver and Bronze, with Gold providing a higher IOPS multiplier than Silver and Bronze.

The sliders are linked so that you either select your Disk Size or your IOPS requirement.

This gives customers a choice of performance or storage, but prevents them from buying a small amount of storage while consuming an outsized share of IOPS. A sketch of the idea follows.
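
As a minimal sketch of how the linked sliders could be governed (the multipliers below are illustrative, not Metsi's actual figures):

// Illustrative IOPS-per-GB multipliers for each performance class.
var iopsPerGb = { gold: 30, silver: 15, bronze: 5 };

// The IOPS ceiling is a function of disk size and class, so a small disk
// cannot be combined with a large IOPS allowance.
function maxIopsFor(diskSizeGb, perfClass) {
    return diskSizeGb * iopsPerGb[perfClass];
}

maxIopsFor(70, "bronze"); // 350: the 2500-IOPS customer must buy more capacity or a higher class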

The end result allows management of network storage where customers pay for what they use.

[Figure: customer disk size and IOPS selection]

If you are interested in automation and are looking for a fully automated IaaS platform, get in touch.


We, at Metsi, have worked on numerous projects for large enterprise businesses looking to automate their infrastructure using workflow automation. The steps involved in creating virtual machines are recreated as tasks within a workflow.

Moving from doing it manually to automating it in a workflow boils down to answering the questions What? How much? Where? and Which? when building a VM.

The manual way in VMware's vSphere involves a right click followed by a wizard of forms.

[Figure: vSphere New Virtual Machine wizard]

This captures the name of the virtual machine you are building, where it will be hosted for both compute and storage, what operating system it will run, which network it will connect to, and how much RAM, CPU and disk space is required.

There are a number of rules to this:

  • each question must be answered
  • VMs must have unique names
  • the names are used within the operating system, so they cannot contain certain characters and, for Windows systems, are limited to 15 characters
  • most networking uses TCP/IP, so the VM will need a unique IP address within its network

In building a workflow automation strategy, these choices are made up front by the architects within the business, and most decisions are built around an If/Then/Else decision or, to borrow a programming term, a switch statement.

switch (operating_system) {
    case "windows":
        connect_to_network = "win_network";
        join_domain = "true";
        break;
    case "linux_redhat":
        connect_to_network = "linux_network";
        join_domain = "true";
        break;
    case "linux_debian":
        connect_to_network = "linux_network";
        join_domain = "false";
        break;
}

The above example shows a switch statement where the network and join-domain choices are made based on the supplied value of the operating_system variable.

Using this kind of approach removes unnecessary choices from the end user, the person initiating the process.

In fact the journey begins with the question Who? Who is allowed to request the new infrastructure, and how much do they know about it all?

Typically the task of building IT infrastructure falls to the IT department, with its user base placing requests for virtual or physical servers to be built. The end user may just ask: can I have one? Or: can I have a normal, big or small one? The rules for answering the detailed questions are then known to the IT team.

In building an automated workflow that builds the VM, the IT team needs to build a form, or some other method of collecting the unknowns, and feed that to the workflow tasks in order for the VM to be built.

One of the tasks is to generate a unique hostname to be used in the process. This may currently be done using Excel: the IT admin loads up the hostname sheet, looks at the last name in the list, and adds a new one using the next number, e.g.

Win2012r201
Win2012r202
Win2012r203
Win2012r204 <- this is what will be used

This has its downfalls of course: if the spreadsheet is not updated then the process fails, and an error will occur with a duplicate name.

A better way would be to store the names in a database, perhaps using an auto-generated ID as the last digits in the name, and then add a row to get a new name during the workflow.

The database table is quite simple:

[Figure: hostname table]

CREATE TABLE `tbl_hostnames` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `hostname_prefix` VARCHAR(13) NOT NULL,
  PRIMARY KEY (`id`)
);

[Figure: hostname table data]

…with the data as shown above. The hostname is then the value of hostname_prefix + id; a new id can be claimed by inserting a row, as sketched below.
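
A minimal sketch of claiming a new id within the workflow (MySQL syntax; the values are illustrative):

INSERT INTO `tbl_hostnames` (`hostname_prefix`) VALUES ('Win2012r2');
SELECT LAST_INSERT_ID(); -- e.g. 4, giving hostname Win2012r204 after padding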

Now you may say that the id must be at least two digits, so you need a leading zero. This is quite simple to fix with an if statement:

if (id < 10) {
    idstr = "0" + id;
} else {
    idstr = id;
}

Then

hostname = hostname_prefix + idstr;

The code needs to be built into a workflow task that determines the hostname to use. One approach we have used is to create an Apache/MySQL VM that hosts the above database; using PHP we query it and output the data in JSON format. This can then be accessed using a standard HTTP request in the form of a REST API call, as sketched below.
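
A minimal sketch of the consuming side, in plain Node-style JavaScript rather than actual Cloupia task code; the endpoint URL and response shape are hypothetical:

// Fetch the next hostname row from a hypothetical PHP/JSON endpoint that
// returns e.g. {"id": 4, "hostname_prefix": "Win2012r2"}.
async function nextHostname() {
    const res = await fetch("http://hostnamedb.local/hostname.php");
    const row = await res.json();
    const idstr = row.id < 10 ? "0" + row.id : String(row.id);
    return row.hostname_prefix + idstr; // e.g. "Win2012r204"
}

nextHostname().then(function (name) { console.log(name); });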

The process of making REST API calls sits at the heart of workflow automation projects, and enables us to store the choices that will be used in the build process. Choices can then be made programmatically based on the answers provided by the end user. As an example: John Smith works in the Finance department and has been tasked with testing a new accounting program that the company wants to trial. The IT provisioning process has been automated, and his new intranet page shows him a simple form.

[Figure: new VM request form]

The page is LDAP-enabled, so hidden in the form is the value of John's login ID (johns). He is also in the Finance OU within Active Directory, and in our database we have a table that stores the cost code for members of the Finance OU. We retrieve John's ID and OU membership, use them to look up the cost code, and assign the VM to that cost code so that we can charge the Finance department for their CPU, RAM and disk space.

John selected a "medium" VM, which we have defined as 8GB RAM and 2x CPU, and the Finance department only uses Windows VMs, so that is what John will get. If Pete from the DEV team logged in, he would also get a choice of OS:

[Figure: request form showing the OS choice for DEV users]

As John is in Finance, his VM uses a network with a port group called finance_net. Which network each department uses is also stored in our MySQL database, and a query determines the network to use based on the user's OU, as sketched below.
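
A minimal sketch of that lookup (the table, object and names here are hypothetical; the real data would come from the MySQL query):

// In the database: SELECT portgroup FROM tbl_networks WHERE ou = ?
// Here the result set is represented as a plain object for illustration.
var networkByOu = { "Finance": "finance_net", "DEV": "dev_net" };

function networkFor(ou) {
    return networkByOu[ou]; // networkFor("Finance") -> "finance_net"
}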

Finance VMs are also hosted on a specific set of clustered servers and a set datastore. These values are also stored in the database and retrieved within the workflow. If they need to grow, the database can be updated or added to, and the decision on which to use can be made programmatically. This may involve first querying what is currently there, determining whether it has enough capacity, and deciding what to do if not.

As these automation workflows develop they can make decisions and self-build infrastructure without the need for intervention, e.g. if a datastore reaches 80% utilisation, create another and use the new one, as in the sketch below.
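
A minimal sketch of that decision (the threshold and datastore fields are illustrative):

// Return the first datastore with headroom, or null to signal that the
// workflow should branch into a "create new datastore" task.
function chooseDatastore(datastores) {
    for (var i = 0; i < datastores.length; i++) {
        var d = datastores[i];
        if (d.usedGb / d.capacityGb < 0.8) {
            return d.name;
        }
    }
    return null;
}

chooseDatastore([{ name: "ds1", usedGb: 850, capacityGb: 1000 }]); // null -> create ds2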

As the data is stored in a database, a front end can also be written to manage it. This removes the need for programmers to manage the infrastructure; it can all be done via a web page.

[Figure: automation workflow]

We have developed this into a tool called Simpla, which enables simple form creation to feed a workflow. Check out this great tool at http://www.metsi.co.uk/service-catalog-as-a-service.

Get in touch if you want to discuss anything in this article or any other aspect of automation.

IT security is one of our core business categories at Metsi Technologies. We have partnered with SailPoint to bring enterprise-class identity governance to your core business. Purchase a SailPoint license from us and we can also install and implement the solution for your business, as well as offer full Identity and Access Management (IAM) consultation to protect your business from the many vulnerabilities that this often overlooked area addresses.


If you are looking for a SailPoint implementation, then contact us at Metsi Technologies; we have a proven track record in the industry that is second to none.

Looking to build your own Service Catalog? In this post we look at how a service is defined and the steps we take in setting up a Service Catalog from scratch.

Most people now use technology every single day of their lives, and with the development of smartphone technology, familiarity with web applications in a browser or via a phone app has become second nature. Companies therefore look towards these technologies to solve many of the common requirements they face daily; their employees already know how to use them. Requirements that once needed a lot of manpower can be accomplished using automation tools.

We, at Metsi, have found that many companies are looking for a Service Catalog solution to allow their employees and/or customers to request services from a single pane of glass. These services may be simple in terms of definition, but may require some complex processes to achieve the result. A simple "I need a new laptop" request would only be fulfilled after the following questions were answered:

  • How much RAM do you need?
  • What processing power do you need?
  • How much Hard disk space?
  • What size screen?
  • Who in the business will pay for it?
  • Where should it be delivered?

There could be additional requirements attached to this request:

  • Migration of data – who does it and when?
  • What happens to the old laptop – does it get wiped and recycled?

These additional parts of the request may be served by other systems via a ticket. The complexity of the request can therefore grow to reach multiple systems, each with a Create, Read, Update, Delete (CRUD) requirement. Owners of the systems need to see where the request is at each stage of the process and act on it accordingly. The requestor also needs to see the progress of the request without needing to call or contact someone else.

Service Definition

[Figure: service definition]

Where do we start with the Service Catalog definition? Firstly, we would understand how the service is currently delivered. Perhaps a web form is completed, or an Excel spreadsheet is filled in and sent via email.

What happens next? Are additional systems involved in the fulfilment process?  If so what is the process?

These processes would be mapped into a flow chart, with each endpoint defined, to understand the data that needs to be collected.

Understanding and Automating the process

Once we have understood the requirements, we can build the service into our Service Catalog using automation.

For our new laptop service, there may still need to be some manual intervention in delivering the laptop and performing a migration of the data.  But perhaps not!

Imagine that an agreement has been set up with your laptop supplier to take orders placed through an API: you will send a URL carrying the answers given on a web form:

[Figure: new laptop request form]

When the user submits this form, their username is captured via their "domain name" login, along with their answers (all fields are mandatory). A script builds a payload from the answers, and the following URL is produced:

http://yourlaptopsupplier.com/api/newlaptop?payload={"companyname":"yourcompanyname","user":"probins","ram":"4gb","cpu":"i5","hdd":"100gb","screen":"15inch"}
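
A minimal sketch of how such a URL could be built from the captured answers (the names and endpoint are illustrative; in practice the JSON should be URL-encoded as shown, or ideally sent as an HTTP POST body rather than a query string):

// Build the order URL from the form answers.
var answers = { companyname: "yourcompanyname", user: "probins",
                ram: "4gb", cpu: "i5", hdd: "100gb", screen: "15inch" };
var url = "http://yourlaptopsupplier.com/api/newlaptop?payload=" +
          encodeURIComponent(JSON.stringify(answers));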

The laptop supplier receives this order in their ordering system via this API call from your system. They have already agreed with you the make and model of laptop to supply based on the values received via the system.

The supplier preloads your operating system image onto the laptop and ships it to your IT department.

The IT department receives a new "laptop supply ticket" for the requesting user ("probins"), set to status = pending delivery. This was achieved when the form was submitted via the following generated URL:

http://itdepartmenturl.yourcompanydomain/api/newlaptoprequest?payload={"user":"probins","dept":"Automation"}

This generated the ticket to supply the laptop when it arrives.

On arrival the IT department updates the ticket from status = pending to status = received.

This demonstrates the automation process in a simple way; a process like this can be triggered from a simple Service Catalog.

Additional Service Catalog Features

The list of additional Service Catalog features is almost endless; almost anything can be done through automation. One subject I have not touched on is adding authorisation to the process.

Services can be offered to all users of a system, to a group of users, or to only one user.

Services can also require authorisation from another user or a group of users.

In our example, this could prevent users from requesting new laptops every two weeks: a Request/Authorise process is put in place, with accountable staff placed in the authorisation cycle, as sketched below.
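
A minimal sketch of enforcing that ordering (the states and transitions are illustrative):

// Each state lists the states it may legally move to; fulfilment
// cannot be reached without passing through authorisation.
var transitions = {
    requested: ["authorised", "rejected"],
    authorised: ["ordered"],
    ordered: ["pending_delivery"],
    pending_delivery: ["received"]
};

function canMove(from, to) {
    return (transitions[from] || []).indexOf(to) !== -1;
}

canMove("requested", "ordered"); // false: the request must be authorised first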

Service Catalog Options

The list of options available to companies for Service Catalogs is quite small. As Cisco partners we can offer the powerful Cisco Prime Service Catalog to large enterprise customers looking for a large implementation of this type of system.

[Figure: Cisco Prime Service Catalog]

We have also written our own lightweight Service Catalog, called Simpla.

[Figure: Simpla front page]

Simpla is a lightweight, easy-to-use Service Catalog that can be configured simply. The system can be hosted by us or on your site, and we can have you running services to your end users within a couple of days. No single service should take more than a day to create, using simple drag-and-drop form creation and interfaces that anyone can use.

Get in touch with us at Metsi for a full demo.


Although UCS Director does not have a native, built-in capability for calling external REST APIs, creating that functionality through a UCS Director custom workflow task is quite simple.

We, at Metsi, have used this method many times to connect to external REST API based systems from within a workflow, creating seamless interaction with external systems.
This opens UCS Director up to become the central orchestrator within your infrastructure, able to interact with file or server based systems.
Metsi Datapump is a solution we built to interact with any system and return the data in JSON. We called it a Data Pump: this way we can get JSON into a Cloupia (UCS Director code) workflow, as sketched below.
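
A minimal sketch of the consuming end in Rhino-style JavaScript (the flavour Cloupia scripts use), reading JSON over HTTP via standard Java classes; the endpoint URL is illustrative and JSON.parse assumes a JSON-capable engine:

// Read the data pump's JSON response using java.net / java.io from Rhino.
var url = new java.net.URL("http://datapump.local/api/hostnames");
var reader = new java.io.BufferedReader(
    new java.io.InputStreamReader(url.openStream()));
var json = "";
var line;
while ((line = reader.readLine()) != null) {
    json += line;
}
reader.close();
var data = JSON.parse(json); // now usable by subsequent workflow tasks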

[Figure: Metsi Datapump architecture]

This diagram shows how we can get data into and out of UCS Director from REST API and/or file-based sources.

Making UCS Director do more for you is a key advantage of the product, and we have a long track record of doing this for many companies.

 

[Figure: Metsi Datapump spreadsheet-driven workflow]

Having an end-to-end, fully automated process driven by a simple spreadsheet is a great way to build out environments. This diagram shows how we can automate UCS Director from a spreadsheet (a sketch of the driving loop follows the list):

  • Read the Excel Sheet
  • Build a VM using the data in the sheet
  • Send a data payload to Puppet to get software installed onto the VM
  • Add the data for the VM into a MySQL database after it has been provisioned
  • Add data into a SQL Server Database table after the VM has been provisioned
  • Add the VM to a helpdesk based system (e.g. OSTicket) so that issues with it can be logged
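
A minimal sketch of the driving loop; the task wrappers here are hypothetical stand-ins for the workflow tasks, and spreadsheet parsing is omitted:

// Hypothetical task wrappers; each would invoke the corresponding workflow task.
function buildVm(row) { /* UCS Director VM provisioning task */ }
function notifyPuppet(row) { /* send payload to Puppet for software install */ }
function recordInDb(row) { /* INSERT into MySQL / SQL Server */ }
function createHelpdeskAsset(row) { /* create the OSTicket record */ }

// Assume each spreadsheet row has already been parsed into an object.
var rows = [{ name: "Win2012r204", os: "windows", ramGb: 8, cpus: 2 }];
rows.forEach(function (row) {
    buildVm(row);
    notifyPuppet(row);
    recordInDb(row);
    createHelpdeskAsset(row);
});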

 

This is just one example of a complex process, achievable using tried and tested methods, built up task by task in Cisco UCS Director.

 

[Figure: Metsi Datapump diagram]

With built-in PowerShell and SSH capability, the combinations and possibilities for automating any provisioning process are almost endless.

 

Get in touch with us to find out more or to arrange a demo.


Puppet: dynamic filebucket servers using dalen-puppetdbquery

What if you could make a quick DB query to see what the puppet master's FQDN is?

Ok, so site.pp has some static stuff in it about your puppet master and where your agent should stuff its files…

# /etc/puppetlabs/code/environments/production/manifests/site.pp
# Define filebucket 'main':
filebucket { 'main':
  server => 'puppetmaster.local', # <-- static server name :(
  path   => false,
}

When the agent runs and fails to find its filebucket (usually set to your puppet master) you get an error like this:

Error: /Stage[main]/Puppet_enterprise::Mcollective::Server/File[/etc/puppetlabs/mcollective/server.cfg]/content: change from {md5}73e68cfd79153a49de6f5721ab60657b to {md5}4a934b6c050e5196eaf71d87c81899a6 failed: Could not back up /etc/puppetlabs/mcollective/server.cfg: getaddrinfo: Name or service not known

Puppetlabs kindly documented this error here.

But what if you could make a quick DB query to see what the puppet master's FQDN is? This would be great because your site.pp wouldn't need to be updated per site!

So I grab the amazing dalen-puppetdbquery module from the Forge and use it to locate the master, and now my site.pp becomes…

# /etc/puppetlabs/code/environments/production/manifests/site.pp
# Define filebucket 'main':
$puppetmaster = query_nodes('Class["puppet_enterprise::profile::master"]')[0]
filebucket { 'main':
  server => $puppetmaster,
  path   => false,
}

The first element in the array is sure to be a puppet master (a node that was assigned the puppet_enterprise::profile::master class).

If you’re scaling multiple masters then perhaps tag one?

What is the Modular Layer 2 (ML2) Plugin?
It is a new Neutron core plugin in Havana. The Modular Layer 2 (ML2) plugin was designed to solve many of the limitations of the monolithic plugins.

  1. Modular
    1. Drivers for layer 2 network types and mechanisms interface with agents, hardware, and controllers.
    2. Service plugins and their drivers handle layer 3+.
  2. Works with existing L2 agents
    1. Open vSwitch
    2. Linux bridge
    3. Hyper-V
  3. Deprecates the existing monolithic plugins
    1. Open vSwitch
    2. Linux bridge

Setting up Neutron with the ML2 plugin
You can set up the ML2 plugin in your Neutron service using DevStack.

Architecture of the ML2 plugin
Viewed from the top, it is just a plugin that serves REST API requests and has methods registered in the form (create/delete/update/get)_(network/networks/subnet/port).
ML2 provides you with 6 types of TypeDriver:

  1. LocalTypeDriver – local
  2. VlanTypeDriver – vlan
  3. FlatTypeDriver – flat
  4. GreTypeDriver – gre
  5. VxlanTypeDriver – vxlan
  6. TunnelTypeDriver – tunnel

self.type_manager = managers.TypeManager()

The classes for these drivers are defined in the neutron/plugins/ml2/drivers/type_*.py files. The choice of TypeDriver is set in the /etc/neutron/plugins/ml2/ml2_conf.ini file, for example:

[ml2]
tenant_network_types = local,vlan

Hence, the self.type_manager will now contain the class of TypeDriver you have chosen.

self.mechanism_manager = managers.MechanismManager()

The way neutron-server selects the plugin you want to run is precisely the way the ML2 plugin selects its mechanism drivers. The choice of MechanismDriver is set in the /etc/neutron/plugins/ml2/ml2_conf.ini file, for example:

[ml2]
mechanism_drivers = mydriver

You have specified which mechanism driver you want to run, but how will ML2 understand what this "mydriver" means? "mydriver" needs an entry point, which is defined in the neutron/setup.cfg file under:

[neutron.ml2.mechanism_drivers]
mydriver = neutron.plugins.ml2.drivers.my_mech.my_mechanism:MyDriver
                                   

Now when we run "python setup.py install" in neutron, this entry point for the mechanism driver goes into neutron's egg-info, e.g. /usr/lib/python2.7/site-packages/neutron-2015.4.3-py2.7.egg-info/entry_points.txt. (Note: your Python version and the egg-info version may differ, so look for entry_points.txt on your machine.) There you will see how ML2 translates your TypeDriver and MechanismDriver aliases into the complete Python paths to the classes.

Here neutron.plugins.ml2.drivers.my_mech.my_mechanism is the path to neutron/plugins/ml2/drivers/my_mech/my_mechanism.py, the file that contains the class MyDriver.

self.type_manager.initialize()
self.mechanism_manager.initialize()

Your driver class must provide an initialization method where all the initialization is done. This is not the class's __init__ method; it contains the operations performed after the driver's __init__ and before any other operations on the driver.

Cisco Intercloud Fabric is an amazing new product aimed at customers looking to expand their infrastructure into the Public Cloud.

[Figure: Intercloud Fabric home screen]

The Cisco Intercloud Fabric system looks and feels very much like Cisco’s UCS Director product and is built upon the same Linux Virtual Appliance.

[Figure: Intercloud Fabric IcfCloud view]

The product can connect to Amazon's AWS EC2 service, Microsoft's Azure, and the growing list of Cisco Cloud Partners who will host your public cloud VMs; all of these appear as just part of your company's Layer 2 network.

Once the connection is made and you have extended your infrastructure into the public space, you can then move VMs to and from your local infrastructure using the Intercloud Fabric platform.

[Figure: Intercloud Fabric VM migration]

This will move the VM from your local vCenter to be hosted in Amazon AWS, Microsoft Azure or a Cisco Partner system, but the IP address remains the same, and to your customers nothing has changed.

Check out our live demo below, where we move a VM from private to public infrastructure while maintaining connectivity between two VMs, one hosted in each environment.

Cisco Intercloud Fabric – When to use it?

Decisions will need to be made as to when and how this technology should be used. Perhaps your business experiences peaks and troughs in resource demand: buying more compute, storage and network resources to deal with the peaks can become expensive if they are under-used during the troughs, and expanding them places demands on power, cooling and real estate.

Temporarily extending into the Public Cloud can be an excellent resource that can make a big difference to your business.

Hybrid Cloud

The hybrid cloud offers broader options than just Amazon or Azure, and more specific SLA agreements can be made with partners of Cisco: BT and Dimension Data both offer these services to businesses and can be a better alternative to Amazon's or Azure's offerings.

We will be adding to these pages in the coming weeks and months to fully explore the capability of Cisco Intercloud Fabric.

Cisco Intercloud Fabric Implementation

We, at Metsi Technologies, offer the service of implementing Cisco's Intercloud Fabric at both the business (customer) end and the service provider end. Get in touch if you would like to discuss this service with us.


UCS Director Virtual Data Center (vDC) is the term used to denote the collection of policies grouped together for a UCS Director group.

[Figures: UCS Director vDC definition screens]


The above graphic shows a Virtual Data Center called WordPress vDC T1, defined for the group T1; it pulls in the following policies:

  • System Policy (SCCM_Build)
  • Compute Policy (vcenter – WordPress Computing Policy)
  • Network Policy (Network Pol)
  • Storage Policy (Laptop Storage Pol)

(ignore the names of the policies – these are just those used in our lab)

The name of this section within UCSD, the Virtual Data Center (vDC), is not particularly well chosen. It should (in my opinion) be called a Group Service Delivery Policy, as it ties together the UCS Director policies that can then be used in the Catalog, which is presented to the end user in the end-user view.

UCS Director Catalog End User View

[Figure: UCS Director end-user catalog view]

The above shows the end-user view, with the created Catalog button showing the WordPress vCenter catalog item we created from the vDC entry.

UCS Director Implementation Specialists

We, at Metsi, have a long track record of Cisco UCS Director implementations; get in touch if you would like to discuss the product.


UCS Director Cloud Orchestration is a powerful platform, capable of managing your entire converged infrastructure.

[Figure: UCS Director converged view of the POD]

The above graphic shows a UCS Director connected to VMware's vCenter, Cisco's UCS platform, a number of Nexus 7K switches, a Nexus 1000V switch and 2x NetApp storage controllers. Once these connections are made (a successful connection is identified by the green lights), these devices and platforms can be managed directly from the UCS Director single pane of glass.

The platform ships as a virtual machine appliance and is capable of building out a virtualised infrastructure through either manual point-and-click or, more commonly, automation using the UCS Director orchestration engine.

UCS Director Orchestrator

The UCS Director orchestration engine allows for a scriptable solution to virtually any eventuality. The platform is built on JavaScript (Cloupia scripts run on Rhino, a server-side JavaScript engine that mixes Java and JavaScript). We, at Metsi, can create almost any integration with any other system using this code. The product also has SSH and PowerShell integration built in, giving the system huge scope for automation; we can make it work with almost anything else.

[Figure: Cloupia script example]

UCS Director Reporting

Although not intended as a pure reporting tool, UCS Director's reporting capabilities are impressive. (Note: the interface is also customisable to match your company's branding.)

[Figure: UCS Director reports]

[Figure: UCS Director topology view]

UCS Director and Policy Based Provisioning

UCS Director can provide a customer-facing front end where service requests for provisioning tasks are made and the functions behind those requests are automated. Provisioning a virtual machine can be done by the end user, while the decisions on how big it is, where to put it, who owns it and what it connects to are all pre-defined in policies by the IT architecture teams. This takes away the mundane tasks of actually creating machines and storage/network infrastructure, and puts control in the hands of the end user. It can be managed through budgetary control, giving the IT management team a much easier job of managing the infrastructure.

UCS Director Implementation

If you are looking for a team to help with a UCS Director implementation project, then Metsi Technologies has a long track record of success. Get in touch with us to discuss your requirements.


The Agent/Master Architecture

Puppet usually runs in an agent/master architecture, where a Puppet master server controls important configuration info and managed agent nodes request only their own configuration catalogs.

Basics

In this architecture, managed nodes run the Puppet agent application, usually as a background service. One or more servers run the Puppet master application, usually as a Rack application managed by a web server (like Apache with Passenger).

Periodically, Puppet agent will send facts to the Puppet master and request a catalog. The master will compile and return that node’s catalog, using several sources of information it has access to.

Once it receives a catalog, Puppet agent will apply it by checking each resource the catalog describes. If it finds any resources that are not in their desired state, it will make any changes necessary to correct them. (Or, in no-op mode, it will report on what changes would have been needed.)
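
For example, a one-off no-op run can be triggered from the command line with "puppet agent --test --noop", which reports the changes that would have been made without applying them.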

After applying the catalog, the agent will submit a report to the Puppet master.

About the Puppet Services

Puppet Master