Monday, 1 September 2014

Beyond virtualization - sandboxes, whales and containers

IT companies have challenges, and therefore IT people have challenges too - constantly staying ahead of the curve, saving costs, innovating, moving on to whatever comes next.
My last blog post is almost a year old, so I have to confess I have challenges too :-) It is time to dust off my blogging skills and immerse myself in something I find both intriguing and awesome.

Recently I came across an interesting article about the way we virtualize the OS stack and layer IT infrastructure in the most cost- and resource-effective way. I wrote about type 1 and type 2 hypervisors a long time ago, but although robust and stable, they don't bring much excitement anymore. Yes, you can automate them, assign terabytes of RAM and dozens of CPUs, and cloud the hell out of them. But in the end, not much has changed since last year.

Meet cgroups,
Cgroups were originally developed back in 2006 by engineers at Google as a way to group processes and assign them resources like CPU, memory and disk I/O through kernel-level prioritization and allocation.
If you are into application virtualization you basically get the idea: it is a way to separate applications and their dependencies into nice and cozy sandboxes where they can live and prosper and get resources based on their individual needs, without affecting the others too much. Cgroups are supported by every recent Linux distro.
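
To make the idea concrete, here is a minimal sketch, assuming a distro with the cgroup v1 hierarchy mounted under /sys/fs/cgroup and run as root; the "webapp" group name and the PID are made up:

# create a cgroup called "webapp" in the memory and cpu controllers
mkdir /sys/fs/cgroup/memory/webapp
mkdir /sys/fs/cgroup/cpu/webapp

# cap the group at 256 MB of RAM and roughly half a CPU core
echo $((256 * 1024 * 1024)) > /sys/fs/cgroup/memory/webapp/memory.limit_in_bytes
echo 50000  > /sys/fs/cgroup/cpu/webapp/cpu.cfs_quota_us
echo 100000 > /sys/fs/cgroup/cpu/webapp/cpu.cfs_period_us

# put an already running process (PID 1234 here) into the group
echo 1234 > /sys/fs/cgroup/memory/webapp/tasks
echo 1234 > /sys/fs/cgroup/cpu/webapp/tasks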

Meet docker,
Building on the cgroups mechanism, Docker lets you automate application deployment and resource allocation, manage the whole thing through various APIs, and integrate it into wider cloud computing infrastructures like OpenStack Nova.
Docker is an open-source automation tool with a very interesting concept: you manage dockerized applications through the shell, or manage a whole Docker-enabled Linux host through a GUI (Shipyard).
There are already thousands of dockerized apps and servers, like Tomcat, JBoss, Apache and others, available and built by a very lively community.
You pull the app from an Internet repository, customize it to your needs and save it offline for your own use. Once done you can move the application, copy it to another host or even another Linux distribution, with all the dependencies contained within the package.
In addition, a containerized application can coexist with other containers on a single Linux machine and share resources. But instead of the usual guest VM penalty of all the OS files and resource demands you need to carry in every VM (several GBs), an application container consumes only the resources used by the application and its dependencies. That's where it starts being awesome, and wait, there's more awesome coming.
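
To give you a taste, here is a minimal sketch of that pull-customize-move workflow on a Docker-enabled host using the stock docker command line; the tomcat image is a real community image, while the "mytomcat" name and the port mapping are just examples:

docker pull tomcat                                  # pull the app from the public registry
docker run -d -p 8080:8080 --name web tomcat        # run it as a container, publish port 8080
docker commit web mytomcat:customized               # save your customized container as a new image
docker save -o mytomcat.tar mytomcat:customized     # export the image to a tar file
# copy mytomcat.tar to another host (scp, USB stick, whatever) and load it there
docker load -i mytomcat.tar
docker run -d -p 8080:8080 mytomcat:customized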
You start thinking: yeah, I can have Red Hat ES running Docker, with my containerized apps on top. I can copy containers from my test machine to a production machine without redoing all the work. But how does this challenge classic bare-metal virtualization with all its HA, vMotion and so on?
So what if someone took a small, solid Linux distribution like Chrome OS and stripped it to the bone so it takes no more than 200 MB of disk space, supports clustering and live migration, and has centralized patching and application updates the way Google does it?

Meet CoreOS,

CoreOS is a stripped-down Linux distribution based on Chrome OS. It has all the nice things from Chrome OS, like built-in support for centralized mass-scale patching, a super small footprint and a read-only root partition. Actually it has two root partitions (the same way servers have two BIOSes), so in case something goes terribly wrong during patching you boot up from the secondary one.
In addition, CoreOS supports clustering: fleet schedules workloads across hosts while etcd manages the communication between the hosts in the cluster, so you can run several instances of the same application, elect a master server, and rely on a voting algorithm in case your master dies.
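
As a rough sketch of how that feels in practice (the unit file name is made up, it reuses the hypothetical "mytomcat" image from above, and it assumes the image is available on every host), you hand fleet a systemd unit template and let it place the instances for you:

cat > webapp@.service <<'EOF'
[Unit]
Description=Example dockerized web app

[Service]
ExecStart=/usr/bin/docker run --name web%i -p 8080:8080 mytomcat:customized
ExecStop=/usr/bin/docker stop web%i

[X-Fleet]
Conflicts=webapp@*.service
EOF

fleetctl submit webapp@.service
fleetctl start webapp@1.service webapp@2.service   # schedule two instances somewhere in the cluster
fleetctl list-units                                # see which CoreOS host each instance landed on
etcdctl ls / --recursive                           # peek at the shared etcd keyspace used for coordination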

Docker runs on top of the Linux kernel; an app container uses only a fraction of the resources compared to standard bare-metal virtualization

CoreOS has built-in high availability clustering and resource distribution


Wrapping up - this technology is emerging, and being open source with a great community it will, in my view, almost certainly challenge classic virtualization technologies. With backing from vendors like eBay, Rackspace, Google and others, I can't wait to see more.

Wednesday, 27 November 2013

Licensing Microsoft Windows 2012 on vSphere

Recently I was investigating the changes in licensing of Microsoft's latest Windows 2012 OS. As usual, Microsoft licensing is somewhat "foggy" and definitely not a straightforward exercise, and dealing with ESXi and Windows VMs has never been an easy task. However, it is simpler than it seems at first look. Given the lack of simplified overviews on the Internet, I have tried to explain it to the best of my knowledge, so here is the deal...

Windows Editions

Since Windows 2000 there have always been three editions of the server operating system - Standard, Enterprise and Datacenter - each limiting the HW resources available for use.

With Windows 2012 we are losing the Enterprise edition, and Standard and Datacenter are, function-wise, equal. That means you can run clustering services on Standard and address the same amount of CPU or RAM with both editions. When it comes to running the new Windows on a hypervisor, though, things get different.


POSE and VOSE

Sound like Nordic twins, don't they? :-) These are the names/terminology used for physical and virtual operating system environments. You have to license each physical CPU with a Windows Server license; Microsoft refers to this as licensing the POSE (physical OS environment), which enables you to run VOSEs (virtual OS environments).

Basically, by assigning a Windows license to ESXi (licensing the POSE) or any other hypervisor, you get the right to run virtualized Microsoft Windows workloads (VOSEs) on the same host.
When purchasing Windows 2012 Standard you always get a single license covering 2 physical CPUs and 2 virtual OS instances. For example, on an HP DL380 G8 with 2 CPUs, a single Windows 2012 Standard license covers you on the POSE level and gives you the right to run two virtual instances on that physical host.
With the Datacenter edition all of the above is true, and in addition you are entitled to run an unlimited number of virtual instances.

Things get more complicated once you try to license a 4-CPU server like the HP DL580 G7 or HP BL685c G7, where a single Windows 2012 license covers only half of the CPUs. In this scenario you have to purchase two licenses, which of course gives you 4 virtual instances in the case of Standard and (surprise) still an unlimited number of virtual servers in the case of Datacenter.
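
If you want to sanity-check the math, the rule of thumb boils down to a few lines of shell (a rough sketch; the socket count is just an example):

sockets=4                           # e.g. an HP DL580 G7 with 4 physical CPUs
licenses=$(( (sockets + 1) / 2 ))   # each Windows 2012 license covers 2 physical CPUs
vose=$(( licenses * 2 ))            # Standard edition: 2 virtual instances per license
echo "Standard: $licenses licenses, $vose VOSEs (Datacenter: $licenses licenses, unlimited VOSEs)"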

Licenses cannot be dynamically transferred across ESXi hosts, but you are allowed to reallocate POSE licenses to another hypervisor server every 90 days. As long as you have enough VOSE licenses (or slots) available, you can also vMotion VMs across hosts. The picture below describes this scenario: you are moving all 4 VMs off Host 1; since Host 2 and Host 3 are each assigned three POSE licenses, they are entitled to run 6 VMs each, but both are running only 4 VMs and so have 2 empty slots to cater for the vMotion demand coming from Host 1.

Windows Server 2012 Licensing in virtual environments

You can check http://download.microsoft.com/download/3/D/4/3D42BDC2-6725-4B29-B75A-A5B04179958B/WindowsServer2012VirtualTech_VLBrief.pdf for more details.

Sunday, 24 March 2013

Virtualization Matrix Posted

Ever wondered how to get all the virtualization platforms compared against each other? Andreas Groth did a remarkable job starting a virtualization matrix this year. Recently he added Red Hat Enterprise Virtualization RHEV 3.1 (KVM based) to the bunch.

The current version contains a comparison of platforms from VMware (vSphere 5.1), Red Hat (RHEV 3.1), Microsoft (Hyper-V 2012) and Citrix (XenServer 6.1).

Check it out.

Tuesday, 8 January 2013

VMware vSphere - Basic building blocks of internal cloud Part 2 (Managing your cloud)

In Part 1 we ran through the basic building blocks of your virtual infrastructure. These are essential and crucial pieces of technology that help you create a reliable and flexible environment.

We still haven't covered how all of this can be managed. There are several options, and with vSphere 5 the options are even greater, although we've lost the ESX console. You remember... ESXi doesn't have one.

Let me show you a basic diagram... and don't get scared by the complexity, there are far more complex drawings out there :-)

I tried to pencil out the whole solution as described in Part 1 and show what kind of management tools you can use to manage every tier of your private cloud.

With the vSphere 5 product line there is an improved web access portal available right after installation of vCenter. You can find yours at the default URL https://vcentername:9443. The current version is more capable than the previous one included in vCenter 4.1, and you can do many basic tasks right away.

The most conventional and probably also most used option is the vSphere client connected to a vCenter server. This is a feature-rich and very intuitive client-server application. It traditionally requires a Windows-based vCenter server, but with version 5 we can now install a cost-effective Linux-based option instead. VMware calls it the vCenter Server Appliance; there is no need for additional Windows licenses, you just download and deploy the appliance. For those who, for whatever reason, prefer a Windows-free environment, this could be the way forward.

With the vSphere client you can perform all kinds of operations on datacenters, clusters, ESX hosts and virtual machines: set up clusters, initiate vMotion, configure DRS, assign datastores, set up alarms and of course deploy virtual machines and appliances (from OVF files or other sources). In addition, vCenter can be enriched with plugins, such as the Update Manager plugin for interaction with the VMware Update Manager patching solution, the Operations Manager plugin for capacity and performance reporting and management, or 3rd-party plugins like vKernel's vOPS. The vSphere client is also used to set up permissions and authentication options on an ESX or vCenter server.

For multiple vCenter servers there is an option to create linked-mode vCenters, where you can join them into a hierarchy similar to a domain and have one tree of all of your datacenters and clusters with less administration overhead.

Less convenient but very flexible is PowerCLI. This is VMware's addition to the standard PowerShell framework, and with it you can do all kinds of "en masse" tasks which are usually hard or tedious to perform otherwise. Imagine you need to change the multipathing policy on 100+ SAN datastores, or obtain ESX CPU, memory and NIC info for multiple clusters connected to the same vCenter. That is quite a complicated task to perform through the client. On top of PowerCLI you can run PowerGUI, a free 3rd-party tool with additional powerpacks for VMware, so you can do less scripting and more queries and reporting right from the graphical interface. I talk a bit more about this brilliant tool in my previous post.

Another option for performing tasks through the command line is esxcli. You can compare it to PowerCLI, but it uses native VMware command-line syntax, lets you execute various configuration commands, and is meant for managing ESX(i) directly.
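
For a flavour of the syntax, here are a few commands from the 5.x esxcli namespace, including the multipathing change mentioned above done for a single device (the naa identifier is just a placeholder):

# show ESXi version and build
esxcli system version get
# list devices and their current multipathing configuration
esxcli storage nmp device list
# switch one device (placeholder ID) to the Round Robin path selection policy
esxcli storage nmp device set --device naa.60a98000572d54724a34642d71325763 --psp VMW_PSP_RR
# physical NIC inventory
esxcli network nic list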

In the picture above you may notice an additional "management" console called VMware VDR (VMware Data Recovery). It is not a management tool for ESX or vCenter elements; rather, it is used for managing backups of virtual machines. With the basic installation you already have the possibility to leverage VCB (VMware Consolidated Backup), which you can use to back up whole virtual machine files. You can download VCB from VMware for free, but the going-forward strategy is to use VDR as it is easier to manage and maintain. VDR is a downloadable appliance in the form of an OVF file, so you can easily deploy it within your infrastructure and utilize native ESX(i) functionality like snapshots and clones to create one centralized store of your backed-up VMs.

Monday, 31 December 2012

VMware vSphere - Basic building blocks of internal cloud Part 1


Articles like this usually start with some basic terminology, so let me start with the context of what it takes to build a cloud internally. Companies which invested in some sort of virtualization technology years ago didn't know they were building private clouds; this hyped buzzword came later.
They just realized that they could get more flexibility and higher server density by abstracting the HW layer from the actual x86 machine through an additional virtualization layer (the hypervisor).

Does it make sense to you? No? OK, let's look at the big picture.

HW abstraction of type 1 (bare metal) hypervisor

Looking at the picture, you can see that the actual components of the physical HW are not directly accessible to the operating systems (there are scenarios where paravirtualization is used and this is not completely true, but we will get to that later). Instead, virtual HW is presented to the OS and mapped to the physical HW by the hypervisor. This way you can easily add, remove and modify virtual HW without physically touching the server in your server room, even without downtime for certain OSes like Windows 2003, Windows 2008 Enterprise or SLES 11.

VMware did a great job bringing this to x86 platforms. I say "bringing" because virtualization, high availability and resource scheduling were here long before VMware; IBM and mainframe-class machines in general have had their own implementations since the 70s. The important thing is that x86 is the most available and cost-effective platform out there, with a huge portfolio of machines and configurations. You can check hundreds of vendors and their compatibility with VMware hypervisors, giving you a plethora of options regarding your choice of virtualization platform.


So much for the history. VMware offers two enterprise-class hypervisors, both type 1, which means they are installed on bare-metal HW without any requirement for a host operating system.

ESX - once the mainstream platform with a full console, currently being replaced by the stripped-down version called ESXi; the last available version was 4.1 U3, and newer vSphere releases no longer include ESX.

ESXi - a hypervisor with a very small installation footprint and no console on the server itself, designed to be managed through infrastructure management tools like vCenter server, PowerCLI etc. The current version is 5.1.

Both versions fully support 64-bit and can be combined.

You may wonder what is so magical about it. You have your HW abstracted, you can install 2, 3, 4, maybe 5 independent OSes on one physical server. So what? You are just putting all your eggs into one basket: if your server fails, it will drag all the guest OSes down with it.
That's where VMware clustering kicks in.

VMware cluster

All available VMware vSphere versions allow you to create basic high-availability clusters.
vSphere is the umbrella name for all the functions of VMware's enterprise virtualization products. There is a comparison available so you can pick the correct licensing model.

What you can expect from a VMware cluster is that if one of your ESX(i) hosts fails, all its guest OSes will be restarted on the remaining ESX(i) machines still available in the same cluster. Internal resource management (DRS) will also, according to your preference, try to equalize usage across the cluster nodes.

That's it. 

To create a cluster you need to have certain prerequisites fulfilled.

1.) You need shared storage (iSCSI, Fibre Channel, NAS, NFS, FCoE) so your hosts can see the same storage at all times.

2.) The HW must be the same model and configuration; there are some native functions which can help with HW compatibility, but in a real-life scenario you should prefer exactly the same HW, from CPU to NIC to memory size.

Depending on your license you will have several clustering functions at hand.

vMotion - lets you move your virtual machines around within the same cluster; it transfers the VM and the content of its memory (memory state) to a different ESX(i) without an outage.

svMotion - does the same, but for all the configuration and virtual disk files. You can, for example, free up an over-utilized datastore and move the machine to a less used one.

HA - in case of an ESX(i) failure it will restart the guest VMs on another ESX(i) in the same cluster. There is a whole science behind how the nodes in a cluster detect a failure, but this article is called basic building blocks so no advanced terms here yet :) There is a brilliant book available describing the whole magic.

DRS - monitors resource usage in real time and, when needed, moves (vMotions) a machine from one ESX(i) to another. Very useful for keeping the same level of utilization on all cluster nodes.

FT - fault tolerance. Even though HA takes care of restarting guest VMs on the available ESX(i) machines in case of a failure, it still means downtime for those VMs (the machine needs to boot up). FT goes further: once turned on, it creates a linked shadow VM so that in case of a failure there is no outage at all. The CPU operations are shadowed on the secondary VM, which is immediately available.

dvSwitch - distributed network switch. All networking in the cluster is virtualized: you can have two or more physical NICs connected to the same virtual switch (for redundancy and bandwidth), and a vSwitch is set up per ESX(i), allowing you to have several port groups serving several VLANs. Each guest VM can have multiple virtual NICs connected to multiple VLANs. What the dvSwitch does is extend the vSwitch configuration further, across multiple ESX(i) hosts within the same cluster.

storage API - if supported by your SAN vendor and model, you can offload specific operations (usually demanding heavy CPU usage) to the storage array and spare some CPU cycles. For example, storage vMotion can be fully performed on the array, with ESX(i) issuing an XCOPY command to move the virtual machine's vmdk files from one datastore (LUN) to another. A quick way to check what your array actually supports is shown below.
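
As a sketch, assuming ESXi 5.x and a placeholder device identifier, you can check which of these offload primitives a device supports straight from the ESX(i) shell:

# list the VAAI (storage API) primitives supported for a given device
esxcli storage core device vaai status get --device naa.60a98000572d54724a34642d71325763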

There are some other functions of varying value, but the ones I described are the basic building blocks of every virtual infrastructure built on the VMware vSphere product.

I will focus on management options in Part 2

Saturday, 29 December 2012

PowerGUI - the Swiss army knife of every administrator

Every once in a while you, as an administrator, face the problem of combining data from vCenter, be it virtual machine HW configuration or ESX(i) machine inventory. Now imagine you have multiple vCenters and you want to see combined data for all virtual machines plus their host and cluster relationships, including the subnets they are in. Or you want to list all machines with RDMs and whether the RDM mode is physical or virtual.

Let me introduce you to Power GUI.

PowerGUI is a free tool (or framework, to be more precise) from Quest (now a subsidiary of Dell) which creates a front-end PowerShell interface for lazy admins like me. The basic idea is to have one common interface grouping all objects of interest together, enabling you to easily run reports (CSV, HTML) or scripted tasks on any of them.

In addition to the standard installation you can download power packs directly from the PowerGUI interface. Power packs are bundles of scripts customized for a certain technology, for instance Active Directory, networking or VMware.

You can start playing around with queries after you add at least one host or vCenter and its credentials into the inventory.

If you checked the VMware add-on during installation you can now start listing and querying your virtual infrastructure for data.

I would like to point out one particular VMware powerpack called the "VMware Community PowerPack". This one can run special predefined queries, focused on the most common day-to-day administration tasks, against all connected vCenters. You can check for alarms and ESX(i) HA slot availability, list RDM drives, see current cluster CPU %Ready values, etc.

You can export the results into CSV files and work with the data further in Excel spreadsheets. I usually gather raw data from PowerGUI and then do some VLOOKUP magic to put the data together, creating more complex inventories and reports.

Each PowerShell query you are about to run can be reviewed prior to execution. If you are a more PowerShell-capable person you can even run the console in authoring mode and edit/customize the scripts yourself.

I hope you will find this useful and that this little summary will help you battle your everyday challenges as a VMware administrator.

Download PowerGUI - for the VMware add-on you need to have VMware PowerCLI installed, from the link below
VMware PowerCLI