Xtravirt Blog

We are committed to sharing Virtualisation knowledge. The Xtravirt Blog is the place where we share our thoughts, opinions, ideas and technology


VMware Horizon - What am I buying?


Once upon a time (in the mid 2000s), VMware decided to delve into the End User Compute market space, taking on the juggernauts of Citrix and Microsoft by releasing VMware Virtual Desktop Manager (VDM). This evolved into VMware View in 2008, and over the following eight years further re-branding exercises resulted in the release of the VMware Horizon Suite. Our plucky VDI software evolved into VMware Horizon (with View), then Horizon View with the release of version 6.0, and finally 'Horizon 7' in version 7, with the name 'View' retained as a legacy name for the VDM components.

So, as a customer, you've decided to invest in VMware Horizon. These days, Horizon is much more than simply a means to deliver a desktop to a user. With the modern realisation that the user isn't really after a desktop (they just want their applications), Horizon is now, more than ever, a suite of products that allows you to deliver applications in a multitude of ways. This article aims to provide you with a quick primer on these products and how they fit together. For the sake of simplicity, it sticks to the on-premises offerings, largely as these are still the most common 'in the wild', though Cloud-based elements, such as AirWatch, are gaining ground.

The Shopping List

Let’s start by exploring the suite from a component level.  The suite comprises the following products*:
VMware Horizon (the View component) – Standard, Advanced, Enterprise
Delivers Virtual Desktops as either dedicated VMs or via Remote Desktop Session Hosts (RDSH). The RDSH capability is also used to deliver Published Applications (though not at the Standard tier).

VMware ThinApp – Standard, Advanced, Enterprise
This is what is known as Application Virtualisation. An application is re-packaged into a self-contained executable that can run on a Windows PC (physical or virtual). Its self-contained nature means it is essentially sand-boxed, with minimal dependency on the surrounding operating system (required DLLs, configuration etc. are bundled inside).

VMware Identity Manager – Standard, Advanced, Enterprise
Provides user-facing authentication and single sign-on (SSO) capabilities, as well as an application portal for presenting Virtual Desktops, Remote Applications, ThinApp packages and web-based applications. The SSO element can provide immediate access to web-based applications if these are capable of SAML-based authentication. Beyond Horizon, Identity Manager is available in an Advanced version and as part of AirWatch; these provide additional features, notably Mobile Device Management capabilities such as device configuration and remote wipe.

VMware App Volumes – Enterprise
Provides the means to deliver defined stacks of applications (AppStacks) to virtual desktops or RDSH servers instantaneously, using shared virtual disks. It is quite agnostic, and can be purchased and deployed with other VDI solutions such as Citrix XenDesktop.

VMware User Environment Manager – Enterprise
Used to manage the user's Windows profile: the user's environmental and application settings. The Windows profile is notoriously large and unreliable, especially in a roaming context; UEM provides a means to streamline and centrally manage it, so improving log-on times and reliability. As with App Volumes, it can be purchased and used outside of Horizon, including with physical devices, where its central management of settings is particularly useful. If you're using Standard or Advanced, the View part of Horizon includes the older Persona Management product, which uses a compressed version of the Windows profile to speed up the delivery of Roaming Profiles to virtual desktops.

VMware Mirage – Advanced, Enterprise
Provides a layered-image-based desktop management solution. Although it can be used with Virtual Machines, it is primarily aimed at managing physical Windows PCs in a corporate environment. Its abilities include in-place operating system upgrades and the delivery of sets of applications layered on top of the base image, which makes it quite attractive for Windows XP/Vista migrations, for example. VMware FLEX is based on VMware Mirage, combining its technology with VMware's desktop hypervisors (Workstation for the PC and Fusion for the Apple Mac) to deliver centrally managed, encrypted offline Virtual Desktops. It is predominantly intended for BYOD or field staff who require a corporate desktop in a roaming context where connectivity to Horizon might be difficult. It's not part of the Horizon Suite and is sold separately.

* The components you're entitled to depend on the level of licensing you've purchased.

The Horizon Suite also includes VMware vSphere for Desktops. This is functionally equivalent to vSphere Enterprise Plus in terms of features, but is licensed per desktop. The idea is that it hosts the published desktops, while the Horizon infrastructure servers are hosted in the corporate server estate.
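To make the entitlements above easier to reason about, here is the table condensed into a small Python mapping. This is purely an illustrative summary of this article's table, not an official licensing reference; always confirm against VMware's current licensing documentation.

```python
# Which Horizon Suite components come with each on-premises license tier,
# as summarised in the table above. Note the extra nuance that Published
# Applications via RDSH are not available at the Standard tier.
SUITE_COMPONENTS = {
    "Horizon (View)": {"Standard", "Advanced", "Enterprise"},
    "ThinApp": {"Standard", "Advanced", "Enterprise"},
    "Identity Manager": {"Standard", "Advanced", "Enterprise"},
    "App Volumes": {"Enterprise"},
    "User Environment Manager": {"Enterprise"},
    "Mirage": {"Advanced", "Enterprise"},
}

def entitled_components(tier: str) -> list[str]:
    """Return the suite components included at a given license tier."""
    return sorted(p for p, tiers in SUITE_COMPONENTS.items() if tier in tiers)

print(entitled_components("Advanced"))
```

Running this for "Advanced" lists everything except App Volumes and User Environment Manager, which is why the Enterprise tier is the one usually discussed for a full application-delivery stack.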

How it all hangs together…

The following diagram shows how the platform holds together as a single-site solution. It is a somewhat simplified example that omits resilient components, firewalls and load balancers, and concentrates on Internet-based access.


We can see how our user initially connects from their device to Identity Manager. The user's device and location can define how they authenticate; for example, access from outside might need two-factor authentication, whereas an internal user might just need an Active Directory user ID and password. Once authenticated, the user can see the applications they are entitled to.

Let's first look at a Cloud-based application such as Salesforce. If the user selects this, Identity Manager (having already been configured with a SAML relationship to Salesforce) passes the user authentication as a token to the user's browser accessing the Salesforce application, without them needing to log on a second time.

When considering an application or desktop presented by Horizon View, the request is passed to the Connection Servers. These will initiate a session to an available desktop (either VDI or RDSH, depending on what is requested); in the case of VDI desktops, these can be provisioned as Linked Clones using Composer. As the VDI session is logged on, the App Volumes Agent detects the credentials and App Volumes mounts an entitled AppStack containing the required application. The user's Windows profile is provided via UEM. For internet-based users, the View session is tunnelled out via the Security Server/Appliance in the DMZ; internal users connect directly to their session (though this can be configured to tunnel via the Connection Server).

ThinApp-packaged applications may well be deployed to the Horizon desktop (possibly inside the App Volumes AppStack), but as an alternative, if the Workspace ONE client is installed, Identity Manager can stream a selected entitled package directly to the user's Windows laptop/desktop, straight from the Identity Manager portal (this isn't shown above). Alternatively, VMware Mirage can be used to manage and protect our PCs, whether within the network or outside (via the security gateway in the DMZ).

Closing Thoughts

The Horizon suite has quite a few elements, each delivering different pieces of the puzzle - even in a basic configuration.  However, in concert, the solution is quite seamless to the end user.  The single point of access and authentication becomes Identity Manager, with this presenting all the entitlements desired through a single pane of glass.


  If you’d like to learn more about VMware Horizon and how to use it to deliver applications to end users, please contact us and we’d be happy to use our wealth of knowledge and experience to assist you. 

About the Author

Curtis Brown joined the Xtravirt consulting team in October 2012. His specialist areas include End User Compute solutions and Virtual Infrastructure design and implementation with particular strength in VDI, storage integration, backup and Disaster Recovery design/implementation. He is a VMware vExpert 2017.

Single Sign On: The VDI Challenge


Authentication can be quite a complex beast, with different applications using their own authentication methods. As more and more applications come into use across a business, authentication turns into a headache for both support and end users. Of course, as a key security mechanism, authentication is something of a necessary evil. Fortunately, there are mechanisms that, when implemented, allow for the exchange of security tokens. SAML (Security Assertion Markup Language) is an XML-based open standard that provides such a mechanism. It works on the principle of three roles:
  • The Principal – usually the user credentials
  • Identity Provider (IdP) – an identity source
  • Service Provider (SP) – a service requiring authentication
One Single Sign-On (SSO) solution is VMware Identity Manager (vIDM), which serves as a central point of authentication.

On one hand, it is configured with authentication sources. This could be basic password sign-on or a more advanced solution, for instance a two-factor token solution such as SecurID or another RADIUS solution. The relationship between the authenticator and vIDM may vary; for example, password authentication against Active Directory uses either a service account or the membership of the vIDM appliance in Active Directory.

On the other hand, integration is configured with the applications to be served by the SSO solution. SAML is used to provide the configuration and the trust relationship: vIDM serves as the IdP, with the applications as Service Providers.

In between, vIDM leverages a configured directory that maps users (or groups) to entitled applications, and provides a workspace portal that presents these entitled applications to authenticated user sessions. One of these applications is Horizon View, where both remote applications and desktops can be presented to users via vIDM's workspace.
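As a rough illustration of the three roles, here is a toy Python sketch of the assertion exchange. Real SAML uses signed XML assertions and certificate-based trust between IdP and SP; the shared HMAC secret and field names below are simplifications for the purpose of showing the shape of the flow, not the actual protocol.

```python
import hashlib
import hmac
import json

# Stands in for the certificate trust configured between IdP and SP.
TRUST_KEY = b"idp-sp-shared-secret"

def idp_issue_assertion(principal: str) -> dict:
    """Identity Provider: authenticate the principal, issue a signed assertion."""
    payload = json.dumps({"subject": principal, "issuer": "vIDM"})
    signature = hmac.new(TRUST_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def sp_accept(assertion: dict) -> str:
    """Service Provider: verify the signature, then trust the asserted identity."""
    expected = hmac.new(TRUST_KEY, assertion["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["signature"]):
        raise PermissionError("assertion not issued by a trusted IdP")
    return json.loads(assertion["payload"])["subject"]

token = idp_issue_assertion("jbloggs")
print(sp_accept(token))  # the SP accepts the user without ever seeing a password
```

The key point mirrored here is that the SP never handles the user's credentials; it only checks that the assertion came from an IdP it trusts.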



Authentication between vIDM and Horizon View

Horizon View uses a SAML relationship for authentication via vIDM: Horizon is the SP and vIDM is the IdP. During the Directory configuration of vIDM in particular, the distinguishedName attribute needs to be included, as this is required for the Horizon SAML token exchange. Setting up the relationship occurs on both sides. In brief: in VMware Identity Manager's Administration console (under Catalog > Manage Desktop Applications > View Application), View Pools must be enabled, with the appropriate connection settings and credentials entered. Oh, and remember to accept the certificates if they aren't already trusted!


In Horizon View Administrator, you will need to set up an Authenticator on a Connection Server. If you choose the setting 'Required', it will only accept authentication from vIDM; by choosing 'Allowed' as your setting, you retain the ability to log on directly to View.

This establishes the two-way connectivity, at which point a synchronization should be carried out on the vIDM side. This pulls a list of applications and their entitlements from View into vIDM, and it would be beneficial to set a regular schedule for this. Note that Horizon View resources are entitled and managed by View and are only presented via vIDM; hence the need to synchronize.

Once the entitlements are synchronized from Horizon View, a user can log into vIDM and select a Horizon View resource. This action generates a SAML token that is passed to Horizon View, authenticating into View automatically, which in turn establishes the desktop or application session.

Where TrueSSO fits in…

Unfortunately, not all authenticators are built equally. In the simplest scenario, where the authenticator is a simple Active Directory user name and password, the SAML token passes from the vIDM session to the Horizon View Connection Server complete with all the required user attributes, including the password.

However, for some third-party IdPs, the user is authenticated at the third party, with the resultant session established with vIDM via the trust mechanism with the IdP, and no user password passing through. From a security perspective, this is a good thing; however, there is an issue with respect to Horizon View. In this scenario, if a user clicks on a View resource, say a desktop, the user will be prompted for a password. This is because the token from the vIDM session to Horizon View needs a password that vIDM doesn't have. The user password is required not by View itself (as the SAML token is trusted) but by the guest operating system of the published desktop. Of course, this request rather defeats the point of Single Sign-On.

However, all is not lost! In VMware Horizon View 7, an additional feature was added: TrueSSO. In a nutshell, TrueSSO intervenes in a password-less login process by providing a trusted session certificate to the desktop host during the Windows logon, in lieu of the password. TrueSSO is based around three elements:
  • Horizon View Connection Servers – these still handle the logon and authentication process, even when handed off from vIDM.  If the vIDM token lacks a password, it checks the requested domain and passes the request to…
  • Enrollment Servers – these compile a request from the Horizon Solution for a session based certificate that is passed to…
  • Certificate Authority – an Active Directory based certificate authority is configured with an authentication (SmartCard) template.  The Enrollment Server requests the creation of a certificate from the template, which is then used during the logon sequence onto the View desktop instead of a password.
Multiple Enrollment Servers can be used for resilience and load balancing, configured as Active/Passive or Active/Active. The Enrollment Server application can also be installed on the CA if this is desirable. The VMware Horizon View 7 documentation has a particularly nice diagram describing this architecture.
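The hand-off between the three elements can be sketched as follows. Every class and method name here is a hypothetical illustration of the flow described above, not a VMware API.

```python
# Toy sketch of the TrueSSO hand-off: if the SAML token from vIDM carries no
# password, the Connection Server obtains a session certificate instead.
class CertificateAuthority:
    def issue_from_template(self, user: str) -> str:
        # Stands in for an AD CA issuing a short-lived SmartCard-template cert.
        return f"cert-for-{user}"

class EnrollmentServer:
    def __init__(self, ca: CertificateAuthority):
        self.ca = ca

    def request_session_certificate(self, user: str) -> str:
        return self.ca.issue_from_template(user)

class ConnectionServer:
    def __init__(self, enrollment: EnrollmentServer):
        self.enrollment = enrollment

    def logon(self, saml_token: dict) -> str:
        user = saml_token["subject"]
        if saml_token.get("password"):
            # Simple case: password travelled inside the SAML token.
            return f"{user} logged on with password"
        # TrueSSO case: no password in the token, so use a session certificate
        # for the Windows logon instead.
        cert = self.enrollment.request_session_certificate(user)
        return f"{user} logged on with {cert}"

cs = ConnectionServer(EnrollmentServer(CertificateAuthority()))
print(cs.logon({"subject": "jbloggs"}))  # token from a third-party IdP, no password
```

The branch in `logon` is the whole idea: the guest OS still gets a credential it can accept, it just isn't a password.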


Don't forget (as I did the first time) to enable "Suppress Password Popup" on the vIDM Admin page for View Pools! The net result is that a user in a configured domain can log on to a Horizon View resource via vIDM, regardless of identity source.

And in the real world …

I recently worked on an engagement where the customer wanted to use Azure Active Directory as an SSO solution, but leverage VMware Identity Manager for publishing Horizon View-based applications. Azure was configured using Active Directory Federation Services to federate with the existing on-premises Active Directory (into which the Horizon View solution was deployed). The purpose of using Azure was to provide smartphone-based two-factor authentication for external users, as well as access into SharePoint and other applications. All rather slick.

Unfortunately, while Azure could be used as a SAML IdP allowing Single Sign-On into VMware Identity Manager, the user session SAML token didn't contain a password. By configuring TrueSSO to request certificates from the customer's domain CA, we could provide end-to-end single sign-on, from an Azure log on right through to the desktop.
  If you’re interested in finding out more about how Xtravirt can assist with deploying a VDI solution, please contact us and we’d be happy to use our wealth of knowledge and experience to assist you.  

About the Author

Curtis Brown joined the Xtravirt consulting team in October 2012. His specialist areas include End User Compute solutions and Virtual Infrastructure design and implementation with particular strength in VDI, storage integration, backup and Disaster Recovery design/implementation. He is a VMware vExpert 2016.

The rise of SDN: A practitioner's deep dive into VMware NSX


Introduction

The hype about Software Defined Networking (SDN) has been around for years, during which time the technology has rapidly matured, leading to fast-growing adoption. So what is it all about? In this article I'll share a bird's-eye view of software-based networking, and in particular VMware® NSX. The areas covered are:
  • What is SDN?
  • Origins of NSX
  • Why virtualise the network?
  • Traditional network challenges
  • How does NSX achieve network virtualisation?
  • Features and architecture
What is SDN?

For those who have managed to avoid the marketing, SDN enables networking and security functionality traditionally handled by hardware equipment (e.g. network switches, firewalls, load balancers) to be performed in software. This process is often called 'Network Virtualisation' and involves the abstraction of networking from its physical hardware. VMware NSX is a network virtualisation platform that VMware hope will fundamentally transform the data centre's network operational model, just like the server virtualisation 'bonanza' did over 10 years ago. In the eyes of VMware, this will continue the organisational march to realising the full potential of the Software Defined Data Centre (SDDC, a term coined by VMware a number of years ago). As part of this drive, NSX is now integrated into the later versions of VMware's vSphere hypervisor.

Origins of NSX

So where did NSX come from? Back in July 2012, VMware purchased the Palo Alto-based company Nicira for $1.2 billion. This represented the most expensive acquisition made by VMware (and most likely will be for a long time yet), showing how serious they were, and are, about their SDDC vision. Nicira were founded in 2007 and had a modest(ish) customer base of mainly enterprise clients. Whilst their focus was on network virtualisation and developing SDN products, most were for non-VMware and open-source platforms. NSX was developed from Nicira's existing NVP software.

Why virtualise the network?

By automating and simplifying many of the processes that go into running a data centre, network virtualisation helps organisations achieve major advances in simplicity, speed, agility and security. With this approach to the network, organisations can:
  • Achieve greater operational efficiency by automating processes
  • Improve network security within the data centre
  • Place and move workloads independently of physical network activity

Traditional network challenges

Networking teams respond to requests for change from the business with manual and often complex provisioning of hardware devices, software and configuration, usually performed by an engineer with specific knowledge of the particular technology. This can create bottlenecks in provisioning new networks and applying network-related changes, along with introducing the possibility of configuration errors and even outages due to human factors.

To explain by way of example: imagine a new application is to be developed. It requires isolated test, development and production environments to support it, each environment has its own web tier in a DMZ, all need access to a central application library and code repository… and the deadline is yesterday. The virtualisation team have provisioned the virtual server resource and passed it over to the network team. Now what? The team manually provision new physical networks? Access ports? Trunk ports? VLANs? Firewall rules? Is there capacity on existing switches? Where is routing going to occur? Where will the DMZ be placed? And who's got the skills and time to do it? … "This is complex… let us get back to you."

You can see how this type of commonly repeated scenario can lead to a number of other challenges. What happens if a workload needs to move from one host or resource pool to another, for example due to maintenance? Can the new destination support the networks the VM is currently part of? Will this require a change of IP address? Will that be in the right rule base? And please don't tell us you need to move a VM from test to production. This traditional networking approach is static, inflexible and produces silos, and the management overhead is increased further by the sprawl of VLANs and firewall rule sets.

Years of server virtualisation means that IT infrastructure teams can now respond more quickly to these types of business challenge; in fact, they may well have a lot of their processes automated, perhaps in provisioning new workloads or remediating issues, with integration into service desk and change management systems. So is there any chance the networking can follow suit? It's certainly slower and more complex with the traditional approach. The ability to rapidly provision, update and decommission networks (including DMZs) in an agile, lower-risk and highly available way can be achieved through network virtualisation, and is a major use case for VMware NSX. Add to that massively reduced management overhead, the ability to automate these processes and even integrate them with existing systems, and it makes for a very compelling conversation.

Security and routing are other networking challenges. Traditionally it has been the 'Castle' approach, where IT has secured the data centre perimeter, often achieved by having a powerful hardware device at the edge (or multiple devices if a DMZ is required), sporting large throughput capacity, firewalling and L3 routing capability, with maybe some anti-virus or intrusion detection. I see a couple of issues with that approach. Firstly, performance is a concern, causing potential choke points and inefficiency as each workload's networking may be subjected to 'hair-pinning': the process whereby a network packet travels from the source machine all the way up to the edge device, is processed, and is then set off on the return journey back down to the destination (even if the destination is located on the same physical host, by the way).

As more and more of the data centre is defined in software, the shift to east-west traffic is huge. A simple example: virtual machine A, on internal network 10.1.1.x and running on DC host 1, wants to communicate over TCP port 80 with virtual machine B, on internal network 10.1.2.x and running on DC host 2. This is traffic that never needs to leave the data centre, unlike north-south traffic, so it should not have to add extra processing at the edge device and slow down its own round-trip time.
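That classification can be illustrated with a few lines of Python using the addresses from the example above (the network ranges are purely illustrative):

```python
import ipaddress

# Both endpoints in the example sit on internal data-centre networks, so the
# flow between them is east-west and never needs the edge device.
INTERNAL_NETWORKS = [
    ipaddress.ip_network("10.1.1.0/24"),  # network of virtual machine A
    ipaddress.ip_network("10.1.2.0/24"),  # network of virtual machine B
]

def is_internal(address: str) -> bool:
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in INTERNAL_NETWORKS)

def traffic_direction(src: str, dst: str) -> str:
    """Classify a flow as east-west (stays inside the DC) or north-south."""
    return "east-west" if is_internal(src) and is_internal(dst) else "north-south"

print(traffic_direction("10.1.1.10", "10.1.2.20"))    # VM A to VM B over port 80
print(traffic_direction("10.1.1.10", "203.0.113.5"))  # out to the internet
```

Only the second flow genuinely needs to traverse the perimeter; hair-pinning forces the first one up to the edge anyway.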

And what happens if a threat slips through the cracks (some malware on a VDI machine, for example)? How is that contained? With the Castle model, once a threat is inside it can roam around freely, connecting to as much as it can. Attempts to counter this may involve software ("personal") firewalls on each VM, individual rules for every single machine on the perimeter device, or even physical firewalls between each machine. None of those options is scalable or manageable, and the latter is nonsensical for your average organisation.

Creating granular security for network segments at layer 2 instead of layer 3 not only allows IT to increase network efficiency and reduce chatter, but also introduces firewalling to east-west traffic, establishing a much more secure zero-trust operating model for networking (even within the same VLAN). Security can be deployed at the VM, at vNIC level. To stick with the earlier analogy, we have now created the 'hotel' model. This process is called Micro-Segmentation, and is another major use case driving NSX adoption.

How does NSX achieve network virtualisation?

VMware NSX applies network virtualisation to the physical network, much like a hypervisor does for compute. This allows software-based networks to be created, managed and deleted.
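As an aside, the default-deny, per-vNIC rule evaluation at the heart of micro-segmentation can be sketched as a toy rule table. The VM names, ports and rule fields here are made up for illustration; a real distributed firewall matches on far richer criteria.

```python
# A toy zero-trust rule table: anything not explicitly allowed is denied,
# even between machines on the same VLAN.
RULES = [
    {"src": "web-vm", "dst": "app-vm", "port": 8080, "action": "allow"},
    {"src": "app-vm", "dst": "db-vm",  "port": 5432, "action": "allow"},
]

def evaluate(src: str, dst: str, port: int) -> str:
    """Return the action for a flow, falling through to default deny."""
    for rule in RULES:
        if (rule["src"], rule["dst"], rule["port"]) == (src, dst, port):
            return rule["action"]
    return "deny"  # default deny: the zero-trust posture

print(evaluate("web-vm", "app-vm", 8080))  # allowed tier-to-tier flow
print(evaluate("web-vm", "db-vm", 5432))   # web tier may not reach the database
```

The point of the sketch is the fall-through: lateral movement between the web and database tiers is blocked by default, rather than requiring an explicit block rule at the perimeter.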

When defining networks in software (or logically), we provide an overlay that decouples the virtual plane from the underlying hardware, rendering the hardware a network backplane: merely a vehicle for traffic to travel on. This introduces the potential to extend the life of hardware, or reduce the cost of its replacement, as the intelligence transitions into software.

  VMware NSX builds upon vSphere vSwitch technology, adding:
  • Encapsulation techniques at the hypervisor level
  • New vSphere kernel modules for VXLAN
  • Distributed logical routing and distributed firewall services, together with edge gateway appliances to handle north-south traffic routing
  • Advanced services such as load balancing
The example in the diagram above shows VMs (green and blue) on the same host but on different networks. Using NSX virtual networking services (2 x NSX vSwitches and 1 x NSX distributed logical router, to be precise), the VMs can communicate with one another without a single frame leaving the host.
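Since VXLAN encapsulation does the heavy lifting for these overlay networks, here is a simplified sketch of what it adds to a frame: an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), per RFC 7348. The outer UDP/IP headers and real frame parsing are omitted; this only shows the header layout.

```python
import struct

VXLAN_FLAGS = 0x08  # the "I" bit set: the VNI field is valid

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Wrap an L2 frame in a simplified 8-byte VXLAN header."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    # flags (1 byte) + reserved (3 bytes) + VNI (3 bytes) + reserved (1 byte)
    header = struct.pack("!B3s3sB", VXLAN_FLAGS, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

def vxlan_decapsulate(packet: bytes) -> tuple[int, bytes]:
    """Recover the VNI and the original frame from an encapsulated packet."""
    vni = int.from_bytes(packet[4:7], "big")
    return vni, packet[8:]

frame = b"\xaa" * 14  # placeholder for an Ethernet frame
packet = vxlan_encapsulate(frame, vni=5001)
print(vxlan_decapsulate(packet) == (5001, frame))
```

The VNI is what lets thousands of isolated logical segments share one physical underlay, far beyond the 4096-VLAN limit.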

The diagram above shows how virtual machines on different hosts can communicate through NSX distributed logical switches even when the underlying physical network is not configured for it. In this example, the network team would not have to do anything.

Features

NSX allows networking functions previously defined in hardware to be realised virtually, including logical switching and routing, firewalling, load balancing and VPN services. NSX also provides a REST API for integration with additional network/security products and cloud platforms.

Architecture

For those looking to realise network virtualisation, it is important to understand that in the case of NSX there is no single component that will 'switch on' network virtualisation. As shown in the architecture overview below, a number of integrated technologies work together to enable its introduction. Organisations that have already invested in VMware infrastructure are at an obvious advantage: NSX data plane components are already embedded into the latest versions of vSphere at the hypervisor level, providing the base for distributed virtual switching and edge services, along with L2 bridging to physical networks. Control plane components are introduced on top, facilitating the use of logical networks, and VMware vCenter is a prerequisite for the NSX management plane. At the top of the stack, NSX integrates with the vRealize suite for cloud operations, automation and self-service.

Deployment

Deploying NSX can be straightforward with the right planning and design, and it can be installed on top of any network hardware. Note the direct relationship between vCenter and NSX Manager.

To maximise the technology, the real focus and effort comes with tight integration into an environment, whether upgrading an existing one or planning a greenfield site. NSX is a powerful platform, which can drive complexity when considering configuration and customisation and their associated elements and policies.

Summary

NSX is changing the face of network virtualisation, and with benefits already being realised at the enterprise customer level, indications are that its adoption will continue to grow as organisations understand the benefits of network virtualisation. A well-engineered physical network will always be an important part of the infrastructure, but virtualisation makes it even better by simplifying the configuration, making it more scalable and enabling rapid deployment of network services. Businesses are exploring NSX and network virtualisation because they are able to achieve:
  • Significant reduction in network provisioning time
  • Greater operational efficiency through automation
  • Improved network security within the data centre
  • Increased flexibility and agility
The use cases for NSX are moving from 'presentation world' to the real world, and most major innovations VMware are working on rely on NSX for the virtualisation of the network. With the rise of containers and isolated application environments, the micro-segmentation use case will become even more prevalent as it delivers a fundamentally more secure data centre.

SDN has also created an interesting dynamic for the SysAdmin. Does the virtualisation expert become a networking expert as well? Or does it fall into the network engineer's domain? We're already seeing a transition to the former, and a further evolution of IT support roles in general, so it will be interesting to see how this develops.

Xtravirt are the experts in NSX solution design and integration and can help deliver an accelerated and non-disruptive SDN transformation for your organisation. To find out more about how we can work with you, contact us today.

About the Author

Andy Hine joined Xtravirt in August 2015 as a Technical Pre-Sales Consultant. He has over 15 years' experience in IT across various industries and technologies. He has been involved in many transformation projects, architecting and enabling solutions in IT infrastructure and systems management, EUC/application delivery, virtualisation and cloud transition. Andy has a wide array of technical skills, mainly focused on VMware, Citrix and Microsoft technologies.

Horizon View App Publishing & Custom Icons


The Problem…

So, here's the problem: you're presenting applications in Horizon View, and instead of using a generic icon you want to use your own unique application icons. For example, you've got an application that runs from a batch script, or a web application that you want to launch using Internet Explorer via RDS (perhaps some legacy intranet application). If you deploy the application in View using an Application Pool, there's no obvious way of pulling in a custom icon. You end up with a generic icon; for a web application using Internet Explorer, you get the Internet Explorer icon, as per below (let's say a MS Certificate Services portal in our example).


Functionally, when you set up a pool, Horizon View pulls the icon image from the application executable described in the Application Pool configuration. So how, I hear you cry, can we put a custom icon onto the application? Having tried pointing the Application Pool at a Windows shortcut file with little success, I came up with this. I'm not saying it's the only method, but it works quite well.

The fix…

Firstly, using our example scenario from above, let’s create a batch file to launch the application.
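As a minimal sketch (the browser path and URL are placeholders standing in for our hypothetical Certificate Services portal):

```bat
@echo off
rem Launch Internet Explorer pointed at the internal web application.
rem The path and URL are placeholders - substitute your own.
start "" "C:\Program Files\Internet Explorer\iexplore.exe" "https://certsrv.corp.example.com/certsrv"
```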


For our example, this will launch Internet Explorer and point it at our web site URL. It could equally be a batch script with a custom configuration for application X. Next, we need to use a tool to compile an executable from the batch file (a bat-to-exe conversion). I used this one (http://www.f2ko.de/en/b2e.php) as it allows you to inject an icon in the form of an ICO file, as well as run the application silently. The icon file is up to you, but you can convert images to icons using this conversion tool: http://converticon.com/. I chose a particularly timely example, as I’m writing this at Halloween.


This will provide us with our executable version of the batch file.  Put this executable in a location with appropriate file permissions to allow a user to read/execute the binary. Now publish the application in Horizon View as an application pool, as required.


And, through the joys of modern technology, we now have our application with the correct icon.  Oh, the horror!


 In our case, as the executable is set to run hidden, the user simply sees the Internet Explorer browser with our website.


So, although a bit of a bind, at least we have a way around this minor aesthetic issue with RDS published applications in VMware Horizon View. If you’re interested in finding out more about how Xtravirt can assist with deploying a VMware Horizon solution, please contact us and we’d be happy to use our wealth of knowledge and experience to assist you.

About the Author

Curtis Brown joined the Xtravirt consulting team in October 2012. His specialist areas include End User Compute solutions and Virtual Infrastructure design and implementation, with particular strength in VDI, storage integration, backup and Disaster Recovery design/implementation. He is a VMware vExpert 2016.

Welcome to Windows Server 2016

As we approach the end of the year, Microsoft have released the latest version of their server-side flavour of their Operating System offering – Windows Server 2016.  It’s been three years since the release of Windows Server 2012R2.  Let’s take … [More]

As we approach the end of the year, Microsoft have released the latest version of the server-side flavour of their operating system: Windows Server 2016. It’s been three years since the release of Windows Server 2012 R2. Let’s take a look at some details in this new version.


Editions and Licensing

As before, we have Datacentre and Standard editions – the former is now aimed specifically at “highly virtualised datacentre and cloud environments” while the latter is intended for physical servers. The Datacentre edition’s additional features, above and beyond Standard, emphasise this cloud prioritisation:
  • Shielded VMs
  • Software defined networking
  • Storage Spaces Direct
  • Storage Replica
In addition, a Standard edition license covers you for two “Operating System Environments” (OSEs – Windows instances) or Hyper-V containers, while Datacentre is unlimited. There are some additional variants:
  • Essentials replaces the old Foundation release aimed at small (25 user / 50 devices) businesses
  • MultiPoint Premium Server is a specific edition for Remote Desktop access and is only available to Academic licensees – The MultiPoint Premium Server role is included in Standard and Datacentre, requiring Server CALs and RDS CALs as before
  • Storage Server is an OEM release for Windows based storage solutions
  • Hyper-V 2016 – the free, Hypervisor only offering continues (remember to license your guests though…)
The big news for Datacentre and Standard is that licensing has moved to a per-core, rather than per-socket, model (all other editions remain socket based). All cores on a physical host must be licensed, with a minimum of 16 core licenses per server and a minimum of 8 core licenses per physical processor. Core licenses are sold in 2-core packs, so the minimum purchase is effectively 8 x 2-core packs. Microsoft state that this will be priced equivalently to a 2-CPU Windows Server 2012 R2 license. Beware though: if you’ve purchased a new 2-socket box with a pair of high core count Intel Xeons, this could look quite pricey. Take a server with two Intel Xeon E5-2699 v4 processors – at 22 cores per CPU, that’s 44 cores, so straight away you’re looking at 22 x 2-core packs, the equivalent of buying three CPU licenses of Windows Server 2012 R2. Draw your own conclusions. One note: if you have an existing Software Assurance agreement, core based licensing only kicks in when the agreement is renewed – at renewal you’ll be granted a minimum of 8 cores per processor and 16 cores per server for each 2-processor license.
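To make the arithmetic concrete, here is a small illustrative calculator for the pack count – a sketch based on the rules above, not an official licensing tool:

```python
def core_license_packs(sockets: int, cores_per_socket: int) -> int:
    """Windows Server 2016 core licensing: a minimum of 8 licensed cores
    per processor and 16 per server, sold in 2-core packs."""
    per_socket = max(cores_per_socket, 8)        # 8-core minimum per processor
    total_cores = max(sockets * per_socket, 16)  # 16-core minimum per server
    return (total_cores + 1) // 2                # round up to whole 2-core packs

# Two Intel Xeon E5-2699 v4 CPUs (22 cores each) -> 44 cores -> 22 packs
print(core_license_packs(2, 22))  # 22
```

A 2-socket quad-core box, by contrast, still pays for the 16-core minimum (8 packs) – which is why Microsoft can claim price parity with a 2-CPU 2012 R2 license for smaller servers.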

New Toys!

So, now that the pain point of licensing is out of the way, let’s take a look at some of the new features mentioned above.

Shielded VMs

This is a security mechanism that allows administrators to provide a means to secure individual VMs.  It leverages a Guardian service that stores keys which an approved Hyper-V 2016 host uses to prove its authorisation to run shielded VMs.  Hyper-V 2016 uses Trusted Platform Module (TPM) and UEFI on start-up to ensure it is healthy and provides confirmation of its identity when presenting itself to the Guardian service.  If all is well, the Guardian issues a certificate to the host enabling it to run the Shielded VM.  The VM itself is encrypted (using BitLocker backed by vTPM) and uses a hardened VM worker process of the host that encrypts all state related content, checkpoints, replicas and migration traffic.  The VM also has no console access, including VM external features such as Guest File Copy, PowerShell integration or direct administrative permission to the guest OS.

Software defined networking

Leveraging technology from Azure, Windows Server 2016 networking has gained the ability to deploy policies providing QoS, isolation, load balancing and DNS (amongst others). This ability is provided through network virtualisation handled by VXLAN based micro-segmentation, much in the same way as VMware NSX. All this is possible due to the implementation of a new installable Network Controller component.  This manages firewalling (vSwitch port all the way to datacentre), Fabric management (IP subnets, VLANs, L2/L3 switching), network monitoring and topology discovery, L4 load balancing and RAS gateway management.

Software Defined Storage

Storage Spaces Direct leverages local storage to create a converged storage architecture, somewhat similar to VMware VSAN.  Like VSAN, it’s primarily aimed at storage for virtualisation. Resiliency to drive failures etc. is configurable by volume type, supporting mirroring (performance) and erasure coding (efficiency).  Furthermore, hybrid volumes combine these techniques into a single volume with an added ability of automatic storage tiering.

Storage Replica

Storage Replica offers a built in synchronous replication solution for business continuity and DR.


Containers

Windows Server 2016 now provides the means to deploy applications in containers, in keeping with the current trend towards a DevOps model. Developers can package applications and deploy them as containers. Containers come in two flavours: Windows Server and Hyper-V. A Windows Server container is broadly the same as a Linux one – the application itself is containerised, with its own view of the host OS. Hyper-V containers are more virtualisation driven, with the container including an operating system; this leverages hardware virtualisation and completely isolates the container from the host OS. Windows Server containers, being somewhat smaller and less resource intensive, scale more efficiently, but Hyper-V containers are more isolated and secure. In addition, the Windows 10 Professional and Enterprise Anniversary Editions both support containers, allowing developers to create containers on their workstations and deploy them to Windows Server 2016.
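To make the distinction concrete: on a Windows Server 2016 container host, the isolation mode is chosen per container at run time via Docker’s isolation flag. A hedged sketch – the base image name is an assumption; use whichever Windows base image you have pulled:

```powershell
# Windows Server container: process isolation, shares the host kernel
docker run --isolation=process microsoft/windowsservercore cmd /c ver

# Hyper-V container: wrapped in a lightweight utility VM with its own kernel
docker run --isolation=hyperv microsoft/windowsservercore cmd /c ver
```

The same image runs in either mode; only the isolation boundary changes.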

Nano Servers

Nano Server is a Windows Server 2016 deployment option that provides the smallest possible footprint Windows Server installation. It is so small that it runs headless, with no GUI, taking Server Core to the next level. It’s designed specifically for cloud workloads and specific use cases (including containers). Such a small install reduces the attack surface, improving security whilst reducing the patching and support overhead. Nano isn’t simply selected as an installation option – deployment requires customisation of the image for a variety of reasons, not least defining device drivers, as it lacks user-mode plug and play.

Closing Thoughts…

We’ve only scratched the surface of the new features of Windows Server 2016. Many of these are quite attractive, even when expanding beyond the Microsoft world. I can see Nano in particular being an interesting option in a VMware vSphere platform for application delivery, perhaps as part of a vRealize Automation solution. Of course, time will tell how successful these new features are – network virtualisation, for example, will need to compete with the offerings of traditional networking players such as Cisco, as well as software solutions such as VMware NSX. Licensing is a question mark of its own, with implications for most customers, including those running VMware vSphere. I’m looking forward to seeing how Windows Server 2016 is received by the marketplace and how it develops.

About the Author

Curtis Brown joined the Xtravirt consulting team in October 2012. His specialist areas include End User Compute solutions and Virtual Infrastructure design and implementation with particular strength in VDI, storage integration, backup and Disaster Recovery design/implementation. He is a VMware vExpert 2016.

A view of VMworld 2016 from a first timer

Xtravirt provided me with the opportunity to attend VMworld Europe this year which was held at the Gran Via conference centre in Barcelona. Being my first time attending the event, I didn’t know what to expect, however I had heard … [More]

Xtravirt provided me with the opportunity to attend VMworld Europe this year, which was held at the Gran Via conference centre in Barcelona. Being my first time attending the event, I didn’t know what to expect; however, I had heard a lot about the event and the sessions taking place over the 4 days, as well as a few anticipated big announcements. The feedback from my colleagues about VMworld has always been positive, so I was intrigued to say the least. In this blog I aim to give you an insight into my VMworld experience.

The Sessions

During my VMworld experience, I attended various sessions on SDDC, vRA, vROps, NSX and vCloud, and took part in some Hands-on Labs as well. There were also some great Expert Panel sessions where you got the chance to interact with experts in a specific product or technology. The quality of some of these sessions for a non-technical audience was spot on, especially sessions such as Experience the Business Impact of IT Innovation & Transformation, Business Value of Data Centre Virtualisation and Hybrid Cloud Extensibility. The general sessions took place every morning, and that’s where a lot of the new announcements happened. I personally feel that the biggest announcement was VMware Cloud on AWS. I think it’s a win-win situation for both companies, as VMware are market leaders in SDDC and AWS are market leaders in public cloud. It’s due to be released in May 2017. Pat Gelsinger, CEO of VMware, also introduced some new features coming in the vSphere 6.5 and VSAN 6.5 releases, which I’m looking forward to finding out more about.

Solutions Exchange

The Solutions Exchange features the latest products and solutions from over 130 sponsors and exhibitors. People may also know it better as the goodies area! It’s a huge hall full of booths and vendor representatives who are happy to go through their products with you. This is also where you get a chance to demo some of the products, and it helped me to compare a few options that are actually available out there. There was also the chance to enter a number of prize draws, where some booths/vendors even gave away Barca vs Man City tickets for a match taking place during the event’s 4 days. I tried my luck a few times at different booths, however unfortunately didn’t win – maybe next time.

Your Schedule

There is a lot happening at VMworld, so it makes a difference to prepare your schedule before heading there using the tools provided by the event organisers. You still have the option to add other sessions once there, but it’s best to be prepared beforehand. It’s also worth heading to the Breakout Sessions. These are opened up to all attendees five minutes before the designated start time – but be warned, they fill up quickly.

Networking

As with many events of this type, VMworld is a great opportunity to network with peers, industry leaders, tech providers and vendors. The meal times, aside from a chance to pick up some great food, were a good time to meet up with other attendees and gave me a chance to get to know new people, socialise and expand my peer network amongst those all with a common interest. The VMworld party hosted by VMware was held a couple of days into the event and provided another opportunity to get to know other attendees and enjoy some great entertainment. This year it featured a great musical performance by Empire of the Sun, with the opening act by FACE-TIC, one of the biggest names in Barcelona’s DJ scene. The vendor parties held every evening were also a great way to network with colleagues and meet other IT industry representatives. Overall, I enjoyed my VMworld 2016 experience and I cannot thank Xtravirt enough for the opportunity. Well done to the VMworld team on a fantastic event!

About the Author

Saurabh Chandratre joined the Xtravirt consulting team in March 2015. His specialist areas include VMware Virtualisation product sets, Microsoft Server Technologies and solutions and EMC storage technologies.

Designing Horizon FLEX

Following on from my previous blog post which explained what VMware® Horizon FLEX is, I thought it would be worthwhile pulling together my thoughts on what you need to consider when building your own VMware Horizon FLEX estate. Before you … [More]

Following on from my previous blog post which explained what VMware® Horizon FLEX is, I thought it would be worthwhile pulling together my thoughts on what you need to consider when building your own VMware Horizon FLEX estate. Before you start spinning up servers, you need to consider a few factors.


FLEX works on the concept of a web server solution that can dish out virtual desktops and then manage them. As such, the FLEX management server needs to be accessible over the internet (even if only intermittently from the user perspective), either directly (which in many ways is more flexible – pun intended), or potentially via an endpoint based VPN. This diagram shows the components of FLEX.

FLEX is an extension of VMware Horizon Mirage. The first thing you deploy in a Mirage solution is the Mirage Management Server and its SQL database, and in a FLEX solution this is just the same. In later iterations, this Management server also includes a MongoDB database. This is only used for Mirage – if you’re not using Mirage itself, the 250GB space requirement can be ignored, though putting it on a separate disk is recommended as a precaution. With the later releases of Mirage, a couple of Mirage Management servers should be deployed for resilience (ideally tucked behind a load balanced VIP), though FLEX is more tolerant of downtime, so this might be something to consider.

The FLEX server itself is actually a component of the VMware Horizon Mirage Web Management Server. This installer isn’t particularly selective, being an all-or-nothing affair. Again, for resilience, load balancing a pair is recommended, though a single node can support thousands of endpoints. In an internet facing deployment, as would usually be the case, VMware recommend deploying the solution behind a reverse proxy to protect it. Failing that, one option is to deploy FLEX servers in the DMZ, manually deleting the sites related to Mirage management, leaving only the RVM server (Horizon FLEX), and blocking the sub-folders from public access. It’s not bullet-proof, but it’s a limitation of the software design.

Images are hosted on a simple HTTP web server. When a user connects to FLEX from the client and downloads an entitled desktop, the client is directed to the appropriate image on the web server for download. There’s little security risk, as the only things hosted here are the images. These are encrypted anyway, but are otherwise un-configured with respect to Active Directory or user credentials until they are completely downloaded and policy applied. An interesting option is to use a cloud provider to host these; this could even be used to distribute the image across different geographies for a global deployment. It’s recommended, at least from a load perspective (but probably wise from a security perspective too), that the image store isn’t on the same server as the FLEX server in production.

Mirage itself can be leveraged to protect the Windows OS within the deployed desktop. This involves deploying the agent in the VM. Architecting Mirage is an extension of what we’ve already deployed – adding Mirage servers with their storage behind a load balancer. If the VM has a VPN connection into the corporate network, you won’t necessarily need to have this part internet facing. However, if you choose to, the Mirage Gateway can be deployed in the DMZ. Note that Mirage now uses a MongoDB database deployed on the Management server. This is used as a caching mechanism for small, frequently used files, so ensure that this database is on fast solid state storage. Each Management server should be provided with 250GB or more on a dedicated volume for maximum efficiency.

What are you deploying and who is using it?

Consider the target use cases and the image you’re deploying. If you’re talking about users on corporate machines downloading a desktop, you might have a VPN solution on the endpoint, so you might have a user connecting over VPN, then connecting to the FLEX server and using the image. In a BYOD context, you’ll want to consider VPN within the deployed VM.

When a VM is first downloaded off-site using the FLEX client, it’s literally the raw (albeit encrypted) template image. The FLEX management server carries out an offline domain join for the VM assigned to the user. In turn, FLEX generates a blob file that, along with other configuration information, contains the unique Windows client information, including the client side domain join information. This is injected into the template, allowing the VM to configure itself on start-up. At this point, the user can log on with their AD credentials.

This does pose another issue though – how does the user authenticate their AD credentials on a newly domain joined desktop when they’re offsite? One solution is to deploy a Read Only Domain Controller that can be connected to by the client. An alternative is to look at VPN solutions, either in-guest or on the endpoint. In-guest, as a solution, requires that the VPN be initiated on machine start-up so the user can authenticate at logon. Microsoft DirectAccess can achieve this (and can be configured by FLEX as part of the entitlement), although third party solutions may also work.
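FLEX automates all of this for you; purely for illustration, the mechanism is similar in spirit to Windows offline domain join, which can be performed manually with djoin.exe. A hedged sketch – the domain, machine and file names are invented placeholders, and this is not the literal FLEX workflow:

```powershell
# Step 1 (online, by an admin with rights to create computer accounts):
# pre-provision the machine account and capture the join metadata as a blob.
djoin /provision /domain corp.example.com /machine FLEX-DESK01 /savefile C:\temp\odj-blob.txt

# Step 2 (on the offline client): inject the blob so the domain join
# completes locally, with no line of sight to a domain controller.
djoin /requestODJ /loadfile C:\temp\odj-blob.txt /windowspath C:\Windows /localos
```

The machine completes the join on next boot; actually authenticating the user still needs a reachable domain controller or cached credentials, which is where the RODC and VPN options above come in.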

Closing Thoughts…

On the surface, it’s a pretty simple product; however, it’s not without complications, and these lie largely in how to deploy it in an existing environment. In the modern day, where wireless connectivity is relatively commonplace, VDI is often an easier solution to deploy, manage and secure than any offline desktop solution. However, there are still some use cases where an offline desktop is preferable, and FLEX fits the bill nicely. If you’re interested in VMware Horizon FLEX, want to know more, or require assistance in developing a FLEX solution, please contact us and we’d be happy to use our wealth of knowledge and experience to assist you.

About the Author

Curtis Brown joined the Xtravirt consulting team in October 2012. His specialist areas include End User Compute solutions and Virtual Infrastructure design and implementation with particular strength in VDI, storage integration, backup and Disaster Recovery design/implementation. He is a VMware vExpert 2016.

Configuring Group Policy Security Filtering

Group Policy is a powerful and essential feature set within Active Directory. It provides the centralized management and configuration of operating systems, applications, and users’ settings in an Active Directory environment.

Group Policy is a powerful and essential feature set within Active Directory. It provides the centralized management and configuration of operating systems, applications, and users' settings in an Active Directory environment. The standard way of targeting Group Policy is by linking Group Policy Objects (GPOs) to Sites, Domains or Organisational Units (OUs). This is fine for many scenarios, but it does have some drawbacks. A key one is when you have a large number of users or computers in one OU (e.g. a “London Users” OU) but you want to apply a specific GPO to only a subset of those users. There are a couple of ways to do this:
  1. Security Filtering
  2. WMI Filtering
In this blog I focus on configuring group policy security filtering and explain how Security Filtering works and some of the considerations, specifically about whether to remove the “Authenticated Users” group.

Understanding Security Filtering

First things first – ensure the GPO is in scope

In order for Security Filtering to have any relevance, the GPO must first be in scope. What this means is that the GPO must be linked to a Site, Domain or OU that contains the user or computer object you want the setting to apply (or not apply) to. Remember, if you have a user who is a member of a group named “Sales Users” and the Sales Users group is in the OU, this has no impact on the GPO scope. GPOs only apply if the actual user or computer account itself is in the OU.

GPO Permissions

In order for GPOs to apply to the user or computer, the user or computer must have 2 permissions on the GPO:
  • Read
  • Apply Group Policy
By default, the Authenticated Users group is present on a GPO, and it has both Read and Apply Group Policy rights. Don’t be confused by the name – ‘Authenticated Users’ is not just user accounts; it covers all authenticated objects within the domain, which includes both user and computer accounts. Computer accounts are like users in many ways, in that they have a username and password associated with them, and they actually log in to the domain when they boot up and establish a secure channel. This means that, by default, all users and computers within scope will get the GPO applied.

Stopping certain users or computers from getting a GPO applied

In order to stop a user or computer (that is within scope) from getting a policy, you simply remove the rights on the GPO for that user or computer. Let’s say we had a GPO linked to the “London Users” OU which locked down the desktop, and we wanted this to apply to all users except for the “IT Support team” whose user accounts are also in the “London Users” OU. We would do the following:
  1. Create a security group for the “IT Support Team”
  2. Add this group to the GPO and DENY it permissions from applying the Group Policy (leave Read enabled)
This would ensure that everyone except the IT Support Team would get the locked down settings. The reason I only denied “Apply group policy” but left Read enabled is that if I denied the Read permission, and one of the IT Support Team users was also one of the people who administers Group Policy, they would lose the right to read the GPO within the Group Policy Management Console – they would see a message saying it is inaccessible.


Removing the “Apply group policy” is all I need to do to stop the policy applying. Be aware that deny takes precedence over any allow. Also be aware that we are dealing with GPO permissions on a GPO by GPO basis, so if you deny a group from having rights on one GPO, that won’t affect their rights on others.

Why you should never remove the Authenticated Users Group

Until June 2016, many people would remove the Authenticated Users group and just grant permissions to, for example, the specific user accounts they wanted the policy to apply to. However, in June 2016 Microsoft released a security patch that changed the way group policy filtering works: computers now process the GPO under the computer’s security context. What this means in practice is that the computer the policy setting is processed on, even if it’s a user setting, must have the rights to read the GPO. Therefore, it’s highly recommended that the Authenticated Users group retains Read permissions on all GPOs. See the following articles for more information: “Deploying Group Policy Security Update MS16-072” and Microsoft KB 3163622, “Security update for Group Policy: June 14, 2016”.

Allowing only certain users or computers to apply a GPO

The other scenario is where you want to only allow the GPO to apply to specific user or computer groups. i.e. you only want the “Sales Users” team to get the GPO, all other users in the OU should not get it. In this case you should do the following:
  1.  Modify the GPO permissions so that the Authenticated Users group has Read but NOT “Apply group policy” permissions (i.e. untick the box)
  2. Add the “Sales Users” group to the GPO and give them Read and Apply group policy permissions
This would look like this:

What’s the recommended way to modify permissions using the GPMC?

In these examples I’ve been going to the GPMC, clicking on a GPO, going to the Delegation tab and then modifying the ACLs directly. Within the GPMC, on the Scope tab, you have the ability to add security groups. Anything you add here will automatically get the “Read” and “Apply group policy” permissions. This is the preferred, and probably safer, way to give groups of users or computers access to the GPO; however, it’s now necessary to keep Authenticated Users with Read permissions due to the change in behaviour mentioned above. It’s worth noting that the Domain Admins group, as you may have seen, also has Read permissions on the GPO, meaning that even if you removed Authenticated Users, a domain admin would still be able to read it. But if you were a GPO admin and NOT a domain admin, you might get the inaccessible GPO message. If you’re adding explicit DENY permissions, then you also need to set these directly using the Security dialog box.

Best practice

In most scenarios, if you need to apply security filtering I’d recommend:
  1. Keeping Authenticated Users with Read but remove the Apply group policy permissions
  2. Then add the groups you want to have permissions using the “Security Filtering” Add button on the Scope tab in the GPMC
Use the deny approach only where you want the policy to apply to everyone except a single explicit group, and it would be too cumbersome to manually add all the groups that you want to give access.
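If you prefer to script this, the GroupPolicy PowerShell module’s Set-GPPermission cmdlet can express the same two steps. A sketch only – the GPO and group names are made-up examples:

```powershell
Import-Module GroupPolicy

# 1. Leave Authenticated Users with Read only (-Replace swaps its current
#    permission level, so "Apply group policy" is removed but Read remains)
Set-GPPermission -Name "London Users Lockdown" -TargetName "Authenticated Users" `
    -TargetType Group -PermissionLevel GpoRead -Replace

# 2. Grant the target group Read and Apply group policy (GpoApply includes Read)
Set-GPPermission -Name "London Users Lockdown" -TargetName "Sales Users" `
    -TargetType Group -PermissionLevel GpoApply
```

Scripting has the same safety properties as the Scope tab approach, and is easier to audit and repeat across many GPOs.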

About the Author

Peter Grant joined the Xtravirt team in October 2008 and has 20 years of IT experience. He is the Xtravirt CTO and his specialist areas include Virtual Desktop Infrastructure (VDI), Virtual Infrastructure design and implementation (including security, network, storage and backup), and Disaster Recovery design and implementation. As well as contributing to the Xtravirt blog, Peter blogs on his own site at www.virtual-ninja.com. Peter is also a Pluralsight author and has published a course on Group Policy: Troubleshooting. Click here to view.

Horizon FLEX – FLEXible roaming desktops?

I’ll be honest – I like VDI.  It’s a great way of delivering a slick, consistent managed desktop experience, not to mention applications to users from practically any device or location.  However, it does have a somewhat serious Achilles Heel … [More]

I’ll be honest – I like VDI.  It’s a great way of delivering a slick, consistent managed desktop experience, not to mention applications to users from practically any device or location.  However, it does have a somewhat serious Achilles Heel – you can access a desktop or application from any device or location so long as you can connect to the VDI platform.  So if your connection is poor or non-existent, well it’s normally back to traditional corporate laptops and all the management overheads that this entails.  

  Once upon a time, VMware attempted to answer this problem by providing an offline mode for VMware View (back in the pre-Horizon days).  Even VMware themselves accepted that this was less than successful - the capability was later dropped.  The issue was that the mechanism for checking in and out desktops essentially meant downloading and uploading entire VMs regularly – obviously not a tenable position, particularly as internet connections a few years ago were even worse than now. However, VMware has re-visited the requirement for offline desktops by looking in the kit bag and coming up with a new solution.  As with all corporates, they had to give it a publicity friendly name – introducing Horizon FLEX.  

So what is FLEX?

FLEX takes the back end web services of Horizon Mirage and combines it with the desktop hypervisor products of VMware Fusion Pro (for Apple Mac clients) and VMware Workstation Player (For Windows PCs).  

  Horizon FLEX is installed in the datacenter.  In Mirage, you have a management server and Mirage servers, with the latter providing the capacity for the solution.  Here, for FLEX, it’s the management part that’s important.  Using the Mirage features (backup and recovery, app layers etc.) is actually optional – if you don’t plan to use these, they can be co-hosted on the management server. The FLEX solution is very dependent on certificates – public trusted certificates are strongly recommended, particularly in a BYOD context, though private certificates are workable too. On the client side, we have an installation of VMware Fusion Pro or VMware Workstation Player.  Note that VMware Workstation Pro includes Player – but Player is the FLEX client on Windows.

So how does it work?

We create a template VM using VMware Fusion, configuring it to serve as an image in FLEX (this includes setting VM encryption and pointing the image at the FLEX solution for management). The VM is exported as a compressed TAR file and then uploaded to a simple HTTP server. The image is then registered by an admin in the FLEX admin console. With the image registered in FLEX, it can now be entitled to Active Directory based users (or groups). The entitlement defines aspects such as VM naming, expiration of the image and Active Directory joining. In the case of the latter, it is possible to inject Microsoft DirectAccess VPN configuration into a Windows 10 image; this permits a secure Active Directory join and access from within the VM over the internet. We can also define policies such as locking down USB access to the VM. Our user opens the client software (Workstation Player, as shown in the example below) and selects the option to connect to the VMware Horizon FLEX server.

  After entering server details and credentials, if the user is entitled to a desktop, it can be selected for download.  The client pulls the image from the HTTP server, with the relevant policy and configuration settings from FLEX. When the VM is first started, the user must enter an unlock passphrase (configured when the image is created and published) in order to access the encrypted image.  The VM is then configured (naming and so on) prior to allowing the user to log into the VM and hey presto, our user has a secure, offline desktop based on a corporate image. A VM image can be supplied on a USB stick, copied to the device and used where downloading an image is unattractive.  Even in this context, an initial connection to the FLEX server is required in order to authenticate, acquire policy and configuration and decrypt the VM.  

Putting the flexibility in FLEX

One of the key things about FLEX is its flexibility.  Here’s a few useful pointers:
  • The VM within the image needn't be Windows, and you don't even have to use all of the native features of FLEX, such as Active Directory domain join.  Officially, Windows XP upwards plus Ubuntu are supported as guests.
  • If you go down the Windows route, you could include the Horizon Mirage agent within the VM and manage the OS and applications this way, or include other solutions such as SCCM.
  • For publishing over the internet, consideration will need to be given to securing FLEX and, more importantly, to access from the VM back to the environment, particularly for reaching LAN-based resources.  Joining Active Directory is an important aspect here, with options including the use of Read-Only Domain Controllers, an endpoint-based VPN or even an in-guest VPN.
  • One clever idea a customer used was hosting the images on a geographically replicated Cloud provider.  The cloud provider’s DNS entry for the storage would direct users to the nearest copy of the VM image, so automatically optimising the download for globally remote users.
  • The client should really be deployed with a corporate license key.  There are ways of packaging both the Windows and Apple Mac clients to deploy in a consistent manner.  In the case of VMware Fusion, the downloaded Fusion Package can be edited to include a specific license key.  VMware Workstation Player’s installer can be configured using command line switches to specify the license key, installation path etc.  Using 7-Zip or similar to create a self-extracting archive that then launches this command line automates this further.
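As an illustration of the silent-install approach described in the bullet above, a deployment script might assemble the Workstation Player command line like this. Note this is a hedged sketch: the `/s` and `/v` switches and the `SERIALNUMBER`, `EULAS_AGREED` and `INSTALLDIR` MSI properties are as commonly documented for the Windows installer, but you should verify them against the specific installer version you package.

```python
def player_silent_install_cmd(installer: str, serial: str, install_dir: str) -> str:
    """Build a silent-install command line for the Workstation Player
    installer on Windows.

    /s runs the wrapper silently; /v passes properties through to the
    embedded MSI.  Property names are assumptions based on documented
    behaviour -- check them against your installer version.
    """
    msi_args = f'/qn EULAS_AGREED=1 SERIALNUMBER={serial} INSTALLDIR="{install_dir}"'
    return f'{installer} /s /v"{msi_args}"'
```

Wrapping this command in a self-extracting archive (with 7-Zip or similar) is what automates the end-to-end deployment.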
  • Another clever customer solution was to build a website to publish the client packages and on-line help.  This ensures that the user can access the configured client package, company specific guidance as well as connect and download a desktop with minimal intervention by support staff.

Closing Thoughts…

Whether you go down the VMware Horizon FLEX route or choose another vendor's solution, such as Citrix XenDesktop's offline mode, roaming desktops are something of a niche case. They are primarily useful in a BYOD context where offline work is required; if you are already using corporate kit, this might not be the right thing for you.  VMware's approach is much simpler to implement and use than their previous effort, while providing much greater flexibility and scaling.   If you're interested in the VMware Horizon Suite, Xtravirt has considerable experience in providing design and implementation consultancy in this area.  Please contact us and we'd be more than happy to use our real world experiences to support you.

About the Author

Curtis Brown joined the Xtravirt consulting team in October 2012. His specialist areas include End User Compute solutions and Virtual Infrastructure design and implementation, with particular strength in VDI, storage integration, backup and Disaster Recovery design/implementation. He is a VMware vExpert 2016.

What’s new in NSX 6.2.3

Please Note: The NSX for vSphere 6.2.3 release has been pulled from distribution. The current version available is NSX for vSphere 6.2.2. VMware is actively working towards releasing the next version to replace NSX for vSphere 6.2.3. For more information, … [More]

Please Note: The NSX for vSphere 6.2.3 release has been pulled from distribution. The current version available is NSX for vSphere 6.2.2. VMware is actively working towards releasing the next version to replace NSX for vSphere 6.2.3. For more information, please click here.   On 9th June 2016 VMware released VMware NSX (for vSphere) 6.2.3. NSX is VMware's solution for virtualising network and security for the software-defined data centre. The 6.2.3 release is considered a minor release, but it brings in a lot of enhancements. One big new feature is support for 3rd-party hardware L2 gateway integration, which is useful when migrating physical workloads into an "NSX enabled" environment (don't forget the controllers – see issue 1477280). Another key change is the VXLAN UDP port, which has moved from 8472 to 4789. I will not list all the new features (you can use the release notes for that) but will give a quick overview of the new features I consider the most interesting, plus the issues to keep an eye on.
  • NSX Edge on Demand Failover: gives users the ability to trigger a failover on demand – a good option, and not only for testing
  • Edge Firewall SYN flood protection: disabled by default, can be enabled via a REST call. Particularly useful when the ESG is exposed publicly on the WAN
  • SNMP v2c support, for the NSX Manager, Edges and Controllers
  • Global Dashboard to quickly monitor the overall health of your NSX environment
  • Desired/Live location attribute is now displayed for ESGs and DLRs
  • It is now possible to apply a NAT rule to a vNIC interface; previously a rule could only be applied to an IP address
  • You can configure DHCP options on NSX Edges; a very useful one is option 121, which allows you to inject a static route into the DHCP client
  • A new license model implements the default license upon install which is “NSX for vShield Endpoint”, enabling the use of NSX for deploying and managing vShield Endpoint for anti-virus offload capability only
  • Fixed issue 1456172: good to have some warnings displayed. NAT has always been part of the firewall service, but people tend to forget that if the firewall is disabled, so is NAT
  • Fixed issue 1619570: In a large-scale DFW configuration with millions of rules and Service Composer, rule publishing may require several seconds to complete after a reboot. During this time, new rules cannot be published
  • Fixed issue 1467774 whereby a route learned from an eBGP peer advertised to an iBGP in the same AS was retaining a previous (wrong) administrative distance
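To illustrate the DHCP option 121 feature mentioned in the list above: option 121 carries classless static routes in the compact RFC 3442 wire format (prefix length, then only the significant octets of the destination, then the router address). Here is a minimal illustrative encoder in Python – not NSX code, just a sketch of what the Edge is injecting on your behalf:

```python
import ipaddress

def encode_option_121(routes):
    """Encode (destination_cidr, router_ip) pairs as a DHCP option 121
    payload per RFC 3442 (classless static route option)."""
    out = bytearray()
    for dest, router in routes:
        net = ipaddress.ip_network(dest)
        out.append(net.prefixlen)
        # Only the significant octets of the destination are sent:
        # ceil(prefixlen / 8) bytes.
        out += net.network_address.packed[: (net.prefixlen + 7) // 8]
        out += ipaddress.ip_address(router).packed
    return bytes(out)
```

For example, a route to 10.0.0.0/8 via 192.168.1.1 encodes to just six bytes, which is why a single option can carry several routes.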
A full list of the fixed issues can be found by clicking here. There are also some known issues to be aware of, key ones include:
  • Issue 1529178: Uploading a server certificate which does not include a common name returns an “internal server error” message
  • Issue 1534606: Host Preparation Page fails to load when NSX Managers are running different versions
  • Issue 1386874: Networking and Security Tab not displayed in vSphere Web Client
  • Issue 1604506: Cannot deploy a DLR without an NSX Edge VM if using the default gateway for the static routing use case – see KB 2144551
  • Issue 1556924: connectivity through some of the DLR LIFs could be affected if the VXLAN layer on the hosts is not properly configured
  • Issue 1493611: L2 VPN could be configured with VLAN ID 0; the GUI will let you do so, however this is not supported and traffic will not traverse the tunnel, so be careful
  • Issue 1474238: After vCenter upgrade, vCenter might lose connectivity with NSX when using the root embedded SSO account
Customers using Distributed Firewalling and Security Groups are advised, by VMware directly, not to upgrade to 6.2.3. This is because of a known issue (KB 2146227) whereby virtual machines could lose connectivity after a vMotion operation followed by changes to the configuration of the Global Address Sets in the Security Group referenced for that virtual machine. The list of bug fixes is huge this time, so triple check and take the time to read through the entire list of known and fixed issues when planning an upgrade. And don't forget to back up all the NSX components – a good starting point is available at NSX Backup and Restore. The official documentation for VMware NSX for vSphere can be found by clicking here. The NSX for vSphere 6.2.3 Release Notes can be found by clicking here. If you'd like any assistance with a VMware NSX project or want to learn more about how Xtravirt can help your organisation, contact us and we'd be more than happy to use our real world experiences to support you.

About the author

Giuliano Bertello joined the Xtravirt consulting team in April 2015. Giuliano's specialties include VMware vSphere design and implementation, as well as End User Computing design and delivery. His focus is now around Cloud Automation and Orchestration and Software Defined Networking (SDN), which led him to complete VMware's NSX Ninja training course. As well as contributing to the Xtravirt blog, Giuliano blogs on his own site at http://blog.bertello.org

The VCDX Journey – Matthew Bunce

Xtravirt Senior Consultant Matthew Bunce has recently achieved VMware® Certified Design Expert (VCDX) certification becoming VCDX #222. In doing so he has joined an elite group of world-class architects which also includes two of his Xtravirt colleagues, Sam McGeown and … [More]

Xtravirt Senior Consultant Matthew Bunce has recently achieved VMware® Certified Design Expert (VCDX) certification becoming VCDX #222. In doing so he has joined an elite group of world-class architects which also includes two of his Xtravirt colleagues, Sam McGeown and Gregg Robertson. Matthew has been awarded his VCDX in the specialist area of Data Center Virtualization (VCDX-DCV).  

  In this blog we chat to Matthew and find out more about his journey to this outstanding achievement.  

So you’re VCDX #222, how do you feel?

I really don’t know yet. I don’t think I’ve quite processed that I’ve actually managed to achieve it. It’s been my primary focus for the last 5 months and now I really don’t know what to do with myself. I’m sure the feeling won’t last long though!  

How did you get into using VMware?

I started using VMware technologies in 2003-2004 and have used them in several roles over the last 12 years. I've done everything from desktop support, networking, voice, messaging and storage in that time, and the skills I've gathered across all those technologies have been invaluable.  

What made you decide to go for the VCDX certification?

I’ve been working towards it for a couple of years but it was only just before Christmas 2015 when I discussed with fellow vExpert, Marco van Baggum, about submitting a design based upon a project we had completed earlier in the year that I committed to doing it - not realising we would only have just under 3 months to get everything completed.  

How long was your VCDX journey?

I started the process back in 2013 when I took the VCAP-DCD exam; 6 months later I followed up with the VCAP-DCA. The VCDX-DCV seemed so far away at the time, and it was something I never thought I would achieve.  

What advice would you give others thinking of embarking on this journey?

There are quite a few things, but mainly:
  • Start as soon as you can.
  • Take small steps at first and try and gather information, designs and documentation from projects you have worked on and start to put a document outline together. Having a document with nothing but headings can be very daunting but allows you to break it down into smaller chunks which you can focus on.
  • If possible, work with someone either on the same design or going for the same type of VCDX.

If you were to do this all again and go for another VCDX, what would you do differently?

Considering that I am planning to go for my VCDX-NV later this year, I have thought about what I could do differently. While working on my submission this time around, I had colleagues and friends review my design several times while I was writing it and had some great feedback, but couldn’t get some of the points into my design so hopefully next time around I will be able to incorporate their advice a lot more.  

I know it's early days, but how has life been since becoming Matthew Bunce VCDX?

It’s still very new and I don’t think it’s quite sunk in yet. I even woke up early this morning to check my phone like I’ve done every morning for the last week, looking to see if my results had arrived. It took me a couple of seconds to realize I was already a VCDX and I hadn’t just dreamt it!  

Overall, has the journey been worth it?

The journey has had a huge impact on me both personally and professionally. I think about problems in different ways and have learnt so many different things along the way. I'm certainly much more confident in my own capabilities and I'm looking forward to putting these skills to use.  

Any further comments?

Undertaking the VCDX journey takes a lot of time and commitment, and during this journey I've missed family birthdays, parents' evenings and all sorts of other things, so anyone considering this needs the support of their family. Luckily for me, my family were right behind me and I owe a big thanks to them for their support and patience throughout the process. I've been fortunate that my colleagues and friends have also been incredibly supportive, having faith in me even when I'd lost faith in myself. Having someone to work alongside is invaluable. My thanks go to Marco van Baggum for working with me on the design and putting up with me for the last 6 months! I don't think I'd have submitted if it wasn't for his support. A supportive employer also helps, and Xtravirt have been fantastic, fully supporting me through the whole process. My colleagues have been incredibly supportive and the management team have been behind me 110% all the way.  

  Matthew Bunce joined the Xtravirt consulting team in May 2015. As well as contributing to the Xtravirt blog, Matthew blogs on his own site at www.virtualisedgeek.com. If you'd like any assistance with a virtualisation project or simply want to learn more about how Xtravirt can help your organisation, please contact us, and we'd be more than happy to use our real world experiences to support you.

Twist or Stick – Assessing how appropriate a Cloud Migration is

I was recently on an engagement to assist a customer with their decision process as to the future direction of their IT environment. In particular, they were keen to look at two areas – migration to a Cloud provider, and … [More]

I was recently on an engagement to assist a customer with their decision process as to the future direction of their IT environment.  In particular, they were keen to look at two areas – migration to a Cloud provider, and the state of End User Compute, possibly with a view to moving this up to a cloud solution too.

The Assessment

As with any sort of assessment process, we needed information.  This came from a number of sources:

Capacity Planning and Inventory Tools

To gather hard facts and figures, a number of tools were installed and used.  For a basic capture of the existing desktop estate, Lakeside Systrack was used.  This provided useful data in terms of application utilisation as well as performance metrics. For the server estate, two tools were used.  As the customer was already largely hosted on a VMware vSphere estate, VMware Capacity Planner was leveraged in conjunction with Sonar, an analytics tool developed by Xtravirt (take a look at https://sonarhub.com - well worth it.).

The former gave useful metrics on the application servers in the estate, including both virtual and the few legacy physical systems, whilst the latter provided immediately usable analytics data on the VMware vSphere estate.  Having the data presented in a readily usable format was particularly valuable for the turn-around time in compiling the findings of the assessment as a whole.  

Talking to IT

Who better to talk to about a company’s IT than the team who keep it fed and watered?  The IT team were able to go a step further than mere facts and figures, discussing how the estate holds together as a solution for business needs as well as future plans and existing pain points.  This meant engaging with different subject matter owners – such as desktop support, helpdesk, networks and so on.  A key player here was also a discussion with the CIO and IT manager to get a direction at the strategic level.  

Talking to the business

While talking to IT is undoubtedly valuable, IT represents a somewhat vested interest.  At the end of the day, IT provides a service to the business, so engaging with business unit leads to gain a feel for their requirements from IT and whether they were being met provides valuable insight into the perception of IT in the business as a whole.  

The Findings

Findings for an assessment like this will vary from customer to customer.  Some will be well suited to take the plunge into Cloud compute; others will be better served by staying with a hosted provider or even on premise.  In the latter case, it may be that the customer's use case suits Cloud provisioning, but they may not be ready for it with respect to the logistics or technology involved – for example, the network topology may need re-working first.

In the case of this customer, the key business driver from both IT and the business as a whole was that IT was seen as expensive and underperforming. Delving deeper, it was found that the server estate, barring a few minor issues, was quite straightforward.  Technically, there were few reasons not to migrate to cloud.  Connectivity was well provisioned and the largely virtualised estate would migrate relatively easily.  Indeed, the somewhat elastic needs of the business were well suited to one of Cloud compute's key strengths – the ability to provide a compute-on-demand model.

The limiting factor was the existing desktop solution.  Predominantly VDI, it lacked a cohesive managed application strategy.  All applications were hard-grafted into one of a large number of template desktops, each requiring individual attention to maintain.  In addition, the underlying hardware was not well suited to the numbers involved, particularly with respect to storage.  The combination of these two factors was such that IT was perceived as slow and unwieldy, to the point that the business was considering returning to thick clients. Of course, moving to thick clients, while possible, would only further emphasise the difficulties with applications.  Also, in a surprise to the IT team, the business feedback showed a strongly positive response to the flexibility of VDI – particularly the ability to roam sessions between sites and even work from home.  

The Recommendation

So, to conclude, we recommended that Cloud was indeed the way ahead, but initially only for server workloads.  For End User Compute, refining the existing solution and deploying a supporting application strategy was suggested.  It was agreed that this was a healthy approach, as it would also be less disruptive than an all-at-once cloud migration. Xtravirt went on to assist with the customer's tender process, supporting them through this and beyond into delivery. Although we do a lot of consultancy around the design and delivery of solutions, Xtravirt also provides consultancy to help customers, as a trusted advisor, guide their future IT strategy.  If you would like to find out more about how Xtravirt can help your organisation, contact us today.

About the Author

Curtis Brown joined the Xtravirt consulting team in October 2012. His specialist areas include End User Compute solutions and Virtual Infrastructure design and implementation with particular strength in VDI, storage integration, backup and Disaster Recovery design/implementation. He is a VMware vExpert 2016.

Why should you go to a VMUG?

Networking with your peers of course! The many tea/coffee/lunch breaks are a great chance to meet and get to know other community members and VMware users and share information and knowledge. Anything else? Well more often than not VMUG meetings … [More]

Having been a VMUG member for several years and now a VMUG leader I have seen the value of being involved in this community and attending the meetings/events/conferences.

So why do I think you should go?

If you are using VMware technologies/products in your workplace, or you are regularly consulting on VMware products, the VMUG events are in my opinion an invaluable resource. It simply doesn't matter what experience/knowledge you have of the products; whether you are very new to virtualization or an old hand, I guarantee you will come away from any VMUG meeting having learned something useful – or, dare I say, having been a great source of help to others whom you have met and networked with. Each VMUG meeting is usually broken down into sessions, some of them vendor specific (but still relating to virtualization) while others are community led.

Community Sessions

Community led sessions are a fantastic opportunity to see how your peers are using VMware (or related) products in the real world. They are always very well informed, unbiased and in some cases very frank! More often than not you will be able to glean those really useful nuggets of information you were hoping for, which will help you should you ever need to look at or deploy any given product. There isn't any sales fluff; you are quite literally hearing it from guys who have carried out installs and deployments, tackled issues and problems, and in some cases come up with ingenious ways to get the most from the product.

Vendor Sessions

I can almost hear your eyes rolling at the thought of sitting through a vendor session, but I will ask you to reconsider. The vendor sessions are very specific with regard to their product and where it sits in our area of interest (virtualization). There is always the opportunity for Q&A, not just in the session but afterwards too, and it can be a great chance to learn something new and discover some new tech that may well be what you have been looking for. The above are the most common sessions, but frequently you will also come across the following types of session.

Roundtable sessions

Usually limited in the number of people who can attend, these sessions often focus on a particular subject matter hosted by very knowledgeable community members.

“Rockstar” sessions

Probably not termed in such a fashion but it is usually a session where someone who is considered a “rockstar” in the VMware world/community is presenting on their specialist subject matter. These are usually very popular and for good reason, these guys didn’t become rockstars for nothing!

Lab/training workshops

These sessions are sometimes vendor specific (e.g. a demo or hands-on) or can simply be training workshops where a VMware certified instructor or knowledgeable community member freely gives their time on a particular product or area of interest.

The above is by no means an exhaustive session list, but it should give you an idea of what often takes place in any given VMUG meeting. So what other good reasons are there to come along? Networking with your peers, of course! The many tea/coffee/lunch breaks are a great chance to meet and get to know other community members and VMware users and share information and knowledge. Anything else? Well, more often than not VMUG meetings are followed by vBeers, which is another chance to talk tech! What's not to like? If you want to know where your nearest VMUG is, visit the VMUG website.

About the author

Simon Eady joined the Xtravirt consulting team in October 2014. As well as contributing to the Xtravirt blog, Simon blogs on his own site at www.definit.co.uk. He is co-leader of the South West UK VMUG - @swukvmug.

Getting down to the business of IT with vRB

VMware vRealize Business (vRB) is available as part of the vCloud and vRealize suites and provides a business context to the services IT offers. It helps organisations shift from a technology orientation to a service broker emphasis and provides transparency … [More]


1. Introduction

VMware vRealize Business (vRB) is available as part of the vCloud and vRealize suites and provides a business context to the services IT offers. It helps organisations shift from a technology orientation to a service broker emphasis and provides transparency and control over the cost and quality of IT services. It extends the vRealize suite to business stakeholders and provides a fact-based approach to key challenges and the age-old business need to minimise the cost of IT while maximising the value IT delivers to its customers. This blog is the result of work undertaken whilst delivering a PoC for a medium-sized organisation and outlines the capabilities and functionality of vRB.

2. Challenges Faced by the Manager of Cloud Operations

The manager of cloud operations in an organisation is constantly faced with a number of challenges regarding cost visibility and optimisation in the delivery of Infrastructure as a Service (IaaS), including:
  • What is the total spending and what is it comprised of?
  • What is the cost of delivering a unit of IaaS?
  • Who consumes these services and at what cost?
  • What are these services used for and what is the cost allocation for each?
  • How is my cost efficiency compared to that of other public cloud infrastructures?
  • What is the cost of potential alternatives to delivering IaaS?
  • How do I use the information above to optimize the cost of my existing and future operations?
  • How do I create an accurate showback report for stakeholders?

3. Business Management for Cloud

vRB is all about the business management of your IT and cloud infrastructure, but what does it really do?
  • Provides cost and usage visibility of virtual infrastructure/private cloud and public cloud, with out of the box integration with VMware vCenter Server, vRealize Automation, vCloud Director, vCloud Air, Amazon Web Services (AWS), and Azure.
  • Performs what-if analysis of virtual infrastructure/private cloud and public cloud, based on cost and utilization.
  • Automatically prices the services available through self-service in a hybrid cloud.
  • Provides out of the box benchmarks for cloud / virtual infrastructure environments, providing insight into capacity, costs and efficiency.
  • Delivers common reporting and usage metering coverage for measuring, analysing, reporting and invoicing based on usage across private and public cloud.

4. vRB Categories

The information reported on in vRB falls into the following categories:



  • Total Cloud Cost – Displays the total cost of running a cloud per month.
  • Operational Analysis – Displays the estimated average cost of the virtual machines for the current month, based on their utilization for the last 30 days.
  • Demand Analysis – Displays the number of virtual machines used over a period of time. This analysis is useful for tracking the demand for virtual machines for planning.
  • Cost Drivers – Displays the distribution of the cost breakdown for your cloud environment in terms of cost drivers. The cost drivers are Server Hardware, Storage, OS Licensing, Maintenance, Labour, Network, Facilities, and Additional Costs.
  • Cloud Resources – Displays the estimated cost breakdown for a virtual machine in terms of CPU, memory, storage, operating system license and labour cost, based on the last 30 days.
  • Demand Allocation – Displays how the allocated costs are spread across different consumer groups of virtual machines. This widget displays only the top consumers of virtual machines in terms of cost.
  • Capex/Opex – Displays the total capital expenditure and operating expenditure incurred for your private cloud.
  • Allocation – Displays the estimated costs for allocated and unallocated resource types (CPU, memory, and storage) over the month.
  • Demand Largest Changes – Displays the category where the largest change in cost has occurred month on month, ordered by descending change in cost. The widget also indicates which top-level consumer has the highest month-on-month increase or decrease in IT cost consumption.
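To make the Total Cloud Cost category concrete: it is essentially the sum of the monthly figures across the cost drivers listed above. A minimal sketch (the driver names come from the vRB documentation; the aggregation logic here is illustrative, not vRB's actual implementation):

```python
# Cost driver names as reported by vRB.
COST_DRIVERS = ("Server Hardware", "Storage", "OS Licensing", "Maintenance",
                "Labour", "Network", "Facilities", "Additional Costs")

def total_cloud_cost(monthly_costs):
    """Sum per-driver monthly costs into a Total Cloud Cost figure.

    monthly_costs: {driver_name: monthly_cost}.  Unknown driver names
    are rejected to catch typos in the input data.
    """
    unknown = set(monthly_costs) - set(COST_DRIVERS)
    if unknown:
        raise ValueError(f"unknown cost drivers: {sorted(unknown)}")
    return sum(monthly_costs.values())
```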

5. vRB Standard Dashboard

In this section, I’ll go through the standard dashboard and highlight the areas covered and what you’ll see on screen.  


5.1 Overview Tab

The overview dashboard consists of
  • Total Cloud Cost
  • Operational Analysis
  • Consumption Analysis


5.1.1 Cloud Cost Tab

vRealize Business Standard categorises the cost drivers into Server Hardware, Storage, OS Licensing, Maintenance, Labour, Network, Facilities and Additional Costs.  The cost driver data that you provide is the monthly cost, except for the server hardware cost and storage array hardware cost.  

  • Server Hardware – Displays the server cost information by CPU age.
  • Storage – Displays the total cost of the datastores (for each storage type or storage profile) and the storage arrays. You can select Datastores and Storage Hardware to view the respective storage cost details. The datastore cost is the cost of the datastores that are not part of the SRM EMC storage array.
  • OS Licensing – Displays the operating system cost distribution of your cloud environment. For non-ESX physical servers, the VMware license is not applicable.
  • Maintenance – Displays the maintenance cost distribution for server hardware and OS maintenance.
  • Labour – Displays the labour cost distribution for the servers, virtual infrastructure, and operating system. For physical servers, operating system and server labour costs apply; virtual infrastructure cost is not considered.
  • Network – Displays the network costs by NIC type. For physical servers, network details are not captured, so the network cost is considered zero.
  • Facilities – Displays the facilities cost distribution according to rent, real estate, power, and cooling.
  • Additional Costs – Displays additional cost details such as backup and restore, high availability, management licensing, and VMware software licensing.

5.1.2 Operational Analysis Tab

The operational analysis considers CPU, RAM, and storage as first-class components of the cloud infrastructure. The Resources table under the Operational Analysis tab displays cost breakdown information in terms of the current month's cost, trend, and total percentage value of CPU, RAM, storage, and operating system (license and labour) consumption in your cloud environment. Using the All Data Centres drop-down list, you can filter the resource cost information, its generation details and virtual machine cost by data centre, or view the cost information for all data centres together.  


 5.1.3 Consumption Analysis

The consumption analysis determines who consumes the resources, the purpose for which they are consumed, and the cost associated with them.  

  Consumption overview is a dashboard that shows the monthly cost, budget and charge of the virtual machines.  

  Monthly Cost:  the total monthly cost of all virtual machines, which includes cost of the RAM, CPU, storage and OS.   Monthly Budget:  the budget allocated to that consumer. Budget is the expected limit on charge consumption that is allocated to each consumer.   Monthly Charge:  the monthly charge of all virtual machines. Charge is calculated on the capacity allocated to the virtual machines. Consumed Capacity:  the total charge split into RAM, CPU and storage charges. It also displays trend of total number of running virtual machines per month.   Over/Under Budget:  compares budget and charge per consumer at monthly and yearly intervals. The graph displays the top three consumers whose charge has deviated from budget. The deviation can be the charge either exceeding or close to the budget.   Cost/Charge:  compares cost and charge for each consumer at monthly and yearly interval. The graph displays the top three consumers having the highest charges.   Consumers list is an overview of how costs are allocated and the budget and costs over time.  

  • Top Consumers: displays how the allocated costs are spread across different consumer groups of virtual machines. It also displays the top three consumers of virtual machines in terms of cost, with the remaining consumers grouped under Others.
  • Over/Under Budget over time: compares the total charge and total budget. In the widget, the bars represent charge and the line represents budget.
  • Cost/Charge over time: compares the total cost and total charge. In the widget, the bars represent cost and the line represents charge.
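The Over/Under Budget comparison above boils down to a simple charge-minus-budget calculation per consumer. A minimal sketch, with hypothetical consumer names and figures (these are illustrative, not values from vRealize Business):

```python
# Hypothetical consumers: (monthly charge, monthly budget)
consumers = {"Finance": (4200, 4000), "Dev": (1500, 3000), "HR": (900, 1000)}

# Deviation of charge from budget, as tracked by the Over/Under Budget widget
deviation = {name: charge - budget
             for name, (charge, budget) in consumers.items()}
over_budget = [name for name, d in deviation.items() if d > 0]

print(deviation)    # {'Finance': 200, 'Dev': -1500, 'HR': -100}
print(over_budget)  # ['Finance'] - the only consumer exceeding budget
```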

5.2 Cloud Comparison Tab

You can view the cost of virtual machines in your private cloud and then compare it with the pricing models for Amazon Web Services, Windows Azure, and vCloud Air public cloud. vRealize Business Standard estimates the costs of running a completely new instance or an existing instance of a virtual machine in the private cloud by using the cost drivers of your private cloud, and then provides you with a comparison of the cost of the same configuration in Amazon Web Services, Windows Azure, and vCloud Air cloud models.

5.3 Public Cloud Tab

vRealize Business Standard integrates with vCloud Air, AWS and Windows Azure and provides detailed analysis of the bills. vRealize Business Standard provides the users an overview of how their investments are spread across vCloud Air, Amazon Web Services (AWS) and Windows Azure.  


5.4 Reports Tab

You can generate reports from vRealize Business Standard to get cost details of vCenter Server, vCloud Director, vRealize Automation, and virtual machines in the public cloud. This information can be exported to a suitable format.


6. Finding out more about vRB

vRealize Business is available as Standard, Advanced & Enterprise versions. A comparison of the features can be found here: http://www.vmware.com/products/vrealize-business/compare.html

For more information on vRB, see the links below:
  • http://www.vmware.com/uk/products/vrealize-business
  • http://pubs.vmware.com/vrealizebusinessstd-6.0/index

If you’d like any assistance with a VMware vRealize project or want to learn how it can work for your organisation, contact us and we’ll be more than happy to use our real world experiences to support you.

About the Author

Saurabh Chandratre joined the Xtravirt consulting team in March 2015. His specialist areas include VMware virtualisation product sets, Microsoft server technologies and solutions, and EMC storage technologies. He is certified in the delivery of the VMware vRealize Business Management Suite.

VMware Virtual SAN 6.2 Announced

VMware’s Hyper Converged solution Virtual SAN (vSAN) has been updated to version 6.2 and reached GA on 15 March. It’s been almost 2 years since the first release of vSAN went GA and VMware have added loads of great new … [More]

VMware’s Hyper Converged solution Virtual SAN (VSAN) has been updated to version 6.2 and reached GA on 15 March. It’s been almost 2 years since the first release of VSAN went GA and VMware have added loads of great new features and abilities in this latest release.  The new features can be broken down into the following groups:

Lowest Cost

  • Near-line deduplication and compression per disk group level called “Space Efficiency”. (All Flash Only option)
  • Space efficiency will be enabled on a cluster level.
  • Deduplication will happen when de-staging from the cache tier to the capacity tier within Virtual SAN, using fixed block length deduplication with granular 4KB blocks for great data and space efficiency. (All Flash Only option)
  • Compression which will happen after deduplication. (All Flash Only option)
  • RAID 5 Erasure Coding with “FTT=1” with a minimum of a 3+1 configuration allowing 1.33x instead of 2x overhead. Currently a 20GB disk takes 40GB but now it will be around 27GB (All Flash Only option)
  • RAID 6 Erasure Coding with “FTT=2” with a minimum of a 4+2 configuration allowing 1.5x instead of 3x overhead. Currently a 20GB disk takes 60GB but now it will be around 30GB. (All Flash Only option)
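The overhead figures quoted above can be sanity-checked with a quick calculation. A minimal sketch (actual on-disk consumption will also include some metadata overhead):

```python
def raid_footprint(data_gb, data_disks, parity_disks):
    """Erasure-coded footprint: data stripes plus parity stripes."""
    return data_gb * (data_disks + parity_disks) / data_disks

# FTT=1 mirroring costs 2x; RAID-5 3+1 erasure coding costs 1.33x
print(raid_footprint(20, 3, 1))   # ~26.7 GB instead of 40 GB
# FTT=2 triple-copy costs 3x; RAID-6 4+2 erasure coding costs 1.5x
print(raid_footprint(20, 4, 2))   # 30.0 GB instead of 60 GB
```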

Radically Simple

  • Quality of Service will be available to allow complete visibility into IOPS consumed per VM/Virtual Disk, eliminate noisy neighbour issues and manage performance SLAs (independent of VM provisioning order)

Ready for Any Application

  • SAP Core Ready with testing and validated deployments.
  • Tightly integrated cloud management with Horizon and the ability for procurement of Virtual SAN licenses bundles for lowest cost VDI storage.
  • Oracle RAC Supported with testing and validated deployments.

 Advanced Management & Troubleshooting

  • Enhanced Virtual SAN Management with New Health Service to allow built-in performance monitoring, health and performance APIs and SDK, storage capacity reporting and many more health checks.
  • Fully integrated operational management, natively part of the vSphere Web Client.
  • SNMP support, custom scripts, emails via vCenter alarms.
  • Performance Monitoring – Web client integrated
  • Cluster wide summary of VM availability
  • Event based alarm triggers
  • Detailed Space Reporting (account for dedupe etc)
  • Proactive rebalance from UI (Health and actions)
  • Alarms on performance threshold breach
  • Integrate performance data in support bundle

Finding out more about Virtual SAN

For more information on VSAN 6.2, see the links below:

If you’d like any assistance with a VMware VSAN project or want to learn more about how it could work in your organisation, contact us and we’ll be more than happy to use our real world experiences to support you.

About the Author

Gregg Robertson joined the Xtravirt consulting team in December 2011. He is a current vExpert and in 2015 achieved his VCDX (DCV). As well as contributing to the Xtravirt blog, Gregg blogs on his own site at www.thesaffageek.co.uk.

User Environment – Managing a Windows Pain Point

The ‘Windows User Profile’ – words to make even the most optimistic Windows admin shake his head and grimace.  On one hand, it’s a sound idea – grouping all of the settings that are specific to a user into one … [More]

The ‘Windows User Profile’ – words to make even the most optimistic Windows admin shake his head and grimace.  On one hand, it’s a sound idea – grouping all of the settings that are specific to a user into one location on a device – all nice and neat.  On the other hand, we have the problem of how Microsoft achieved it and all the legacy baggage that goes with it.


This blog post looks (somewhat in general) at what modern tools can do to both counter the issues often associated with the user environment in Windows and enhance the capabilities.  While there are some really clever tools out there, some more and some less capable in different ways, this post will focus primarily on VMware User Environment Manager as a reference (mainly because it’s a pretty simple product).  The concepts can apply to other products, such as RES ONE Workspace, because the underlying issues are common.  

So what’s up with Windows Profiles?

Well, to be fair, the native approach is optimised for local use, predominantly in a stand-alone context. Microsoft have included some options: either a draconian response, the Mandatory Profile (locked down, no changes permitted – rock solid, but inflexible), or Roaming Profiles for a portable option that’s sometimes a little flaky. Group Policy added some much needed granular abilities – most notably folder redirection, which we’ll discuss later. The profile can be boiled down to the following:
  • User Data – all those documents, music files, dodgy GIFs you keep in the My Documents, My Pictures and so on.
  • Application settings – The user’s configuration for discrete applications.  These may be in the user’s own part of the Windows Registry or in configuration files.
  • Environment Settings – These are typically start menu items, desktop shortcuts, wallpaper.  However, you can include printer and disk mappings to this.
All this, without careful management, can lead to problems. Firstly, without management of User Data in particular, the profile can get somewhat obese. If the profile is just maintained locally, size isn’t necessarily a big issue. However, in a modern environment where a user wants to log on and access their data anywhere in the business, a big profile is cumbersome and causes both reliability and performance issues. We can alleviate some of these issues by using native tools, but only to a point.

The Windows profile is, to a certain extent, unreliable with respect to the registry settings, and has little ability to self-heal in the event of a fault in the profile. Windows Policy often doesn’t go deep enough to override faulty settings, and often the only approach to profile corruption is the dreaded all-or-nothing Profile Reset. Yuk.

The user profile is also very proprietary with respect to operating system releases (with Windows XP version 2 profiles differing considerably from later releases). This makes migrating between releases quite painful too.

Finally, loading the roaming Windows profile is an all-or-nothing affair: the whole profile is downloaded at log on and loaded into memory – in the case of application settings, these are loaded even if the application isn’t needed.

So we have a solution that works, but isn’t very fast, efficient or reliable. We can fix some issues with just native tools, but it only goes so far.

User Environment Management

So we want to fix these problems, sure, but we don’t want to stop there. We want to be able to centrally administer some settings and maybe allow user control of others. We might want to do clever things based on context – what about different settings if a user logs on in VMware View rather than on an office PC, or different again when off site? We need a third party solution above and beyond the basic native Windows toolset. As mentioned earlier, there are numerous options out there – which speaks volumes as to how seriously this issue is taken across the industry; if it were an obscure problem, there wouldn’t be many tools. Good old supply and demand economics. Here, we’ll consider VMware’s User Environment Manager (UEM), once upon a time known as Immidio Flex+.

Setting up VMware UEM

I won’t go too far into the details of how to set up UEM – as others out there have blogged about this already, but it boils down to these steps:
  • Set up a couple of shared folders – A UEM Configuration Share (stores all the common, centrally managed items) and a Profile Archive (for user-specific stuff).
  • Set up some Windows Active Directory Group Policy to apply to users and devices to configure them to use UEM.
  • Install a UEM management console to configure the solution and an agent into all managed endpoints.
Once these are in place, the good stuff begins.  

User Data

Let’s consider the easy one first. By establishing a centrally managed file services solution on a NAS or server infrastructure, we can create user home folders on a per user basis. We can then use native Windows Group Policy to redirect My Documents to the home folder share, and repeat this for Pictures and so on.

  We can apply this either to the Organizational Unit containing the users, or we can do clever things like enable LoopBack processing and apply the setting via the computer account’s location in Active Directory.  This works especially well in VDI and RDSH environments where the file solution is in proximity to the desktops.  

Application Settings

One of the key attributes of UEM and others is the central administration of application settings.  In the case of UEM, a tool is provided to capture application settings - Application Profiler.  This tool is run on a reference machine similar in configuration to the user endpoints, with the application installed (but not run) alongside the Application Profiler.

The application is launched from the Profiler – any configuration settings are recorded by the Profiler and held in the form of XML content in the Config Share. This is important, as it provides flexibility when migrating between operating systems because it isn’t a Windows-native profile format. It is possible to manually edit the captured config to include additional elements if required.

An important difference with respect to application settings is that with Windows native profiles, the settings are loaded into memory at user log on regardless. With UEM, it is possible to have the application profile loaded only if the application is started. This has a substantial benefit for log on times and is applicable to most applications (an obvious exception being those that run at log on, perhaps with a service).

A useful feature here is that UEM can even intervene in ThinApp packages, applying settings within the sandbox. It can also interface with App-V 4 and 5 in a similar manner. Applications that aren’t defined fall into the scope of the user’s personal settings and are saved to the archive share.

User Environment Settings

Beneath the application layer, we have elements of the operating system and environment we may wish to adjust.  Typically, these might include printer settings, drive mappings etc.  We can even pull in Microsoft ADMX Policy templates and configure templates here rather than via Windows Group Policy. In the example below, we have a home drive mapping.

As with application settings, items not otherwise locked down are managed using the user’s personal archive.

Applying Settings on Conditions

In UEM, there are a number of different ways we can apply settings based upon the context in which they are needed. With the User environment tab, these are applied when the user is logged on.  Taking our Home Drive mapping above, we could apply this based on a condition where it is only executed when logged on via an internal IP address. In the case of application settings, these can be set to apply when an application is run.  So we could map a drive when VLC is launched and undo it when the application exits….

We could take this one step further – we can apply the environmental setting for an application based on a condition.  For example, the user runs VLC, it will run a drive mapping if the user is logged on from a PC on a particular IP range or the Remote display protocol is PCoIP….


To Conclude

So, by implementing a fully featured product such as UEM, we can simplify and centralise the administration of user and application settings, improving security, reliability and compliance with corporate standards. We can even customise settings depending on a variety of conditions, such as whether the user is connected to a desktop via a remote display protocol, or on a laptop running on battery; a member of a particular Active Directory group; or connected from a particular IP address range.

The extended capabilities, such as applying application settings on demand, improve the user experience by reducing log on time for a start. By implementing folder redirection as well, we further eliminate reliance on a particular endpoint – a valuable tool for both Remote Desktop Session Host shared desktops and non-persistent virtual desktops.

If you’d like any assistance with an End User Compute solution (including VMware or Citrix based solutions) or simply want to learn more about how Xtravirt can help your organisation, please contact us, and we’d be more than happy to use our real world experiences to support you.

vROps – a new diva in the datacenter

For the last few months and certainly very recently (at the London VMUG meeting) I have had the chance to talk to peers and vExperts and share “war stories” with regards to vRealize Operations Manager. What has become a consistent … [More]

For the last few months and certainly very recently (at the London VMUG meeting) I have had the chance to talk to peers and vExperts and share “war stories” with regards to vRealize Operations Manager.

What has become a consistent theme in all the stories is just how much compute resource vROps requires when you “go big” – and not just what is clearly defined in the sizing spreadsheet.

For example, some of the large deployments I have either been involved in or heard about (monitoring upwards of 30,000 VMs) required the deployment of large vROps nodes.

A single large node collecting data for a deployment of that size requires the following resources:
  • 16x vCPU
  • 48GB RAM
  • 2TB Storage
  • 1700 IOPS
As you can see, a single node is not to be sniffed at – and if you require HA, you would need seven of them: a total of 112 vCPUs and 336GB of RAM. When you consider the IOPS required per node, we are already well into SSD/flash territory, so that will also need to be considered. Another important issue that has come up is that vROps really does need (at this scale) a 1:1 ratio of pCPU to vCPU; anything else and it has been seen to behave erratically.
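Those per-node requirements multiply up quickly. A quick tally for the seven-node HA scenario, using the node specification listed above:

```python
nodes = 7  # large vROps nodes required for HA at this scale
per_node = {"vCPU": 16, "RAM_GB": 48, "storage_TB": 2, "IOPS": 1700}

# Total cluster footprint is simply the per-node spec times the node count
totals = {resource: amount * nodes for resource, amount in per_node.items()}
print(totals)  # {'vCPU': 112, 'RAM_GB': 336, 'storage_TB': 14, 'IOPS': 11900}
```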

Then you will need to consider things like DRS affinity and/or anti-affinity rules so as not to have your nodes ever sharing a host. Resource pools would also need to be considered.

With all the above to consider, vROps is no longer just another monitoring tool and, in my opinion, it should be treated like a tier one application (even if it’s in your management cluster). I know of many businesses and organisations that are now extremely dependent on the alerting, capacity planning and other features vROps brings to the table. It has become the hub of a massive quantity of data, and with more features and functions being added with each release this will only increase.

With all that’s being thrown at it, and with that only set to increase, vROps (like a diva) will need and demand special attention when it comes to planning, deployment and day-to-day running.

About the Author

Simon Eady joined the Xtravirt consulting team in October 2014. As well as contributing to the Xtravirt blog, Simon blogs on his own site at www.definit.co.uk.

If you would like any assistance with a VMware vROps project or simply want to learn more about it, please contact us, and we’ll be more than happy to use our real world experience to support you.

vRA Enterprise Level Distributed Installation

Recently I was fortunate enough to design and build an enterprise level distributed installation of the vRealize Automation suite of products and integrate it into an enterprise environment. I’ve done several vRA/vCAC deployments before but each time I do a … [More]

Recently I was fortunate enough to design and build an enterprise level distributed installation of the vRealize Automation suite of products and integrate it into an enterprise environment. I’ve done several vRA/vCAC deployments before, but each time I do a new deployment I like to collate information, read all the latest articles and make sure what worked for me in the past hasn’t changed – or, more likely, has been enhanced – so I can provide an even better deployment.

For those unsure of what an enterprise distributed deployment comprises, I have added a logical diagram below.

My current deployment was based on vRealize Automation 6.1, due to it being part of a Hybrid Cloud deployment, but the architecture and layout are exactly the same for 6.2. (Note: this architecture was defined after collecting customer requirements based on the number of workloads, NSX load balancing and the requirement for application services – so make sure you have reasons for your design decisions.)


For the resources I used, some are ones I used in the past to learn how to do an enterprise deployment and some are ones I re-read prior to this deployment. I have listed them below, to save me looking for them again but also to maybe help other people.

NB: When importing the certificate into the appliances, remember to remove the bag attributes at the beginning of the PEM file and start from -----BEGIN CERTIFICATE----- through to -----END CERTIFICATE-----.

NOTE: VMware no longer recommend using an external Postgres database. The 6.2 documentation has been updated to reflect this.


Along the way I hit a few errors and spent a fair bit of time with VMware support on a few of them. The main ones are listed below.

Gregg Robertson joined the Xtravirt consulting team in December 2011. As well as contributing to the Xtravirt blog, Gregg blogs on his own site at www.thesaffageek.co.uk. If you’d like any assistance with a virtualisation project or simply want to learn more about how Xtravirt can help your organisation, please contact us, and we’d be more than happy to use our real world experiences to support you.

Public CA certificates with Internal Server Names & IP Addresses

While working on a recent engagement I had a discussion with a customer’s Architect about how we would issue certificates for a vSphere, vRA & vROPS deployment. The customer had no internal CA and relied instead on a public CA … [More]

While working on a recent engagement I had a discussion with a customer’s Architect about how we would issue certificates for a vSphere, vRA & vROPS deployment. The customer had no internal CA and relied instead on a public CA to issue all certificates that would be user facing.

This simplified the management of the certificates and meant they did not need to maintain an internal PKI or root certificates on client devices. I explained that while this currently worked for their servers, which used internal names or reserved private IPs, it would soon change, and they would need to look at deploying their own PKI.

As of 1 November 2015, public Certificate Authorities like Symantec and GlobalSign no longer issue certificates with a subjectAltName extension or Subject commonName field containing an IP address within the IPv4 RFC 1918 reserved address space or an IPv6 address in the RFC 4193 range:
  • 10.0.0.0/8
  • 172.16.0.0/12
  • 192.168.0.0/16
  • FC00::/7 (IPv6)

This is also the case for internal names. An Internal Name is a Common Name (CN) or Subject Alternative Name (SAN) field of a certificate that does not end with a valid Top Level Domain (TLD) – for example, names ending in .local or .internal. CNs or SANs which end with a valid TLD, e.g. .com or .net, will still be valid.

This also affects certificates which use NetBIOS names or short hostnames, e.g. vCenter01, WebServer, Beeblebrox.

Any such certificate which expires after 1 November 2015 will not be reissued, and after 1 October 2016 any certificates which are still valid will be revoked by the issuing CAs and will no longer work as valid certificates.
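A rough way to check whether a given subject name or IP would still be publicly issuable under these rules. This is a sketch using Python’s standard library: the TLD list is an illustrative subset, and real CA validation covers many more cases:

```python
import ipaddress

PUBLIC_TLDS = {"com", "net", "org"}  # illustrative subset only

def publicly_issuable(name):
    """False for RFC 1918 / RFC 4193 addresses and internal names."""
    try:
        ip = ipaddress.ip_address(name)
        return not ip.is_private   # covers 10/8, 172.16/12, 192.168/16, fc00::/7
    except ValueError:
        pass                       # not an IP address; treat as a DNS name
    if "." not in name:            # NetBIOS / short hostname
        return False
    tld = name.rsplit(".", 1)[1].lower()
    return tld in PUBLIC_TLDS

print(publicly_issuable("192.168.1.10"))       # False - RFC 1918 address
print(publicly_issuable("vCenter01"))          # False - short hostname
print(publicly_issuable("vc.corp.local"))      # False - internal TLD
print(publicly_issuable("portal.example.com")) # True  - public name
```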

This is not just a VMware issue and will impact all servers using certificates described above. However, if you are affected by this issue in your VMware environment, VMware have posted a Knowledge Base article which covers the issue. Click here to go to the article.

About the author

Matthew Bunce joined the Xtravirt consulting team in May 2015. As well contributing to the Xtravirt blog, Matthew blogs on his own site at www.virtualisedgeek.com If you’d like any assistance with a virtualisation project or simply want to learn more about how Xtravirt can help your organisation, please contact us , and we’d be more than happy to use our real world experiences to support you.

Nvidia GRID 2.0 – 3D Acceleration on Horizon View gets even more Ooomph!

In the case of 3D acceleration, Nvidia have made two significant advances. The first is GRID, and the second is the new Tesla M60/M6 GPU adapters. This blog post looks at these and my own experiences implementing them in a … [More]


Back in July 2014, my illustrious colleague, Steve Dunne wrote about a Proof of Concept on his experiences with Nvidia based GPU accelerated graphics in a VMware Horizon View environment (see Horizon View 3d a client engagement). But as with all things in the wonderful world of technology, things move on.

In the case of 3D acceleration, Nvidia have made two significant advances. The first is GRID, and the second is the new Tesla M60/M6 GPU adapters. This blog post looks at these and my own experiences implementing them in a proof of concept.

What is GRID?

Originally, it referred to the combination of a software management layer and the GRID K1 or GRID K2 GPU boards. These were released with support on vSphere 5.1 and 5.5 allowing VDI desktops to be provided with graphics acceleration. Initially, this operated in two forms, vDGA (where a GPU is dedicated to a single VM) or vSGA (where the VMware driver offloads to the Nvidia adapter).

These both had limitations. vDGA is fast as a thief, however the VM is hard-pinned to a host and it’s a 1:1 mapping of VM to physical GPU, impacting scaling considerably. vSGA was great for scaling, but didn’t provide great performance – suitable for lightweight use cases wanting some graphics – a better Aero interface or browser rendering.

The GRID software layer adds a third option that provides more flexibility. The GRID software is installed (as a VIB on the ESXi host) and presents the capability of adding a vGPU to a VM. When applied to a VM, the administrator can select a Profile. These profiles are largely based around how much video RAM is assigned to the VM. With GRID 2.0 however, this functionality also expands to features too.

For example, in 2.0, there are Quadro validated profiles specifically optimized for CAD and Business profiles for more general purpose. This is akin to buying a Quadro adapter or a Geforce adapter for a physical machine.

In GRID 2.0, when paired with the M60 adapter the profiling is both flexible and powerful. It provides the ability to share resources between VMs, while still offering the capability of high end performance – essentially vDGA – but without the complex configuration of pass-through. Unlike vDGA, a GRID VM can be cold migrated to a second host relatively easily.

New Hardware - M6 & M60

Nvidia recently released two new adaptors based on their Tesla GPU core. The M6 is an MXM formatted mezzanine card designed for blade servers while the M60 is a full PCIe card for traditional servers. The M6 has a single GPU while the M60 has two, larger GPUs.

So, how do they compare?

                     GRID K1     GRID K2      Tesla M6   Tesla M60
GPUs (CUDA cores)    4 (4x192)   2 (2x1536)   1 (1536)   2 (2x2048)
VRAM                 4x4GB       2x4GB        1x8GB      2x8GB
vDGA users           4           2            1          2
GRID version         1.0         1.0          2.0        2.0
GRID 1GB profile     16          8            8          16
GRID 2GB profile     8           4            4          8
GRID 4GB profile     4           2            2          4
GRID 8GB profile     -           -            1          2
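The profile counts in the table follow directly from dividing each GPU’s VRAM by the profile size, since each physical GPU is carved up independently. A quick sketch of the arithmetic:

```python
def vgpu_users(vram_per_gpu_gb, gpus, profile_gb):
    """Max vGPU users per card: whole profiles per GPU, times GPU count."""
    return (vram_per_gpu_gb // profile_gb) * gpus

# Tesla M60: two GPUs with 8 GB of VRAM each
print(vgpu_users(8, 2, 1))  # 16 users on the 1GB profile
print(vgpu_users(8, 2, 4))  # 4 users on the 4GB profile
print(vgpu_users(8, 2, 8))  # 2 users on the 8GB profile
```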

Note that the 8GB Quadro profile on GRID 2.0 also exposes CUDA and OpenCL allowing the card to be used for compute acceleration too – this might be useful outside of VDI provisioning…

Oh, and GRID 2.0 adds 4K display support too – up to four monitors, depending on the profile.

Onto the PoC...

I was fortunate enough recently to be involved with a Proof of Concept using the Nvidia Tesla M60 on Cisco UCS C240M4 hardware. This isn’t officially supported until Q1 2016, but working with Cisco and Nvidia, we stood up what will be essentially the certified configuration (complete with C240 specific cabling, heatsink and air flow baffles and beta release firmware) – quite literally bleeding edge!

All of this was going to host VMware vSphere 6.0 Update 1 and VMware Horizon View 6.2.

The intention of the PoC was to test a couple of CAD applications for suitability. The test was to include presentation on both Horizon VDI desktops as well as leveraging the new support for 3D accelerated RDSH based applications and desktops.

Installing the infrastructure

Installing the infrastructure side of the solution was pretty straightforward: a build out of vSphere and a small Horizon View environment. Our M60 equipped host was placed in its own cluster.

Laying out the GRID

The next step was acquiring and installing the software components for GRID. Normally, Nvidia software would be acquired through the regular Nvidia.com downloads site, however GRID 2.0 is different. You need to go through a registration process first. Once you’ve done this, you need to download three items:
  • The Nvidia GPU Mode Switch tool
  • NVidia GRID 2.0 software package
  • Nvidia GRID 2.0 License server.

GPU Mode Switch

The Tesla cards can operate in one of two native modes – Compute and Graphics. The default is the former, but for vSphere, we need the latter. To switch modes, we use the GPU Mode Switch tool. The GPU mode switch tool for a VMware environment is provided on an ISO image that boots into a Linux shell. We boot the host from the ISO image and run the following:
  • To check the current mode run gpumodeswitch --listgpumodes
  • If the mode needs to be changed to graphics, run gpumodeswitch --gpumode graphics

Then reboot the server into ESXi. You’re then good to go on installing the VIB containing the drivers and management software for GRID on ESXi.

Installing the VIB

This is relatively easy. Nvidia provide the VIB package as one of the downloads. You then need to upload this to the ESXi host. I find the easiest method is to upload the VIB to a vSphere datastore using the vSphere client.

From there, you need to use either SSH (or access via the server console) to reach the ESXi command line. SSH will need enabling on the host first and, as we’re all good, security conscious boys and girls, disabling when we’re done. Oh, and put the host into maintenance mode!

We use esxcli to install the VIB:
esxcli software vib install -v //.vib

By default, vSphere balances vGPU resources across GPUs (breadth-first). However, if you mix VMs with different GRID profiles, this may prevent a VM from starting. For example, say a pair of VMs are running with a 1GB profile on a host with a (2x GPU) M60 card, and each GPU (with 8GB VRAM) ends up hosting one of them. A third VM with the 8GB profile then can’t start, as each GPU only has 7GB available. To change the allocation so that each vGPU resource is filled by depth instead, there’s a hidden ESXi host setting: edit /etc/vmware/config and add the following:
vGPU.consolidation=true
Then we reboot the host and take it out of the maintenance mode.
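The breadth-versus-depth behaviour described above can be illustrated with a toy placement model (purely illustrative; this is not how the vSphere scheduler is implemented):

```python
def place(vms_gb, gpus_gb, breadth_first=True):
    """Place vGPU profile sizes onto GPUs; returns the VMs that fail."""
    free = list(gpus_gb)
    failed = []
    for vm in vms_gb:
        # breadth-first: try the emptiest GPU; depth-first: the fullest that fits
        order = sorted(range(len(free)), key=lambda i: free[i],
                       reverse=breadth_first)
        for i in order:
            if free[i] >= vm:
                free[i] -= vm
                break
        else:
            failed.append(vm)
    return failed

m60 = [8, 8]  # two 8 GB GPUs on one M60 card
print(place([1, 1, 8], m60, breadth_first=True))   # [8] - the 8GB VM fails
print(place([1, 1, 8], m60, breadth_first=False))  # [] - all three placed
```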

Prep the Nvidia License Server

This is a GRID 2.0 element that adds little technical benefit to the solution, but is nonetheless a requirement of Nvidia. In their wisdom, they sell GRID 2.0 at differing levels of licensing based on feature set (unlike 1.0, which simply worked on the principle that you got everything with the board).

Regardless, it requires a relatively humble VM running either Windows (7/2008 or later) with a copy of Java, or Linux. Installation on Windows is a classic setup executable wizard. One thing to note: the wizard asks about allowing access through the Windows Firewall. The application uses port 7070 for the licensing function – allow this one through. The other is port 8080, for management – don’t allow this one through, as the management page has no security, so you only want to access it from the local OS itself.

Once it’s installed, you need to register the license server on the Nvidia portal – you’ll need the MAC address of the license server for this. Once registered and the licenses applied, you download a BIN file. This must then be registered on the license server within 24 hours of generation.

Configuring the VM for View and 3D acceleration.

Again, pretty straightforward for most desktop OSes, but a little trickier for RDSH hosts. For a regular desktop OS, such as Windows 7, build a VM as before: install the OS, VMware Tools and the View agent, then optimise.

One extra thing you’ll need at this point - install either the VMware Horizon View Direct Connect agent or VNC Server within the VM. You’ll need this because later you’ll need to configure the Nvidia Control Panel – but once the Nvidia drivers are installed, you can’t use the console via the vSphere client and RDP won’t work as it uses its own WDM device driver for video. You’ve been warned!

At this point, the cool stuff starts.

First, we shut down the VM. We edit the settings of the VM using the vSphere Web Client (yes – I said the Web Client – the old client won’t help here) and add a Shared PCI Device. This will show up as NVIDIA GRID and allow you to select a profile as required.

OK this and power up the VM.

At this point, if you log on to the PC and check Windows Device Manager, you’ll see an extra graphics board – probably using the Windows generic adaptor.

We install the GRID-specific Nvidia drivers (again, from the GRID portal). This is pretty much the same process as with a traditional desktop with an Nvidia board. Now we reboot the VM.

As stated above, you can’t use the vSphere console – so log on via VNC or Horizon View Direct connect. At this point, you need to register the VM with the license server. This is done via the Licensing tab on the Nvidia Control Panel.

We can now use this VM in Horizon View, including as a template for linked clones if we wish. One thing to note is that PCoIP is mandatory when setting up your pool! RDP just won’t cut it, as it’ll use its own drivers instead of Nvidia’s.

There are a couple of catches when deploying RDSH servers. The easy one: when you install the View Agent on your RDSH server, make sure you select the 3D RDSH option. The next is only really an issue on Windows 2012. At the last stage – licensing the server for Nvidia – Horizon View Direct Connect considers you to be logged in via Remote Desktop, even if the protocol is PCoIP. As such, the Nvidia Control Panel will not function. Use a VNC server instead.

Closing thoughts

As users demand more and more graphics horsepower from VDI, the proliferation of GPU solutions is likely to increase, hopefully bringing more players into the market and impacting the currently rather high costs. AMD are already starting to offer support for vSGA and vDGA on their FirePro cards – hopefully, they’ll offer an equivalent to GRID in the future.

With respect to GRID 2.0, my opinion is somewhat positive. It’s powerful and relatively straight forward to set up, though the need with GRID 2.0 to deploy the licensing solution adds an otherwise unnecessary layer of complexity and a potential point of failure.

Suffice to say though, if Santa stuffed a Tesla M60 into my no doubt copious pile of Christmas presents, I would be far from unhappy.

If you’d like to learn more about VMware Horizon View, 3D acceleration or VDI, please contact us and we’d be more than happy to use our real world deployment experiences to help you.

A Tour of vSphere 6.0 Update 1 Platform Service Controller UI

One of the glamorous new features of VMware vSphere 6.0 Update 1 is the new web interface for the Platform Service Controller (PSC) component of vCenter. This is VMware’s attempt at making the PSC a little more user friendly. This … [More]


One of the glamorous new features of VMware vSphere 6.0 Update 1 is the new web interface for the Platform Service Controller (PSC) component of vCenter. This is VMware’s attempt at making the PSC a little more user friendly. This blog article is an exploration of this interface.

To access the PSC, you’ll need:
  • a local account for vCenter (for example, administrator@vSphere.local), and
  • a web browser (because that’s the future…).
You point the browser at https://(vcenter)/psc and log in. This works regardless of whether you have an embedded vCenter solution (vCenter server and PSC on one server) or go for separate servers, though obviously the URL is a little different! This is what you’ll see. Pretty, isn’t it?
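If you’d like to script a quick availability check of that URL before opening a browser, a small sketch follows. The host name is a hypothetical placeholder, and certificate verification is disabled because self-signed certificates are common here:

```python
# Check whether the PSC UI endpoint answers (non-5xx) over HTTPS.
import ssl
import urllib.request

def psc_reachable(vcenter_fqdn, timeout=5):
    """Return True if https://<vcenter>/psc answers with a non-5xx response."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # self-signed certs are the norm here
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with urllib.request.urlopen(
            f"https://{vcenter_fqdn}/psc", timeout=timeout, context=ctx
        ) as resp:
            return resp.status < 500
    except OSError:
        return False                    # DNS failure, refused, timed out, etc.

print(psc_reachable("vcenter.example.local", timeout=3))
```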

Single Sign On

We can carry out some user administration and configuration – this could already be done from within vCenter itself, and the new GUI is no different. So, as before, we can set up identity sources and configure policies.

Managing the VMCA

The new element is the Certificates section which provides some tools to aid managing the VMCA – the built in Certificate Authority.

If you’re replacing the certificates with properly signed certificates to make the VMCA an intermediate certificate authority (which is better than leaving it at the default), you can replace the root certificate by going through Certificate Authority > Root Certificate. You’ll still need to go through the process of creating a CSR and editing the certificate, as would have been the case with the old process. (Take a look at the VMware vSphere 6 documentation for this.)

After this, you renew the machine certificate for vCenter by going to Certificate Management > Machine Certificates, selecting the __Machine_Cert and hitting Renew.

If you’ve got any solutions installed, the Solution User Certificates tab can be used to renew these also.

By managing certificates on the PSC before adding any other components, you don’t need to change anything else – the PSC will issue trusted certificates to requesting components from here on out.

Last but not least

Finally, there’s a section called Appliance Settings. Obviously, this is focused on the vCenter appliance and provides both administration of Active Directory domain membership…

… and access to the link to the actual Appliance settings interface, accessible separately at https://(vCenter Appliance):5480/login.html.

So, overall, a useful feature and a big improvement on the initial release.

If you’d like any assistance with a virtualisation project or simply want to learn more about how Xtravirt can help your organisation, please contact us, and we’d be more than happy to use our real world experiences to support you.

The VCDX Journey – Gregg Robertson

Xtravirt Senior Consultant Gregg Robertson has recently achieved VMware® Certified Design Expert (VCDX) certification, becoming VCDX #205. With only 213 VCDXs worldwide, Gregg has joined an elite group of world-class architects. In the specialist area of Data Center Virtualization (VCDX-DCV), … [More]

Xtravirt Senior Consultant Gregg Robertson has recently achieved VMware® Certified Design Expert (VCDX) certification, becoming VCDX #205. With only 213 VCDXs worldwide, Gregg has joined an elite group of world-class architects. In the specialist area of Data Center Virtualization (VCDX-DCV), Gregg is 1 of only 12 people in the UK to hold this certification.

In this blog, we find out more about Gregg’s journey to this outstanding achievement.

How did you get into using VMware?

I started using VMware at Conchango, my first company after moving to the UK, having become interested in what one of my colleagues was doing with VI 3. I would do my normal desktop support work and then ask him if I could help build machines in my spare time, as I really found virtualisation exciting. We were then bought out by EMC, and from then on I spent more and more time learning VMware, using all the new study resources being part of EMC afforded me in order to skill up. Soon my whole job was looking after two global development environments built on VMware in the US and UK, and I really got into the whole VMware community via social media, my blog and the VMware communities, of which I am now a moderator. Soon after, I decided to become a consultant to up my game on VMware and supporting technologies.

What made you decide to do the VCDX?

The VCDX was always something I dreamt of doing and was one of the reasons I decided I needed to leave EMC and join Xtravirt as I felt I needed consultancy experience to be able to obtain the VCAP exams and then have the experience to submit and defend for the VCDX.

How long was your VCDX journey?

I guess this depends on where you think it really began as I could say from the early VCP3 days but when I decided I was going to realistically aim for the VCDX I would say three years. This did include an unsuccessful defence in April 2014.

What advice would you give to others thinking of embarking on this journey?

There are quite a few things, but mainly I would say:
  • If you feel your current role won’t allow you to learn and grow enough to be on a level to submit and defend for it then maybe look for a new role either within your company, if possible, or even a role in a new company.
  • Set out a realistic plan of when you are planning to do the required VCP and VCAP/VCIX exams and time to build your design or amend/enhance an existing design.
  • Make sure you learn from every person on your team about what they are doing for each portion of the environment as the VCDX isn’t just about being a VMware SME but also knowing how the supporting technologies and solutions connected to the VMware solution impact the design.
  • Don’t stop preparing, even after submitting.
  • Leave nothing in the tank, but don’t burn out. This comes back to good planning and finding a realistic balance.
  • Take a week of annual leave before the defence (last time I worked the day before and day after my defence).
  • The panel are your peers and you belong in that room. Imagine it’s like you are explaining your design to colleagues who are interested in a previous project you worked on and they’re asking questions to better understand why you chose things.
  • Join a study group with people who are as motivated as you and bounce ideas off each other.
  • Get a mentor. There is a search field on vcdx.vmware.com to enable you to search out mentors. Also, if possible, choose people who you know personally as this makes the process of liaising with your mentor much easier.
  • The mentor isn’t there to give you the “answers”; they are there to push you to better yourself and to point out where you may need to research more.

If you were to go for another VCDX, what would you do differently?

At present I’m taking a bit of a break, as I need to skill up on vSphere 6 and vRA 7, which were put on the back burner whilst I was aiming for my VCDX (based on vSphere 5.0). But I am drawn to submitting for my double VCDX – more specifically the VCDX-CMA – possibly utilising the vRA design I created for my current engagement, which I’ve been working on for the last year. But I need to allocate some time to this that doesn’t impact family time or my sanity.

As for prep, if I had failed I’m not sure what else I could have done. I guess there are always more books to read, blogs to read and CBT videos to watch, but largely I think I went all in this time.

How has life after becoming a VCDX been for you?

My weekends and evenings are certainly much more freed up, which I’ve really enjoyed as it’s allowed me to spend time with my wife and my two-and-a-half-year-old daughter. I only found out I passed recently, so it’s still sinking in, and it’s far too early to tell what impact it may have on my life.

Overall, has the journey been worth it?

Without question it has been worth it – it has pushed me to become a better architect and consultant, and to continually strive to be better. I would recommend the journey to anyone willing to consistently go for it.

Any final words of advice?

Just one, for people who failed the VCDX or even the supporting exams: I failed the VCAP-DCD the first time and failed my VCAP4-DCA twice, so don’t feel bad about failing. Learn where you were weak and try again. It’s a cliché, but it’s true that it is about the journey – you have to take failures as a lesson, regroup and go at it again. One of the first things I mention in my VCDX blog posting is starting early and setting a timeline for when you want to defend. Also, for those who fail the VCDX the first time, I know it’s painful, but there are some big names who failed first time (I’m not meaning me here) who are now double VCDXs.

Gregg Robertson joined the Xtravirt consulting team in December 2011. To read more about Gregg’s VCDX journey and his other blogs visit his site at www.thesaffageek.co.uk. If you’d like any assistance with a virtualisation project or simply want to learn more about how Xtravirt can help your organisation, please contact us, and we’d be more than happy to use our real world experiences to support you.

The VCDX Journey – Sam McGeown

Xtravirt Senior Consultant Sam McGeown has recently achieved VMware® Certified Design Expert (VCDX) certification, becoming VCDX #204. With only 213 VCDXs worldwide, Sam has joined an elite group of world-class architects. In the specialist area of Cloud Management Automation (VCDX-CMA), … [More]

Xtravirt Senior Consultant Sam McGeown has recently achieved VMware® Certified Design Expert (VCDX) certification, becoming VCDX #204. With only 213 VCDXs worldwide, Sam has joined an elite group of world-class architects. In the specialist area of Cloud Management Automation (VCDX-CMA), Sam is 1 of only 5 people in the UK to hold this certification.

In this blog, we find out more about Sam’s journey to this outstanding achievement.

How did you get into using VMware?

I used to manage the IT and web servers for a charity, so the budgets were extremely tight – I had one physical server for development to replicate the live IIS and MSSQL environment and I stumbled across VMware Server. It was like magic – two servers running on one! Later that became a stand-alone ESX server and I went on from there!

What made you decide to do the VCDX?

I was never sure that I could defend the VCDX! I did my first VCAP (the DCA) in August 2013, which is the first real step on the path to VCDX, and I did the DCD a few months later at VMworld. Once I had those under my belt I started to feel a bit more confident, and that maybe it wasn’t unobtainable!

How long was your VCDX journey?

Somewhat foolishly, I swapped track from datacenter to cloud at the beginning of 2015. I had been working on a DCV design, but it wasn’t great and would’ve required a lot of fictitious components. A huge vCloud Director project landed and it was perfect for VCDX, so I started studying for the cloud VCAP exams. Then the CIA and CID were retired with no replacement, and I was left hanging for a few days before VMware announced they would waive the VCAP requirements for anyone submitting for the CMA.

The project I used started in January 2015 and ran for about eight weeks; it was finished with two weeks to go before the submission deadline for the June defence that year – I managed to submit it, but it was a rush! I failed that first defence and spent a bit more time preparing for the second attempt in October, which I then passed – so, on the face of it, 10 months.

What advice would you give to others thinking of embarking on this journey?

Just do it! Don’t put VCDX on a pedestal – it is achievable!
  • Read the blueprint…repeatedly – it tells you everything you need to cover
  • Build a small study group of people who you can meet with regularly – online is fine – and review, practice and study together.
  • Get regular input from a VCDX mentor – they’ll help keep you on track and discover strengths and weaknesses.
  • Don’t wait to find out if you are invited to defend – start working towards it as soon as you’ve submitted.
  • If you get invited to defend don’t just practice your presentation – practice the design and troubleshooting scenarios too.
  • Talk to your partner/wife/husband/family and make sure they are with you – you will need their support and their patience!

If you could do the whole VCDX journey again what would you do differently?

That’s a tough one – as you say, it’s a journey, so the whole experience builds toward the end goal. I think I needed the experience of the first defence to be able to pass the second. I did rush my first submission, but I don’t think it would’ve made any difference if I had waited and taken my time.

I think I should’ve engaged earlier with a study group on my first attempt, but I honestly don’t know if that would’ve helped me pass first time.

How has life after becoming a VCDX been for you?

Ask me in 6 months? It’s a bit too new to really say, it hasn’t really sunk in for me yet!

Overall, has the journey been worth it?

The journey has been hugely rewarding – I am a far better architect now than I was at the start. On a personal level, setting huge targets and then achieving them is a massively rewarding process – I think it gives you a huge amount of confidence. Dealing with the failure of the first defence was tough, really tough, but moving past it, trying again, and succeeding – well, that was flipping awesome!

Sam McGeown joined the Xtravirt consulting team in January 2014. To read more about Sam’s VCDX journey and his other blogs visit his site at www.definit.co.uk. If you’d like any assistance with a virtualisation project or simply want to learn more about how Xtravirt can help your organisation, please contact us, and we’d be more than happy to use our real world experiences to support you.

VMworld Europe 2015: Not all Sun and Sangria in Barcelona

Although a somewhat belated Blog post (well, I do have a day job at Xtravirt as well!), this one covers our team of intrepid techies visiting VMworld 2015 Europe – once again at the Gran Via conference centre in Barcelona.



VMware Photon – Delivering Docker through vSphere and beyond

One of the big pitches at VMworld this year was the implementation of Docker in the VMware ecosystem. Broadly speaking, Docker is an approach to containerising discrete applications for rapid development and delivery. Its heritage is predominantly in the world of Linux; however, the concept is an extension of virtualisation. You create a Docker container (typically a tiny OS footprint and an application) and deploy it as required – each can be changed, destroyed etc. as needed.
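As a purely illustrative sketch of that idea, a minimal Dockerfile pairs a small base image with a single application – the `photon` tag and `myapp` binary here are hypothetical stand-ins, not something from the article:

```dockerfile
# A tiny OS footprint plus one application: the essence of a container.
FROM photon:latest
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Build it once, then create, destroy and redeploy instances of it as needed.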

Trouble is, up until now, this typically meant a Linux server (or VM) had to be deployed to support Docker, which in turn limited its scale-out capability. This is where VMware’s new play steps in. By leveraging VMware vSphere VMs essentially as Containers, we gain all of that scaling and resilience goodness that vSphere brings with the capabilities of Docker to deliver applications.

VMware approach this from a number of angles.

Firstly, there’s the ability for traditional vSphere to deliver Docker – this is intended for environments that may not be so large and wish to host both traditional VMs and Docker containers. This is leveraging the new vSphere 6.x VM forking capability to deliver our containers.

Next, we have the VMware Photon OS – a small footprint Linux OS (25MB in size) designed for container hosting. Although usable on traditional Docker solutions, it’s optimised for vSphere.

The big new thing though is the VMware Photon Platform. This is a highly customized ESXi stack specifically geared for delivery of Containers. This will include the Photon Controller as a management tool. It’s aimed purely at delivery of Cloud applications.

They had a rather amusing demo of VMware’s Docker handiwork with containers running MS-DOS hosting the classic game “Prince of Persia”.


There was also some discussion on VMware EVO SDDC. This takes the existing VMware EVO approach from the original EVO Rail concept and scales it up to a complete turnkey Software Defined Datacenter, leveraging vSphere, NSX, VSAN and other VMware technologies. Furthermore, VMware now provide a site with validated designs for this - http://www.vmware.com/software-defined-datacenter/validated-designs.html

End User Compute

On the EUC side of the fence, there were some pretty good discussions around Horizon. With the recent release of App Volumes 2.9 and Horizon View 6.2 with its increasing support for Remote Desktop Session Host (RDSH), there was a heavy emphasis on application delivery in VDI.

In the case of the latter, RDSH is maturing nicely, with feature-parity with VDI desktops – particularly with Cloud Pod Architecture support, HTML5 access, vDGA/GRID 3D graphic support and deployment using Composer.

App Volumes 2.9 now supports connections to multiple vCenter Servers and the delivery of App Stacks to physical endpoints. This is potentially a big deal – how will this impact Horizon Mirage App Layers? (I feel a bit of testing in the lab on this might be in order!). Apparently, Windows 10 support will arrive with the aptly named App Volumes 2.10.

Speaking of 3D support, Nvidia ran a particularly interesting session on the next generation of their GRID technology based on the new Maxwell core. Essentially, this is a big bump in performance over the older Kepler GPU, offering greater capacity for accelerated 3D in VMs.

Nvidia offer the Maxwell GPU in an MXM card form factor so permitting its use in certified blade servers – a popular form factor for larger estates. Horizon 6.2 adds support for hardware accelerated graphics for Linux desktops as part of the Horizon for Linux push.

So, to conclude….

A worthwhile expedition, with much knowledge gained as well as some insight as to VMware’s future direction. However, I’ll put my hands up and be honest - the networking with other colleagues in the industry, particularly in the evenings, was pretty good too.

If you’d like any assistance with a virtualisation project or simply want to learn more about how Xtravirt can help your organisation, please contact us, and we’d be more than happy to use our real world experiences to support you.

Horizon View 6 PCoIP – WAN, Limited Bandwidth, Optimise, Tune

Earlier this year, I was engaged for a couple of days, to help a customer pilot the PCoIP protocol using Horizon with View 6, with the primary driver to deliver multimedia and video across WAN links. This blog provides a … [More]

Earlier this year, I was engaged for a couple of days, to help a customer pilot the PCoIP protocol using Horizon with View 6, with the primary driver to deliver multimedia and video across WAN links.

This blog provides a summary of my findings including tools, tweaks, tips and resources I used.

Whilst this is an excellent use case for PCoIP, the customer’s requirements and constraints were going to test PCoIP’s capabilities to the full. Also, however well we reason with the customer and set out reasonable expectations with these constraints in mind, there’s always demand for the technology to do more. As expected, PCoIP was being evaluated against other protocols, using RDS sessions (2008 R2) rather than full Windows 7/8 desktops. Endpoint devices were a mix of new Dell Wyse thin clients.

The customer was preparing to test PCoIP from the following locations, connected by MPLS:
  • India – 150ms RTT and 10Mbps link
  • Canada – 75ms RTT and 200Mbps link
No further information regarding expected concurrent users or current link utilisation was available during this short engagement.

By using MPLS, at least the PCoIP protocol doesn’t have to traverse the internet and go through numerous additional hops. This provides additional benefits, such as reduced latency and access to the devices across the WAN from the service provider.

In addition, there was a requirement to identify the ‘lowest point’ that could deliver video playback with ‘acceptable’ performance. This doesn’t mean smooth, perfect or flawless playback – just acceptable enough to the end user, with a consistent experience, which is essential.

Testing had been carried out previously with another protocol and delivered very good video (to my eyes) at 384Kbps, which I was impressed with to say the least.

WAN Testing and Simulation

Initial testing was carried out using WAN simulation software – the SoftPerfect WAN emulator – which the customer had purchased in advance.

The conditions were set with the RTT latency and bandwidth for each location, and I verified the tool’s rough accuracy using ping commands to check latency, plus Speedtest.net. The tool was fairly accurate based upon several results, although this was the first time I’d come across it. We didn’t induce packet loss or random packet ordering; we kept the testing simple and looked to real WAN testing to discover this information.
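The emulator sanity check described above can be sketched as a simple drift comparison – the tolerance and ping samples below are hypothetical:

```python
# Compare the emulator's configured RTT with measured ping RTTs.
def emulator_accurate(configured_rtt_ms, measured_rtt_ms, tolerance_pct=15):
    """True if the mean measured RTT is within tolerance of the configured RTT."""
    mean = sum(measured_rtt_ms) / len(measured_rtt_ms)
    drift_pct = abs(mean - configured_rtt_ms) / configured_rtt_ms * 100
    return drift_pct <= tolerance_pct

# India profile: 150ms configured, four hypothetical ping samples
print(emulator_accurate(150, [148, 153, 151, 149]))
```

If the check fails, the emulator settings (or the underlying network) need a second look before any PCoIP results are trusted.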

PCoIP Settings

The following PCoIP settings were identified and tuned to find the optimal experience. I’ve added some notes on the PCoIP behaviour I observed during testing at 384kbps.

Note: View 6.0 has introduced new PCoIP defaults to provide further optimization out of the box and specifically for WAN environments. These have been highlighted in the table.

Note: A number of these settings are dynamic and it’s useful to change these settings whilst a PCoIP session is running, and then monitor (visually) the changes from the PCoIP session.

  • PCoIP Maximum Bandwidth Limit – Sets a limit on the bandwidth a PCoIP session can use. Default: 90000kbps
  • Build-to-lossless – Builds the image to a completely lossless, pixel-perfect state. Only required in special use cases; disabling this can greatly reduce bandwidth demands. Default: Disabled (previously Enabled in View 5.x)
  • PCoIP Maximum Image Quality – A lower initial maximum image quality reduces the bandwidth required, at the expense of image quality. Default: 80 (reduced from 90)
  • PCoIP Minimum Image Quality – Trades off display image quality against the display frame update rate. Default: 40 (reduced from 50)
  • Frame Rate Limit – Sets a limit on the display update rate. Can reduce bandwidth, at the cost of smooth motion. Default: 30
  • Audio Limit – Configures audio compression; the resulting audio bandwidth will be near or below the limit. Default: 500kbps

PCoIP Tuning – Observations

The PCoIP ADM templates, downloaded via the Horizon Extras Bundle, were imported and applied locally on the RDSH server.

PCoIP Maximum Bandwidth Limit

  • If the use case is a bandwidth constrained environment, configure this setting with the bandwidth limit in mind to prevent the PCoIP session trying to burst beyond the available link bandwidth, which will degrade performance and likely cause packet loss and poor user experience.
  • For example, if the link is 384kbps, configure this setting as 384kbps
  • Ideally you wouldn’t limit this too much because PCoIP is a ‘bursty’ protocol and likes to use available network bandwidth to increase performance.
  • It’s not recommended to multiply this limit by the number of concurrent users expected. Better to cap a percentage of the link.
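One way to sketch that sizing guidance – take a capped share of the link and split it across sessions, rather than handing every session the full link speed – is below. All numbers are hypothetical; real sizing depends on workload and measured usage:

```python
# Derive a per-session PCoIP Maximum Bandwidth Limit from a capped link share.
def session_cap_kbps(link_kbps, concurrent_users, link_share_pct=80):
    """Per-session cap: a percentage of the link, split across users."""
    usable_kbps = link_kbps * link_share_pct // 100
    return max(usable_kbps // concurrent_users, 1)

# e.g. the 10Mbps India link shared by 20 hypothetical concurrent users
print(session_cap_kbps(10_000, 20))
```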

PCoIP Maximum Image Quality

  • This setting has more impact than ‘Minimum Image Quality’, so initial focus and attention should be here.
  • Default of 80 – Can reduce to 70 for WAN environments, however going too low reduces the quality, so it’s a trade-off scenario.
  • Reducing this setting will decrease the bandwidth used and allow the imaging frame rate (FPS) to increase
    • FPS will increase slightly but ultimately more available bandwidth = more FPS

PCoIP Minimum Image Quality

  • Default of 40 – Can reduce to 30 (lowest) or 35 in congested WAN environments
  • I didn’t notice much visual change here or from observing the PCoIP Statistics Viewer graphs.

Frame Rate Limit

  • Default of 30 – If no multimedia is required, this could be reduced to around 12
  • On the LAN with no restrictions (PCoIP max), the embedded videos played in IE would use around 28-29 FPS for flawless playback
  • With the 384 PCoIP Max, reducing this setting to 12, 15 or 18 had little impact in changing the observed FPS numbers, as FPS was dictated by available bandwidth (restricted by the PCoIP Max Session) and Max Image Quality (if this setting was reduced, a slight increase in FPS was observed).

Audio Limit

  • Default of 500kbps – Should be reduced in constrained scenarios to 100kbps-150kbps
  • At a 384kbps bandwidth limit, PCoIP would never increase audio beyond 42kbps
    • If I set the PCoIP audio limit to 75, 100, 150 or 200, it made no difference (still 42kbps)
  • As soon as the bandwidth limit and max PCoIP session increased to 1024kbps, the audio bandwidth used doubled to 80kbps, which was a much more acceptable experience.
  • You can download the Teradici Audio driver and apply this if audio is causing issues, although I didn’t implement this. More information on this area can be found in this blog post.

PCoIP Client Image Cache Size

  • Default setting - the cache is for static content only, rather than video or dynamic content, therefore less effective in this scenario.

PCoIP Transport Header

  • Default (Medium) - The PCoIP transport header allows network devices to make better prioritization/QoS decisions when dealing with network congestion. The transport header is enabled by default.
    • This can be set to ‘Highest’, but it’s not a change I’ve seen recommended before.

PCoIP Monitoring

  • Horizon with View (vRealize Operations Manager), previously vCOPs for View (V4V). See this post
  • Perfmon or WMI counters inside the Windows session
  • PCoIP Session Statistics Viewer – Downloadable from Teradici.com
  • PCoIP Config Tool
    • Although it doesn’t seem to work inside Windows 2008 RDS sessions
  • PCoIP Log Viewer
    • The logs would not parse for View Agent 6.01
I’ve previously always used the last two tools; however, since the creator has moved on from VMware, I can appreciate that these tools may not have been kept up to date for the latest versions.

Instead, I used the PCoIP Session Statistics Viewer tool from Teradici; it’s easy to use and presents the session data in easy-to-consume graphs and charts. This is where you can track the tweaks to PCoIP above and see how they impact the current session.

RDSH Tuning

As the customer was using Windows Server 2008 R2 RDS sessions via Horizon View, there were some additional tweaks I applied to make sure the server was as optimised as possible for best performance.
Note: Take a snapshot or backup before running and applying the changes these tools implement.
  • Internet Explorer
    • Upgraded from IE9 to IE11
    • Latest version of Flash player
    • Disable Hardware Acceleration, this can make a noticeable difference.

More Tools

I utilised a few other tools which can be very helpful:
  • Teradici.com tools are highly recommended
  • PCoIP Bandwidth Calculator.xls
  • PCoIP Session Statistics Viewer (see above)
  • Monitor FPS in real-time inside Windows
  • Network Emulator\Simulator
    • Soft Perfect (shown below).
    • WANEM

Final Thoughts

  • PCoIP defaults are pretty well tuned for the WAN; only a small amount of tweaking is required.
  • Network infrastructure for PCoIP is priority no. 1! PCoIP uses UDP, so its packets will always get less priority on the network during contention than TCP. Getting this phase right is more important and effective than playing around with the PCoIP settings. Consult the resources below.
  • LAN performance for video playback from the browser was flawless (as expected), with no PCoIP session restrictions or network simulators.
  • Don’t constrain PCoIP too heavily and understand the behaviour (it’s UDP, dynamic and bursty naturally).
  • Set user expectation – There’s only so much you can do and achieve using the protocol with limited bandwidth.
  • Real WAN testing holds the key, as protocol latency, session reliability and packet loss all come into play.
  • Horizon View 6 is missing some form of Flash redirection, which the competing solution had, including the ability to support this in RDS sessions. Horizon View is behind in this respect.
  • Despite the above point, in real-world testing across the WAN, feedback from the customer was that PCoIP was outperforming the competition.

Additional Resources

VMworld Sessions

I can’t recommend these sessions enough – definitely the first place to go for PCoIP, with a lot of gold nuggets from the presenters. You can find my notes from the sessions here.

About the Author

Steve Dunne joined the Xtravirt consulting team in March 2012. As well as contributing to the Xtravirt blog, Steve blogs on his own site at www.vituallyvirtuoso.com.

If you’d like any assistance with a virtualisation project or simply want to learn more about how Xtravirt can help your organisation, please contact us, and we’d be more than happy to use our real world experiences to support you.

VMware Validated Designs Released

At this year’s VMworld keynote, VMware announced the availability of Validated Designs for Software Defined Data Center. Now that the SDDC has been released and is starting to move forward, the amount of information you have to collect and go … [More]

At this year’s VMworld keynote, VMware announced the availability of Validated Designs for Software Defined Data Center.

As a senior consultant at Xtravirt (a VMware partner) who currently does a large portion of work with VMware, and having just submitted my second attempt at the VCDX1, I have been fortunate to have access to materials and templates covering almost all of VMware’s offerings. But for people who aren’t partners (or, perhaps more so, are partners but don’t get to do “cutting edge” engagements) it is hard to know where to start with designs.

Now that the SDDC has been released and is starting to move forward, the amount of information you have to collect and work through is painful: it spans multiple reference architectures and white papers, each usually product-specific. Different people also come out with different outcomes, and this is something VMware’s Validated Designs for Software Defined Data Center is looking to remedy.

What are VMware Validated Designs?

  • Architectures & Designs created and validated by VMware experts
  • Encompass the entire set of VMware’s Software Defined Data Center products
  • Standardized and streamlined designs for each deployment scenario & broad use-case:
    • Datacenter Foundation
    • Single-region & Dual-region IT Automation
    • QE / Demo Cloud
    • And much, much more

What’s in a VMware Validated Design?

The contents are a bit like the VMware partner Solution Enablement Toolkits but include much more. They are:
  • Solution Overview
  • Design Objectives
  • Reference Architecture Documents & Blueprints
  • Final Design Specification—including specific products and versions
  • Hardware Prerequisites & Preparatory Procedures
  • Implementation Guides
  • Operations Documentation

Reference Architectures based on the VMware Validated Designs Process

VMware have made two Reference Architectures available based on the VMware Validated Designs Process.
  • Foundation
    • The building block for all future designs
      • Focus is on datacenter, storage, and network virtualization with monitoring.
      • Uses vSphere with Operations Management (vSOM), VMware Virtual SAN, and NSX for vSphere.
  • Automated provisioning with the SDDC
    • Adds provisioning and deeper monitoring to Foundation.
      • Focus is on automating common IT provisioning tasks.
      • Uses vCloud Suite, VMware Virtual SAN, NSX for vSphere and vRealize Log Insight.

Learn More & Early Access

You can sign up for the beta through VMware to get early access and regular updates and to also learn more about VMware Validated Designs.

Gregg Robertson joined the Xtravirt consulting team in December 2011. As well as contributing to the Xtravirt blog, Gregg blogs on his own site at www.thesaffageek.co.uk.

If you’d like any assistance with a virtualisation project or simply want to learn more about how Xtravirt can help your organisation, please contact us, and we’d be more than happy to use our real world experiences to support you.

1 – Since this blog was published, Gregg has successfully achieved his VCDX certification in Data Center Virtualization.

Horizon 6.2 – A Quick Briefing

Well, VMware Horizon View 6.2 has been out for a month now and there’s some nice new features to sweeten the deal. Windows 10 The singly unsurprising new ‘feature’ is the addition of Microsoft Windows 10, both as an endpoint … [More]

Well, VMware Horizon View 6.2 has been out for a month now and there are some nice new features to sweeten the deal.

Windows 10

The single most unsurprising new ‘feature’ is the addition of Microsoft Windows 10, both as an endpoint client and as a desktop guest OS. This will be a welcome addition, given the lack of popularity of the Windows 8.x interface. The View User Profile Migration tool can migrate Windows 7 (and later) profiles to Windows 10.

Hosted Apps and Remote Desktop Sessions

One of the stand-out additions that is really starting to gain traction in terms of features as Horizon 6 evolves is Remote Desktop Session and Hosted Apps. With the release of 6.2, we gain a number of new capabilities.

We can now deploy RDS servers using VMware View Composer, rather than manually provisioning the servers and adding them to the estate. Furthermore, the RDS servers can now leverage the vDGA and vGPU enhanced 3D capabilities formerly available only on VDI desktop operating systems.

In terms of delivering Hosted Apps, the Cloud Pod architecture now supports RDS, but perhaps equally useful is that the HTML Access feature also supports hosted applications. This means you can deploy RDS in any site, entitle it globally, and users will only need an HTML5 browser to use them.


Firstly, AMD have come to the 3D graphics party with their support for vDGA, so you’re no longer limited to NVIDIA. With respect to the underlying virtualisation stack, there’s support for vSphere 6.0 U1 and Virtual SAN 6.1; deploy them together and Stretched Cluster support is available. For external access, we have always had the old Security Server. This is still around, but VMware now also have the new Access Point virtual appliance, a hardened Linux VM that is locked down out of the box.

Using Horizon 6.2

The most visible feature in the Admin console answers an old criticism: the lack of licensing information. It now displays the key as well as the count of named users and concurrent connections.

A further graphical tweak at the virtual desktop is support for 4K resolution monitors, though I imagine this might drive up the bandwidth requirements a little!

Closing thoughts

Well, a healthy dose of evolution going on here. The big attraction is the improved support for RDS, particularly across sites with Cloud Pod, and deployment with Composer. Combine this with App Volumes, and you have a really powerful ability to scale the RDS environment up and down dynamically. By creating a template RDS server with nothing installed, and creating an AppStack with the applications entitled to the hosts in an Active Directory Organisational Unit, it will be possible to deploy additional servers, complete with applications, at the touch of a button. Add the HTML5 access for Hosted Apps and you have a slick app delivery solution. This is really quite a nice tool for the armoury.

If you’d like any assistance with a VMware Horizon project or simply want to learn more about how Xtravirt can help your organisation, please contact us, and we’d be more than happy to use our real world experiences to support you.

Using VMware Automation to address a Virtual Machine Provisioning Challenge

This blog post describes how VMware vRealize Automation (vRA) and vRealize Orchestrator (vRO) can be used to complete a typical virtual machine provisioning task. Scenario A common action required after provisioning a new Windows Server virtual machine is to apply … [More]

This blog post describes how VMware vRealize Automation (vRA) and vRealize Orchestrator (vRO) can be used to complete a typical virtual machine provisioning task.


Scenario

A common action required after provisioning a new Windows Server virtual machine is to apply the latest Windows patches. In this scenario, I will apply patches to virtual machines based on the Active Directory group membership of their computer objects.


Prerequisites

  • You are running vRA 6.x and vRO 6.x
  • You have one or more published Windows virtual machine blueprints in vRA, which can successfully deploy domain-joined VMs via the vCenter Server in your environment with virtual machine customisation (e.g. Sysprep)
  • The vCAC Plug-in has been installed and configured in vRO (if this is a separate installation)
  • The AD Plug-in has been installed in vRO and configured to communicate with your AD with the appropriate rights
  • An endpoint for vRO has been configured in your vRA system, and the appropriate tenant has been configured to use the correct vRO instance

Activity Overview

The steps covered are:
  1. Prepare the Active Directory Groups
  2. Configure vRealize Automation
  3. Configure Extensibility in vRO
  4. Create the Custom Workflow
  5. Create the code
  6. Assign the workflow to the blueprint
We will create the appropriate ‘patching groups’ in AD and configure vRA so that it requires the user to select, during deployment, the patching group to be used for a given VM. We will then use vRealize’s extensibility features to hook into the machine lifecycle and run a custom vRO workflow during provisioning, passing in the name of the VM and the patching group selected. This custom workflow (which will include some bespoke scripting) will add the computer object to the AD group.

1. Prepare the Active Directory Groups

First, create the required patching groups in AD.

2. Configure vRealize Automation

In vRA, the user deploying a VM from a blueprint will see a drop-down list of the patching groups that they can select from. For the sake of simplicity this list will be defined within vRA, and not built dynamically from AD.

The first step within vRA is to create an object in the Property Dictionary, in order to utilise it as a custom property in a blueprint. In this example, we’ll call this VirtualMachine.PatchingGroup.

a) Go to Infrastructure > Blueprints > Property Dictionary and add a new property definition
b) Set a suitable display name and an optional description, and select DropDownList for the control type
c) Select the Required check box (as we want this to be a mandatory selection) and save the definition
d) Edit the property attributes and create a new property attribute named Select Group whose type is ValueList
e) Provide a comma-separated list of the patching group names as the value of this attribute. Note that the list items must exactly match the names of the patching groups that you created in AD earlier, as this is the data that will be passed to vRO during the MachineProvisioned stage
f) Next, add a custom property to the appropriate blueprint(s) in vRA which uses the property definition just created.
Edit the blueprint and, on the Properties tab, add a new custom property. Use the property name defined in the definition (this must match exactly). Leave the Value field blank but select the Prompt User check box, then save the custom property. The next time this blueprint is used to deploy a VM, the user will be prompted to select a mandatory patching group.

3. Configure Extensibility in vRO

Next, configure vRO to hook into the machine lifecycle managed by vRA and run a custom workflow. To do this, run the VMware-provided Install vCO customization workflow. This workflow installs the vRO customisation, including customised state change workflow stubs and vRO menu operation workflows. It can be found in the vCAC Plug-in under vCloud Automation Center > Infrastructure Administration > Extensibility > Installation. It is only necessary to run this workflow if that has not been done previously, so check for a previous successful run first. Note that this workflow may take a while to complete.

4. Create the Custom Workflow

The next step is to create a custom workflow to process the data from vRA and add the VM’s computer object to the AD group.

a) First, create a new workflow in the desired vRO library folder. In this example I will call it Add vRA VM to AD group. The quickest way to do this for a workflow that will be triggered by a state change workflow is to duplicate the Workflow template workflow that can be found under vCloud Automation Center > Infrastructure Administration > Extensibility. This will pre-set a number of input parameters for you, and save a lot of time.
b) Edit the new Add vRA VM to AD group workflow to add a scriptable task. You can name this task whatever you like; in this example I will call it Process vRA VM data. Delete the default Display inputs scriptable task.
c) Add the inputs for the scriptable task, i.e. vCACVm and vCACVmProperties.
These parameters will be used to provide input from vRA to the scriptable task. The list of parameters that you can select comes from the template workflow, but we don’t require any of the others for this example.

d) Create the following outputs for the scriptable task:
  • An attribute of type AD:UserGroup called userGroupAD
  • An array attribute of type Array/AD:ComputerAD called arrayComputerAD

These attributes will ultimately contain the AD representation of the computer object and the group to which it is to be added. Later we will pass these values into the next element of the workflow that we are building.

5. Create the code

Now, on to the code itself!

a) In the Scripting tab, add the JavaScript code for the task. A quick run-through of what the code does follows.

The function findADGroupByName will be used by the scriptable task to search AD for a group matching the name passed in via the groupName parameter, and to return an array of AD groups. As an exact-match search is used, there should only ever be a maximum of one element in this array.

First, the script reads the vCACVm.virtualMachineName property to determine the VM name, in order to locate the computer object in AD later. It then iterates the vCACVmProperties property collection to find the patching group name. Note that this property has the name of the custom property we created in vRA: VirtualMachine.PatchingGroup.

The script then uses the getComputerAD() method of the ActiveDirectory object to locate the computer account object in AD, and pushes it into the array arrayComputerAD. This is necessary because the next element of our workflow, the out-of-the-box workflow Add computers to group members, expects an array of ComputerAD objects to be passed in to it.
Finally, the script uses the previously discussed findADGroupByName function to locate the group object in AD, which it places into the userGroupAD variable. This is an output which is passed to the next element of the workflow to actually add the computer to the group. You will note that I have included a fair amount of debug logging in the code to assist in troubleshooting but, in order to keep it short enough to describe in this blog post, no significant error handling. If desired, you can extend the code to include error handling that is suitable for your environment.

b) Save the scriptable task and then add the out-of-the-box workflow Add computers to group members (which can be found under Microsoft > Active Directory > User Group) after the scriptable task.
c) Configure the bindings for this element as shown below:
Add computers to group members workflow input In Attribute
userGroup userGroupAD
computers arrayComputerAD
d) Finally, validate the Add vRA VM to AD group workflow. A number of unused parameters will be flagged up; simply delete these parameters, then save and close the workflow.

6. Assign the workflow to the blueprint

Now that the custom workflow is complete, the next step is to assign it to the appropriate stage in the virtual machine lifecycle. The stage we are interested in is when vRA reports the deployment process as having reached a state known as MachineProvisioned. At this point in the lifecycle, the operating system of the VM is up and running, the machine has joined the domain and, depending on timings, is likely to be applying the final VM customisations.

Note that the procedure outlined below will be similar whether the custom workflow being assigned is used to add the computer to a group (as in this case), or for a more advanced activity such as adding the new virtual machine’s details to an asset management system.

In vRO, the ‘workflow stubs’ that are executed at the various stages of virtual machine provisioning can be seen in the vCAC Plug-in under vCloud Automation Center > Infrastructure Administration > Extensibility > Workflow stubs. In order to assign our custom workflow to the WFStubMachineProvisioned workflow stub, we need to run the VMware-provided Assign a state change workflow to a blueprint and its virtual machines workflow. This can be found at the same level as the workflow stubs folder.
Do this as follows:

a) Start the Assign a state change workflow to a blueprint and its virtual machines workflow, select MachineProvisioned as the vCAC workflow stub to enable, and select the appropriate vRA IaaS server as the vCAC host
b) Browse to and add the required blueprint(s)
c) Select the Add vRA VM to AD group workflow as the end user workflow to run. In this case, there is no need to add the vCO workflow inputs or the last vCO workflow run input values as blueprint property values, as the blueprint already has the required properties – i.e. the name of our patching group
d) Submit the form. Once the workflow completes, we are ready to deploy a VM from the blueprint and check that it gets added successfully to the patching group. Before doing that, however, review the blueprint properties in vRA and note that a new property called ExternalWFStubs.MachineProvisioned has been created, whose value is set to the ID of the Add vRA VM to AD group workflow (this ID is visible in the vRO client). This is a handy troubleshooting tip, and the first thing to check if your custom workflow doesn’t get executed during provisioning.

7. Deploy a Virtual Machine from the Blueprint

Finally, deploy a virtual machine from the blueprint, selecting the required patching group when prompted, and verify that the computer account gets added to the patching group in AD. During the provisioning process, you can observe within the vRO client when the various workflows are executed, and whether they are successful. The vRA audit log (Infrastructure > Monitoring > Audit Log) can be used to identify the points at which the VM moves through each provisioning state.
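The walkthrough refers to the JavaScript in the Process vRA VM data scriptable task without reproducing it. Based on the description in step 5, it might look something like the sketch below. Treat this as an illustration rather than the author’s exact code: in vRO the inputs vCACVm and vCACVmProperties are bound directly to the scriptable task and the outputs are set by assignment (they are wrapped in a function here so the sketch can stand alone), ActiveDirectory is the global object supplied by the vRO AD plug-in, and the searchExactMatch() call is an assumption about how the exact-match group search is implemented.

```javascript
// Sketch of the 'Process vRA VM data' scriptable task (vRO JavaScript).
// Assumption: 'ActiveDirectory' is the global object provided by the vRO
// AD plug-in; searchExactMatch() is assumed for the exact-match search.

// Search AD for a group whose name exactly matches groupName.
// Returns an array of AD:UserGroup objects (at most one element).
function findADGroupByName(groupName) {
    return ActiveDirectory.searchExactMatch("UserGroup", groupName);
}

// In vRO the two inputs are bound to the task directly; a wrapper function
// is used here purely so the logic reads as a single unit.
function processVraVmData(vCACVm, vCACVmProperties) {
    // VM name, used to locate the computer object in AD.
    var vmName = vCACVm.virtualMachineName;

    // Patching group chosen by the user at request time. The original code
    // iterates the property collection; a direct lookup is shown here.
    var groupName = vCACVmProperties.get("VirtualMachine.PatchingGroup");

    // Locate the computer account and wrap it in an array, because the
    // 'Add computers to group members' workflow expects Array/AD:ComputerAD.
    var arrayComputerAD = [];
    arrayComputerAD.push(ActiveDirectory.getComputerAD(vmName));

    // Locate the target group (exact-match search, so at most one result).
    var groups = findADGroupByName(groupName);
    var userGroupAD = (groups.length > 0) ? groups[0] : null;

    // In the real scriptable task these two values are the output
    // attributes bound to the next workflow element. Debug logging
    // (System.log) and error handling are omitted for brevity.
    return { userGroupAD: userGroupAD, arrayComputerAD: arrayComputerAD };
}
```

The userGroupAD and arrayComputerAD values then feed the userGroup and computers inputs of the Add computers to group members workflow, as per the bindings table above.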


This post has described an automation and orchestration approach that can be adopted to add a computer account to an AD group specified by the user at deployment time, in this case for the purposes of managing Windows patching. Of course, an almost identical process could be used to add the computer to a group for any number of other reasons.

Nigel Boulton joined the Xtravirt consulting team in November 2014. As well as contributing to the Xtravirt blog, Nigel blogs on his own site at www.nigelboulton.co.uk.

If you’d like any assistance with an automation project or simply want to learn more about it please contact us, and we’ll be more than happy to use our real world experiences to support you.

vROps 6.1 what’s new?

With VMware vROps 6.1 now gone GA, I thought I’d take a quick view on what’s new and what the VMware guys have added. Below I have listed what I consider to be the highlights. The maximum of 8 nodes … [More]

With VMware vROps 6.1 now GA, I thought I’d take a quick look at what’s new and what the VMware guys have added. Below I have listed what I consider to be the highlights.
  • The maximum of 8 nodes has been doubled to 16!
  • SSO integration has been added (requires vSphere 6.0)
  • Support for SRM has been added
  • vRealize Hyperic functionality has been added.  With the addition of End Point Operations Management, the value of vRealize Hyperic functionality has been extended to the vRealize Operations Manager core product, without the need to deploy vRealize Hyperic
  • Remote collector resiliency. New functionality enables you to assign solutions to collector groups. Collector groups provide high availability access to data collection for the solution.
  • Support for IPv6. You can deploy vRealize Operations Manager in Internet Protocol version 6 (IPv6) environments.
  • Support for Windows Server 2012 R2. However it is still recommended to go with the appliances.
  • Dashboard and report enhancements. New functionality enables you to post a dashboard as a report, and post a report to a shared drive.
  • Automated workload placement and re-balancing. You now have the ability to re-balance workloads to optimize performance and preserve license optimization.
  • Telemetry. A new collection of deployment and usage statistics for vRealize Operations Manager has been added, to help improve product usability and performance.
  • Upgrade options. There is a direct upgrade path from 6.0; from 5.8, migrate to 6.0 first, then upgrade to 6.1
So overall there are some really great enhancements in v6.1; however, the one key disappointment for me is that HA is still limited to a logical DC.

For more information on vROps 6.1, click here for the VMware release notes.

If you’d like any assistance with a VMware vROps project or simply want to learn more about it please contact us, and we’ll be more than happy to use our real world experiences to support you.

Images from VMware

The Xtravirt XenDesktop v Horizon View debate

At a recent company-wide meeting the Xtravirt consultants presented a debate discussing Citrix XenDesktop and VMware Horizon View and their merits across some particular areas.

DISCLAIMER: This document is a collection of feedback from the community IT professionals based upon experiences to stimulate debate, and not intended to be a scientific product comparison or detailed analysis. Responses may be subjective.


At a recent company-wide meeting the Xtravirt consultants presented a debate discussing Citrix XenDesktop and VMware Horizon View and their merits across some particular areas.

Whilst our consultants came up with many compelling arguments for both solutions, we thought we would also survey the community and users to see what their thoughts were. Thank you to those of you who completed the survey and shared your views.

We’ve consolidated the survey responses from the community and our consultants, and summarised in the tables below.

Response summary

Question 1: Which solution is more mature?
XenDesktop Horizon View
XenApp has been in the market longer, although XenDesktop and View were released in similar timeframes Has seen gradual improvement with no major changes in the architecture over the years
Has had multiple changes of the architecture over the last few years, however the current integrated architecture is a big improvement Requires vSphere to realise all its features
Supports multiple hypervisors, therefore gives enterprises more options Rules out customers with Hyper-V and XenServer however vSphere and View is a very well integrated proposition
Xtravirt have deployed successful XenDesktop solutions for scalable multi-1000 seat solutions Xtravirt have deployed successful Horizon View solutions for scalable multi-1000 seat solutions
XenDesktop is an end-to-end solution. It has mature remote access and traffic optimisation with NetScaler/CloudBridge, with desktop, application delivery and mobile management all integrated into the same solution Rich set of technology, from application virtualisation (ThinApp), application delivery (App Volumes) and the Workspace App Store portal, to physical device management (Mirage) and persona management (UEM)
Citrix provides a rich set of management and monitoring products vRealize Operations for View provides detailed monitoring capabilities, balancing complexity with useful information

Citrix has undoubtedly been in the business of delivering centralised desktop solutions for more years than VMware; however, in the last 3-4 years VMware has produced a mature and proven desktop technology stack.

From Xtravirt’s experience in having designed and deployed multi-1000 user deployments, both Citrix and VMware are mature solutions and ready for production use at scale.

Question 2: Which solution is easier to support?
XenDesktop Horizon View
Each Flexcast component is modular, you only build the parts of the infrastructure you need Seamless integration with vSphere. Smaller learning curve and less complexity
Two great consoles for core features. Citrix Studio for Architecture and Provisioning. Citrix Director for support and monitoring with EdgeSight built-in as standard Single support number to phone no matter if it’s a hypervisor issue or broker
Significant improvements in v7.x in terms of installation and reduced complexity Large community support and knowledge pool
Large community support and knowledge pool There are less moving parts
Large numbers of Citrix qualified people in the industry Intuitive solution that can build on existing VMware skillsets

The full Citrix FlexCast suite of products has a number of moving parts, and the opinion is that it presents a slightly steeper learning curve than View. However, with the release of XenDesktop 7, installation and management have become more integrated.

Citrix provides more options in terms of hypervisor support and some granularity of configuration, however View offers a more integrated approach to vSphere and is an easier product to learn and upskill on for someone that was starting from scratch.

Question 3: Which solution gives you more granular control?
XenDesktop Horizon View
Citrix Policy, integrated into Group Policy gives very granular control over all client, server and protocol configurations Provides granular level of control using the various group policy templates
Citrix supports both GUI and CLI management Horizon supports both GUI and CLI management
Many low-level configuration options, such as the ability to split ICA into multiple streams and provide QoS on each stream Gives enough fine-grained control to meet customer requirements without adding significant complexity

If we simply took a count of the number of configuration options Citrix provides vs. VMware Horizon View it suggests Citrix would come out on top. However granular configuration needs to be balanced with supportability and complexity. Both products provide a fine level of configuration options for most enterprise deployments, and whilst Citrix may have more options, View has been proven to provide a suitable level of management for enterprise deployments and multiple use cases.

Question 4: Which solution has the better protocol?
XenDesktop Horizon View
Close between the two; however the ability to split ICA into multiple streams and apply QoS to each individually (as it’s TCP) gives Citrix a slight advantage PCoIP gives a good experience and dynamically adjusts to the available bandwidth without impact on the end user
Citrix uses less bandwidth Will use more bandwidth if available, which can sometimes give misleading results when comparing with ICA
Been around for many years and is well proven Hasn’t been around as long as ICA but has been proven in many enterprise customers, including those with high latency or high graphical requirements
Has been developing and improving since the mid-90s and offers more granular control and uses less bandwidth Multi-codec protocol provides intelligent image decomposition and optimised rendering

Side-by-side, ICA arguably has the edge, however it’s generally not considered to be a compelling factor as it once was. Both ICA and PCoIP have been deployed by Xtravirt on customers with high latency, low bandwidth links. Both required some configuration optimisation to obtain the best performance in these challenging scenarios.

Question 5: Which solution addresses the most use cases?
XenDesktop Horizon View
Citrix Flexcast addresses many use cases. It provides delivery options ranging from streamed desktops, mature hosted apps, shared hosted desktops or hosted virtual desktops. The Horizon suite delivers a similar number of use cases as Citrix including Shared Hosted desktops using PCoIP, hosted virtual desktops to physical desktop management
Citrix offers products such as NetScaler for load balancing as well as WAN optimisation App Volumes and the new UEM profile management provide a great way to delivery applications and user environment management
Application virtualisation with Citrix XenApp and Citrix VMHA (Virtual Machine Hosted Applications), as well as built-in functionality for Microsoft App-V to make better use of the licenses included with Microsoft Remote Desktop Services (RDS) View offers products for managing physical desktops such as Mirage and Horizon Flex
Proven mature and reliable user profile management with Citrix Profile Management VMware have now acquired an improved user profile solution (UEM) which provides a more intuitive and powerful method for delivering user profiles and personalisation

Both solutions address a number of use cases. Shared Hosted Desktops (XenApp/RDSH) are a strong selling point for Citrix, and they offer a more powerful and configurable solution here compared to VMware’s offering. However, VMware provide a strong, integrated solution in the hosted desktop category and offer VMware Mirage for managing applications and physical desktops, as well as AirWatch for mobile device management.

Overall both solutions provide strong support for the majority of use cases found in the enterprise and it’s unlikely either will fall short of meeting the majority of typical customer requirements.

The Consultant View

The tables above are a selection of comments we received from the community and our internal consultants who are skilled and highly experienced in delivering both Citrix and VMware EUC solutions. A few years ago the common sentiment suggested Citrix had the edge, with their strong protocol, and XenApp offering. However with the development and refinement of VMware’s EUC stack, VMware have produced a robust and streamlined offering. For existing customers of either technology there’s unlikely now to be a compelling default reason to swap out one for the other, unless to satisfy specific needs. However for customers looking at some form of centralised computing solution for the first time or who are having issues with their existing deployment, a proof-of-concept of one or both will be worth conducting.

Whether you are running or exploring Citrix, VMware, Microsoft solutions, Xtravirt has the expertise to deliver the solution that is best for you. If you would like to discuss how we can help you with your IT transformation project contact us today.

App Volumes – Apps in a snap!

VMware have long been a presence in the virtual desktop market – even ignoring their own VMware View® product, vSphere® is used to support other brokers, such as Citrix® XenDesktop. Regardless of product set, a common problem is how to … [More]


VMware have long been a presence in the virtual desktop market – even ignoring their own VMware View® product, vSphere® is used to support other brokers, such as Citrix® XenDesktop. Regardless of product set, a common problem is how to deliver applications to the desktop. Let’s have a bit of background. Most applications are provided as either a batch of files dropped in place on an end point, while others are provided using an installer application of some sort. This is generally the case regardless of operating system, but let’s focus on Microsoft Windows® as this is most prevalent. Historically, most application delivery systems have been focussed on pushing down the binaries to a system and installing as required. This approach has a number of limiting factors – it takes time to download the package to the machine and install the application, not to mention in many cases, the application would need a reboot, causing further disruption. This was acceptable for physical PCs and Laptops as these are persistent devices on the whole so a little disruption is manageable. Of course, one of the key benefits of virtual desktops is supposed to be flexibility and rapid delivery – so spinning up non-persistent desktops, only to have them scuppered by long application delivery times meant other measures were necessary. Building applications in the base build is very common, but far from efficient, particularly if the application is only used by a user subset. Application Virtualisation solutions such as VMware® ThinApp answer a number of issues, notably the sandboxing of applications which prevents the need to reboot, and streaming over the network improves delivery times. However, there is still the time-to-deliver over the wire (especially with large applications), plus Application Virtualisation will not work with all applications (such as those with device drivers or Windows Services). 
Most organisations would fall back to persistent desktops or Remote Desktop application delivery solutions such as Citrix XenApp to make up the shortfall, but there is another way.

CloudVolumes to VMware App Volumes™

CloudVolumes was founded back in 2011 with a simple premise – publish applications using a virtual disk container as the delivery mechanism, with an agent on the hosting operating system that merges the virtual disk contents with the operating system partition. VMware scooped them up in 2014 and re-branded the technology as App Volumes.

By using a virtual disk as a container, you can collect multiple applications in a single virtual disk (an AppStack) and push them out together. In a virtual infrastructure, this disk sits on shared storage alongside the virtual desktop, so delivery time is reduced to how fast the solution can mount the virtual disk. The other clever feature of this ‘agent plus disk’ affair is that the AppStack disk is read-only (I’ll discuss where changes go later). Being read-only means that a single disk can be shared by many desktops (on SSD, potentially 1,000-2,000 per disk) – doing wonders for scaling and storage efficiency.

So, consider our non-persistent VDI example. Both VMware and Citrix can spin up a non-persistent desktop based upon a template Windows operating system installation in next to no time. The user logs on to the client, the agent takes the identity of the user and pushes it to the App Volumes management server which, in turn, subject to entitlements, attaches the relevant AppStacks to the user’s desktop. All this happens as the user logs on. No mess, no delay – and consistent too.

Creating an AppStack requires only a basic virtual machine to serve as a provisioning desktop. A blank AppStack is assigned to this VM in a provisioning (read-write) mode, and applications are installed as required. Once ready, the AppStack can be entitled to Active Directory users, groups or even computer objects, and assigned to a VM at boot up, at log on, or immediately (though the latter is not recommended).
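The logon flow described above can be sketched in a few lines of Python. This is purely illustrative – the entitlement data, names and functions are invented for the example and are not the real App Volumes Manager API:

```python
# Illustrative sketch only: the entitlement data, names and functions below are
# invented for this example and are NOT the real App Volumes Manager API.

# Entitlements map an AD group to the AppStacks its members should receive.
ENTITLEMENTS = {
    "grp-all-staff": ["base-tools-stack"],
    "grp-finance": ["finance-core-stack"],
}

def resolve_appstacks(user_groups):
    """Return the AppStacks a user is entitled to, preserving group order."""
    stacks = []
    for group in user_groups:
        for stack in ENTITLEMENTS.get(group, []):
            if stack not in stacks:
                stacks.append(stack)
    return stacks

def attach_at_logon(vm, user_groups):
    """Simulate attaching each entitled AppStack, read-only, as the user logs on."""
    return [{"vm": vm, "stack": s, "mode": "read-only"}
            for s in resolve_appstacks(user_groups)]

disks = attach_at_logon("vdi-desktop-042", ["grp-all-staff", "grp-finance"])
print([d["stack"] for d in disks])  # both entitled stacks mounted at logon
```

The point the sketch makes is that the whole decision happens at logon time from identity alone – nothing is copied or installed, only mounted.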

App Volumes and Terminal Servers

A clever use case for App Volumes is in conjunction with Terminal Servers. By assigning a common AppStack to a farm of Windows Server 2008 R2/2012 VMs running as Terminal Servers (for example Citrix XenApp, VMware View RDSH), it’s possible to rapidly deploy a consistent farm. A really nice side-benefit is that if you want to re-allocate hosts between multiple farms with different applications, this is possible simply by changing the AppStack and re-assigning the host – it’s no longer a rip-down and rebuild affair.

So AppStacks are read-only – what about writing back?

A mighty fine question. If you need to make writes back to an application installation, they are written to the C: drive of the VM (as you’d expect) – don’t forget, our AppStack is invisible – the C: drive is merged. While many applications drop configuration changes etc. into the user profile, some less well-written ones tweak content in the installation path. This isn’t much use on a non-persistent desktop – such setting changes would be lost as soon as the desktop is destroyed. However, another App Volumes feature comes into play here – Writable Volumes. A Writable Volume is a user-specific disk (so one per user) that can be deployed alongside AppStacks (it’s always the last one attached). This too is merged and is otherwise invisible. A Writable Volume can be deployed to support three policies –
  • User Profile only - basically, it can replace VMware View Persistent Disks for User Profiles. Don’t consider it as a replacement for a proper environment management solution though.
  • User Installed Applications only - so it can manage user writes to the C: drive excluding the user profile.
  • Profile and User Installed Applications - a policy covering both of the above.
This disk, particularly with the last policy, is a powerful tool. As it can capture writes back to the OS disk, it provides the ability to capture writes to applications in AppStacks, as well as allowing the user to install their own applications (subject to user rights, of course). The implication is that it’s possible to deploy a non-persistent desktop and have it behave in the same way as a persistent desktop!
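As a rough mental model (a dictionary standing in for a filesystem – the real App Volumes agent uses a filter driver, not anything like this), the merge behaviour looks like layered storage: read-only AppStack layers sit below one per-user writable layer, reads fall through the layers, and every write lands in the Writable Volume:

```python
# Rough mental model only - a dictionary standing in for a filesystem; the real
# App Volumes agent uses a filter driver, not anything like this.

class MergedVolume:
    """Read-only AppStack layers merged beneath one per-user writable layer."""

    def __init__(self, appstack_layers):
        self.layers = appstack_layers  # read-only AppStacks, in attach order
        self.writable = {}             # the Writable Volume, always attached last

    def read(self, path):
        if path in self.writable:              # writable layer wins
            return self.writable[path]
        for layer in reversed(self.layers):    # then last-attached AppStack
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        self.writable[path] = data  # every write lands in the Writable Volume

office_stack = {"C:/Apps/office/app.exe": "office-v1"}
vol = MergedVolume([office_stack])
vol.write("C:/Apps/office/user.cfg", "dark-theme")  # e.g. a settings tweak
print(vol.read("C:/Apps/office/user.cfg"))          # served from the writable disk
print(vol.read("C:/Apps/office/app.exe"))           # AppStack content, untouched
```

Because the write never touches `office_stack`, the same AppStack can keep serving hundreds of other desktops while each user’s changes travel with their own Writable Volume.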

Maintaining Applications in an AppStack

This is a pretty straightforward affair. There’s an edit function in App Volumes that effectively clones the current version of an AppStack. The clone can be attached to a VM in read-write mode and the application stack upgraded, enhanced etc. before being assigned as a replacement for the existing stack. Don’t worry – rolling back is easy too – just re-assign the previous version.

Nice. So….what’s the catch?

As with everything, there are limitations.

Multiple AppStacks can conflict

An AppStack can contain multiple applications, some of which might include shared components, such as specific Java versions, DLLs etc. Presenting multiple AppStacks can have a negative effect – AppStacks are applied in sequence, with the last one applied taking precedence. For example, AppStack A has an application which uses wibble.dll version 1.0, while AppStack B has an application that uses wibble.dll version 2.0. If both AppStacks are assigned to the VM, and AppStack A is attached last, then the application in AppStack B might fail due to wibble.dll 1.0 taking precedence. There are three ways around this issue:
  • For a given application, try deploying it as a ThinApp package inside the AppStack – The virtualised application sandboxing avoids conflicts.
  • Override the order in which AppStacks are deployed – this is possible from the App Volumes Manager.
  • Plan your AppStacks – try, where possible, to group applications for a department together to minimise the need to deploy multiple AppStacks.
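The wibble.dll scenario above can be shown with a minimal sketch – a plain dictionary merge standing in for the real virtual disk merge, which App Volumes implements very differently:

```python
# Minimal illustration of AppStack precedence: a plain dict merge stands in for
# the real virtual disk merge, which App Volumes implements very differently.

def merged_view(appstacks_in_attach_order):
    """Later-attached stacks overwrite earlier ones for any shared file."""
    view = {}
    for stack in appstacks_in_attach_order:
        view.update(stack)
    return view

stack_a = {"C:/Windows/System32/wibble.dll": "1.0", "C:/Apps/a.exe": "app-a"}
stack_b = {"C:/Windows/System32/wibble.dll": "2.0", "C:/Apps/b.exe": "app-b"}

# AppStack A attached last: its wibble.dll 1.0 masks 2.0 and app B may break.
view = merged_view([stack_b, stack_a])
assert view["C:/Windows/System32/wibble.dll"] == "1.0"

# Overriding the attach order (option two above) restores app B's DLL.
view = merged_view([stack_a, stack_b])
assert view["C:/Windows/System32/wibble.dll"] == "2.0"
```

Both desktops still see both applications; only the shared file differs, which is exactly why these conflicts can be so hard to spot in testing.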

How many AppStacks can you deploy to a VM?

In many ways, this is a limitation of the technology – you’re mounting virtual disks, and the more you mount, the longer the log on time. In addition, the more you mount, the more you risk application issues between AppStacks. VMware recommend fewer than 15 disks in total, including AppStacks and Writable Volumes. Try not to look at this as a solution for delivering granular applications – use it to shift stacks (the clue is in the AppStack name…). Generally, departmental users have applications in common, so AppStacks containing a common stack can meet 90-100% of a user’s needs – if granularity is required for the last 10%, consider using Writable Volumes, either allowing the user to install their own applications or using Writable Volumes in concert with an alternative delivery system such as SCCM or VMware Horizon® Workspace published ThinApp packages.
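As a back-of-the-envelope planning aid – the 15-disk ceiling is VMware’s recommendation quoted above, while the helper function itself is invented for illustration:

```python
# Back-of-the-envelope planning helper; the 15-disk ceiling is VMware's
# recommendation quoted above, the function itself is invented for illustration.

RECOMMENDED_MAX_DISKS = 15

def disk_count_ok(appstacks, writable_disks=1):
    """Total disks a desktop would mount, and whether that total stays within
    the recommended limit of 15 (AppStacks plus Writable Volumes)."""
    total = len(appstacks) + writable_disks
    return total, total <= RECOMMENDED_MAX_DISKS

# One consolidated departmental stack plus a writable disk: well within limits.
print(disk_count_ok(["finance-dept-stack"]))           # -> (2, True)

# One AppStack per application quickly blows the recommendation.
print(disk_count_ok(["app-%02d" % i for i in range(16)]))  # -> (17, False)
```

The two examples mirror the advice in the text: consolidate a department’s common applications into one stack rather than mounting one disk per application.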

Kernel mode content

AppStacks are pretty powerful – even device drivers and services are possible. However, anything that needs direct access to the kernel is a problem – so deploying a PDF printer will work, but deploying an antivirus tool or disk encryption software probably won’t.

No support for Windows XP or physical devices

At present, App Volumes doesn’t support physical devices – for those, you can use VMware Horizon Mirage App Layers for a similar experience. As for Windows XP – no support here either, and this is a good thing. Time to move on – ye olde Windows XP is not in support any more!

Closing Thoughts

So, for delivering applications to a VDI solution, App Volumes is a really slick answer. Provisioning stacks of applications in a rapid, LAN-free manner is an attractive selling pitch, especially as it can support not just virtualised application packages, but full, natively installed Windows applications. The Writable Volumes are a bonus too – the combination of non-persistent desktops, AppStacks and writable disks makes a compelling case for ditching persistent desktops in nearly all scenarios.

The interesting part is that it is (and will remain) broker agnostic. It can be delivered as easily in a Citrix XenDesktop/XenApp solution as in a VMware Horizon View solution. This is why VMware now offer App Volumes as part of the new Horizon Application Management Bundle – a suite of VMware Horizon products designed to complement an existing Citrix solution. Otherwise, VMware App Volumes is included as part of the VMware Horizon Enterprise suite or can be purchased individually.

If you’d like any assistance with a VMware App Volumes project, or simply want to learn more about it or any aspect of VMware Horizon, please contact us – we’d be more than happy to use our real world experiences to support you.

VMware Introduces New Open Source Projects to Accelerate Enterprise Adoption of Cloud-Native Applications

Last week I was fortunate enough to be part of a blogger early access program covering VMware’s announcement around two new open source projects built to enable enterprise adoption of cloud-native applications

Last week I was fortunate enough to be part of a blogger early access program covering VMware’s announcement around two new open source projects built to enable enterprise adoption of cloud-native applications – Project Lightwave, an identity and access management project that will extend enterprise-scale and security to cloud-native applications, and Project Photon, a lightweight Linux operating system optimized for cloud-native applications. Below is more information about the two projects and the capabilities they open up to VMware customers and cloud-native applications.

Project Lightwave will be the industry’s first container identity and access management technology that extends enterprise-ready security capabilities to cloud-native applications. The distributed nature of these applications, which can feature complex networks of microservices and hundreds or thousands of instances of applications, will require enterprises to maintain the identity and access of all interrelated components and users. Project Lightwave will add a new layer of container security beyond container isolation by enabling companies to enforce access control and identity management capabilities across the entire infrastructure and application stack, including all stages of the application development lifecycle. In addition, the technology will enable enterprises to manage access control so that only authorized users will be capable of running authorized containers on authorized hosts through integration with a container host runtime such as Project Photon. Features and capabilities will include:
  • Centralized Identity Management – Project Lightwave will deliver single sign-on, authentication, and authorization using user names and passwords, tokens and certificates to provide enterprises with a single solution for securing cloud-native applications.
  • Multi-tenancy – Project Lightwave’s multi-tenancy support will enable an enterprise’s infrastructure to be used by a variety of applications and teams.
  • Open Standards Support – Project Lightwave will incorporate multiple open standards such as Kerberos, LDAP v3, SAML, X.509 and WS-Trust, and is designed to interoperate with other standards-based technologies in the data center.
  • Enterprise-ready scalability – Project Lightwave is being built with a simple, extensible multi-master replication model allowing horizontal scalability while delivering high performance.
  • Certificate authority and key management – Project Lightwave will simplify certificate-based operations and key management across the infrastructure.
Project Photon, a natural complement to Project Lightwave, is a lightweight Linux operating system for containerized applications. Optimized for VMware vSphere® and VMware vCloud® Air™ environments, Project Photon will enable enterprises to run both containers and virtual machines natively on a single platform, and deliver container isolation when containers run within virtual machines. Future enhancements to this project will enable seamless portability of containerized applications from a developer’s desktop to dev/test environments. Features and capabilities include:
  • Broad Container Solutions Support – Project Photon supports Docker, rkt and Garden (Pivotal) container solutions enabling customers to choose the container solution that best suits their needs.
  • Container Security – Project Photon offers containerized applications increased security and isolation in conjunction with virtual machines as well as authentication and authorization through integration with Project Lightwave enabling customers to further secure their applications to the container layer.
  • Flexible Versioning and Extensibility – An industry first, Project Photon provides administrators and enterprise developers with extensibility and flexibility over how best to update their container host runtime, supporting both rpm-ostree for image-based system versioning and a yum-compatible, package-based lifecycle management system, allowing for fine-grained package management.
Today, Pivotal also announced Lattice, which packages open source components from Cloud Foundry for deploying, managing and running containerized workloads on a scalable cluster. Together, VMware and Pivotal will provide end-to-end cloud-native solutions from infrastructure to applications. VMware’s resilient infrastructure for cloud-native applications complements Pivotal’s Cloud Foundry application platform solutions.

To encourage broad feedback and testing from customers, partners, prospects, and the community at large, Project Photon and Project Lightwave will be released as open source projects. By open sourcing the software, developers will be able to contribute directly to the projects to help drive increased product interoperability and new features. Project Photon is available for download today through GitHub, and has been packaged as a Vagrant box so users can easily test its capabilities on any platform; the Photon Vagrant box is available for download through HashiCorp’s Atlas. Project Lightwave is expected to be made available for download later in 2015.

I’m really looking forward to learning more about these projects and trying them out once they are released. With the popularity of Docker, it’s no wonder VMware decided they needed to start integrating with these technologies.

Any device, anytime, anywhere… oh and keep me secure please!

Some of the top trends for 2015 in enterprise IT are focussing on cloud, security and mobility. Since Microsoft’s ‘mobile first, cloud first’ strategy announcement

Current IT

Some of the top trends for 2015 in enterprise IT are focussing on cloud, security and mobility. Since Microsoft’s ‘mobile first, cloud first’ strategy announcement, followed later by VMware’s recent ‘one cloud, any application’ theme, you will be hard pressed not to hear these topics being discussed. However, before we look at 2015, let’s take a look at what organisations used to do.

Back in the day

In the pre-cloud era I used to use the phrase “well managed IT”. Shocking to hear, but before cloud, orchestration and everything-as-a-service, in some organisations we operated the following models:
  • Self-service request fulfilment
    • A combination of automated and manual tasks depending upon the service
  • Any device, anywhere, anytime and secure
    • To provide a little more technical detail, we used VPNs, file/disk encryption, web publishing (usually an ISA reverse proxy), two-factor authentication and server-based computing (Terminal Services and Citrix MetaFrame/Presentation Server).
  • User-centric computing
    • Using role and policy-based configuration combined with roaming profiles, login scripts and IntelliMirror technologies, we designed user experiences that followed you. If you logged into our remote desktop solution the experience also followed you (customised to cater for lower bandwidth by providing a slightly reduced feature set)
    • Applications that followed you. Using Microsoft Systems Management Server we could target applications at groups, advertising the application to ensure you could use it wherever you went. Fancy stuff!
    • Self-service recovery. If your machine had a software failure that was not catastrophic we could re-image your device over the wire, or, as was our standard solution, utilise a local source. We would even run tools to try and keep your documents safe during this process. The only downside was that you had to be on the corporate network.
    • Orchestration and automation. Before fancy orchestration engines and workflow tools were invented, we had to settle for our trusty Notepad. We did, however, build scripts using standard languages, protocols and data structures (XML), draw our workflows on paper (Visio) and build task-based engines, using databases, files, the registry and XML datasets as reference and tracking tools (we even pushed configuration and user data into other systems)
    • Automation. Xtravirt’s co-founder and CIO Paul Davey and I used to try to automate everything, and this is also where Xtravirt’s SONAR cloud-based analytics service emerged from. To this day I still believe in automation, so much so that I wrote a quick script to read my firewall log out to me (useful? Maybe less so than previous work, but it kept the brain working). As well as architecting and designing the solutions, we also built the management systems, images and so on. To this end we worked a lot with the operational teams to provide automated tools and processes to make the support team’s life easier.
  • Configuration management
    • We used a number of methods and systems, including Systems Management Server and many bespoke scripts, to maintain asset and configuration information. Again, we built release management tools and processes to ensure we were in control (as much as possible) of the activities and were able to accurately report on the configuration and asset baselines of the estate.
I could probably go on forever about the different areas we used to work on; the main theme here is that we’ve been doing this for the past 10 years.

Moving into the mobile and cloud era

So fast forward to the present. We are talking cloud, enterprise mobility, software-defined everything. No longer are we making bespoke solutions in Notepad; we now have a host of tools to orchestrate, automate and provide self-service everything, all out of the box. While it’s true our technology capabilities have improved, what used to require some special magic now ships as a standard capability with the products. The reality is that to achieve a well-managed mobile and cloud-based model, a ton of effort is still required.

So can we provide access to our systems and data on any device, anytime, anywhere and in a secure manner? Well, the answer is yes; we could before and we still can today. The main advance I see is that we can spend more time providing business solutions and less time writing bespoke engines cobbled together from a set of scripts. Remember, the IT landscape is still incredibly complicated, with billions of transactions occurring; weaving this web together into a well-managed, efficient, cost-effective and business-valued service still requires more than just opening the box.

If you need help along your virtualisation journey and moving into the mobile and cloud era, Xtravirt can deliver the right strategy and architecture for your business, so contact us today.

Thought provoker: How could the adoption of Cloud affect my IT organisation?

Over the past 40 years or so we have moved from centralised mainframe computing onto client/server applications and there began the stacking of beige servers in every server room.

Looking into the crystal ball

For starters, I don’t have a crystal ball (if I did, I would probably have won the lottery and be on an island somewhere hot), so predicting the future isn’t that easy. We can, however, at least give it a go.

Evolution of Computing

Over the past 40 years or so we have moved from centralised mainframe computing to client/server applications, and there began the stacking of beige servers in every server room. We then realised we could consolidate, and swapped out the numerous beige servers for fewer but larger shiny silver rack mount servers running virtual machines. Once we had virtualised as much as possible, the next logical step was to consume these offerings as a service. This is what we currently describe as cloud computing. Whilst evolution has provided the ability to consume serviced offerings today, the stark reality is that we are currently somewhere between the adoption curves of virtualisation and cloud.

The question on many people’s minds is: what does life look like for IT post-cloud? For this prediction I’m going to assume that cloud has been adopted by the masses, as opposed to the world moving into an era of cyber warfare where secrecy is paramount and the idea of using multi-tenant services is off the cards. For this prediction, the IT department now has the role of a cloud broker.

Internal IT organisation gap analysis

The following matrix outlines typical existing IT departmental capability, with a view on whether, in the post-cloud era, the requirement for a particular organisational capability will increase, reduce or remain the same. The gap analysis produced is very high level and incredibly speculative; I have, however, begun to consider the likelihood and impact of changes to the technology landscape. Who knows – maybe at some point we will have answers to questions such as:
  • Will my level 1 headcount need to increase and level 2/3 be offset to vendors and cloud providers?
  • Will we rely on a far greater maturity of supplier management?
  • Will IT security internally become outsourced to cloud providers?
  • Will the increase in 3rd party services and solutions increase the requirement for strong central governance?
  • How will cyber warfare affect the corporate IT landscape?
  • Will cloud be overtaken by a far greater disruptive force?
What’s your view on the post-cloud era? Will it go full circle and bring IT back in house? Will we exist in a hybrid world, or will we become consumers of service? Anyway, this is just a glimpse of the types of conversation the team at Xtravirt have when they aren’t out solving current customer issues. We are always here to help you with your virtualisation challenges, so if you have a requirement, please contact us and we’ll be happy to assist.

A day at London VMUG – January 2015

I was looking forward to the London VMUG meeting a great deal as aside from the interesting and thought provoking sessions I hadn’t been able to get along to a London VMUG since May 2014. VMUG meetings are also a … [More]

I was looking forward to the London VMUG meeting a great deal as, aside from the interesting and thought provoking sessions, I hadn’t been able to get along to a London VMUG since May 2014. VMUG meetings are also a great opportunity to catch up with friends and peers who share a passion for virtualization. As ever, Alaric Davies kicked off the meeting in his own unique and amusing style, outlining the agenda for the day and also presenting the five community contributor/speaker awards. It was great to see a few of my Xtravirt colleagues in the list of community speakers from 2014.

The first session of the day was PernixData’s “FVP Software in a real-world environment”, presented by the ever eloquent Frank Denneman (PernixData) and James Leavers (Cloudhelix). This was a great session outlining how PernixData was currently being used in a large environment and what benefits, cost savings and performance gains it had provided. Key quotes of the presentation for me were “mountains of greatness” and “molehills of mediocrity” when displaying performance data on the slide deck.

Next up were the “vFactor” lightning talks, where five community members (who had volunteered) were asked to do a strictly 10 minute presentation on any relevant technical subject/project, after which we the audience would get the opportunity to vote for our favourites. This was an excellent session and a great way to see what other folk are doing in their unique environments. All five of the presenters did a fantastic job and I am looking forward to seeing this happen again at future meetings.

After the break I sat in on the very first Xtravirt Lab, which was a technical preview and demonstration of SONAR (Reporting-as-a-Service) presented by Peter Grant. It was good to observe the Q&A and feedback after the demonstration and there was very apparent interest in the product and rolling beta programme!
After the lunch break, I headed into the Simplivity session titled “Making sense of converged infrastructure”, presented by Stuart Gilks. This was an enjoyable session where the case for converged infrastructure was made using a great motorsport analogy. It was also good to learn more about the Simplivity product and its capabilities. Xtravirt’s very own Michael Poore presented one of the next sessions, which I missed (sorry Michael!), but I heard a lot of great feedback from those that attended. The final session I attended was by Valentin Bondzio from VMware Global Support Services, titled “RDY, NUMA and LLC Locality”. Anyone who attended this session will likely agree it was an excellent deep dive, communicated in an easy to understand and often humorous fashion.

To cap off the day, everyone gathered in the main meeting room where the winners of the vFactor were announced (all five of the guys were given prizes, a well-deserved pat on the back and a round of applause). The winners of the various vendor prize draws were also announced, so there were lots of smiling faces to finish what was an excellent day. If you have never been to a VMUG meeting I would strongly recommend it, as the content is always pertinent and engaging. There are VMUG groups all around the UK, so if you need more information on which one is local to you, visit the VMUG website. Xtravirt is always here to help you with your virtualization challenges, so if you have a requirement, please contact us and we’ll be happy to assist.

EUC at the Top of the World

‘Twas the last week before Christmas and Santa’s head of IT, Eric the Elf, was quietly satisfied. He’d had a busy year carrying out a long overdue refresh of ‘End Elf Computing’ at the North Pole.

‘Twas the last week before Christmas and Santa’s head of IT, Eric the Elf, was quietly satisfied. He’d had a busy year carrying out a long overdue refresh of ‘End Elf Computing’ at the North Pole. Of course, the users (including the Boss) may not recognise the amount of work Eric put in to make the solution scalable, reliable and able to deliver – but nor should they – a quiet user is a happy user! Such is the lot in life of a busy IT elf.

So what did Eric have to do?

Eric had to upgrade a considerable number of users from old Windows XP machines to something new and had spoken to Xtravirt about how to not just ‘rip and replace’ the desktops, but to move to a more flexible working environment. He had to think about the following:
  • VIP Users: Obviously Santa, being the boss, is THE VIP user. He’s also pretty mobile and network connections can vary while he’s on his travels.
  • Toy Factory Users: These need a reliable platform to handle desktop style applications. Downtime is a problem, particularly in the last quarter of the year.
  • Christmas Admin Users: These are also largely desktop based, but require a secure client – after all, these elves look after the list of who’s been naughty and nice. They’ve recently moved this application to a browser based system, but have other smaller packages too.

What was the solution?

Eric decided VMware Horizon would work best for his organisation, and worked with Xtravirt to ensure the design and transition went smoothly. It was decided that a virtual desktop approach would suit the Toy Factory and Christmas Admin users, so a Horizon View solution was deployed.

The Toy Factory users accessed non-persistent virtual desktops via zero clients. If they hit a software problem, it was easy to fix by simply logging off and back on again, with the virtual desktops refreshing themselves. The zero clients, being relatively simple and quite rugged devices, proved more reliable too.

The Christmas Admin users went down a similar path; however, they were given security tokens for two-factor authentication. They liked the security aspect, but what they liked more was that their desktop sessions weren’t set to log off immediately on disconnect, which allowed them to move between devices in different locations and still access their running session. While some applications were installed straight into the virtual desktop image, Horizon Workspace was deployed to provide applications assigned as needed. Because Santa’s List application was SAML (Security Assertion Markup Language) compliant, it was published via Horizon Workspace based on membership of the Christmas Admin group, leveraging Horizon Workspace Single Sign-On capabilities. Christmas Admin users could log on to View using their token and access their key application in a secure manner without the need to keep entering passwords. Older applications were re-packaged using VMware ThinApp for deployment through Horizon Workspace to the virtual desktops.
For Santa, and a few other VIP users, Eric deployed Horizon Mirage. Now Santa has a shiny new Windows 8.1 touch screen laptop, with his data protected over the internet via Horizon Mirage. If his new laptop falls off the sleigh, Eric can recover the data to either a replacement laptop to be picked up later, or to a virtual desktop, allowing Santa to quickly access it over Horizon View using View’s web browser access from a Web Café, or via the View client on his mobile phone.
More importantly, he can still access his locally installed applications and data regardless of connectivity – which is important when he’s checking his Excel spreadsheets full of addresses… When Santa does have a connection back to the North Pole, he can also access the Horizon Workspace portal to gain access to his allocated applications. This means he doesn’t have to maintain favourite URLs for web based applications, as they’re all available in the workspace. And so Eric can sit back with some eggnog, content that he’s done his bit to help Santa in the smooth running of yet another Christmas. If you’d like to learn more about the VMware Horizon Suite, or any of our virtualisation solutions, we have lots of experience to share, so please contact us.

A technical viewpoint of UK VMUG 2014

VMware User Group (VMUG) host global events designed to enable customers and end users to interact with the community through knowledge sharing, training and collaboration. I’ve attended a couple of the London based VMUGs over the last couple of years, … [More]

VMware User Group (VMUG) host global events designed to enable customers and end users to interact with the community through knowledge sharing, training and collaboration. I’ve attended a couple of the London based VMUGs over the last couple of years, but this was my first time attending the UK VMUG; due to a late change in work commitments, I was fortunate enough to be able to attend this year’s event. From my perspective, these events provide an opportunity, much closer to home than VMworld, to interact with other community members and stay up-to-date with announcements, industry trends and technical content.

The first thing that struck me this year was the agenda and line-up. I’ve taken note of this in recent years (just out of curiosity), but I genuinely thought ‘wow’, that’s some line-up in terms of speakers, sessions and content, considering the event is sponsored with no registration fee. It took me around 20 minutes to come up with the sessions I wanted to attend, which showed the variety, quality and quantity of the sessions were of the highest order.

After arrival on Monday afternoon, I headed to the vCurry evening (thanks to Jane of the VMUG committee), tucking into some food (yes, a curry!), catching up with colleagues and then awaiting the start of the vQuiz. The quiz was entertaining and fun – 30 questions with a mix of categories – although it took me back in time to my VCP 3 and 4 exams, with the expectancy of memorising and knowing maximum supported/configuration numbers. The table I was on finished 3rd, but following some technicalities and an overrule by VMware EMEA CTO Joe Baguley, we were promoted to 2nd place! As a group, we decided to feed the prize back into the community for the main show.

On to the Conference

Keynote Address

A cold morning started with a brief introduction from VMUG leader Alaric Davies welcoming attendees, followed by the keynote from Joe Baguley (CTO, VMware EMEA) titled 'Rant as a Service'. The high-level summary: today, the goal for those of us in IT is to deliver applications to the business, through whatever means. We're on an iterative IT business process cycle of Data, App and Analysis, whether it takes 12 months or 2 years to complete projects; how can we reduce this? VMware are continuing the journey towards the software-defined enterprise, driven by policy management and automation, abstracting the entire physical layer into software, with the obvious advantages of the intelligence and flexibility of the code within. No longer is the focus specifically on hardware or infrastructure, but on the layer above in software, described as 'Infrastructure as Code', and the innovation VMware is rapidly delivering across the datacentre to achieve this.

Breakout Session #1

The first breakout session I attended covered a hybrid storage solution and its upcoming Virtual Volumes (VVOLs) integration, which held particular interest, as this is going to change how we manage storage capacity and provisioning. It's clearly part of VMware's overall strategy to define the datacentre in software and bring policy management to admins, without them worrying about the underlying characteristics of the storage hardware. The interesting thing to note about this vendor is that their VVOLs integration will be delivered via a vSphere APIs for Storage Awareness (VASA) provider running on the storage array, plus array firmware updates. In contrast, the presenter mentioned that a few other vendors (not all) are going to use virtual appliances for the integration, which raises manageability and availability concerns around those appliances.

Breakout Session #2

vRealize Operations 6.0, formerly known as vCenter Operations Manager (vC Ops), is due for release by the end of the year. Following the announcements at VMworld 2014, I attended this breakout session to gain further insight into the new offering. The product has undergone a massive overhaul (for the better) in terms of architecture, scale, deployment and usability, to name a few areas. Almost 1 million lines of code have been changed, although the core principles and concepts have been carried across into the new product; we still deal with the familiar Health, Risk and Efficiency major badges, for example. A simple migration path exists for customers with existing deployments (dependent on current version). I'm looking forward to getting my hands on the product and taking it for a spin!

Breakout Session #3

After lunch (including nibbles, biscuits and coffee with a few colleagues), I headed to the Horizon Architecture and Design session, as this fits my core skills and interests. The important message to take away, aside from the technical input on host, storage and network design, was to focus on your specific use cases and the behavioural working patterns of the end users (engaging with them), and to analyse assessment data before beginning a proof of concept or solution design. Depending on these outcomes, you may not require a full Windows 7 desktop; perhaps published applications or shared desktops will meet your requirements, drastically reducing the infrastructure required and its cost.

Breakout Session #4

The final breakout session I headed to was the vSphere Availability Update, which focused on products such as vSphere Data Protection, vSphere Replication, vCenter Site Recovery Manager and stretched storage clusters. Of these, I've worked most closely with Site Recovery Manager, and deployment of the new v5.8 is now quicker and simpler, with the option to install SRM using an internal (vPostgreSQL) database, eliminating the need to ask database admins to set up a database with the necessary privileges and roles. There is also now full integration with the vSphere Web Client, among many other enhancements. Future versions, scheduled for next year, are being completely re-written from the ground up, removing barriers in the code; this should allow SRM to use three sites instead of the current limit of two, although a many-to-one topology does exist today, more commonly used by service providers. Further, solutions are available from other vendors, combined with VMware, that can utilise three sites today if needed.

Closing Keynote

The theme of the closing keynote, presented by IT industry expert Chris Wahl, was 'Stop Being a Minesweeper'. I didn't know quite what to expect from the title, but having used some of the training materials Chris has produced, I knew it would be delivered in an entertaining fashion. Overall, the message was that automation is the way forward, and that we should begin learning some scripting now, such as PowerShell, PowerCLI or Python, to 'get the skills to pay the bills'. Finally, he noted that vCenter Orchestrator is a 'hidden gem' for automation.

Final Thoughts

To summarise, I thoroughly enjoyed the event and the opportunity to meet folks I'd only communicated with through social media before. The UK VMUG provides an optimal platform to collaborate with the community, partners and the VMware staff who have been asked to present. The VMware Global Support Services team were also on hand to answer any pending questions or escalate existing support tickets; overall a fantastic idea. The exhibit hall is also worth visiting to speak directly with vendors and learn about new technology to help overcome current business challenges. I would like to thank the VMUG committee for all the hard work that goes into the preparation and planning to organise such a smooth and efficient event. The UK VMUG presentations are available online and can be downloaded from here.

vFactor

If you are interested in presenting at a VMUG event for the first time, you can register here for the London VMUG in January 2015. This will be a lightning talk of 10 minutes, and you will be mentored, prepared and advised by a current community speaker, to provide guidance and wisdom around your presenting skills before you deliver it at the London VMUG. There are also some fantastic prizes on offer as an incentive. Xtravirt is always here to help you with your virtualisation challenges, so if you have a requirement, please do contact us and we'll be happy to assist.

Other UK VMUG Blogs

Xtravirt at UK VMUG User Conference by Ather Beg
UK VMUG 2014 – the conference in review by Jonathan Medd
UK VMUG – a session overview by Nigel Boulton

UK VMUG - a session overview


I have for quite some time been a regular attendee at the London VMUGs, but had only been to one UK meeting before - in fact the first one ever - so one of the first things I noticed when attending this year was how much the event has grown over the last three years. The event was well attended by Xtravirt consultants, which was ideal as, having only been at Xtravirt for a week, it gave me the opportunity to meet and converse with a number of my new colleagues for the first time, all of whom are highly active within the community. VMUGs are a great opportunity for us to keep up to date with product and technology developments, talk to customers and fellow IT professionals about their requirements and how they are using (or looking to use) VMware and partner products, and participate further in the virtualisation community. Xtravirt's commitment to supporting the community by this means was demonstrated to me very clearly by the fact that five of the company's twelve vExperts were present at the event, two of whom were presenting sessions.

The line-up

Joe Baguley, CTO of VMware EMEA, gave a very entertaining opening keynote titled ‘CTO Rant-as-a-Service’. It wasn’t so much of a rant, but it was a great view of what's going on in the industry from VMware’s point of view. One of the key themes was the significant decrease that will be seen in the time between the traditional IT refresh cycles going forward, and how the whole Software Defined Enterprise/SDDC concept supports that. He also talked about some of the exciting announcements that VMware made at VMworld, such as EVO:RAIL and EVO:RACK. I noted that Joe also commented that the VMUG is the best community event that he has involvement with, which reinforces my point above as to their significance and importance within the ecosystem. The next session I attended was Julian Wood’s ‘The Unofficial Low Down on Everything Announced at VMworld’. I wasn’t able to attend VMworld this year, so I thought this would be a great opportunity for me to get an overview of pretty much all the new products and improvements that were announced at VMworld. I was right - Julian put together and presented an excellent, information-packed session, and to be honest I was struggling to make coherent notes without missing anything as the information was coming thick and fast, but fortunately the VMUG Committee have kindly uploaded his comprehensive slides (each of which includes supporting links) to the London VMUG workspace on Box.com. The next session I selected was ‘What's Coming for vSphere in Future Releases’ presented by VMware’s Chief Technologist Duncan Epping. Duncan expanded on a number of the products that Joe had mentioned in his keynote and also detailed some exciting improvements to existing products and features we all know and love! The first session I attended after lunch was presented by our very own Jonathan Medd, and was entitled ‘Designing Real-World vCO Workflows for vRealize Automation Center (vCAC)’. 
This session was one of the reasons I really wanted to attend the UK VMUG this year – Jonathan is an expert in his field whose sessions always draw a good crowd. I am lucky enough to have worked with him personally for a number of years on and off, and every time I speak to him I learn something new, so I was expecting a good session. I am going to be heavily involved in vCAC and vCO at Xtravirt because of my interest and skills in scripting and automation, so I was quite excited about hearing tips from someone who has most definitely 'been there and done that'. And I wasn't disappointed! The space available in the mezzanine section for this session was overcommitted by at least 100%, and people were crowding round the table two rows deep in places, which to my mind demonstrates the community interest in automation based around vCAC and vCO. Jonathan ran this as an interactive session and got everybody to think about important aspects of designing an automation process that need to be considered at an early stage, and we discussed within the group the pros and cons of many of the possible approaches. As someone who is just getting up to speed with these products, I found it (as I expected to) an incredibly interesting and informative session – thanks Jonathan! Up next was 'vSphere Availability Updates and Tech Preview' by Lee Dilworth, Principal Systems Engineer at VMware. This was a great opportunity to brush up on the significant number of improvements that VMware has made, and continues to make, in this area. The slides for this session have also helpfully been uploaded to the Box workspace. After this, I went along to a partner session, 'Re-thinking Storage by Virtualizing Flash and RAM' by Frank Denneman, who is Chief Evangelist at PernixData.
PernixData are doing some exciting things with their 'FVP Cluster' technology, which allows any VM to remotely access flash and RAM on any other vSphere host, enabling fault-tolerant storage write acceleration with pretty impressive results. FVP supports all VM operations with no impact on performance, so features such as vMotion, DRS, HA, snapshots, VDP and SRM continue to operate transparently. Nice! The final session of the day was a hugely entertaining closing keynote by Chris Wahl, a double VCDX, prolific blogger, author and vExpert from Chicago who describes himself as a 'Virtualization Whisperer'! This session was entitled 'Stop Being a Minesweeper', and in it Chris talked us through his journey into automation and included a number of good resources to help people begin the learning process. So all in all, a great day, and thanks go to the London & UK VMUG Committee who once again did a fantastic job of organising the event - primarily Jane Rimmer, Alaric Davies, Simon Gallagher and Stuart Thompson, and of course also the wider VMUG organisation.

Want to get involved?

The Committee are running a competition for new community speakers, known as ‘V-Factor’! Entrants will have the opportunity to give a 10-minute lightning talk at the London VMUG meeting in January 2015 and could win one of a number of great prizes. You can find more here if you are interested in entering. Xtravirt is always here to help you with your virtualisation challenges so if you have a requirement, please do contact us and we’ll be happy to assist.

Other UK VMUG Blogs

Xtravirt at UK VMUG User Conference by Ather Beg
UK VMUG 2014 – the conference in review by Jonathan Medd
A technical viewpoint of UK VMUG 2014 by Steven Dunne

UK VMUG 2014 – the conference in review


The UK VMUG is a yearly full-day event under the banner of the VMware User Group organisation. Larger than the regional VMUGs held around the UK, the idea is to gather those from all parts of the UK interested in VMware virtualisation for a user conference with some of the best content available outside of VMworld. Held at the National Motorcycle Museum near Birmingham, the day includes breakout sessions, side sessions and discussion groups. There are also opportunities to ask questions of well-known VMware employees, including EMEA CTO Joe Baguley, and to meet many of the most popular vendors in the marketplace and community contributors with real-world experience.

The warm-up

The event is preceded the night before by the now traditional vCurry and vQuiz night: an excellent opportunity to relax with fellow attendees, enjoy some Birmingham curry and test your knowledge of obscure items from the vSphere Configuration Maximums guide, supported operating systems and the History Channel. During the evening my Xtravirt colleague Ather Beg and I were interviewed for a future episode of the popular vNews virtualisation podcast. We chatted about what we thought of the vCurry and vQuiz and what we were looking forward to at the following day's event.

The main event

The main event kicked off with the opening keynote delivered by VMware EMEA CTO Joe Baguley, with his take on the future trends of IT, particularly for us infrastructure folks. Joe has a very relaxed presenting style for an executive and is not afraid to tell it like he thinks it is. Telling a room full of hundreds of infrastructure people that they will need to change how they approach their careers, because changes in technology may significantly impact their existing roles, is quite a tough but compelling message; 'infrastructure as code' was the key takeaway. After the keynote session the Solutions Exchange opened, with the opportunity to tour the various vendors and the solutions they have to offer. A significant part of the rest of the day gave everyone opportunities to take part in breakout sessions from vendors, VMware employees and the community, on topics including VMware Horizon View, vCloud Air, Virtual SAN, NSX and vSphere futures. I was fortunate enough to be given the opportunity to contribute to the event by hosting one of the community discussion sessions. The title of my discussion was 'Designing Real-World vCO Workflows for vRealize Automation Center', with the idea of generating conversations around some of my recent experiences on a project utilising and delivering these technologies. The session was run twice, and both groups contributed to some excellent discussions around the questions to ask when identifying the requirements for automation projects and what would be needed to develop vCO workflows to implement them. The day was finished off with the closing keynote from well-known virtualisation expert Chris Wahl, and again it was good to hear a lot of emphasis on the suggestion that infrastructure professionals will need to learn how to code. Alaric Davies from the UK VMUG organising committee closed out the day with prize giving and a round of thanks to all contributors.

A big thank you to all of the organisers for putting on such a great event and for giving me the opportunity to contribute! Xtravirt is always here to help you with your virtualisation challenges, so if you have a requirement, please do contact us and we'll be happy to assist.

Other UK VMUG Blogs

Xtravirt at UK VMUG User Conference by Ather Beg
UK VMUG – a session overview by Nigel Boulton
A technical viewpoint of UK VMUG 2014 by Steven Dunne

Xtravirt at UK VMUG 2014 User Conference


UK VMUG is one of the biggest national-level VMware User Group conferences and is held annually in Birmingham. For those who don't know, VMUG stands for VMware User Group: an independent, customer-led organisation that holds meetings and conferences for the benefit of users of VMware products. The annual conference is a day event with sessions and talks from prominent speakers, plus breakout sessions. Vendors also typically sponsor the event and are there to showcase their offerings and answer questions. At Xtravirt, we aim to provide our clients with the best solution that fits their requirements, which makes this conference an ideal one for us to participate in. It not only gives us an opportunity to meet people from different industries and hear about their challenges, but also allows us to speak to vendors to see if their latest offerings can help fulfil the needs of our clients. For that reason, Xtravirt usually has a strong presence at such conferences, and this one was no different: we were there as both attendees and presenters. Being spoilt for choice, we spread out to attend the sessions that interested us most before it was time for some of us to present. As the day is jam-packed with interesting sessions as well as great solutions, one has to pick what to attend very carefully. Our first hosted session was from Sam McGeown, who discussed VMware NSX. He is a VCP-NV and spoke about the architecture of the solution, things to keep in mind while designing such environments, and how to prepare for the VCP-NV exam, which is decidedly harder than the regular VCP exams. Another of the Xtravirt team, Jonathan Medd, hosted a session on 'Designing Real-World vCO Workflows for vRealize Automation Center'. His experience, and the fact that he's currently working on such a project, make him best placed to talk about it. He took his audience through the common issues surrounding such projects, the complexities faced and the things one might easily forget when embarking on one.

As always, it was great to be present at UK VMUG and meet so many like-minded people. If you are a user of VMware products, I would highly recommend you attend this yearly event (in addition to your local VMUG). It's free, fun and a day well spent: along with attending key sessions, you may also find people who are facing the same challenges as you and get the chance to find out how they're planning to resolve them. A number of people, including Alaric Davies, Jane Rimmer and Simon Gallagher, work very hard to make this great day possible, and it is well worth attending. VMUG is well worth being a part of; you can find out more and register for UK VMUG (or your regional one) at www.vmug.com. Xtravirt is always here to help you with your virtualisation challenges, so if you have a requirement, please do contact us and we'll be happy to assist.

Other UK VMUG Blogs

UK VMUG 2014 – the conference in review by Jonathan Medd
UK VMUG – a session overview by Nigel Boulton
A technical viewpoint of UK VMUG 2014 by Steven Dunne

A brief look at Agent and Agentless Endpoint Device Discovery & Management

At an event recently, one of the speakers advised the audience that agentless software auditing was the preferred method. I do not completely agree with this viewpoint.

I recently attended an IT Service Management event, and one of the speakers advised the audience that using agentless software for IT auditing was the preferred method. I do not fully support this viewpoint, and this article briefly discusses the relative merits of agent-based versus agentless management techniques in the auditing context. Technical note: the idea of 'agentless' doesn't really exist in my mind - if you connect to WinRM, WMI, SSH etc. you're already connecting to a service (an agent) running on a system - however, for simplicity we'll stick to agent vs. agentless. The choice between agent-based and agentless is normally not a technology decision, but one of operational versus project requirements: whether there is an ongoing need for management after data capture, or whether this is a one-off event. Additional contributing factors can include change management, capex costs, and timeframes for data discovery. Below is a comparative view of both methods.
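The point that 'agentless' still means connecting to a service on the target can be made concrete with a reachability probe: before any agentless scan works, the discovery host must reach a remote-management listener (SSH on TCP 22, WinRM on TCP 5985/5986). A minimal Python sketch, with hostnames and the helper names being illustrative rather than from any particular tool:

```python
import socket

# Standard listener ports that "agentless" discovery actually depends on.
MANAGEMENT_PORTS = {
    "ssh": 22,       # Linux/UNIX agentless collection
    "winrm": 5985,   # Windows Remote Management (HTTP)
    "winrms": 5986,  # Windows Remote Management (HTTPS)
}

def probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP listener answers on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def reachable_services(host: str) -> list:
    """List which remote-management services a scan could use on this host."""
    return [name for name, port in MANAGEMENT_PORTS.items() if probe(host, port)]
```

If `reachable_services` returns an empty list, the 'agentless' tool has nothing to talk to, which is exactly the firewall/DMZ limitation discussed below.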

Agent based

Pros:
- Device can be monitored regardless of network connectivity
- Data can be collected prior to service starts
- Agents can run as a local system, and communication can utilise certificates
- Scanning can be scheduled to run without requiring serial or multi-threaded connections
- Agents allow for complete management, e.g. software update management, software deployment, monitoring and inventory
- Agent credential management is often catered for by the systems management tool
- Inventory/report access can be delegated
- Systems management functions can be delegated
- Known communication paths for firewall configuration

Cons:
- Requires an agent install; however, a well-managed environment should cater for this, e.g. by including it in the gold image
- Agent conflicts: some management tools can conflict, although this is usually mitigated by a suitable design
- Access to systems management tools can come with political hurdles; however, effective sponsorship and good communication should mitigate this

Agentless

Pros:
- Reduced risk by not deploying software to target devices
- WMI, SSH or WinRM connections are often already accessible
- Scans can be scheduled, e.g. via a task scheduler

Cons:
- Credentials must be supplied to the discovery service, which could potentially be running from any device
- Network connectivity must be solid (e.g. not blocked by a firewall, correct routes, low latency)
- Agentless scans still rely on remote management services, which must be enabled and secured
- Troubleshooting data collection can be time consuming
- Catering for DMZ or multiple forest/domain scenarios can be problematic
- Thread control can be problematic
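The thread-control and scheduling concerns above come down to bounding how many concurrent remote connections a scan opens: serial scans are slow, unbounded ones flood the network. A hedged sketch using a bounded worker pool, where `collect` is a placeholder for whatever per-host WMI/WinRM/SSH query a real tool would perform:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def collect(host: str) -> dict:
    """Placeholder for per-host agentless collection (WMI/WinRM/SSH query)."""
    return {"host": host, "status": "ok"}

def scan(hosts, max_threads: int = 8, timeout: float = 30.0) -> dict:
    """Scan hosts with a bounded worker pool, so the discovery service
    neither runs serially nor opens an unbounded number of connections."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        futures = {pool.submit(collect, h): h for h in hosts}
        for fut in as_completed(futures, timeout=timeout):
            host = futures[fut]
            try:
                results[host] = fut.result()
            except Exception as exc:  # one unreachable host must not kill the scan
                results[host] = {"host": host, "error": str(exc)}
    return results
```

Tuning `max_threads` against network latency and target responsiveness is exactly the troubleshooting burden the cons list refers to.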
In summary, choosing a single method of data collection is not ideal practice; a combination of technologies and methods will give you the most detail about an environment. In my experience, long-term endpoint management strategies without agent-based management result in a poorly managed environment. The idea of a single Configuration Management Database (CMDB) is antiquated; a federated Configuration Management System is what is required for a well-managed environment, though the path to achieve this is neither short nor easy. Review your requirements continually, pick the right tools for the right outcomes, and consider your short- and long-term objectives, so that you can utilise solutions that give you the ability to make the right business decisions. If you would like to learn more about IT transformation strategy, virtualisation and cloud solutions, or wish to discuss your workspace challenges, we have lots of experience to share, so please contact us.

Rapidly Maturing SDDC solutions by VMware


VMworld 2014 Europe in Barcelona has just finished and I had the pleasure of attending again this year. This is my favourite conference on the technical calendar, as almost everyone from the VMware community is there and you get a chance to have great conversations about what is happening. It's also a chance for VMware to show customers, partners and vendors the roadmap for existing and upcoming products. As always, Xtravirt showed its commitment to its team of consultants, and to the event, by organising a big presence at VMworld. Whilst I blogged every day about my experiences at this year's event (see links at the end of this post), this post is about why I think VMware is now in the prime position when it comes to providing solutions that can truly satisfy the definition of the 'Software-Defined Data Centre' (SDDC). Last year, VMware announced that their focus for the year and going forward would be to create and develop products that allow organisations to create 'policy-driven' deployments: solutions that are completely automated and, once defined, can be made available to users 'as a service' based on their entitlements. We all know that VMware's meteoric rise in the virtualisation world is due to their unparalleled solutions for compute virtualisation, but now the focus is on virtualising the networking and storage layers. VMware has been working hard on this front for the past couple of years, not only developing products like NSX and EVO:RAIL (and EVO:RACK) but also ensuring that these solutions can be driven completely from vRealize Automation (formerly vCloud Automation Center). vRealize Automation has come a long way in the past year or so and holds the key to VMware's strategy of policy-driven automation of solution deployment.
At VMworld, VMware was keen to demonstrate the power of vRealize Automation with all VMware and third-party products, and it's quite clear that these products are mature and integrated enough to satisfy almost any requirement. This overall integration is not limited to private cloud deployments. There have been big investments in the availability and capabilities of vCloud Air: stretching an environment to the public cloud has never been easier, and with its elastic capabilities there are great use cases that can benefit all kinds of organisations. We all know that VMware already provides Infrastructure, Desktop and Disaster Recovery 'as a Service', but more features are coming, e.g. DBaaS (Database as a Service) and automation. One other fact that makes me think VMware currently has the most complete solution is their work on integrating with other technologies, e.g. OpenStack and Docker. There are a lot of organisations out there that have existing investments or a developing interest in these technologies. While some people might see VMware's 'Better Together' philosophy as clever marketing, from what I've seen VMware is making a real effort to ensure that ecosystems containing OpenStack and Docker can integrate and work with vSphere and vRealize Automation. Considering all this, VMware has a mature, integrated and flexible offering when it comes to SDDC deployments, and if you are thinking about starting on this journey the VMware suite is a good place to start. As an Enterprise Solution Partner for VMware, Xtravirt has the skills and experience to help you along this path, so if you are interested in deploying any of these technologies, please contact us and we'll be happy to assist. My experience of VMworld 2014 Europe is summarised in the Day 0 and 1, Day 2, Day 3 and Day 4 posts.

NSX at VMworld Europe 2014


I recently attended the annual VMworld Europe event and, due to the current focus in my day job, decided to formulate a session schedule largely based on VMware’s NSX for vSphere (NSX-v). My goal was to build on the experience that I’ve gained from working with NSX for the best part of the last year and also learn about the future of the platform along with both VMware and third-party integrations. The MGT1969 session with Ray Budavari and Zackary Kielich gave an update on the recently rebranded vRealize Automation (formerly vCAC) and its latest integrations with NSX-v. This included native NSX functions that previously relied on vCNS behind the scenes plus the powerful new vRealize Orchestrator plugin for NSX that now drives the REST API-based communications for automation. I also witnessed an impressive demo (NET1949) by Scott Lowe and Aaron Rosen on deploying elastic applications using Docker where NSX-MH (multi-hypervisor) provided the logical network provisioning agility required to scale to this demanding degree. Attending Dimitri Desmidt and Max Ardica’s session (NET1586) on Advanced Network Services with NSX was a refresher for me due to the fact I had originally trained with them at VMware. It was a useful revision exercise with a comprehensive overview of NSX logical network functions including logical firewalling, load balancing and VPN. Some good questions came up that also forced me to reevaluate my knowledge on a couple of topics and provided me with some test cases to investigate upon returning to my lab environment. The first day finished with Anirban Sengupta and Srinivas Nimmagadda’s session (SEC2238) on Micro-Segmentation Use Cases with the NSX Distributed Firewall (DFW). I’ve been working with this tool a fair amount and micro-segmentation is one of the most compelling reasons to deploy NSX for a lot of companies. The DFW allows granular vNIC-level firewalling on Virtual Machines, distributed at the Hypervisor layer. 
The typical model of trust zones, common to traditional data centre firewalling, only really cater for perimeter security and do not address the possibility of lateral attacks once the inside of the network is compromised. NSX facilitates an extremely powerful approach by inspecting traffic directly at source i.e. the vNIC. Integration with Tufin Orchestration Suite was also announced with features including change management and real-time compliance checking for the DFW. The MGT1878 session by Vyenkatesh Deshpande and Jai Malkani was a highly interesting deep dive into the new vRealize Operations integration with NSX-v. This allows previously unheard of centralised visibility into the platform for monitoring purposes such as tracing both physical and logical topologies for VMs for troubleshooting purposes. Traditional networking opinion may have concerns that overlay technologies such as VXLAN are too opaque from the monitoring perspective but this session did wonders to dispel that perception. Scott Lowe and Brad Hedlund’s session (NET1468) on IT Operations with VMware NSX covered how to approach delegating administrative access to NSX-v for both network and server admins and gave me some immediately usable material around Role Based Access Control. It was also a very entertaining and well-presented session! Possibly the session I gained the most from was Nimesh Desai’s talk on the NSX-v reference design for SDDC (NET1589). This was a relatively advanced session with good coverage of topics such as VTEP teaming recommendations, NSX Edge scale out with ECMP and physical data centre topologies and how to map NSX-v deployments to them. Other sessions of note included Francois Tallet’s vSphere Distributed Switch Best Practices for NSX (NET1401) and Ray Budavari’s session on Multi-Site NSX (NET1974). The latter is a topic that is very much of note as currently NSX-v maintains a mapping to a single vCenter server and out of the box implies a single-site configuration. 
There are, however, multiple means by which a multi-site configuration for disaster avoidance or recovery can be architected, involving technologies such as vSphere Metro Storage Cluster and NSX’s L2 VPN, and by optimising egress traffic using NSX Edge Service Gateways.

Overall it seemed that, despite recently debuted technologies such as EVO:RAIL and VMware Integrated OpenStack, there was a huge buzz around NSX at VMworld Europe 2014. The goal of rapidly deploying applications in the data centre cannot easily be achieved when network provisioning lags behind compute in its agility. NSX is rapidly developing a rich feature set building upon its core network hypervisor and network function virtualisation, and is gaining tighter integration with VMware’s core toolsets in the vRealize suite that facilitate automation and monitoring. This will surely see it deployed in more and more data centres, and I relish the opportunity to continue architecting these solutions for our customers. If you would like to learn more about our cloud solutions, or wish to discuss your workspace challenges, we can help - please contact us today.

The impact of cloud (IaaS) on Change Management

Introduction If you’re thinking of implementing a private/hybrid infrastructure as a service (IaaS) platform, then one of the key considerations is how to operate the platform. I’ve been researching online to see if there are any industry standards in this … [More]


If you’re thinking of implementing a private/hybrid infrastructure as a service (IaaS) platform, then one of the key considerations is how to operate the platform. I’ve been researching online to see if there are any industry standards in this area and have found detailed analysis to be lacking. VMware and IBM provide some guidance, which appears to be aimed at a policy and people perspective, but I’ve not found much that describes these activities at the process and procedural level. In this article I’ll explore the creation of a virtual server on a private cloud tenant to see how this fits in with ITIL guidance when considering change management. The word ‘cloud’ can be used to describe a number of different solutions; for the purpose of this article we are looking at cloud from an infrastructure perspective. VMware’s definition of cloud computing (one that I, and the industry, seem to agree with) has the following characteristics:
  • Resources on demand
  • Pay for what you use
  • Accessible as a loosely-coupled service
  • Scalable and elastic
  • Improves economics due to shared infrastructure and elasticity

ITIL Lifecycle Elements

A typical ITIL service management lifecycle would normally contain the following processes:

Cloud Platform solution components

One of the key differentiators between traditional IT and cloud-based computing is the concept of multi-tenancy. The following diagram shows the distinct layers that make up a cloud solution; a key differentiator between traditional and cloud computing is the introduction of an additional layer - tenancy - and in this example we are going to talk about the customer-facing “tenant” and “platform” layers. Traditionally we would use a standard change management process across the board; however, in a cloud environment one of the things we are looking for is agility and self-service. This is because in a cloud environment we have further layers of abstraction to consider:
  • Platform Change Management - changes to the hardware or software that provides the cloud services, e.g. hardware, hypervisor, management systems, web portal, etc.
  • Tenant Change Management - changes that affect the environment provided through a tenant abstraction which may include tenant configuration and virtualised guest services

Example Requirement – Single Server Deployment

Take the following scenario. Note: for the purposes of this article I’ve provided a simple view without refining other process interactions such as configuration management, service level management, etc. Jane is a member of the applications development team at a fictitious company called BlueStar. Jane is working on a project where a single virtual server is required. This project has a valid business case and has been approved by the programme governance board. In a traditional environment a service design package would be created, acceptance criteria fulfilled, a normal change raised/reviewed/approved, a server procured, deployed, tested and handed over to support, and the change closed. Now how would that work in a cloudy world?

Public Cloud

Jane would log into a self-service portal and request a single virtual machine from the Cloud Service Catalogue. In an out-of-the-box public cloud system, e.g. Azure/AWS/vCHS, after specifying a few details a server would be provisioned, granting Jane administrator rights to that server. She would be billed on a pay-as-you-use basis, and it would be accessible as a loosely-coupled service (VPN/internet access/APIs, etc.).

Private Cloud

In this example I’m using VMware vCAC, vCO and a pseudo ITSM tool. Jane has project/financial approval to proceed, so she logs into vCAC and goes to the service catalogue (I’ll refer to this as the cloud service catalogue, as it doesn’t replace the technical or business service catalogue). Jane requests a single virtual machine and provides the relevant details. This initiates a workflow which registers a standard change in the ITSM suite. Because the IaaS system operates as a utility, the act of deploying a virtual server is pre-authorised in the change management process; essentially it can now be managed through the Request Fulfilment process. We have options: we could simply log the change, provision the server in an automated/routine manner, and provide change feedback through to closure via a workflow. We could also use approval mechanisms to provide an additional level of governance and control. Whichever method is used, the provisioning of a new server is in line with ITIL good practice. Utilising technology we can also integrate with other processes in an automated manner; for example, as part of the automated deployment of the virtual server we may have included a software agent which provides integration with a configuration management system, and we would also be able to notify different process owners of the action, either in real time or via reporting mechanisms.
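The private cloud flow described above can be sketched in code. Everything here is illustrative - the class, function names and the in-memory “ITSM log” are hypothetical stand-ins for real vCAC/vCO workflow steps and ITSM tool API calls, not an actual integration:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names are hypothetical stand-ins for
# real vCAC/vCO workflow steps and ITSM tool API calls.

@dataclass
class ChangeRecord:
    requester: str
    summary: str
    status: str = "registered"
    log: list = field(default_factory=list)

def provision_vm(requester: str, vm_name: str, itsm_log: list) -> ChangeRecord:
    # 1. The catalogue request triggers a workflow that registers a
    #    standard (pre-authorised) change in the ITSM suite.
    change = ChangeRecord(requester, f"Deploy VM {vm_name}")
    itsm_log.append(change)

    # 2. Standard changes need no CAB approval, so provisioning
    #    proceeds immediately from the approved template.
    change.log.append(f"provisioned {vm_name} from approved template")

    # 3. Integrate with other processes, e.g. a deployed agent
    #    updates the configuration management system.
    change.log.append("configuration management system notified")

    # 4. Feed change status through to closure via the workflow.
    change.status = "closed"
    return change

itsm_log: list = []
record = provision_vm("jane", "bluestar-app01", itsm_log)
print(record.status, len(itsm_log))  # closed 1
```

The key design point the sketch captures is that the change record is still created and closed - governance is preserved - but no human approval gate sits between request and provisioning.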

Change Governance, Control & Lifecycle Management

In this example we have utilised project and programme management to provide a level of governance and control rather than utilise the change approval board. This does however highlight some potential areas for concern. Below are just some of the concerns that may exist in relation to cloud and service management:
  • Does the project/programme board ensure that the service provision aligns with the enterprise IT strategy?
  • Are existing services analysed to check if functionality already exists?
  • Is risk and security considered thoroughly prior to authorising the provision of a new server?
  • How will testing be conducted?
    • It is assumed that the service template (Virtual Machine) will be in a highly tested and verified state so this shouldn’t be a problem; however, the changes in configuration and application load may have far-reaching implications. This would suggest that post-deployment, the standard/normal/emergency change route would still be required.
  • What continual governance process will be used to assess system/platform usage?
  • How do we ensure financial approval is in place?
  • How do we conduct demand management in a fully autonomous environment?
    • Peak Demand vs. Average Demand etc.
    • How do we communicate with our customers to understand demand?
    • Does our chargeback model accommodate standby/overcapacity?
    • Does providing “room to grow” capacity negate the benefits?
    • Do we have a good supply chain and integration model for rapidly bolstering our IaaS platform?

Strategy, Design, Change, Release and Deployment

There are a number of policies, processes and procedures that play a part in the fully defined and managed change world. I have provided a subset of activities that need to be considered:
  • Service Portfolio Updated
  • Service Design Package (SDP)
  • Capacity Planning
  • Request for Change (RFC)
  • Release and Deployment
  • Change Closure


Achieving the agility and flexibility of cloud computing whilst providing a valued customer experience and driving business value presents a challenge. For the internal IT division, becoming an IT service provider/broker is no easy feat, and understanding how ITIL and cloud computing complement each other is one of the key aspects. A rigid and inflexible change management policy may heavily impact the benefit realisation of cloud computing, while a just-do-it (JDI) approach may scare the business away from cloud, or worse (for the internal IT provider), into the hands of a 3rd party. Automation and agility bring many benefits, and harnessing these powers can give IT the edge: being close to the customer, with robust people, process and technology skills, will see IT as the valued enabler, and the business will think twice before considering outsourcing. If you would like to learn more about virtualisation and cloud solutions, or wish to discuss your workspace challenges, we have lots of experience to share so please contact us today.

A common mistake when sizing virtual environments

When designing a virtual infrastructure to host either desktops (VDI) or servers it’s important to size it correctly. Time and time again I see people misunderstand the approach to sizing by reading metrics on the face value but not fully … [More]

When designing a virtual infrastructure to host either desktops (VDI) or servers it’s important to size it correctly. Time and time again I see people misunderstand the approach to sizing by reading metrics at face value without fully understanding them. In this article I’m going to focus specifically on compute sizing, that is, how much CPU and memory capacity I will need on my virtual hosts to run my virtual workloads. I’ll talk about disk and network IO in another post, but the same principles apply. To size correctly we need to do the following:
  1. Understand what workloads are in scope
  2. Monitor these workloads over a representative period to understand their performance requirements
  3. Convert the metrics into hardware requirements
Let’s break this down…

1. Understand workloads in scope

This is a fairly simple concept. You need to know which workloads are going to run on the new environment so you know which ones to profile.

2. Monitor these workloads

To design a virtual infrastructure of any scale you’ll likely need to use a tool such as VMware Capacity Planner, PlateSpin Recon, etc. Whatever tool you use, you want to measure some core metrics:
  • CPU utilisation (MHz)
  • Memory Utilisation (Active Memory)
It’s important… no, it’s critical that the above CPU and memory metrics don’t just show peak and average values but the actual values for each workload as a function of time. Let me try and explain. Say, for simplicity, we have 4 VMs in scope. We profile these over a 30-day period and then report on their performance over a typical 24-hour period. The results may look something like the chart below, where each VM peaks at 500 MHz utilisation, but each of them peaks at a different time of day.

Now what figures should we use to size?


If we use the average of each VM over the monitoring period then this would come out at only 200 MHz total required! This would be only a quarter of the total compute power you need to buy, so clearly sizing on averages is often going to get you into trouble and give you performance issues.


If we use the peak value of each VM and add these up to come up with the total MHz required, then this would come out at 2,000 MHz. This would be over twice what is actually required. I have heard a number of people say, “it’s better to be safe than sorry”, however remember that ordering more than twice the compute required could cost your company or customer hundreds of thousands of pounds extra - money that could be better spent on other areas of the project.

Cumulative Peak

The term I use for correct sizing is cumulative peak. Take a step back and think about what you’re trying to do here. You’re trying to size a virtual platform with enough compute power to run all the virtual machines during their observed peak period. If you have a large set of VMs (100s or 1000s) and you profile them for a representative time period (30 days or more), then this is going to give you accurate enough data to size correctly. The example I gave in the charts is there to illustrate the point and is very simplistic. In practice you’ll also need to account for:
  • A margin of error (say 5-10%)
  • Growth
  • Your specific knowledge of the customer that might warrant additional considerations. i.e. You profiled during the quieter months and need to allow for busy summer months as an example.
The key point of this post is to be aware that sizing your virtual environment based on workload average or peak can have dramatic implications.
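The three sizing approaches can be compared with a short script. The numbers here are made up purely for illustration (four VMs idling at 100 MHz, each peaking at 500 MHz during a different two-hour window), not taken from the charts above:

```python
# Compare three ways of sizing total CPU from per-VM hourly samples (MHz).
# Illustrative data: 4 VMs, each idling at 100 MHz and peaking at
# 500 MHz in a different two-hour window.

HOURS = 24

def vm_profile(peak_start: int) -> list:
    """Hourly samples for one VM peaking during [peak_start, peak_start+1]."""
    return [500 if peak_start <= h <= peak_start + 1 else 100
            for h in range(HOURS)]

vms = [vm_profile(start) for start in (0, 2, 4, 6)]

# Sizing on averages: sum of each VM's mean usage (undersizes).
sum_of_averages = sum(sum(p) / HOURS for p in vms)

# Sizing on peaks: sum of each VM's maximum (oversizes).
sum_of_peaks = sum(max(p) for p in vms)

# Cumulative peak: the maximum of the hour-by-hour totals - what the
# platform actually needs, before adding margin of error and growth.
cumulative_peak = max(sum(p[h] for p in vms) for h in range(HOURS))

print(f"sum of averages: {sum_of_averages:.0f} MHz")  # 533 MHz
print(f"sum of peaks:    {sum_of_peaks} MHz")         # 2000 MHz
print(f"cumulative peak: {cumulative_peak} MHz")      # 800 MHz
```

Because the peaks never coincide, the cumulative peak sits well below the sum of the individual peaks while staying comfortably above the sum of the averages - which is exactly why it is the figure to size on.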

But peak values are still useful

Understanding peak workload values is useful when it comes to right-sizing individual VMs in order to give them the correct CPU and memory specification. Here the peak values should be used. This is particularly important when you’re planning on running these workloads in an environment such as vCHS, where you cannot overcommit memory. If you over-allocate memory when you don’t need to then you’re reducing the number of VMs you can run on your vCHS cloud. If you would like to learn more about virtualisation and cloud solutions, or wish to discuss your workspace challenges, we have lots of experience to share so please contact us today.

A Thought Exercise – What do you get when you combine vSphere 5.5, vSAN, vCAC, Horizon 6 and NSX?

VMware released Horizon 6 on the 9th April, only a day after the demise of Windows XP support. Horizon 6 has an array of fancy End User Computing (EUC) related features that make a really compelling case, however, writing a … [More]

VMware released Horizon 6 on the 9th April, only a day after the demise of Windows XP support. Horizon 6 has an array of fancy End User Computing (EUC) related features that make a really compelling case, however, writing a piece solely around this is not the plan with this article. Instead let us consider the VMware portfolio for a moment. It has been a busy time recently, with a number of innovative technologies being released and starting to gain some traction. Let us consider a handful of these technologies as a bit of a thought exercise for a moment.

New and Improved vSphere and vSAN

VMware vSphere 5.5 Update 1 was released relatively recently. This accompanied the release of vSAN – VMware’s own storage virtualisation technology. This, in itself, is something of a game-changer. Taking a set of relatively commodity servers with some SSDs and spinning rust, it is possible to configure a vSphere environment with performance and resilience without needing to buy an expensive SAN with all the paraphernalia that such a solution usually requires.

vCloud Automation – Presenting and Automating Service Provisioning

Next, let’s consider vCloud Automation Center. This provides a highly customisable self-service portal that allows an enterprise to present cloud provisioning services to customers, whether internal private-cloud customers, or externally, in the case of cloud providers. A nice idea – a user can request a service from a catalogue and all the technical processes can be automated and hidden away.

Horizon 6 - End User Compute, Reloaded.

Now we look at the new boy on the block – Horizon 6. This is more than simply an extension of the Horizon View stack. With the release of Horizon 6, we start to see the integration between the somewhat disjointed elements of the previous Horizon Suite. We see a raft of changes:
  • Application presentation from different sources (ThinApp, web applications and Citrix XenApp).
  • An improved Horizon View, with enhanced performance and extra features.
  • A centralised Workspace interface for ease of use for the end user.
  • Local SSD storage
And let’s also consider that this release directly supports vCAC and vSAN, so we can do clever things with provisioning services to customers and the storage infrastructure without resorting to third party solutions.

Networking with NSX

The last item on the shopping list is VMware NSX. This is VMware’s new network virtualisation stack which allows the provisioning of a whole networking environment within a virtual infrastructure:
  • Provisioning of virtual VLANs – VXLANs – across a virtual estate. Pretty much as many as you will ever need, as well as several ways of bridging to physical VLANs upstream.
  • Firewalling – at several different levels, from the virtual NIC on a VM, to VXLAN wide and across network boundaries, NSX includes its own firewall solution, as well as providing integration mechanisms to support third party options.
  • Load Balancing – Not merely IP address sharing, but quite feature rich, including the ability to host certificate based load balancing.
  • Integration with vCAC for provisioning network services.

Putting This All Together

So, taking this list of products, we can consider our thought exercise. What do we get if we combine all of these into an integrated EUC solution?

Firstly, we can look at provisioning. Using vCAC provides the ability to offer a console presenting a catalogue of end user services – remote desktops of different specifications, access to applications and services. The service catalogue can then automate the provisioning of these services, as well as the underlying infrastructure where applicable.

The infrastructure would, as you would expect, sit on a vSphere environment, augmented using vSAN and NSX. In the case of vSAN, considerable performance can be gained through the use of locally installed SSDs presented across hosts as a virtual SAN. Scaling is relatively straightforward to accomplish too: as hosts are added in a scale-out fashion, so too is storage, presenting a potentially linear model.

NSX as part of the environment is a subject for discussion in itself. Using commodity network hardware – a relatively cheap managed switch infrastructure – a dynamic, fully featured network infrastructure can be established by moving the network stack from the physical to the virtual world: Software Defined Networking. An End User Compute solution such as this is likely to include a management infrastructure separate from the virtual desktop infrastructure. View brokers, Horizon Workspace and Horizon Mirage all require network load balancing in order to scale in a resilient fashion with adequate performance, and NSX Edge appliances can be used to provide this ability. In addition, use of routing and firewalling within the virtual infrastructure not only provides tighter security in a traditional single-tenant enterprise, but also opens up the ability to provide secure multi-tenancy on a shared architecture – with VXLANs supporting discrete customers in isolation. Of course, this becomes all the more important when internet connectivity for these services is required.
On the infrastructure supporting virtual desktops, NSX can provide similar segregation between tenants. Client security using NSX is potentially a massive benefit: the NSX Distributed Firewall applies to VMs at the individual VM network interface, subject to rules established within NSX. This is much more flexible than a hardware appliance working at a global level – discrete policies can be applied using parameters such as which VXLAN the VM is located on, or even VM parameters such as the VM name.

One pretty intriguing feature of NSX is its integration with third party antivirus scanning solutions, for example Symantec Critical System Protection. Consider a default firewall rule applied to a VM. If the VM is picked up as being infected by the antivirus solution and tagged as infected, NSX can automatically apply a different policy to isolate the VM until it is cleaned by the antivirus solution – all in an automated fashion.

So, all in all, potentially a slick, compelling solution, all provisioned using VMware’s product range. If you would like to learn more about virtualisation and cloud solutions, or wish to discuss your workspace challenges, we have lots of experience to share so please contact us today.

Horizon View 3D: a client engagement

Overview and requirement Over the course of the last month I’ve been working with one of our customers on a VMware Horizon View proof of concept project. Primarily, their main business driver and use case was to provide a virtual … [More]

Overview and requirement

Over the course of the last month I’ve been working with one of our customers on a VMware Horizon View proof of concept project. Primarily, their main business driver and use case was to provide a virtual desktop infrastructure capable of delivering desktops to their CAD users without impacting functionality and user experience. Virtual desktops with intensive graphics demands typically require more raw horsepower than would be necessary to deliver a traditional operating system and back office applications; with this in mind, I realised I’d need to seek out what options were available to support this requirement. Graphics card technology has advanced significantly in recent years and, with the introduction of GPUs (Graphics Processing Units), tasks can be offloaded, leaving the CPU to concentrate on serving application and operating system needs. Modern operating systems from Microsoft and Apple, as well as virtualisation hypervisors from VMware, Citrix and Microsoft, are now able to detect the presence of a GPU and natively pass graphics processing requests across to it. But how would this work in a VDI deployment? Do I need to dedicate a GPU per client (a 1-to-1 relationship) or share a single one (1-to-many)? In this write-up I’ll be sharing items that I feel are often overlooked and sometimes assumed in these types of deployment. While I will be discussing the use of VMware Horizon View, it’s worth noting that Citrix and Microsoft offer the same functionality with graphics card hardware enablement and acceleration.

Desktop and application assessment

Before installing any software or creating a design document it’s extremely important to investigate the operating system and applications in scope, how they function (single or multi-threaded), the pre-requisites of the application and resource demands during peak conditions. For this reason a desktop assessment was completed on the existing physical CAD workstations to provide a greater insight into the workload metrics, such as CPU, RAM, network and disk.

Design considerations

There were a number of design decisions captured in the vSphere and View design documentation and I’ve pulled out some of the more prominent ones relating to delivering the higher graphics demands and the impacts / constraints they introduce.
  • When considering large VMs for CAD users and dedicating, say, 4 vCPUs, 8 GB RAM and 512 MB video RAM, how will this affect ESXi CPU co-scheduling?
  • What impact would a group of large VMs, with a specification as mentioned above, have on VMware HA Clusters and their design? vSphere’s Direct Path I/O cannot be used with HA, DRS and vMotion which introduces challenges for Business Continuity
  • Should the Horizon View Pool type be automated or manual? Both options require their own subsequent design considerations
  • Network connectivity and latency must be defined up-front. Poor bandwidth and high latency will present a poor user experience
  • The PCoIP protocol can be tuned, parameters such as image quality, caching, frames per second and maximum session bandwidth should be reviewed to prevent saturation from noisy neighbour(s)
  • Is the client device capable of handling 3D workloads? Review the specifications but more importantly try and acquire loan devices to see exactly how they perform side by side

GPU, dedicated or shared?

The Virtual Dedicated Graphics Acceleration (vDGA) option within Horizon View presents a virtual machine with a dedicated GPU; this requires vSphere’s Direct Path I/O feature, and only that virtual machine can use the GPU. The alternative is Virtual Shared Graphics Acceleration (vSGA), which permits multiple virtual machines to share a GPU and would typically be used for lightweight 3D use cases. I ruled this option out, meaning dedicated (vDGA) would be needed.

Future proof your deployment

In this POC the customer only had a requirement for vDGA, but to support the use of vSGA at a later date for another application or ‘light CAD’ testing, it was agreed to install the graphics drivers into the ESXi console. It’s good practice to do this at the initial deployment stage, when the hosts aren’t being utilised, as they will require a restart.

Virtual machine checks & ESXi tweaks

A few things to bear in mind:
  • VMware virtual machine hardware v8 and below only supports 128 MB video RAM per VM; use v9 or higher if more is required
  • Install the latest graphics card drivers into the VM
  • Install the Horizon View Agent into the VM
  • Run the VMware OS optimise tool (be careful not to disable settings required for 3D experience)
  • Remove the PCI device from the VM parent image before cloning; you won’t be able to clone it otherwise
  • Following the graphics driver install on the ESXi host, configure the GPUs using:
    • ESXi > Advanced Settings > DirectPath I/O Configuration
    • Ensure Intel VT-d is enabled in the BIOS (required for vDGA)
    • Power Management is set to OS Controlled
  • MS Windows OS registry update

Horizon View Pool Configuration

A number of configuration changes were applied to the virtual machines within the pool. If you do the same, remember that after making changes you must power the virtual machines off and then back on again for the changes to take effect; restarting or rebooting a virtual machine does not apply the new configuration.

Performance tips

Performance tuning of the virtual machines can be achieved in more than one area. Items such as the virtual hardware, the PCoIP protocol and the 3D application itself will all contribute to boosting the experience.
  • Increased vCPU count for high rendering performance.
  • PCoIP FPS - the application required a high amount so this was increased.
  • The FPS within the thin client configuration was also increased
  • Enabled ‘Disable build-to-lossless’ to:
    • Reduce the amount of PCoIP traffic
    • Reduce load on virtual machine and endpoint device.
  • Application specific configuration – rendering setting changed to use hardware
  • PCoIP Image Caching was enabled - the thin client devices were capable of dealing with this setting
  • MS Windows OS registry change
    • HKLM\SOFTWARE\VMware, Inc.\VMware SVGA DevTap\ Value Name: MaxAppFrameRate=dword:00000000
    • By default this value is 30, if lag and fragmented display is observed during animation then change to 0 as above
Also consider using performance monitoring software tools and utilities provided by the graphics card manufacturer. Avoid monitoring the GPU from the ESXi host; instead monitor within the guest operating system, especially when using vDGA. If the application has its own performance and/or benchmark facility, use this to provide a before-and-after comparison, especially when fine-tuning.

Final thoughts

As you can see, there are a number of steps that must be undertaken and areas that shouldn’t be overlooked, but I cannot emphasise enough that the only way to achieve a successful deployment is to assess the original application(s), benchmark and document. Once the new environment is up and running, test it thoroughly and use the original benchmarks to validate outcomes. Never assume the facts, figures and performance claims from manufacturers will be sufficient; the real test is when a customer nods and agrees it’s acceptable to them. For further reading, see the VMware whitepaper entitled ‘Graphics Acceleration in VMware Horizon View Virtual Desktops’: https://www.vmware.com/files/pdf/techpaper/vmware-horizon-view-graphics-acceleration-deployment.pdf If you would like to learn more about virtualisation and cloud solutions, or wish to discuss your workspace challenges, we have lots of experience to share so please contact us today.

My Synergy 2014 Experience

Citrix held its most important event of the year for partners and customers at the beginning of May. I have never had the chance to attend a Synergy event. Prior to joining Xtravirt I worked as a freelance consultant, meaning … [More]

Citrix held its most important event of the year for partners and customers at the beginning of May. I have never had the chance to attend a Synergy event. Prior to joining Xtravirt I worked as a freelance consultant, meaning I would have had to finance the trip myself. To say I was excited would be an understatement. For someone who has been working with Citrix technology for over 10 years, this really was the trip of a lifetime. Citrix Synergy is one of the largest virtualization conferences, attended by everyone from IT professionals to C-level execs. Synergy covers topics around end user computing, enterprise mobility, cloud computing and networking, in addition to core traditional topics. Those attending also hope to hear from Citrix about new products and features; at last year’s show, Citrix unveiled XenMobile and the major changes to their XenApp and XenDesktop offerings. There were a couple of key points I wanted to ensure I got out of the conference. First, I wanted to gain a deeper technical understanding of some areas I did not have a lot of exposure to, such as XenMobile, ShareFile and Worx mobile apps. Then, I wanted to get a better understanding of how Citrix is making it easier for customers to adopt its technologies.


Opening Keynote

The opening keynote was very exciting and had me captivated. Firstly, we were serenaded by the entertaining “iBand” who played instruments exclusively on mobile devices! CEO Mark Templeton entered the stage dancing and very happy, to a standing ovation. As Mark started talking he was visibly emotional at what would be his last keynote. Brad Peterson was particularly impressive as he demoed a lot of new features. We learned his official job title was “Chief Demo Officer” and I immediately decided that was certainly going to be my next role.

Citrix Workspace Suite

Citrix launched the Workspace Suite. It combines a host of Citrix end user computing technologies, including application and desktop virtualization, mobile application and device management, file syncing and sharing with built-in enterprise security controls, WAN optimization, access gateway, and a number of additional features to assist partners and customers aiming to build and deliver their own mobile workspaces, as well as their own desktop-as-a-service (DaaS) offerings. What’s most impressive is that it integrates not just with Microsoft’s Azure offering, but also with public clouds, private clouds, and customers’ own data centres. Workspace Services will go into technology preview in Q2 2014.

NetScaler and MobileStream technology

To help improve mobile network and app performance, Citrix is launching a new technology that will improve the user’s mobile device experience, while also providing increased network visibility and enhanced security. NetScaler MobileStream optimizes the amount of data a mobile device downloads when it opens an app, and accelerates mobile page downloads and rendering. It also makes better use of wireless and cellular connectivity with network mode technology, meaning apps will run five times faster, according to Citrix. NetScaler also includes Citrix’s TriScale for deploying mobile services over the cloud. NetScaler MobileStream should be available in Q2 2014.

Social Media

Since this was my first visit to Synergy, I relied heavily on social media to figure out which sessions to attend, which experts to speak to, and which training courses were of value. The Citrix social media team at Synergy were very active, as was the community. They live-tweeted all the keynotes and kept everyone up to date on all of the conference goings-on.

In Conclusion

Synergy 2014 was a very well executed conference with some great speakers and excellent training opportunities. The Anaheim Convention Center was a very nice venue with great facilities. I will definitely try to attend Synergy 2015. The highlight of this amazing experience was passing my final CCE exam, “Designing Citrix XenDesktop 7 Solutions”.

How to manage vCHS via the vSphere client

In order to manage the VMware vCloud Hybrid Service with the vSphere client you first need to install the vCloud Connector. The vCC comes in one free version now, it used to be two but now you get all features … [More]

In order to manage the VMware vCloud Hybrid Service (vCHS) with the vSphere client you first need to install vCloud Connector (vCC). The vCC now comes in a single free edition; it used to be split into two versions, but all features are now free. This post shows the steps to set it up:


  1. Download the vCC Node and vCC Server appliance into your local vSphere environment
  2. Configure the vCC Node and Server to talk to each other and the local vCenter
  3. Register the vCC plugin with the vSphere client
  4. Configure the vCC Server to connect to the vCHS cloud node and provide credentials

Download the vCC Node and vCC Server appliance

Go to the VMware website and download the vCC here: http://www.vmware.com/uk/products/vcloud-connector The vCC consists of two components that must be installed. Both are Linux virtual appliances that can be imported as an OVF:
  • vCC Node
  • vCC Server

Configure vCC Node

Open a browser and point it to: https://&lt;vCC Node IP&gt;:5480/ NB: I’ve had browser compatibility issues, so if you get an error when connecting to any of the appliances, try using Chrome. If you use Firefox you may get an error saying “Failed to initialize”; this is a browser issue, not an appliance problem. Login using:
  • Username: admin
  • Password: vmware
Under the Node tab select Cloud, then select vSphere (or vCloud if you have an internal vCloud deployment), type the URL of the vCenter Server (or internal vCloud), and click Update Configuration.

Configure the vCC Server

Open a browser and point it to: https://&lt;vCC Server IP&gt;:5480/ Login using:
  • Username: admin
  • Password: vmware
Click on the Nodes tab and select Register Node. Select Cloud Type as vSphere (or vCloud if you have an internal vCloud Director environment). Under Cloud URL type the local vCenter FQDN or vCloud FQDN. Click Ignore SSL Cert (unless you have one registered). Do not select Public at this stage, as you’re connecting your internal vCC Server to your internal vCC Node. Under Cloud Info enter vSphere and then the credentials to connect to your internal vSphere environment. Click Register.

Connect to the vCHS Node

Now click on Register Node once more and enter the details to connect to the vCHS Node. Note: the URL can be found in the vCHS Dashboard under the vCloud Director API URL. This needs an ’8′ added to the port to make it 8443 (not 443). Select Public, as it’s a public node. Choose to ignore the cert unless you have one installed. Select Cloud Type = vCloud Director. The VCD Org Name can be found at the end of the URL above, before the last slash. The username is the e-mail address you use to log in to vCHS.
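The port tweak and org-name extraction can be scripted rather than done by eye. A minimal sketch (the helper name and example URL are hypothetical), assuming the API URL has the usual https://host:443/…/OrgName/ shape:

```python
from urllib.parse import urlparse

def vchs_node_details(api_url: str):
    """Derive the vCC registration URL (port 8443) and the VCD org name
    from the vCloud Director API URL shown on the vCHS dashboard."""
    parsed = urlparse(api_url)
    # vCC registers against the node on 8443, not the dashboard's 443
    node_url = f"https://{parsed.hostname}:8443{parsed.path}"
    # The org name is the last path segment of the API URL
    org = parsed.path.rstrip("/").split("/")[-1]
    return node_url, org

url, org = vchs_node_details("https://p1v2-vcd.vchs.vmware.com:443/cloud/org/MyOrg/")
# url -> https://p1v2-vcd.vchs.vmware.com:8443/cloud/org/MyOrg/, org -> MyOrg
```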

Registering the plugin with the vSphere Console

Connect to the vCC Server once again. Click on the Server tab and click vSphere Client. Type the vCC Server URL using the format https://vCCServerIP, enter the username/password for the vCenter Server, and click Register.

Register the vCloud Hybrid Service

Open the vSphere client (not the Web Client, as this isn’t currently supported). Select the Clouds icon on the left-hand side and then click on the green ‘+’ to add the vCHS cloud and also your internal vSphere vCenter. All done!

SDD Conference 2014 - Keeping up with software design and development

The nature of the work we do in the Advantage software practise team here at Xtravirt means that when it comes to software design and development, we get exposure to tons of different technologies and lots of new techniques and … [More]

The nature of the work we do in the Advantage software practice team here at Xtravirt means that when it comes to software design and development, we get exposure to tons of different technologies and lots of new techniques and design principles. Staying on top of all of these elements is not always easy.

Keeping up

As with all other areas in the IT industry, software design and development is constantly evolving. A side-effect of this massive progression means that it can be challenging for us practitioners to keep up with changing times. As developers, we really need to be on the cutting edge, keeping up to speed with new techniques, technologies, design principles and patterns. Doing this allows us to take advantage of the latest and greatest offerings out there, and means we are able to be as efficient as possible at what we do, resulting in software crafted to the highest of standards. Keeping up with all of this means that we need to constantly study; learning and absorbing information from multiple sources, often with very tight time constraints. As most people know, dedicating time to study and learn about new or existing techniques and technologies whilst working on projects is challenging at the best of times, and finding the kinds of information that allow us to keep on top of trends all in one place rarely ever happens.

Gaining knowledge

Conferences are a great way of solving this predicament if you or members of your team can afford a bit of time away from the office. This year some of the Xtravirt development team attended SDD 2014 (Software Design and Development Conference). Attending the conference gave us the opportunity to keep abreast of current trends, refresh ourselves on the latest technologies, and also to network with fellow developers and designers. The conference this year had over 100 sessions and workshops on offer, and the Xtravirt team was able to pick up some great knowledge and ideas over the week from the various presenters and workshops on hand. Topics ranged from technical hands-on coding sessions, to design and patterns, and even some UI/UX related content.

Relevance to Xtravirt

As many of you may be aware, from as far back as 2008 software development at Xtravirt has underpinned our services, and this continues as we find new and unique ways of utilising its value for our customers’ benefit. New automation, analytics and reporting software (with Advantage Engine as the workhorse) are just some of the ways we are augmenting our cloud, data centre and workspace services. Many of the workshops at SDD 2014 focused on different programming methodologies, and one that Xtravirt has adopted is BDD (Behaviour Driven Development). Attending these sessions was a great opportunity to pick up on the latest trends; as an example, we discovered some interesting ways of writing user stories for BDD and having test stubs automatically generated from user stories written in “plain English”. The conference also provided a good opportunity to review our own progress and standing against current industry trends. It was quite enlightening to see that the frameworks and techniques currently used within our software development practice are closely aligned with current trends. We were also exposed to plenty of new frameworks and methodologies at SDD 2014, which means we can now more easily identify opportunities in them when faced with challenges in our day-to-day operations.
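To illustrate the idea of generating test stubs from plain-English user stories (this is a toy sketch, not any specific tool we used), a Gherkin-style step such as "Given a logged-in user" can be turned into a stub name mechanically:

```python
import re

def step_to_stub(step: str) -> str:
    """Turn a Gherkin-style step into a snake_case stub name,
    e.g. 'Given a logged-in user' -> 'given_a_logged_in_user'."""
    words = re.findall(r"[A-Za-z0-9]+", step.lower())
    return "_".join(words)

def generate_stubs(story):
    """Emit an empty test stub for each step of a user story."""
    lines = []
    for step in story:
        lines.append(f"def {step_to_stub(step)}():")
        lines.append("    pass  # TODO: implement step")
    return "\n".join(lines)

story = ["Given a logged-in user",
         "When the user requests a report",
         "Then a PDF is produced"]
print(generate_stubs(story))
```

Real BDD frameworks do essentially this, plus matching the generated stubs back to the story at test time.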

Key takeaways

Some of the key points we took from this event were:
  • As CPU architectures evolve and become better at handling multiple tasks, programming for them gets more and more difficult. The way developers program for multiple cores/CPUs is a hotly debated topic at the moment, and a lot is being done to revolutionise this area of programming.
  • UI and UX are extremely important for any kind of application, and there is much more to these than just pretty interfaces. All kinds of considerations need to be thought of, from the way a user interacts with controls, to the way feedback and layout affects a user’s conscious and subconscious thoughts about their experience with your application.
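The first point can be felt even in a small example: distributing work across workers is easy to get wrong by hand, but modern standard libraries hide much of the difficulty. A minimal Python sketch (illustrative only) using the standard library's executor pools:

```python
from concurrent.futures import ThreadPoolExecutor

def expensive(n: int) -> int:
    # Stand-in for a CPU- or IO-heavy task
    return sum(i * i for i in range(n))

def run_parallel(inputs):
    # map() preserves input order even though tasks complete concurrently
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(expensive, inputs))

results = run_parallel([10, 100, 1000])  # -> [285, 328350, 332833500]
```

Swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` spreads the same code across CPU cores, which is exactly the kind of abstraction the debate is about.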
Staying current with trends is important in any IT-related profession, and conferences provide an excellent way to discover, learn and network with like-minded practitioners and experts in your field. Putting aside the topic of software design and development, there is much to benefit from conferences related to any industry. If your company does not already have a training or conference budget set aside, perhaps it is worth bringing up the topic with your manager or boss at the next opportunity!

Understanding the basics of vCHS networking

Internally within vCHS there are two types of networks you can create: Isolated and Routed. Isolated networks allow traffic only between virtual machines within that network therefore they are not able to communicate to the outside world. The use cases … [More]

Internally within vCHS there are two types of networks you can create:
  • Isolated
  • Routed

Isolated Networks

Isolated networks allow traffic only between virtual machines within that network; they are not able to communicate with the outside world. The use cases for these are more limited than for routed networks, but they can be useful when testing completely self-contained services or infrastructures such as test, dev or lab environments. When configuring an isolated network you have the option of enabling basic DHCP services, defining the IP range to allocate and the lease time. If you want more advanced DHCP options then you’ll need to deploy your own DHCP server within a VM. You can also configure the range of static IP addresses and primary/secondary DNS servers that vCHS will allocate and configure on new VMs. This saves you from having to configure them manually, but should not be confused with DHCP.
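The static IP pool behaves like a simple allocator: vCHS hands the next free address in the range, plus the DNS settings you defined, to each new VM. A rough Python model of that behaviour (the addresses below are invented for illustration):

```python
from ipaddress import ip_address

class StaticIpPool:
    """Toy model of a vCHS static IP pool: hands out the next free
    address in a range, together with fixed DNS settings."""
    def __init__(self, first, last, dns):
        self.next_ip = ip_address(first)
        self.last_ip = ip_address(last)
        self.dns = dns

    def allocate(self):
        if self.next_ip > self.last_ip:
            raise RuntimeError("static IP pool exhausted")
        ip, self.next_ip = self.next_ip, self.next_ip + 1
        return {"ip": str(ip), "dns": self.dns}

pool = StaticIpPool("192.168.10.10", "192.168.10.20",
                    dns=["192.168.10.2", "192.168.10.3"])
vm1 = pool.allocate()  # first VM gets 192.168.10.10
```

Unlike DHCP, the allocation happens once at provisioning time and is written into the guest as static configuration.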

Routed Networks

Routed networks allow VMs to communicate with VMs in different networks. These could be other routed networks within the vCHS service, the internet directly, or an internal corporate network via a VPN connection (over the external vCHS internet link); a direct connection is another option, mentioned later. In addition to DHCP, routed networks are connected to a gateway (Edge Gateway) which provides more options, including:
  • NAT
  • Firewall
  • Static Routes
  • VPN
  • Load Balancer
Each of these options provides the core functionality that one would expect, without being overly complex.

Connecting a routed network to the Internet

In order to connect a routed network to the external internet there are a few things that need to be done:
  1. Open the firewall on the edge gateway
  2. Configure NAT on the edge gateway
  3. Configure DNS

Open the firewall

Opening the firewall is straightforward: click on the gateway within the Gateways tab in the vCHS dashboard and add the required rules (e.g. ports 80, 443).

Apply NAT rules

There are two different types of NAT rules: SNAT and DNAT (Source and Destination NAT). SNAT is for traffic leaving vCHS for the external internet (or another network); DNAT is for traffic originating outside the vCHS cloud coming in. To access the internet from a VM within the vCHS network you must create an SNAT rule specifying the source IP or IP range of the internal vCHS VMs, and then specify the external or public IP as the translated range. This public IP is shown on the gateway in the main dashboard. In the example below, the IP address starting in 213…. is the public IP address.
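In effect, an SNAT rule is a mapping from an internal source range to the gateway's public address. A small Python sketch of that matching logic (the addresses are made up for illustration and are not from a real vCHS deployment):

```python
from ipaddress import ip_address, ip_network

SNAT_RULES = [
    # (internal source range, translated public IP)
    (ip_network("192.168.109.0/24"), "213.0.113.10"),
]

def translate_source(src: str):
    """Return the public IP an outbound packet is translated to,
    or None if no SNAT rule matches (traffic cannot leave)."""
    for internal_range, public_ip in SNAT_RULES:
        if ip_address(src) in internal_range:
            return public_ip
    return None

translate_source("192.168.109.25")  # matches the rule, so returns the public IP
```

A DNAT rule is simply the same idea in reverse: public IP and port in, internal IP out.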

Configure DNS

The last thing that needs to happen is for the VMs to point to a DNS server that can resolve internet addresses. The edge gateway internal IP should be used for the DNS server address if you want the connection to go directly out to the internet. In my example the Edge gateway is 192.168.<x>.1

Connecting vCHS to your internal network

If you wish to connect to your corporate network there are three main options:
  1. Create a Site-to-Site VPN using the Edge Gateway VPN
  2. Setup a Direct Connection between your datacentre and the vCHS datacentre
  3. Deploy an alternative VPN device on a VM within vCHS and connect (via the external network)

Creating a site-to-site VPN

This option uses the Edge Gateway VPN service to create a VPN connection from the Edge Gateway to a VPN device within your internal network. The VPN connection goes over the external internet connection, so you should ensure that the firewall rules are configured appropriately.

Creating a Direct Connection

Customers serious about vCHS can create a direct connection from their existing datacentre to the datacentre hosting their vCHS service.

Deploy an alternative VPN device

There is nothing preventing you from deploying your own VPN service within vCHS running on a VM. A little while ago I had a customer who required a VPN connection from various laptops spread around the UK. A site-to-site VPN connection wasn’t suitable, so I installed Routing and Remote Access on a Windows Server 2008 VM in vCHS, and by opening the appropriate firewall and NAT rules the customer was able to connect using the standard Windows VPN client. Xtravirt are leaders in planning, designing and transforming organisations to gain the benefits of hybrid cloud. If you would like to talk to us about assisting your organisation, please contact us.

London VMUG, May 2014 – in a nutshell

The VMware User Groups provide a fantastic opportunity to rub shoulders with VMware technology enthusiasts. Whether you’re running the latest, the previous or perhaps even the unsupported versions of their products there’s always someone to share a story with. You’ll … [More]

The VMware User Groups provide a fantastic opportunity to rub shoulders with VMware technology enthusiasts. Whether you're running the latest, the previous or perhaps even unsupported versions of their products, there's always someone to share a story with. You'll typically find breakout sessions available overviewing products, providing troubleshooting guidance or demonstrating new features. Occasionally there's an opportunity to put questions to a roundtable panel on almost any virtualisation topic, whether product or industry-trend related. These events aren't just UK based either; have a look at www.vmug.com to see where your nearest meeting is. Last week it was the turn of the London VMUG, and while based in the city it pulls in attendees from all over the country. It's a slick operation run by Alaric Davies, Stuart Thompson, Simon Gallagher and Jane Rimmer. Xtravirt were there too, seven of us in fact, as attendees and presenters. I rattled off a 15-minute lightning talk entitled Surprising Replication Machine, where I focused specifically on VMware’s vSphere Replication and how it can be used to migrate data centres, as opposed to being treated just as a disaster recovery tool. Gregg Robertson co-presented with Craig Kilborn, overviewing the learning path and upfront work required for the VCDX Programme. Based on their recent experiences they fired out facts and figures on the number of hours, lab time and review cycles, and openly admitted how much of their personal time had been swallowed up, and the impact on home life. Many questions were asked and frantic notes taken by some. Technical deep-dive sessions were provided by Frank Buecshel, delving into SSL certificates and their usage in vSphere and, later in the day, discussing SSO architecture, deployment and common issues. Having attended the first of his sessions, it was apparent that his role within VMware as an Escalation Engineer had exposed him to many complex issues.
The vCAC Real World deployment session presented by Simon Gallagher later in the day opened up good discussion from the floor as he ran through the deployment aspects and gotchas, pre-requisites and un-documented configuration requirements (of which there were many). The room was full, people were either standing or sitting on the floor –  it was clearly a hot topic for the attendees. There were many other sessions available throughout the day. The sponsors themselves are able to showcase their products and present their use-cases and market prowess. Unfortunately, I was unable to attend every session but hopefully you’ll understand there’s something for everyone, whether you’re deep into a deployment and want to learn more, or fascinated about a new product. At the close of the day prize draws were made, hands shaken and with business cards exchanged it was time to head to vBeers, sponsored by PernixData. The majority of the attendees took to foot and made their way to a local pub to talk more tech and continue networking. There’s no doubt it’s a long day but with such great rewards. Our slides are available for download if you’re interested to see them. Surprising Replication Machine  |  VCDX Application – What Does It Take?

IT Lifecycle Overview

A high level view of IT functional relationships Introduction When working in the area of IT strategy and planning it is important to understand the various roles and functional relationships within the business unit. In this article I provide a … [More]

A high level view of IT functional relationships


When working in the area of IT strategy and planning it is important to understand the various roles and functional relationships within the business unit. In this article I provide a summary of the highly complex mix of architecture, delivery and project management functions.

The Functions

There are a number of different functions from strategy through to service delivery which are briefly outlined below.
Enterprise Architecture (EA) Put simply, the function of enterprise architecture (EA) is to document the current state, work with the business to define a target state, and plan transition architectures. The formation of an Architectural Governance Board provides a forum to review transition and future state architectures; this follows the iterative theme of architectural practices.
Solution Architecture (SA) The practice of solution architecture (SA) is to provide a broad and deep level of expertise to a project. Working alongside a project manager, a solution architect will work with the business to fully understand functional and non-functional requirements, design a solution, produce a detailed financial analysis of the solution, work with subject matter experts during planning, design and delivery, and provide technical governance during solution implementation.
Technical Architecture (TA) A technical architect (TA) will own the design of a specific technology stack. This will tend to be a deep product subject matter expert who has a great deal of experience with technology often across a number of different streams.
Subject Matter Expert (SME) A subject matter expert (SME) is an expert in one or more fields. An example of this may be a messaging specialist who is responsible for maintaining and administering a Microsoft Exchange Server Platform.
Programme Management (PGM) The programme manager (PGM) is responsible for overall project delivery capability. They will have a number of project managers running a number of projects. The PGM is responsible and accountable for the success of the programme.
Project Management (PM) A project manager (PM) is responsible for the management and delivery of specific projects. They will work with EA, SA, SME and service operations (SO) staff to ensure projects are well managed and documented, have a valid business case, and that risk is managed.
Project Management Office (PMO) The project management office (PMO) is responsible for managing and administering all project data. The function of a PMO is quite extensive; at a high level it provides:
  • administrative support for project managers
  • project status reporting collation and communication
  • project standards, methodologies and tools
  • promoting project management across the organisation
Additional activities may include: project prioritisation, project monitoring, training, estimating, quality management and much more.
Service Operations (SO) Service operations (SO) staff are responsible for administering systems from a day to day point of view. Functions may include:
  • 1st Line (service desk or contact centre departments)
  • 2nd Line (server/networking/telephony or client teams)
  • 3rd Line (SME level usually in an 80/20 support to project role)
  • Access Control (security administration)
  • Change Management
  • Problem Management
  • Service Management

The relationship between the different functions

The relationship between the different functions is that of a layered approach from strategy through to service delivery, as shown in the diagram below. This pyramid is not necessarily a hierarchy of authority, but more a structure outlining the layers of governance that each function provides from an enterprise viewpoint.

Supporting Technologies

Enabling this collaborative, governed and controlled working model requires many elements, one of which is tools. Without going into specific solutions, there are some common attributes that systems should have, in order to ensure efficient integration and adoption:
  • Security - Records should be able to be secured and shared, and where possible, audit logging should be possible
  • Version Control - Recording changes to systems/documents is vital when working in a collaborative environment
  • Tracking - From a simple manual field, to automated data/meta data entry, it is key to be able to monitor changes, and where possible, measure the time between changes
  • Interoperability - Sharing data, in either a push or pull model, allows linking of business data, reduces human error, increases efficiency and allows a far greater level of analysis to occur
  • Multi-Device Support - A web front-end is often a very useful attribute, however even a simple spreadsheet can be used to great effect where required
Using a multitude of tools, such as ERP, CRM, ITSM Solutions, programme/project management solutions, spreadsheets, databases and many more, is common within organisations. Whilst there is nothing wrong with this, the key is to use effective tools, they may not always be pretty but they need to be able to manage and govern so that control is maintained.

High Level Project Process Flow

There are many activities involved in a project lifecycle. The following diagram, based on PRINCE2 principles, represents a high-level, simple-to-understand view of a project lifecycle. It should be noted that a project lifecycle is a complex and iterative process with many sub-processes involved.

The Big Picture

Typical IT Org Structures

Each organisation is unique, with each using a different set of terms, roles and descriptions; however, there are usually some common attributes between organisations. The following diagram gives an idea of a typical IT organisational structure. As you can see, all the major functions are shown alongside each other.

Managing Across the Enterprise

The diagram below shows a high level view of how the various functions sit together.

Framework Positioning

There are a number of frameworks that can be used, all of which use iterative processes. Often these share common ground but are aligned to different use cases. The following diagram outlines some use cases and framework alignment. The frameworks do share elements; for example, TOGAF and ITIL’s service strategy and service design processes link well. This is also true of TOGAF’s approach and the project initiation phase in PRINCE2.

Adapt and Adopt

Frameworks are exactly as they say: processes, methods and tools for guidance that can be utilised to suit your organisation’s needs. Following a framework to the letter would be very costly, time consuming and most likely to fail. It is best practice to adapt and adopt the relevant parts of any framework to suit an organisation’s requirements.


As frameworks and best-practice advice globally acknowledge, a mixture of people, process and technology is required to build a successful, IT and business aligned solution that is supportable, secure, efficient and cost effective. While this article takes a very simplistic view of methodologies and their practical implementation, it does demonstrate their relationships and how a mixture of governance, control and collaboration is required to achieve an agile, business-enabling IT division. The common elements of all the frameworks are to document, govern, control, manage and, most importantly, communicate effectively. By taking the right tools and techniques to your organisation you’ll be in a far better position to cope with both business change and keeping the lights on.

Nutanix - We can't find the IOPS limit!

I’ve recently had the opportunity to deploy a VDI solution utilising Nutanix Virtual Compute Platform at one of our customer sites, and wanted to discuss some of the benefits it brings to virtualised solutions. Nutanix is a converged infrastructure solution … [More]

I’ve recently had the opportunity to deploy a VDI solution utilising Nutanix Virtual Compute Platform at one of our customer sites, and wanted to discuss some of the benefits it brings to virtualised solutions. Nutanix is a converged infrastructure solution that consolidates the compute (your virtualisation hosts) and the storage tier into a single appliance. I’m personally quite impressed with this technology and the capability it brings not only to VDI solutions, but virtualised solutions as a whole.


You will often read about failed VDI projects; the two main reasons for failure come down to cost and performance. In the main, these issues are closely related to storage. Lots of high-performance storage is costly, but if you don’t provide enough performance to cater for peak usage during IO storms and high-usage periods, the solution will underperform when under load. To address this gap in the market, storage optimisation has stepped in, with multiple vendors providing very different solutions. These range from flash-only arrays to VM-aware storage appliances and storage in memory; a handful of these solutions are also able to optimise local or older storage. One of my colleagues recently published this blog post with his experience of storage optimisation using Tintri. At the higher end of the scale, you also have larger consolidated platforms such as VCE’s Vblock and IBM’s PureFlex. These are all great solutions and remove some of the cost and performance barriers detailed above. However, when you add a layer of optimisation or select a large consolidated platform you can also start to add complexity and ultimately lose flexibility; see my blog post 'Sorry we don't do average IOPS' for more information.

Back to Nutanix

The Nutanix solution offers combined compute and storage in a single box. Starting with just three nodes you can scale upwards to cater for many thousands of workloads with many Nutanix nodes and multiple Nutanix clusters if required. Like many other solutions, Nutanix optimises the local storage attached to each node, but with BIG differences:
  • The management of the local storage is all “underwater”; the Nutanix Operating System (NOS) takes care of this for you
  • A central administration console provides true centralised management, even down to data store mounting
  • NOS ensures that virtual machines are stored on the local storage of the hosts where they’re running, ensuring that the majority of IO is local to each node
  • If one of the controller VMs goes down, the “Auto Pathing” feature ensures that another controller takes over
  • There are SSD drives and SATA drives on each node, all tiering is controlled by NOS (you also have the option to go for ‘storage heavy’ nodes with larger SATA disks)
  • The Shadow Clone feature (GA in v4 of NOS, early preview in earlier versions) ensures that shared base images (for Citrix XenDesktop MCS and VMware View Composer linked clones) are automatically recognised and stored on the SSD tier
  • The storage is presented as shared storage (NFS) to your virtualisation hosts
  • As you add additional nodes, you automatically add more storage
The final point above is the clincher for me. When you scale out a platform, you have to consider increasing compute, then ensure that the storage is matched and that the performance and bandwidth are suitable; you’ll probably have space, power and cooling requirements in the data centre to consider too. Being able to add compute and storage together in one box, in a linear fashion, makes planning and installation simple, and delivers balanced storage and compute resources in one go. Nutanix utilises 10G networking; each node has 2 x 10G uplinks, and all you need to ensure is that all hosts within a Nutanix cluster are on the same layer 2 network. Nutanix has excellent reference architectures, making the design and implementation process simple.
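The linear scale-out can be expressed trivially: every node brings a fixed slice of compute and storage, so cluster capacity is just the node count multiplied by the per-node spec. A sketch of that planning arithmetic (the per-node figures below are placeholders, not real Nutanix specifications):

```python
def cluster_capacity(nodes: int, per_node=None):
    """Scale-out model: capacity grows linearly with node count.
    The default per-node figures are invented for illustration."""
    per_node = per_node or {"cores": 16, "ram_gb": 256, "storage_tb": 5}
    return {resource: amount * nodes for resource, amount in per_node.items()}

cluster_capacity(3)  # minimum three-node cluster
```

Contrast this with a traditional build, where compute and storage scale on independent axes and must be rebalanced by hand.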


We were working to very aggressive timescales and the Nutanix solution helped us to deliver on time; networking is the only real infrastructure requirement (no fibre channel investment is required), so we were able to stand up the solution in a small number of days. Deployment is simple and very well documented, and technical support is extremely responsive. During testing, we never ran out of IOPS. Utilising Login VSI (a load simulating and benchmarking tool) with a light workload we were hitting well over 100 desktops per host without hitting the VSImax figure. With the heavy workload we were hitting over 80 desktops per host, with CPU being the constraining resource. These figures are greater than our planned production values and demonstrate that Nutanix delivers as promised on the IOPS front. I’ve tested boot/logon storms with other optimisation technologies on VMware vSphere before, and noticed that hosts may become disconnected and unresponsive during the tests. Conducting the same tests with Nutanix produced better results: booting 600 desktops to VMware Tools starting in well under 20 minutes, while being able to manage all hosts throughout. This test is a bit subjective, as you’ll attempt to design out such storms and test methodically with a tool such as Login VSI, but it’s a great indication of the performance of the Nutanix platform. Simulating a Nutanix controller failure is also impressive; VMs pause and are back online in around 15 seconds.

To conclude

Nutanix have just announced NOS 4, which is packed with new features and improvements including multi-cluster management and increased replication factors. To learn more about the way Nutanix stores data, watch this video: http://www.nutanix.com/how-nutanix-works/ For a deeper dive, Steven Poitras’s Nutanix Bible is an excellent reference: http://stevenpoitras.com/the-nutanix-bible/ For information on Nutanix and vSphere configurations completed in a real-world deployment, read Xtravirt’s Seb Hakiel’s blog post here: http://www.vwired.co.uk/2014/04/14/nutanix-configuration-with-vsphere-5-5/ If you’re about to embark on your own VDI or EUC project, why not contact us? We provide impartial advice, have the skills to understand the underlying technologies and have real-world experience in delivery.

Machine deployment using vCAC 6’s Advanced Services menu

For the past few weeks I’ve been working on a customer engagement focusing on cloud automation using vCloud Automation Center (vCAC) and vCenter Orchestrator (vCO).  Throughout the project there have been a number of occasions where the typical vCAC way … [More]

For the past few weeks I’ve been working on a customer engagement focusing on cloud automation using vCloud Automation Center (vCAC) and vCenter Orchestrator (vCO). Throughout the project there have been a number of occasions where the typical vCAC way of doing something hasn’t been exactly what we needed, so we’ve had to rely heavily on vCO to do these things for us. Thankfully, this was made easily possible through vCAC 6’s Advanced Services menu. Perhaps the most important example of where we’ve had to harness vCO’s power has been in the provisioning/deployment of new virtual machines. Typically we would define rigid machine blueprints that state the size, template to use and storage needs of a machine for vCAC to provision. This, however, didn’t suit our customer’s needs – they needed something more dynamic. We needed to find a way to support:
  • Changing the cloning template at the time of machine provisioning (between Ubuntu Server 12.04 and Windows Server 2012)
  • Defining fixed sizing options for all machines that are provisioned through the blueprint (Small – 2GB RAM & 1 CPU, Medium – 4GB RAM & 2 CPU and Large - 8GB RAM & 3CPU)
  • Adding additional hard drives, with sizes defined by the user
  • Displaying a fully customised Service Catalog form.
We decided that the best way to meet all of these demands was to create a Service Blueprint in the vCAC Advanced Services menu, bypassing the default vCAC provisioning workflow and instead passing the request to our own custom vCO workflow. This allowed us to fully define the whole machine deployment process, including the layout and fields presented in the blueprint form. Before we did any Advanced Services configuration in vCAC, we decided it would be best to first create the vCO workflow so we knew the inputs and outputs that were needed. This ‘New machine’ workflow took as inputs the parameters used below: OS, Size, AdditionalHDOne, AdditionalHDTwo and VMName. The first thing the workflow needed to do was parse these inputs, translating each into an appropriate data type so we could pick out the template and machine size and perform the custom clone. Our parse script therefore consisted of two switch statements to select the right template/sizing option based on the OS and Size inputs. This is easiest illustrated in the code below (the getTemplateObjectByName() method is omitted):
// Set the template attribute
if (OS) {
	switch(OS) {
		case "ubuntu":
			template = getTemplateObjectByName("template-ubuntu12.04server");
			break;
		case "win12":
			template = getTemplateObjectByName("template_win12");
			break;
	}
} else {
	throw("Template not found");
}

// Get the size
if (Size) {
	switch(Size) {
		case "small":
			cpu = 1;
			mem = 2048;
			break;
		case "medium":
			cpu = 2;
			mem = 4096;
			break;
		case "large":
			cpu = 3;
			mem = 8192;
			break;
	}
} else {
	throw("No size given");
}

// Parse HDs
if (AdditionalHDOne) {
	HDOne = parseFloat(AdditionalHDOne);
}
if (AdditionalHDTwo) {
	HDTwo = parseFloat(AdditionalHDTwo);
}
The script also parsed the string inputs AdditionalHDOne and AdditionalHDTwo, converting them into the number attributes HDOne and HDTwo. These hard drive sizes were then ready for us to add the new virtual disks at a later stage. Next we needed to do the clone itself. This was relatively easy because we could work directly with the vCO vCenter plugin and use the cloneVM_Task() method, passing in a VM clone specification responsible for defining the customisation we needed. For this, we used this simple bit of code:
// Create the clone spec
var cloneSpec = new VcVirtualMachineCloneSpec();
cloneSpec.template = false;
cloneSpec.powerOn = true;

// Create the location spec
var locationSpec = new VcVirtualMachineRelocateSpec();
locationSpec.pool = resourcePool;
cloneSpec.location = locationSpec;

// Set customisation options
var configSpec = new VcVirtualMachineConfigSpec();
configSpec.memoryMB = mem;
configSpec.numCPUs = cpu;
cloneSpec.config = configSpec;

// Deploy VM
cloneTask = template.cloneVM_Task(vmFolder, VMName, cloneSpec);
It is important to note that in this case we were using a fixed Resource Pool and Folder, so we hardcoded these values – although making them dynamic wouldn’t take much extra effort. The cloneTask was handed over to an action that ensured it had finished executing successfully before continuing. The workflow was then responsible for adding the additional hard drives; I’m not going to run through a full code example for this, but effectively we retrieved the new VM as a VC:VirtualMachine object, called the vCenter createVirtualDiskFlatVer2ConfigSpec() method to create the virtual disks and then attached them to the machine using the reconfigVM_Task() method. By this stage we could fully deploy and customise a vCenter machine through the workflow. Our final task was to add this machine to vCAC as a managed machine and return its equivalent vCAC:VirtualMachine object for provisioning as a custom resource in the vCAC portal. Adding the machine to vCAC was achieved using the standard ‘Register a vCenter Virtual Machine’ workflow (available with the vCAC vCO plugin), and the vCAC machine to return was retrieved using the following script:
var machines = Server.findAllForType("vCAC:VirtualMachine", null);
for (var i = 0; i < machines.length; i++) {
	if (machines[i].virtualMachineName == VMName) {
		newVM = machines[i];
		break;
	}
}
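For the additional hard drives mentioned earlier, the workflow essentially built a device-change specification and applied it with reconfigVM_Task(). The sketch below illustrates the shape of that specification using plain JavaScript objects standing in for vCO’s Vc* constructors (which only exist inside Orchestrator), so treat it as an illustration rather than the workflow’s actual code:

```javascript
// Build an 'add disk' device-change entry. Field names mirror the
// vSphere VirtualDeviceConfigSpec; inside vCO you would use the
// VcVirtualDeviceConfigSpec / VcVirtualDisk constructors instead.
function buildAddDiskSpec(sizeGB, unitNumber, controllerKey) {
	return {
		operation: "add",          // VcVirtualDeviceConfigSpecOperation.add
		fileOperation: "create",   // create a new backing VMDK
		device: {
			key: -1,               // negative key: vCenter assigns the real one
			controllerKey: controllerKey,
			unitNumber: unitNumber,
			capacityInKB: sizeGB * 1024 * 1024, // vSphere sizes disks in KB
			backing: { diskMode: "persistent", thinProvisioned: true }
		}
	};
}

// The parsed HDOne/HDTwo sizes (here 20GB and 50GB) feed straight in:
var deviceChange = [buildAddDiskSpec(20, 1, 1000), buildAddDiskSpec(50, 2, 1000)];
// In vCO this array would be assigned to a VcVirtualMachineConfigSpec's
// deviceChange property and applied with newVM.reconfigVM_Task(configSpec).
```

The controller key (1000 here) is the usual key of the first SCSI controller, though a robust workflow would look it up from the VM’s existing hardware rather than assume it.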
After the vCO workflow had been created, we could tie it into vCAC using the Advanced Services menu. As our workflow returns a vCAC:VirtualMachine object, we first had to create a custom resource of this type for vCAC to provision after execution. This custom resource would then provide us with a link to the machine from the portal and allow us to attach custom resource actions to it. Once we had configured the custom resource, we could create a new Service Blueprint to provide the request form that would be filled out by the user. We followed the usual process to do this, linking to our ‘New machine’ workflow and giving it an appropriate name. The Blueprint Form, however, was customised quite heavily; most importantly, we made the OS and Size parameters dropdown lists so the user could only select the values that we defined in our switch statements earlier. The final stage of the Service Blueprint creation process was to point the output vCAC:VirtualMachine object of the vCO workflow at the custom resource that we had previously defined. We now had our custom-made form set up and pointed at our ‘New machine’ workflow. After adding the blueprint, publishing it and giving it the appropriate entitlements, our solution was complete. A quick test ensured that it worked as expected and efficiently got around our original problem, allowing us to dynamically deploy machines from a single, fully customised form. If you would like to talk to us about assisting your organisation with designing a VMware private cloud, please contact us.

Creating an IT Strategy & Succeeding in Strategic Execution

Introduction Over the years, I’ve worked in and for a number of organisations in a variety of roles and often hear the word ‘strategic’ being used but without any real definition of what, why, how, who and when. In sales … [More]


Introduction

Over the years, I’ve worked in and for a number of organisations in a variety of roles, and I often hear the word ‘strategic’ being used without any real definition of what, why, how, who and when. In sales we love to throw the word around by saying we will align to organisations’ strategic objectives or improve strategy. What we tend not to do is actually work out how we can help the customer begin to understand their strategy and objectives, let alone how we can give them a roadmap to get there. In this article I describe some of my experience in writing a number of strategies, whether aimed at a specific area (e.g. client device, cloud, etc.) or the overarching strategy for a service company. “The world is changing at an ever increasing pace; the only way I foresee succeeding is to become a master of change.”

Will having a strategy be useful?

The question should probably be: will not having a strategy be useful? It’s a common misconception that strategy requires mountains of paperwork. What it doesn’t need is to exist only in someone’s head. Defining a plan (yes, strategy is a plan), and managing and communicating it, requires some form of documentation. This could be a Word document, intranet page, poster, presentation or any format that fits your need. Having it written down a) allows you to share it, b) allows you to review the plan in the future, and c) enables you to focus on specific areas. Like most things, to manage effectively you need to be able to measure, monitor, govern, control and, most importantly, communicate. Having a written IT strategy will enable you to do this far more effectively than if you do not plan and write it down.

Where do I start?

Whether you are starting from a green field or a long-established business, you’ll need a starting point, and the best place to start is with your business goals. From there you should be able to define a skeleton of the priority areas. Another important step I recommend is to conduct maturity assessments; even in a green-field environment, a maturity assessment can draw out current and future state possibilities and enable a gap analysis to be conducted. You can then align capability to objectives to identify likely priority areas. Some good examples of maturity assessments are the Microsoft Core IO model and ITIL’s Process Maturity Framework. You can also look at the Capability Maturity Model Integration (CMMI), which has specific modules (e.g. CMMI for Services) that may be a good starting point. I will, however, point out that Core IO, ITIL PMF, CMMI and other related maturity assessments are not always quick to pick up. It may be worth utilising an external consultant to assist in this area, as they will have knowledge and experience that will cost a fraction of the price and time of trying to do this on your own.

Keeping the lights on

One area people seem to struggle with is when there is no time. When things are always on the go and keeping the lights on is the top priority, strategy isn’t important, right? Well, sure, if you’re busy 24/7 then there is no time, but perhaps there’s a reason why you have no time. It may be that your organisation really is overcommitted to that extent; the problem then is that, without spending time with your head up looking around, you may have missed the exact reason why you are overcommitted. It may be that further resource is required, that time efficiencies are not being made, that projects with little or no value are taking up valuable time, or that systems or people are being managed ineffectively. What is important to understand is that, to identify these issues and present a case for change, someone has to put some time into understanding the root cause and planning a way to change the outcome.

Why Strategy Fails

The following is a list of ten reasons why strategy may fail:
  1. Strategy is considered highly confidential
  2. The company finds it difficult to move on from past successes
  3. There is no structure or method
  4. Not enough time is allowed or it takes too long
  5. Strategy tries to do everything
  6. The strategy is not joined up
  7. Strategy is kept high-level and has no supporting plan of activities
  8. Strategy is not communicated
  9. Strategy is not flexible
  10. No one knows when it has been successful
Taken from: Fast Track to success STRATEGY by David McKean

Why Change Fails

The following list of reasons why change can fail was extracted from: http://www.projectsmart.co.uk/change-management-in-practice.html
  1. The organisation had not been clear about the reasons for the change and the overall objectives. This plays into the hands of any vested interests.
  2. They had failed to move from talking to action too quickly. This leads to mixed messages and gives resistance a better opportunity to focus.
  3. The leaders had not been prepared for the change of management style required to manage a changed business or one where change is the norm. "Change programmes" fail in that they are seen as just that: "programmes". The mentality of "now we're going to do change and then we'll get back to normal" causes the failure. Change, as the cliché goes, is a constant; so a one-off programme, which presumably has a start and a finish, doesn't address the long-term change in management style.
  4. They had chosen a change methodology or approach that did not suit the business. Or worse still had piled methodology upon methodology, programme upon programme. One organisation had 6 sigma, balanced scorecard and IIP methodology all at the same time.
  5. The organisation had not been prepared and the internal culture had 'pushed back' against the change.
  6. The business had 'ram raided' certain functions with little regard to the overall business (i.e. they had changed one part of the process and not considered the impact up or downstream) In short they had panicked and were looking for a quick win or to declare victory too soon.
  7. They had set the strategic direction for the change and then the leaders had remained remote from the change (sometimes called 'Distance Transformation') leaving the actual change to less motivated people. Success has many parents; failure is an orphan.

My Experience of Strategy and Change

In both successes and failures to devise or execute strategy/changes I’ve noticed a number of similar traits:
  • Strategy Confuses People
  • Strategy is considered a one-time activity
  • Strategy is poorly designed and planned for
  • Strategy is poorly communicated
  • Strategy is kept confidential
  • Change is not communicated effectively
  • Training is not provided or effective
  • Documentation is considered a non-essential extra (and therefore often not created/maintained)
  • Change is not measured or controlled
  • People don’t recognise repetition of past mistakes
  • Help is not sought (internal/external)
  • Long term gain is overridden by short term expense
  • The right people are not engaged at the right time
While this list is not exhaustive, to me it provides a summary of common themes I’ve encountered in the last 13 years of my professional career.


The research I’ve done over time comes from a fairly extensive amount of information and my experience in the industry. I’ve tried to include a few references here:
  • http://www-935.ibm.com/services/us/gbs/bus/pdf/gbe03100-usen-03-making-change-work.pdf
  • http://hbr.org/2007/01/leading-change-why-transformation-efforts-fail/ar/1
  • http://itilservicestrategy.blogspot.co.uk/2008/09/4-ps-of-itil-service-strategy.html
  • http://www.12manage.com/


Change is a complex and difficult game, and a well-thought-out plan has a far greater chance of success. Recognising the need to continually review and manage change should enable you to make far greater progress towards your organisation's aims. Biting off more than you can chew can be just as dangerous as doing nothing. Taking the right approach, with the right processes and people, will set you on a journey that should bring benefit to the business and enable greater IT agility.

The first South West UK VMUG

I remember the first VMUG that I ever attended. It was a little daunting as I didn’t know what to expect, who I’d meet or whether I would appear amateurish compared to everyone else there. I recall being waved into … [More]

I remember the first VMUG that I ever attended. It was a little daunting as I didn't know what to expect, who I'd meet or whether I would appear amateurish compared to everyone else there. I recall being waved into a room that I'm now very familiar with by a very enthusiastic man who turned out to be already well known to the community that I was joining. Since that day I've tried to attend VMUGs regularly, and through those meetings I've contributed to, and gained from, a network of people who possess a vast wealth of knowledge and experience. Fast-forward to 2014 and I've gone from a regular attendee of VMUGs to an organiser of them, along with co-leaders Jeremy Bowman, Barry Coombs and Simon Eady. This month saw the first ever South West UK VMUG, held in Bristol at the mShed. Compared to a few years ago, when there wasn't enough interest or awareness to warrant a South West chapter, the support and participation shown during the planning and execution of this first event was instrumental in making it the success that it was. Additionally, as my work and Xtravirt are a significant part of my life, the support that they have provided allows me to explore and develop professionally, and it is a testament to their people-based philosophy. Being the first one, striking the right balance between hosting an interesting, engaging event and trying to do too much too quickly was important. As such, we opted to make this first meeting a half-day event and selected a theme to focus content around. The participation of several key industry sponsors (including Nutanix and Veeam) helped significantly, allowing us to hire a suitable venue and providing some relevant content to add to presentations from VMware. For many of the attendees, this was their first VMUG. The majority were from the local Bristol and Bath area, with a few coming from further afield.
Most of the people that I talked with were in operational roles within VMware customer organisations, with a few coming from resellers or other software/hardware vendors. For future events, we're hoping to extend our reach down into Devon and across to Wales a bit more, and build up a following of repeat attendees. We had originally planned for the first ever presentation to be delivered by VMware's EMEA CTO, Joe Baguley, but a change in his schedule forced us to move things around a little. Peter von Oven from VMware stepped into the breach, however, with a detailed tour of End User Computing. He was followed by an excellent technical and architectural overview of Nutanix's converged infrastructure. In a combined vendor and community presentation, Nathan Prisk from Falmouth University spoke about his VDI rollout project, the challenges faced, the benefits gained, and a story about how the recent stormy weather had tripped up data centre power systems in an unexpected way. Joe Baguley arrived in the nick of time from his event with Lotus F1 in Oxfordshire to deliver a very well received and, as usual, very engaging and thought-provoking talk that involved only a single PowerPoint slide. In hindsight, closing with this presentation was very effective and rounded the day off brilliantly before we decamped to the nearby Piano and Pitcher bar. Our planning for the second event (provisionally 3rd June 2014, also at the mShed in Bristol) was well underway several weeks ago. I look forward to it and hope that it will be as successful as our first one.

Xtravirt at the VMware UK vCHS launch

Strictly by invitation only, it was quite an honor to be one of the few bloggers invited along to VMware’s London launch event for their new vCloud Hybrid Service (vCHS) offering.  With an amazing view of the city from Paramount … [More]

Strictly by invitation only, it was quite an honour to be one of the few bloggers invited along to VMware’s London launch event for their new vCloud Hybrid Service (vCHS) offering. With an amazing view of the city from the Paramount skyline bar at the Centre Point building, the scene was perfectly set for an inspiring event. VMware’s vCloud Hybrid Service became publicly available in the US in September last year. Swiftly afterwards, VMware announced their plans to bring the service to EMEA in 2014 and, as of today, it is now generally available in Europe. The launch of the service in London has been anticipated for several weeks, following a beta programme that was oversubscribed ten-fold. Initially, vCHS will be available via a single UK data centre. An additional data centre is due to come online in the second quarter of this year, and VMware already have plans to expand the service into more European countries. The relative importance to VMware of this launch was perhaps best emphasised by the presence of their CEO, Pat Gelsinger, who flew in from California for it. VMware have invested heavily in vCHS and will continue to do so as demand for public cloud services grows. Obviously, VMware aren’t the first to market with a public cloud offering (think Amazon AWS or Microsoft Azure, for instance), but a significant portion of the launch briefing focused on how vCHS benefits existing VMware customers more than a move to a third-party cloud provider does. For this, two of the service’s beta participants talked about their experiences. Betfair’s business activities, as part of the online gaming industry, are heavily regulated within the UK. One of their IT challenges is providing the business with sufficient agility to grow and develop. However, Betfair found that the potential benefits of cloud economics are balanced against the complexity of maintaining regulatory compliance when using cloud service providers.
The key differentiator that they picked out in vCHS was the integration with their existing virtual platform (vSphere). Being able to migrate workloads from their on-premises platform to their dedicated vCHS space and (using other parts of the vCloud Suite) present business users with a single interface to request and manage virtual infrastructure made their adoption of vCHS for development and testing purposes possible. Cancer Research UK’s story is similar. Their key driver is to reduce spend on “tin and wires”, as they’re not an IT business. As a charity, regular and predictable costs are far preferable to infrequent capital outlays for growth and hardware refreshes. Cancer Research wanted something they could just plug into and use to maximise their IT efficiency and move away from legacy systems. So why the UK, and why now? Feedback from EMEA customers indicated that many were concerned about data locality and sovereignty. A Vanson Bourne survey of 200 IT decision makers, conducted earlier this year on behalf of VMware, indicated that:
  • 86% recognised a business need to keep data within UK borders
  • 85% said current clouds were not integrated with their own internal infrastructure
  • 81% said that they need to make public cloud as easy to manage and control as their own infrastructure
Xtravirt is one of VMware’s launch partners for vCHS in the UK, and as a Senior Consultant I am looking forward to engaging with customers to accelerate their deployment of vCHS, leveraging Xtravirt’s expanded professional service offering to help customers achieve maximum benefit from their hybrid cloud.

Storage refresh key to VDI success

This blog post came to life after the initial workshops and subsequent discovery that a previous upgrade of the customer’s VMware View environment was no longer able to meet the adoption demands


Recently I’ve been involved in a VDI refresh for one of our customers – around 800 desktops using VMware’s vSphere and View products. As with any VDI solution, success can only be attributed to careful planning and design, as well as a thorough understanding of the environment. This blog post came to life after the initial workshops and the subsequent discovery that a previous upgrade of the customer’s VMware View environment was no longer able to meet adoption demands. A major pain point related to the performance of the infrastructure and virtual desktops for the users. A full assessment identified the storage architecture as one of the major bottlenecks. Multiple RAID5 LUN groups (5 disks each) had been provisioned, with as many as 100 virtual desktops or ‘Linked Clones’ located on each datastore. With the lack of spindles, IOPS and throughput, and the use of RAID5 with its write penalty (x4), the architecture was less than desirable and unable to handle the desktop workloads (generally a 20% read, 80% write I/O profile), which differ greatly from server workloads.
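To put that RAID5 write penalty in context: every frontend write costs four backend I/Os (read data, read parity, write data, write parity), so a write-heavy VDI profile multiplies the load on the spindles. The figures below are illustrative rather than taken from the customer assessment:

```javascript
// Backend IOPS needed to satisfy a frontend load on RAID5.
// Reads cost 1 backend I/O each; writes cost 'writePenalty' (4 on RAID5).
function backendIops(frontendIops, writePercent, writePenalty) {
	var writes = frontendIops * writePercent / 100;
	var reads = frontendIops - writes;
	return reads + writes * writePenalty;
}

// 100 linked clones at a modest 10 IOPS each, with an 80% write profile:
var required = backendIops(100 * 10, 80, 4);
// 200 reads + (800 writes x 4) = 3400 backend IOPS, while a 5-disk RAID5
// group of 10k SAS drives offers only roughly 5 x 150 = 750 IOPS.
```

The same arithmetic explains why server-sized storage (read-heavy, low penalty impact) can look healthy on paper yet collapse under a desktop workload.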


The VDI refresh project was granted additional funding to identify and implement a storage solution that would eliminate these performance pain points. The VDI provision was to remain with VMware, updated to Horizon View 5.2, but had to deliver measurable performance within 10% of a native physical desktop. Administration overheads were to be reduced where possible by using a less complex infrastructure. As a truly agnostic organisation, we provided the necessary assistance and guidance in conjunction with our customer to ensure the technologies they were reviewing would be fit for purpose. As it transpired, the technology and offerings provided by Tintri addressed our customer’s requirements. In this blog post I wanted to share the initial decision-making features, followed by a brief overview of some of the product’s features that assisted us during the deployment.

Technology chosen

Following a successful proof of concept and extensive load testing, the Tintri 540 was selected. You can read more about the company and their offerings on their website but for this solution here are the key items identified:
  • NFS solution – Simple and can leverage existing Ethernet infrastructure and eliminate one of the previous problem points, VMFS locking and SCSI reservations.
  • Minimal configuration and setup required.
  • A self-optimising storage appliance without the overhead of manual tuning.
  • Comprised of 8 x 3TB disks and 8 x 300GB SSDs (MLC), providing the required total capacity (13TB) and a good amount of flash to serve read and write I/O.
  • Instant performance bottleneck visualisation using real-time virtual machine (VM) and vDisk (VMDK) level insight on I/O, throughput, end-to-end latency and other key metrics.
  • Support for up to 1000 VMs providing enough capacity for day 1 and predicted future growth.
  • Supports up to 75,000 IOPS. All read and write I/O is delivered from flash and provides low latency performance for VMs.
Note: To achieve the highest possible VDI density, Tintri appliances require 10GbE connectivity between the appliance and core switching. The Ethernet-based infrastructure in this implementation consisted of dedicated redundant switches for storage traffic, but running only at 1GbE. While bandwidth was greatly reduced, this still permitted 80 virtual machines per ESXi host, which was well within the designed and capacity-planned consolidation ratio.
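A quick back-of-the-envelope check shows why the link speed matters for density. The figures below use raw line rates and ignore protocol overhead, so they are illustrative rather than measured values from this deployment:

```javascript
// Rough storage bandwidth available per desktop through a host uplink.
// 1 Gbit/s is ~125 MB/s of raw bandwidth; 10 Gbit/s is ~1250 MB/s.
function mbPerVm(linkGbit, vmsPerHost) {
	var linkMBs = linkGbit * 1000 / 8; // Gbit/s -> MB/s (decimal units)
	return linkMBs / vmsPerHost;
}

var perVm1GbE = mbPerVm(1, 80);   // 1.5625 MB/s per desktop at 1GbE
var perVm10GbE = mbPerVm(10, 80); // 15.625 MB/s per desktop at 10GbE
```

At 1GbE, 80 desktops share roughly 1.5 MB/s each, enough for steady-state office workloads but thin during boot or recompose storms, which is exactly where 10GbE buys headroom.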

Previous VDI

The initial deployment of the virtual desktop infrastructure was based on MS Windows 7, serving approximately 700 users. The majority were ‘Linked Clones’, with fewer than 20 persistent desktops. The virtual desktops remained powered on and controlled by each Horizon View pool policy to enable quick access and logon to each desktop. Various desktop workloads were in use; however, none of these were extreme use cases in terms of I/O profile – they were typically task workers and knowledge workers, with a small number of power users. As with all our VDI engagements, we completed an assessment of the physical and virtual desktops before proceeding, identifying use cases and mapping these to the different pools to ensure the new environment was sized correctly and able to handle peaks and additional overhead.

The Tintri Dashboard

Here’s a quick overview of the management user interface, focusing on items used during the initial deployment to assist with performance measurement.
  • The dashboard provides real-time insight and monitoring. You’re able to drill down further into all of the metrics for further analysis and pull in metrics from VMware’s vCenter to provide deeper insight.
  • Within the Datastore performance sub-heading the main IOPS, throughput, latency and Flash hit ratio counters are presented in real-time (10 second average), and a 7 day range (10 minute average).
  • To the right hand side, you can view which VMs are ‘changers’, in terms of performance and space and by what degree of change.
Note: Other VM names have been removed from the screenshots to protect the customer’s data. The Diagnose > Hardware screen provides visibility into the status of hardware components such as disks, fans, memory, CPUs and controllers.

Real world performance

IOPS can be monitored in real time, or over 4-hour, 12-hour, 1-day, 2-day or 7-day ranges, at a granular level that can even reveal details of a single I/O from any VM. Using the datastore chart you can click on different points to view specific offenders (such as VDI-T2-48 in the screenshot below) or hover over a point to bring up the data on screen. The chart below reveals statistics from the first week of production for the 715 deployed virtual desktops (with a peak of 400 concurrent active sessions). The total IOPS generally remained under 4,000, with bursts caused by various logon storms throughout the day; the dramatic peaks are largely due to replica VMs or maintenance (recompose) operations. Note: Horizon View Storage Accelerator (VSA) is enabled on each pool, which can dramatically decrease the read I/O required from the backend storage system. This feature caches common blocks across the desktops and serves them from a content-based read cache (CBRC). This requires and consumes physical RAM (maximum size 2048MB) on each ESXi host. You can read more about the VSA feature here.

IOPS versus throughput

The ability to compare two charts side by side proved to be a very useful feature during testing and go-live. In the screenshot below there’s a comparison between IOPS and throughput. The total IOPS peaked at 10444 at 6:10 PM, with 8396 read I/O (shown in yellow) and 2048 write I/O (shown in blue). The replica disk shown below contributes 13% of the overall total IOPS.
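The peak figures quoted above fit together: total IOPS is simply read plus write I/O, and the replica's 13% share converts into an absolute figure:

```javascript
// Sanity-check the quoted peak: read and write I/O sum to the total.
var readIops = 8396;
var writeIops = 2048;
var totalIops = readIops + writeIops; // 10444, matching the chart's peak

// The replica disk contributed 13% of that total:
var replicaIops = Math.round(totalIops * 0.13); // roughly 1358 IOPS
```

That ~1,350 IOPS from a single replica disk is the linked-clone read traffic that the View Storage Accelerator cache (discussed above) helps to absorb.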


Latency

Latency is a vital statistic to monitor because it measures how long a single I/O request takes end to end, from the VM (guest OS) to the storage disks. If latency is consistently greater than 20–30ms, then the all-round performance of the storage and virtual machines will suffer greatly. In the example screenshot below, green indicates latency occurring at the host (guest OS), rather than the network, storage or disk. The total latency is 2.68ms, made up of host (2.05ms), network (0.12ms), storage (0.51ms) and disk (0ms). Maintaining consistent latency around this point will provide excellent end-to-end performance.
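Since the end-to-end figure is just the sum of the per-segment measurements, the breakdown above can be checked directly:

```javascript
// End-to-end latency decomposes into the four segments Tintri reports:
// host (guest OS), network, storage controller and physical disk.
var hostMs = 2.05, networkMs = 0.12, storageMs = 0.51, diskMs = 0;
var totalMs = hostMs + networkMs + storageMs + diskMs;
// totalMs.toFixed(2) gives "2.68", matching the dashboard's total
```

Watching which segment dominates tells you where to look: here the host term dwarfs the storage term, so the array itself is not the bottleneck.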

Flash utilisation

This chart excerpt reveals the amount of I/O (read and write) being served from the flash disks. As can be seen, 100% is being served from flash, with only a couple of small drops to 98%, meaning the best possible I/O performance is being delivered from flash rather than mechanical, spinning disk.

Virtual Machines

Drilling into IOPS and throughput is all very well where forensic analysis and investigation are required, but what is interesting is how this correlates to virtual machines. This screenshot is taken from a real-time graph: virtual machines can be seen running on the same datastore, and the usual ‘sort’ activities can be performed by clicking on the metric column headings. Double-clicking on a VM displays a graph with historical data and the ability to show two graphs side by side, perhaps comparing IOPS and throughput, for example.


On each of the graphs presented throughout the management user interface, 'Contributors' are shown down the right-hand side, giving visibility into individual virtual machines and their contribution to the overall IOPS, throughput or latency.  Below we can clearly see a couple of Replica VMs recording high IOPS, a result of Linked Clones reading from the parent image (Replica) disk.

7 day zoom – IOPS versus Latency

Taking advantage of the side-by-side view again, a 7-day view of IOPS and latency clearly reveals the peaks and troughs of IOPS throughout the week. In this example the majority of I/O activity on the Tintri storage is write-based (shown in blue), which means the VMware View Storage Accelerator is taking the initial hit and reducing the read I/O requirement on the storage. Total end-to-end latency (host, network, storage and disk) remains consistently low (around 3ms), with the occasional spike, which is to be expected. In the example screenshot, green indicates latency occurring at the ESXi host, rather than in the network, storage or on disk.


For this project the Tintri storage appliance has proven able to deliver in terms of reduced management, with no additional performance tuning required and the capability to handle all workloads during peak periods. Performance monitoring shows that I/O throughput is well within the device's capability and is delivered with a high flash hit percentage (that's I/O served from SSD) and low end-to-end latency. Virtual desktop performance has been validated to ensure it meets the initial requirement of being within 10% of native physical performance. The testing revealed that, in certain use cases, virtual desktop performance exceeded physical performance. In this write-up it's clear that the time investment and due diligence completed by the customer provided a solid starting point and was a contributing factor in the success of the project. If you would like to talk to us about accelerating your VDI platform, please contact us.

VMware Virtual SAN - My view

At VMworld last year, there was much continuing buzz about the “Software Defined Datacenter (SDDC)”. This initiative was announced by VMware last year but now the products have started to appear, making it a reality. We’ve worked with virtualisation and … [More]

At VMworld last year, there was much continuing buzz about the “Software Defined Datacenter (SDDC)”. This initiative was announced by VMware last year but now the products have started to appear, making it a reality. We’ve worked with virtualisation and abstraction of resources for a good few years now but the next level is to bring seamless automation and policies to control resource allocation. When it comes to storage, VMware’s solution for the SDDC is: Virtual SAN.

What is Virtual SAN?

VMware VSAN is a vSphere host-based storage solution that provides fast, scalable and resilient storage to any vSphere environment.  The idea is to have hosts with internal storage (each must contain at least one SSD and one traditional disk) and, as long as you have three or more of them, VSAN can create shared storage for you using a "RAIN" model.  There is only one SSD in a "Disk Group", but its job is to provide write buffering and read caching, i.e. it isn't included in the storage capacity.  Write performance is where low-cost storage systems suffer, but having an SSD to front the spindle-based disks makes VSAN a good choice for most applications.  The number of replicas and stripes depends on your resilience requirements, and policies can be set on a per-VM basis.  The beauty of this system is that once you provide the storage components required, VSAN takes over and configures them according to your policies for performance and/or availability.  The whole environment is scalable: if more performance or storage is required, disks can be added later to scale up, or one can choose to scale out by adding more hosts.

The most important thing to bear in mind is that this is not an appliance.  All data is handled at the VMkernel level, cutting out expensive trips through hardware interfaces and across multiple buses.  The result is extremely fast response times, making it very well suited to a wide range of applications.  This is a significant change in strategy, as storage is moving back to the host system while still providing the shared and resilient aspects of a SAN.  More and more architects are finding that while all-flash storage is brilliant in terms of delivering lots of IOPS, it doesn't help much if the pipe to the host(s) can't carry the throughput required.  Having resilient shared storage locally solves that problem and delivers many times the throughput of the traditional network connectivity options currently in use.
Best of all, the solution is generally far cheaper than storage systems offered by big-name vendors. It goes without saying that the performance of such a system relies on the sum of all the components involved, so skimping on those would be counter-productive.  They still need to be "Enterprise-Level", and even though one can use non-approved hardware, it's not recommended, as that would seriously affect not only the performance but also the uptime of the system.  That's especially true for SSDs, given that consumer-grade SSDs have a relatively short write life span, and replacing those regularly would not help with uptime. There is much more that could be said on the subject, but I am not covering it here because the focus of this article is on expressing my views and enthusiasm about VSAN, not on all there is to know about it.  For that, I would like to point you towards an excellent collection maintained by Duncan Epping here.

Who should use VSAN?

VMware VSAN needs some time to prove itself and it's certainly not going to replace traditional dedicated storage systems overnight. However, I do think that VSAN will enable virtualisation for a lot of companies that can't afford enterprise-class shared storage but can make do with extremely fast yet affordable shared storage. Sure, there are appliances out there that do similar things, but I don't feel they're scalable in the same way, as one generally has to buy a whole appliance/enclosure to scale up or out. Couple that with automation and integration with the hypervisor, and VSAN looks like one of the simplest, quickest and cheapest solutions to me in terms of overall CAPEX and OPEX. I think the biggest subscribers to VSAN will be companies starting their journey towards virtualisation that don't want to invest in dedicated, resilient shared storage before proving the capability. There are also companies that want shared storage with good performance for individual applications they might want to keep separate from the rest of the environment. These could be test/development environments, an environment for a specific group or application, or even a disaster recovery environment. By combining compute and shared storage, VSAN becomes a very attractive option for such applications. Last but not least, VSAN is quite well suited to smaller VDI environments, enabling deployment without investment in expensive storage systems. Despite not having expensive hardware at the backend, VSAN delivers great performance and resilience at a low cost. All this makes it possible for smaller companies to embrace virtual desktop technologies where they were previously prevented from going down that route by the costs involved.

Can I get it now?

At the time of writing, VSAN is still in "Public Beta".  If you have hardware-compliant hosts (and SSDs/HDDs to put into them) or a lab with sufficient resources, there is no reason why you can't start experimenting with it.  To get your copy, click here. If you would like to talk to us about assisting your organisation with storage requirements for your data centre, workspace or cloud project, please contact us.

Using Puppet to Automate your Infrastructure

Preamble Small to medium IT environments are typically simple to manage and maintain for the average sysadmin. By utilising a combination of scripts, tools and other utilities, they are generally able to keep these environments in a manageable state, and … [More]


Small to medium IT environments are typically simple to manage and maintain for the average sysadmin. By utilising a combination of scripts, tools and other utilities, they are generally able to keep these environments in a manageable state, and to specification. Almost. Most would agree, however, that it is almost impossible for a team of people to keep systems and services in a particular desired state. Configuration drift is always prevalent when there are multiple people responsible for maintaining systems. This is never going to work as a long-term solution, and if the business is to scale, it most certainly will not work. IT teams will find themselves fire-fighting on a daily basis and, as a result, will struggle to keep up with the demands of the business. Some sort of configuration management tool is required to help teams manage their infrastructure, and today I will be looking at one of these in particular: Puppet.


Puppet exists as an open source tool from a company called Puppet Labs. As the previous paragraph suggests, it is designed to manage the configuration of Unix/Linux and MS Windows systems by way of a set of declarations that are set up by the user. These declarations define, in high-level terms, the desired state of system nodes managed by Puppet. Puppet then enforces this state upon nodes in an automated fashion, in most cases without the need to supply commands specific to individual OS types. This post will be looking at Puppet Enterprise, which is licensed per cumulative number of nodes. I will be using the free version of Puppet Enterprise, which allows for management of up to 10 nodes. Puppet Enterprise can be used to automate the provisioning of services, and can handle configuration and setup of all layers on managed nodes, from the OS, to networking, to middleware and even the application layers, providing for a fully automated infrastructure. Its abilities are not limited to private cloud usage, though: Puppet can also extend to other cloud services, allowing you to manage infrastructure in your public or hybrid cloud environments too. At its roots, Puppet is all about describing the desired state of nodes in terms of what are referred to as "resources". This is done using a DSL (Domain Specific Language). To give a basic example of what this looks like, let's take a look at a user resource on a particular Ubuntu Linux machine. We do this on a Linux machine managed by Puppet by issuing the command "puppet resource user". Here we can see the current state of these users on this particular machine. The output is written in "puppet configuration language", and if we were to save it to a file called "xtravirt.pp", it could be used as what is known as a Puppet manifest file. A manifest is what is used to describe the desired state of a resource.
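As a rough sketch, the saved xtravirt.pp manifest might contain something like the following (the attribute values here are illustrative, not taken from the machine above):

```puppet
# xtravirt.pp - desired state for the "xtravirt" user, expressed
# in Puppet's configuration language (values are illustrative)
user { 'xtravirt':
  ensure => present,          # the user account should exist
  home   => '/home/xtravirt', # assumed home directory
  shell  => '/bin/bash',      # assumed login shell
}
```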
So if the "xtravirt" user did not exist on this system, we could demonstrate manually applying this state by running "puppet apply xtravirt.pp" on the node. Puppet would then ensure that this user exists, creating it by running the appropriate Linux command. Puppet is all about automation though, so realistically this would be set up once and Puppet would handle it for us on its interval check across all nodes - by default this run interval is every 30 minutes. Another great feature of Puppet is the ability to simulate a change before applying it. We could do this by running "puppet apply sean.pp --noop" (no operation). Puppet would then output the simulated changes for you to view and ensure you are happy with first. In this case, the user "Sean" has a desired state described in the sean.pp manifest file. To get a basic, fully automated environment up and running, you can follow the Puppet Enterprise quick start guide found here. This will allow you to try out Puppet and follow along with this blog post. As a high-level overview, the process entails the following tasks:
  • Set up the Puppet master node, holding the master server, console server, and database support roles
  • Set up a couple of Puppet 'test' nodes (Linux / Windows machines)
  • Ensure DNS is setup correctly - all machines should be able to resolve DNS correctly for your deployment.
  • Network connectivity - nodes need to be able to communicate on certain ports - detailed in the quick start guide
The master node is responsible for holding all of the configurations and desired states of nodes. Note that you can have more than one master node/server. The client nodes then perform periodic check-ins against the master to receive their desired configuration states.
In my case, I deployed a Linux VM running Ubuntu 12.04 Server as my puppet master node. I downloaded the Puppet Enterprise tarball using wget and installed it on the system (master.development.lan), choosing the master, console, and database roles when prompted by the installer script. I also created an alias CNAME record in DNS to point puppet.development.lan at the same system. Once complete, I was able to access the console at https://puppet.development.lan.
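For reference, the download-and-install steps looked roughly like the following (a sketch; the tarball file name is an assumption for illustration, so substitute the exact name of the build you download):

```shell
# Fetch and unpack the Puppet Enterprise installer tarball
# (file name shown is illustrative - use the real one)
wget <download URL for the PE 3.x tarball>
tar -xzf puppet-enterprise-3.1.0-ubuntu-12.04-amd64.tar.gz
cd puppet-enterprise-3.1.0-ubuntu-12.04-amd64

# Run the interactive installer and choose the master,
# console and database roles when prompted
sudo ./puppet-enterprise-installer
```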
The console is the GUI for Puppet, and is primarily used for classification (essentially telling managed nodes what to do) and reporting (i.e. reporting on what nodes are doing, what has changed, etc.).
From this point on, we can use various built-in classes and modules to define how our nodes should behave. We can also create groups with different classes attached, so that we can manage groups of nodes in different ways. I always like to learn by example, so below we'll run through an automation scenario using Puppet.


Every time we deploy a Windows Server 2008 R2 VM to our "Management" cluster, we would like to ensure that various configurations are applied to this VM, and that going forward they are adhered to. Let's keep this example simple and say that we need PowerCLI to be installed on each of these nodes.


To start, we'll need to set up one VMware template and a Customization Specification for vCenter to use when cloning the template. This involves the following couple of steps:
  • Create a basic Windows Server template, and place the puppet-enterprise-3.1.0.msi installer in the C:\deploy folder on the template machine
  • Create a Customization Specification with a run once command to install the .msi file, specifying the PUPPET_MASTER_SERVER parameter for the installation that points to our master puppet server
The above will ensure that each time a VM is deployed using this specification, it is flagged to be managed by our master puppet server. All we have to do when it gets deployed is accept the node in our list of nodes awaiting acceptance, using the "Pending node requests" tab in the Puppet console.
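The run-once command in the Customization Specification might look something like this (a sketch; the installer path and master server name are assumptions based on the setup described earlier):

```shell
msiexec /qn /i C:\deploy\puppet-enterprise-3.1.0.msi PUPPET_MASTER_SERVER=puppet.development.lan
```

The /qn switch performs a silent install, and the PUPPET_MASTER_SERVER property tells the agent which master to register with.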
Next we'll get into actually defining a module to manage these nodes. This module will contain a single "class" which will define the software package that should always be installed on our nodes.
Normally, you can simply download existing modules from Puppet Forge (a repository of modules written by the community). This is, however, going to be our own basic module, so instead of using the built-in "puppet module search/install" commands, we'll simply create a directory structure and a couple of files on our master server to make up a simple module ourselves.
  • Using SSH on the master puppet server, ensure you are running with elevated privileges (sudo -s)
  • Create a new directory called "vmware_mgmt" under /etc/puppetlabs/puppet/modules/
  • Under the new "vmware_mgmt" directory, create another directory called "manifests"
  • Create a file called "vmware_mgmt.pp" under the manifests directory and populate it with the following:

class vmware_mgmt {

  file { 'c:\packages':
    ensure => directory,
  }

  file { 'c:\packages\VMware vSphere PowerCLI.msi':
    ensure  => present,
    source  => 'puppet:///files/VMware vSphere PowerCLI.msi',
    require => File['c:\packages'],
  }

  package { 'VMware vSphere PowerCLI':
    ensure          => installed,
    source          => 'c:\packages\VMware vSphere PowerCLI.msi',
    install_options => [{ 'INSTALLDIR' => 'C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI' }],
    require         => File['c:\packages\VMware vSphere PowerCLI.msi'],
  }

}
This is our basic vmware_mgmt class. It declares that a folder called "packages" should exist on our managed nodes, that this folder should contain a file called "VMware vSphere PowerCLI.msi" (downloaded from our Puppet files repository, located on our master server), and that this .msi package should always be installed. Puppet will essentially ensure that the package is downloaded, placed in the folder, and installed if it is not already installed on the node concerned. You can take a look at this page to learn more about writing custom Windows manifests.
  • Next we should create a basic metadata.json file in the /etc/puppetlabs/puppet/modules/vmware_mgmt directory
  • Populate this file with the following metadata:

{
  "project_page": "blank",
  "license": "Apache License, Version 2.0",
  "source": "blank",
  "dependencies": [],
  "types": [],
  "description": "This module ensures VMware Management tools are installed on Windows nodes",
  "summary": "This module ensures VMware Management tools are installed on nodes",
  "name": "seanduffy-vmware_mgmt",
  "author": "seanduffy",
  "version": "1.0.0"
}
Note: there is a lot more to writing a complete module. You can visit this page for best practices and other guidelines.
  • Now that we have a module defined, in the Management Console, in the left panel, click "Add classes" and start typing "vmware_mgmt" in the search text box. Once your class name appears, put a check on it, and then choose "Add selected classes".
  • Click "Add group" in the side panel and create a group for your nodes that should have PowerCLI installed
  • Add the "vmware_mgmt" class to the group
  • While editing the group, add any Windows nodes that you have deployed and accepted to be managed by Puppet by typing their names into the "Add a node" text box in the edit node group area
  • Finish the group creation by clicking "Update"
We should now ensure that we have the actual .msi installer ready for Puppet to fetch and send to nodes when required. There are a variety of options available here - a UNC path, local to the node, or on the master puppet server itself. You may have noticed that in our vmware_mgmt.pp file we defined a source location of source => 'puppet:///files/VMware vSphere PowerCLI.msi'; this points to a file served by the puppet master. To make this work, you should edit the "fileserver.conf" file under /etc/puppetlabs/puppet/ to specify a location from which to serve files. I simply added the following to my configuration file:

[files]
  path /etc/puppetlabs/puppet/files
  allow *

I then made sure to place a copy of the VMware vSphere PowerCLI.msi file in /etc/puppetlabs/puppet/files on my master server.

Finishing up

We now have two options to see the results applied to our nodes in this group:
  • One-off "run once" - this allows us to invoke a single Puppet run on any number of nodes we select. From the console navigate to "Live management -> Control Puppet -> Run once -> run" (choosing the nodes you wish to invoke Puppet on from the list on the left before clicking run)
  • Wait for Puppet to run on our nodes on its default 30-minute interval. Puppet runs every 30 minutes by default, and nodes in our custom group will pick up changes automatically
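Incidentally, outside of the console you can also trigger an immediate run on an individual node from its own command line, which is handy when testing (this is the standard agent command, run on the managed node itself):

```shell
# Run the Puppet agent once in the foreground, applying the
# catalog from the master and logging the outcome to the console
puppet agent --test
```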


Quite a lot of setup went into this, but we now have something re-usable that can automate the deployment of any number of nodes in our infrastructure. We looked at how to setup and install a basic Puppet Enterprise environment, create a custom Windows VM template that automatically installs the puppet agent on deployment and connects to our master puppet server, and finally we looked at how to define a configuration for Windows nodes that ensures a specific software package is installed.
There are many more powerful features available to use with Puppet, and a lot more can be done. Puppet has a bit of a steep learning curve, but once you have it deployed, configured, and you have your various classes and modules setup, it really shows its power in being able to completely manage and automate an entire infrastructure from the ground up.

Whistle-stop tour of Horizon Workspace

Horizon Workspace is VMware’s one-stop-shop product for the End-User Compute experience in a corporate environment.  It provides users access to applications, data and virtual desktops via either a single pane of glass Web browser interface, or through a range of … [More]

Horizon Workspace is VMware’s one-stop-shop product for the End-User Compute experience in a corporate environment.  It provides users access to applications, data and virtual desktops via either a single pane of glass Web browser interface, or through a range of mobile device applications.  Since the recent release of version 1.5, it’s been gaining some traction in the marketplace, and as such I’ve recently been doing some work on it in the Xtravirt lab, as well as paying special attention to Horizon Workspace during my visit to VMworld last month. One thing I’ve noticed during my digging is that it’s a powerful product with lots of scope and some interesting features on the roadmap.  However, it’s a complicated product under the hood, with many interacting components and complex relationships.  This blog entry is essentially a whistle-stop tour of the components of Workspace, how they scale and some of the surrounding architecture.

Lighting the Fuse…

This part of the article isn’t intended to give you a blow-by-blow guide to installing Horizon Workspace, but it’s worth describing briefly for a bit of background. Workspace is deployed as a vApp on top of vSphere.  It has a number of pre-requisites that must be met in order to install it successfully.  One key thing to get right is DNS.  It’s important to pre-stage the names for each of the appliances in DNS (including reverse lookups) as the vApp relies on DNS for configuration and maintenance of the component appliances, as well as communications between them.  Equally, you need to establish an IP pool in vCenter to support these.  A load balancer should be present in the estate, hosting the Fully Qualified Domain Name that users will use to access the solution, complete with a trusted SSL certificate. Oh, and make sure you have sufficient resources to run the initial installation – the configurator’s initial installation script does not suffer fools who run out of resources (like me in our lab – schoolboy error!).  If this happens, it’s a re-install from scratch… When you install the vApp, it creates a total of 6 VMs.  Five of them are the initial appliances that form the minimum estate, while a sixth, which isn’t powered up (data-va-template), is a template VM used to deploy further appliances. Once the vApp is installed via the vSphere console, the configuration wizard needs to be filled out on the Configurator’s console to complete the installation.  Once this is done, the basic estate is up and running and can be configured (mostly) via web interfaces.

Appliances in Workspace

So, we have a set of appliances.  Next we need to consider what they are each used for, how many do we need of each and what do we need to do to configure them.


As its name describes, the Configurator appliance manages and maintains elements of the central configuration – common items required by all appliances are maintained and distributed from here (such as root password management, networking, the vCenter connection, and certificates within the vApp).  It also hosts the wizard used to carry out the initial configuration, and has a web page for managing a number of key aspects of the estate.  Several are detailed below:
  • System Information – a status page for the appliances deployed and also allows control of those that can be placed in Maintenance Mode.
  • Module Configuration – This page allows the administrator to enable (though not disable) the various functionality modules of the estate, such as Web applications, View integration etc…
  • FQDN and SSL – For configuring the Fully Qualified Domain Name used by users to access the estate and the SSL key chain for the estate.
  • License Key – This is managed centrally for the solution.
  • Password – the central admin password used on all of the vApps.
  • Log File Location – A text page describing where the various logs are located, rather than a settings page.
There is also a page describing the database connection.  It should be noted that Workspace requires a database to function (Postgres 9.1 or later is recommended).  For testing, an internal one is provided; for production, an external one is recommended.  This will be discussed later. So how large does this VM need to be?  Not very, as it doesn’t do masses of work.  Out of the box, it comes with a single vCPU, one gigabyte of memory and a 6GB disk.  This is sufficient and doesn’t need changing.  Only one Configurator is needed – it’s not customer-facing and it won’t break the estate if it is down for a while.


The Connector appliance handles a number of roles, including user authentication (Active Directory and RSA SecurID), connectivity to Active Directory, and synchronising ThinApp repositories and View Pools. Multiple Connectors are likely to be required in a production estate: for example, separate connectors to serve internal users authenticating via AD and external users authenticating via RSA SecurID, while multiple connectors are also needed from a resilience perspective.  According to VMware’s testing, each connector can handle up to 30,000 simultaneous users with the out-of-the-box configuration of 2 vCPU and 4GB RAM.  VMware recommend retaining this sizing, but scaling outward for load.  The key thing to consider is that the figure quoted is simultaneous users, and even in a 30,000-seat estate, it’s unlikely that this many simultaneous requests would hit a single node. Only a single authentication mechanism is supported per node, so this may also drive the design decision as to how many Connector appliances are needed. For ThinApp packages, a repository is required if these are to be distributed using Workspace.  Due to how ThinApp functions, a Windows file share (not a basic SMB/CIFS NAS appliance) is required to host this repository.  ThinApp packages only require the executable and accompanying DAT file (with the same AppID) in a Workspace environment, so storage needs aren’t massive.


The Gateway is, in some respects, poorly named.  It provides a single user-facing domain name, but beyond that it largely serves as a policeman, routing requests to the correct appliance node – so if a user selects file resources, requests go to the appropriate Data appliance. On a fresh installation, a Gateway has two vCPU, 2GB RAM and a 10GB disk.  VMware recommend increasing this considerably, to 4 vCPU and 16GB RAM. In terms of numbers, there are several design considerations. For high availability, multiple Gateways should be placed behind a load balancer.  From a load perspective, a Gateway will support up to 2000 users. One item of note is that VMware recommend a minimum of one Gateway for every two Data appliances: the Data appliance puts the greatest load on Horizon Workspace, with the highest number of requests, all passing via the Gateways.


The Service appliance is the intelligence behind the solution.  The administration web page is hosted here (even though logon is via the Gateway), and application catalogues, entitlements, reporting and local Horizon Workspace-based groups are all defined and managed here. It’s the Service appliance that connects to the database. Out of the box, one of these is installed, configured with 2 vCPU, 4GB RAM and a pair of virtual disks totalling 18GB.  VMware’s recommendation is that two are deployed for resilience and that the vCPU and RAM be increased to 4 and 8GB respectively.  At this level, up to 100,000 users can be handled by each node without issue.


The Data appliance handles the file services part of the solution, including the user interface component, quota management, file sharing and hosting the data itself. Being customer-facing, and given the nature of its role, this server is under considerable load.  As such, VMware recommend a Data appliance per 1000 users.  It is recommended that the appliance be increased to at least 4 vCPU and 8GB RAM, possibly even to 8 vCPU/16GB RAM. It is also recommended that, in a production environment, at least two are established.  This is an architectural suggestion. The Data appliance can have two roles:
  • Master Data Node – hosts the LDAP meta-database.
  • User Data Nodes – host the actual users and their data (1000 users per node).
Much of the VMware scaling recommendation is dependent on throughput and the size of the dataset expected, as well as design decisions such as not putting all of your users in one basket – do you want to stop 10,000 users working when a file node goes down, or only 1000, leaving 9000 operational? When a user accesses a file through the web interface, it is possible to preview the document within the browser.  This can be implemented using two different methods.  The first is to use LibreOffice, which is free and integrated in the Data appliance, while the other is to implement MS Office Preview Server.  Where LibreOffice is enabled, it may be necessary to increase the RAM/CPU provisioning of the Data appliance. Data storage space must be provisioned in addition to the Data appliances.  While VMDKs are supported, due to the high number of files involved and the relative performance of VMDKs used in this fashion, it is recommended that NFS storage is used instead, as it is generally easier to scale and performs better with high file counts.  It should be noted, though, that each Data appliance will require its own export. Storage sizing is a relatively simple proposition.  For each user, take the proposed quota to be applied, multiply by three (to handle version retention) and add 20%.  For example, 1000 users with a 100GB quota will require 1000 x (100GB x 3 x 1.2) = 360,000GB, or around 351TB of capacity.  NFS appliances fit well here as they often support hot expansion, so extending storage as it’s required becomes more tenable.

External Services

As stated, a number of services can be provisioned externally from the Horizon Workspace vApp package.

Document Preview using MS Office Preview Server

Rather than use LibreOffice for document preview, it is possible to set up Microsoft Office Preview Servers.  These have the advantage of off-loading rendering operations from the already busy Data appliances, as well as rendering Microsoft Office documents using Microsoft’s engine rather than a third-party solution.  On the flip side, it entails licence costs for Server and Office licensing, as well as additional VMs to manage and protect. Office Preview requires at least one Windows Server 2008 R2 VM with MS Office 2010 Professional x64 and the Horizon Data Preview agent installed.  Size-wise, the VM needs at least 4 vCPU and 4GB RAM.  As conversion of documents is processed in real time, this is quite CPU and memory intensive.  It may be necessary to scale upwards and outwards if many users are expected to use preview services on lots of devices, and if large documents such as PowerPoint presentations need rendering. The Preview service requires an account with permissions to add local users to the server, and UAC must be disabled.


As mentioned previously, in a production environment a database server is required by the Service appliances for retention of their data. Horizon Workspace 1.5 supports either Oracle 11g or Postgres 9.1, though VMware recommend Postgres (possibly related to the fact that they offer the VMware vFabric Postgres appliance). For CPU and RAM purposes, both databases should run adequately with 4 vCPU and 8GB RAM. The documentation states that the database supports 100,000 users in 64GB, with a further 20GB per 10,000 users beyond that; VMware’s recommendation is that 32GB is sufficient for most engagements.
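Reading the documented figures as a simple rule (64GB covers up to 100,000 users, then 20GB per additional 10,000), the memory sizing can be sketched as follows; the function name and the rounding-up of partial blocks are my own assumptions, not from VMware’s documentation:

```python
import math

def db_memory_gb(users):
    """64GB for up to 100,000 users, plus 20GB per additional 10,000 users."""
    if users <= 100_000:
        return 64
    extra_blocks = math.ceil((users - 100_000) / 10_000)  # round partial blocks up
    return 64 + 20 * extra_blocks

print(db_memory_gb(50_000))   # 64
print(db_memory_gb(120_000))  # 104
```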

So what do we end up with…?

So although the basic vApp gives us a five-appliance estate, a production estate requires somewhat more, depending on the services required. For example, a 2,000-seat estate might look something like the diagram below.

There are a couple of Gateways tucked behind a load balancer: although one can handle 2,000 users, we need two for resilience and to support our Data appliances. There are three Data appliances, following the rule of 1,000 per appliance, plus a Master. Two Service appliances are provided for resilience. There are four Connectors, two for RSA authentication and two for AD authentication, driven purely by the need for resilience and the one-authentication-method-per-connector rule. Last, but not least, a single Configurator.

As you can see, the five-VM estate soon becomes one with twelve VMs, before we consider the database, the NFS and ThinApp repositories and any Office Preview servers. The estate can end up with quite a significant footprint, but this isn’t too surprising when consideration is given to the various roles it serves and the number of users it needs to support. It’s pretty clear that careful design is a critical task when implementing this product: ascertaining the use cases, sizing the solution as accurately as possible and then sitting down and putting the design on paper. If you would like to talk to us about assisting your organisation with VMware Horizon Workspace or any aspect of the VMware Horizon Suite and their management, please contact us.

VMworld Europe 2013: A Few Useful Nuggets…

Well, this year’s VMworld Europe has been and gone for another year. Barcelona was a pleasant place, but it wasn’t all “sun and sangria”, there was treasure to be had…


(Gran Via Conference Centre)

Well, this year’s VMworld Europe has been and gone for another year. Barcelona was a pleasant place, but it wasn’t all “sun and sangria”; there was treasure to be had, and Xtravirt sent a ragtag band of hardened consultants to do some digging. This particular consultant decided to focus on End User Compute, given that’s been the subject of a number of projects of late.

Horizon Mirage

Horizon Mirage hit release 4.3 at this VMworld. I attended a number of sessions on best practice and so on, which were quite interesting, but I also managed to catch up with Alon Goldin, who’s been involved with Mirage since before VMware acquired Wanova. He pointed out a few useful features that have been added.

Firstly, one nice addition to the Windows 7 migration wizard is that it’s now possible to apply both a Base Layer and App Layers as a single task, rather than as separate jobs. This should speed up deployments and reduce complexity nicely.

A new management policy has been added that allows deployment of images without the need to upload from an endpoint first. This is useful in scenarios where user data on a client is minimal (for instance, redirected document folders) and protecting that data is less critical. A time-saver in these instances.

With Horizon Mirage 4.3, the client agent has been optimised for use in a virtual machine and is now officially supported by VMware within persistent View desktops. This is useful in many ways, particularly for persistent desktops, where managing and maintaining compliance isn’t as simple as recomposing non-persistent desktops.

The Web Management console has seen some changes too, with the addition of a Protection Manager role that’s permitted to edit policies and build collections. VMware’s intention is to move away from the legacy MMC console to the Web Console in the long term, pretty much in line with the rest of the VMware portfolio (such as vSphere 5.5).

VMware Horizon View 5.3 and nVidia – Dedicated Graphics

One subject creating a bit of a buzz of late is support for high-end graphics in virtual desktops. Up until this week, VMware’s support has been somewhat limited: for example, nVidia’s GRID series GPUs were limited to Virtual Shared Graphics Acceleration (vSGA) in VMware View, which, in itself, is superior to the normal VMware SVGA driver, but lacked the horsepower required for CAD users (or gamers). However, as announced at VMworld Europe, VMware Horizon View 5.3 now supports the nVidia GRID GPU in Virtual Dedicated Graphics Acceleration (vDGA) mode. What this means is that a persistent virtual desktop can be attached directly to an nVidia GPU, essentially bypassing the virtualisation layer, complete with native nVidia driver support. The mainstream server vendors were demonstrating this capability in the Solutions Exchange, running complex graphical models over VDI sessions. Given that a single nVidia GRID adapter has two (in the case of the K2) or four (for the K1) GPUs, it offers great potential to host a handful of CAD users (or gamers) per server node, if your hardware has available expansion slots.

VMware Horizon Workspace 1.5

Horizon Workspace is viewed by VMware as a centralised portal for all things EUC.  As a central portal, it provides users with the following services:
  • File services – A private equivalent of the popular consumer offering from Dropbox, complete with synchronisation capabilities, file sharing and client applications for Android, iOS and Windows (desktop), as well as browser access, complete with preview services using either MS Office Preview Server or LibreOffice Preview.
  • Web Application Publishing – Web application shortcuts can be defined and published via Horizon Workspace.  There is integration into single sign-on capabilities where available.
  • Thick Application Delivery - ThinApp packaged applications can be distributed via Horizon Workspace, complete with policies and access control (either within Horizon or via Active Directory Groups).  ThinApp deployment currently requires that the client be a member of the same Active Directory as the workspace estate, although this is going to change soon.  Likewise, Citrix XenApp application delivery is in development for imminent release.
  • Access to VMware View desktops – VMware View desktops can be accessed via the Horizon Workspace Portal.  While provisioning is all carried out in View, presentation via Workspace is possible.  If Blast protocol support is installed, the View session is directly accessible via a HTML5 browser, without need for a client.
There were a number of sessions on installation, scaling and other subjects too, enough for a healthy blog post on the whole subject of Workspace (watch this space). Another feature, still in its infancy, is mobile device access.  For Apple’s iOS, the single unified app from Horizon Workspace 1.5 has been replaced by a File app and an Applications app.  Android is evolving even faster, with a product feature that essentially behaves like VMware Player but for Android – VMware Switch.  Basically, this allows a managed Android image to be run on a user’s Android device in secure isolation, separating private applications and data from work functionality – an Android phone within a phone.

Hands-On Labs

As well as the technical breakout sessions, one stand-out area was the Hands-on Labs. Given the queues, it was consistently popular. Accessible either through BYOD access or through timeslot-limited thin clients, these provided a range of VMware- or partner-provided technical lab sessions where guests could try many of the VMware technologies. I tried a couple of the lab sessions, predominantly on Horizon Workspace and ThinApp, plus a demo of the latest version of NetApp’s Virtual Storage Console.
  • The Workspace lab I followed was an introductory guide, demonstrating provisioning file storage, applications and desktops to end users.  A useful guide for administrators, rather than implementers.
  • The ThinApp lab was more broad-ranging, covering how to package applications and then how to implement them in Horizon Workspace, Horizon View or as part of a Horizon Mirage App Layer.  There was also a sneak peek of a 64-bit ThinApp package – this feature was formally released as part of ThinApp 5.0 at VMworld Europe.
  • The NetApp Virtual Storage Console lab demonstrated the vSphere integration tool for NetApp storage.  I’ve had quite a bit of experience with NetApp tools back to Virtual Infrastructure 3.0 days, including SnapManager for VMware, and this is by far the most impressive, including full integration into the vSphere 5.x vCenter web console.  It has also streamlined many processes for configuration that required additional work (such as implementing RBAC).  The lab was a pretty impressive tour of the application, including rapid VM provisioning, storage provisioning, backup and recovery.

The Solutions Exchange

The Solutions Exchange is a more traditional conference environment, with many software, hardware and services vendors demonstrating their products.  I picked up a number of key items here.  In summary:
  • Liquidware Labs are adding VMware Horizon Mirage support to a number of products.  One element is to extend their Stratusphere FIT VDI assessment tools to be able to carry out assessments for Mirage migrations.  Another element is to use ProfileUnity to replace the user layer of Mirage to allow decentralised environments to manage user data transfer more efficiently (rather than replicating everything to a single point, as Mirage would).
  • NetApp demonstrated their E Series storage systems.  While not as fully featured or general purpose as a FAS series filer, they’re aimed at high performance, data-intensive work.  In particular, they push it as part of their StorageGRID object-based storage solution, though I was advised that they were about to bring out a completely new solution in this area.
  • HP, Dell and Lenovo were amongst a contingent of hardware vendors demonstrating VMware View with shared or direct graphics support using nVidia GPUs, as well as thin clients to support this.


So, to conclude, VMworld proved to be quite the showcase of the latest and greatest from an end-user compute perspective, with both gains in performance, particularly on the graphical front, and new features for client and application management and end-user access.

Sorry – we don't do average IOPS

During our customer engagements, we’re privileged to see multiple deployments utilising differing technologies whether that be for Data Centre or VDI workloads. Storage is a key factor in any deployment and in this article I’m going to look at some … [More]

It’s all about the Storage!

During our customer engagements, we’re privileged to see multiple deployments utilising differing technologies whether that be for Data Centre or VDI workloads. Storage is a key factor in any deployment and in this article I’m going to look at some of the storage approaches for VDI deployments and discuss their benefits and disadvantages.


Let’s start with performance. Calculating IOPS in a VDI deployment is a hotly debated subject: you can “assume” industry averages for each workload, but these averages leave little room for growth and don’t cater for peak workloads such as logon storms. Taking the maximum figures is a safer bet, but you’ll pay a premium for a lot of performance that will go unused most of the time. So how do we find the correct sizing?

One approach is to use the peak average. Using a planning tool to look at the total IOPS hour by hour will show when your largest IO spike occurs; determining how many machines are online during that short period and dividing the IOPS by those machines gives you a “peak average”. This approach can provide a more realistic IOPS requirement, but the figures need to be monitored over a long period to ensure key business periods (such as month end, patching and AV scans) are included. This is more realistic than a day-long or industry average. If your planning tools don’t present this in a report, you’ll need to track the data hour by hour yourself, which can be labour-intensive. Some planning tools offer a 95th percentile rule, where the top 5% of IO is not accounted for; this can reduce storage requirements, but can actually contribute to a slowdown, as you’re effectively not providing the peak performance when it’s most required.

A final note on performance: make sure you monitor the correct workload. Don’t monitor XP machines and then deploy Windows 7 machines with completely different agents installed, as the figures will be skewed. If this is the only option open to you, update your findings during the proof of concept or pilot deployment when running the final build.
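To make the “peak average” and 95th percentile ideas concrete, here is a minimal sketch; the function names and the sample figures are invented for illustration, and real planning tools will implement their percentile maths differently:

```python
import math

def peak_average_iops(hourly_total_iops, machines_online_at_peak):
    """Peak average: the busiest hour's total IOPS divided by machines online then."""
    return max(hourly_total_iops) / machines_online_at_peak

def percentile_95(samples):
    """Discard the top 5% of samples and return the highest remaining value."""
    ordered = sorted(samples)
    return ordered[math.ceil(0.95 * len(ordered)) - 1]

# Hypothetical hourly totals for a 600-seat estate; the 12,000 IOPS spike
# is the logon storm, giving a peak average of 20 IOPS per desktop.
hourly = [3000, 4200, 12000, 9000, 8500, 7000]
print(peak_average_iops(hourly, 600))           # 20.0
print(percentile_95(list(range(1, 101))))       # 95
```

Note how the 95th percentile view would simply ignore the logon-storm spike, which is exactly the risk the text describes.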

What storage should I use?

Now you have some performance figures, what do you do? Get some quotes from storage vendors, but make sure you’re sitting down, as the cost is probably going to be high. Another option is to deploy one of the myriad storage optimisation technologies on the market. These technologies offer great potential: increased IOPS, de-duplication, and removal of the CapEx barrier to deployment by reducing cost. On the downside, there’s potential for the solution to be a little more complex in design and to complicate operational tasks, and many are still in their infancy, frequently changing versions or architecture, so choose carefully and test thoroughly.


You’ll need to consider the license model: is it per GB or per user, and if per user, concurrent or named? It can make quite a difference, especially when coupled with maintenance.


Sizing comes in two flavours. First, you’ll need to work out how much space you’ll require, and this depends on a number of aspects: where the user persona lives, pooled or dedicated assignments, whether you’re using linked clones (Citrix MCS or VMware View Composer) or full clones, and what the de-duplication rate is for your selected storage. When sizing for pooled desktops, you need to size for peak concurrency, plus room for growth and breathing space. When sizing for dedicated desktops, you’ll need to cater for 100% of users plus growth, as each user is assigned a persistent desktop.

The other sizing aspect is the impact on the host: if your solution utilises a virtual appliance, you’ll need to account for its CPU and memory requirements when sizing your hosts, or deduct that resource from what’s available for desktop workloads.

Some storage designs produce additional or surplus space as a by-product of adding spindles to provide performance. This should not be used for other purposes, or seen as available space, as doing so will impact the performance of your planned workloads: guard it from misuse!

Linked versus Full clones

Linked clones reduce storage requirements by storing a single base image and linking multiple “delta” disks to it, which can create massive savings. You’ll need to account for a number of base images, perhaps multiple base images per datastore depending on the broker you use. Linked clones are great for pooled desktops, but not so great for dedicated desktops, as you’re tied to the base image/replica and therefore the VMFS datastore. As long as you plan your storage for growth and performance, they’re a perfectly viable solution, but you’ll lose Storage vMotion capability and perhaps vMotion across clusters, which can impact maintenance. Using full clones for dedicated desktops provides less of a tie than linked clones: it gives you full mobility within the data centre, and you’ll just need to leverage a storage solution that can de-dupe the data to a sensible point to make it affordable. Additionally, orchestration of the build and integration into the broker may be required.

Local or shared storage

Optimising local storage provides the lowest-cost storage, but in some cases can introduce increased complexity or the loss of key hypervisor features such as VMware’s HA or DRS. HA, however, can be provided by other means for pooled desktops, such as by the broker, but not for dedicated desktops. Without DRS you’ll have to be more cautious with your sizing per host and manage capacity at the host level rather than the cluster level, as you’re unable to automatically balance workloads across the cluster. While desktop workloads may not be as sustained as server workloads, it’s quite easy to find that some hosts in a cluster consume excessive CPU while others use less. In certain cases it doesn’t take many guests with runaway CPU processes to put a host under pressure; thankfully, you can manage these processes with tools like RES and AppSense. Maintenance is also complicated with local storage, as you’ll have to manually drain a server of its users before entering maintenance mode rather than moving those workloads off to a standby host, and you may even have to wait for an out-of-hours maintenance window. With shared storage, DRS is possible (and Storage DRS if using full clones) and dedicated desktops can be hosted and protected by HA without any additional solutions, but it potentially costs more than local storage. It’s a question of balance and what your main requirements are.


Finally, regardless of your virtual infrastructure, make sure that you work closely with the storage team so that your management servers have guaranteed/isolated performance. This ensures that you can manage your environment even if your workloads are consuming all the performance of their allocated disks. If you would like to talk to us about assisting your organisation with storage requirements for your data centre, workspace or cloud project, please contact us.

My experience of the 44CON

As a virtualisation practice market leader, we monitor industry events to see what’s trending, what makes waves and generally to seek out snippets that grab our attention. Around about this time last year, the 44CON security conference was brought to my attention … [More]

As a virtualisation practice market leader, we monitor industry events to see what’s trending, what makes waves and generally to seek out snippets that grab our attention. Around about this time last year, the 44CON security conference was brought to my attention through my Twitter stream. A quick bit of digging and I soon learned that this event is held annually in London and offers an independent, non-vendor approach to current security issues for customers and vendors. While vendor-sponsored, it still purported to offer very non-product-biased content. Tempted by this, I made a note in my diary to investigate the next conference and establish exactly what was on offer and how I could align it to my role within Xtravirt.


In a nutshell the conference covers many aspects; items that I drew upon were:
  • Attracting individuals whose role is security related whether dedicated or partially
  • Providing the attendees with an opportunity to meet and speak with the leading industry security professionals
  • Enabling attendees to discuss their concerns with likeminded individuals
  • Rubbing shoulders with, and meeting, the top 10% of security professionals within the UK
  • Absorbing facts and stories from industry-recognised security speakers
  • Participating in guided workshops to learn more about common security flaws and pitfalls
  • Joining open-floor panel discussions during the evening, chaired by InfoSec-recognised experts

For me?

A few items that I wanted to focus upon:
  • As an advisor within the Technology Office I wanted to hear the stories from presenters of how they're still fighting many of the same data centre issues
  • In a world where cloud computing is apparently adopted by everyone, I wanted to meet people who'd been responsible for, or part of, the security aspects and learn what they experienced
  • If there was no sign of securing cloud environments, then I'd aim to find out why
  • Open my eyes a lot more to this aspect of the industry
  • Get my hands dirty in one of the guided workshops
So how did I get on?

How did the event unfold?

The networking opportunities exceeded my expectations. As a frequent attendee of virtualisation industry events, my networking peers are usually present, but arriving at an event without knowing who you’re going to meet can be a little daunting. Well, that certainly wasn’t the case here: after picking up my badge it was relatively plain sailing once a few conversations were had.


The keynote session by Haroon Meer from Thinkst discussed the quantity and quality of InfoSec conferences globally, how much of the content is likely to be repeated, and how the quality of speakers may not be to the audience’s liking. The value of the conference is then in jeopardy, as it undermines the value of the content. Is this the speaker’s problem or the organiser’s? Further discussion leant more toward the organiser and their due diligence, plus the pressures applied by sponsors to shape content in return for promotion. Big events command high entrance fees and travel expenses, whereas local events are often offered for minimal cost or even free; smaller events are now seeing growing adoption due to their greater geographical spread, and they provide an opportunity for up-and-coming speakers to pave the way upward. The downside to the smaller events is that the signal-to-noise ratio is often far lower. There was far more content and context, but the overall message seemed to resonate across many conferences, regardless of whether they’re IT-related or not.

Context clues

Moving on from the 147-slide epic, I threw myself into a classroom session hosted by Carbon Black. The session opened by talking about the approach to cyber-attacks and how traditional ‘prevention play’ tactics are dead, but there are other ways. Global anti-virus companies promote the idea that a single install will cover all eventualities, but as we know, this simply isn’t the case. So how do you protect your company? Say ‘no’ to everything? That’s simply not going to work; users always find a way. Tracking down threats and assessing whether they represent a real risk can be broken into four headline areas.
  1. Visibility – do you know what’s going on in your environment? How many versions of the same product are deployed? A threat to one version may not be a threat to another.
  2. Metadata – do you know your environment? Use your data to consider what you think is an anomaly. This is where the global anti-virus companies can’t help you.
  3. Frequency – irregular patterns of activity don’t necessarily mean there’s a problem. If you have a grasp of your metadata, you’d know whether it was a problem.
  4. Relationships – combine the three topics above to create a relationship mapping, and then you have far more intelligence than any one global anti-virus company would ever know.
Zero false positives and zero false negatives are far more achievable with this style of approach. The classroom exercise presented attendees with an environment using real-world anonymised data of discovered files, and from there we had to review the versions, frequency and relationships of the files to each other, elaborating the points above by applying a human identification approach. What made this exercise fascinating was how my thought process changed as I moved from one identification stage to the next, and that the two guys I worked alongside also challenged some of their previous decisions. Collectively we challenged each other as well as ourselves. Gut feel and experience (the human touch) meant we achieved a higher success rate initially, but the further we progressed, our limited ability to remember previous decisions led to a far poorer outcome. A thoroughly enjoyable session.

Culture & CNA Behaviours

Char Sample presented my next session, discussing culture and Computer Network Attack behaviours. Much of the talk was based upon her recent work and discussed Hofstede’s cultural dimensions framework, and how it both assisted and provoked more questions in her studies. Out of respect for the depth of the work Char has completed, I’ll just impart a few areas that really stood out for me. The opening gambit posed a scenario about applying new methods to old problems:
  • Rather than thinking about IP addresses think about what the attacker is thinking to give an idea of the next move
  • Psychological profiling provides mixed results and placing people into different buckets usually peaks at 10 ‘types of people’
Introduce the cultural angle and, as an example, the way we approach problems shows that we’ll all reach the same answer but arrive at it in many different ways. Why? The way we’re culturally brought up and exposed to experiences shapes this. In this session, ‘culture’ was defined as “the collective mental programming of the human mind which distinguishes one group of people from another”. Another everyday analogy related to football and the World Cup: every team plays football, but they all do it differently, and at times it’s clear to observe. The session continued through many cultural facets and how we’re moulded into a way of functioning throughout our lives, and how the influence of culture on cognition is inescapable and habitual. An example comparison was thrown out to the audience: eastern culture takes a more holistic approach to problems, with everything considered in forming an answer, whereas the western approach is to do what’s needed, fix the challenge and move on. Applying this thought process to software development could help a would-be attacker by considering the originating development team’s location and style of code creation. Perhaps an initiative is needed to offer code reviews within designated universities, to understand what role culture and personality play in blind spots and bug introduction. A very deep session that provoked many questions from the audience and opened up an area outside the typical offensive and defensive stereotypes.

Cyber Defence or Defending the Business?

The session delivered by Bruce Wynn focused on the pressures and challenges of how areas of the business are forced to make important decisions about cyber protection, and how this can often lead to distraction and oversight in protecting the business itself. The content at times resonated with recent discussions I’d been party to, and as a result drew me further into the session. There’s a perception by some that applying a traditional technical approach using penetration testing, AKA ‘pen-testing’, is a one-off exercise that will mitigate all concerns once issues have been addressed. That is of course not the case, and in many respects it could be seen as opening the door to wider abuse. Penetration testing provides the ‘tester’ with a full report of your organisation’s technical vulnerabilities, which presents immediate areas to consider:
  • Are you using a trusted and known company or an independent contractor?
  • What happens if the recommendations aren’t implemented for a period of time?
  • The trusted ‘pen-tester’ has the opportunity to gain access themselves
  • The trusted ‘pen-tester’ has the responsibility to keep the information safe, but what if it’s shared internally?
Assuming a test has been undertaken and the identified issues addressed, where’s the update cycle? Defining a baseline ‘standard’ version or design in itself places an organisation into a known, published state. A compromise will always exist, and there will always be a need to update or upgrade.

Know what you have

What’s important to your business? An example discussed openly in the session involved a well-known brand and their products. When the audience was challenged as to what we thought the most important aspect of their business was, no one managed to provide the correct answer. In the context of the discussion, it had nothing to do with the product or its design; in fact, it was the financial aspects, due to the nature of how the company trades. Until the right questions are asked, you should never assume what’s important. Where third-party supply chains are involved, can you trust the suppliers? What about their suppliers? You may pass confidential information to a close provider, but can you ensure that information doesn’t leave their environment? Keeping your own house in order is of course a must: IT system administrators and security team members have varying degrees of privileged access to the heart of IT systems for internal and external functionality. This was my last session of the day and was certainly a good way to bring it to a close, but did I manage to answer the question…

Who isn't moving to cloud?

Organisations that aren’t adopting cloud tend to be firmly rooted in issues where data-retention regulations are the make or break of the company. This is something I’ve gleaned snippets of during my attendance at CloudCamp conferences, where it’s been drummed home by Kuan that “laws are local and the internet is global”. Once data is out of your physical control, where does it go? A vendor will tell you exactly where, but there’s a huge element of trust: trust in the administration and the vendor’s ability to maintain their internal governance, in the transmission method of data to and from your organisation, and ultimately in where the data resides and isn’t moved to. Of the people I spoke with, there is certainly a view that workloads shouldn’t just be shifted: locale-critical data should be considered for remaining in-house, with public cloud used for less mission-critical workloads, and SaaS considered for service-provision refresh. I think the message was clear here: people are moving to the cloud, but in small leaps of faith.

vCAC 5.2 Distributed Execution Manager (DEM) Install Error

In preparation for an upcoming project, I’m installing vCAC 5.2 in my home lab. Anyone who has installed vCAC will have used the vCAC pre-requisite checker tool. This tool is simply fantastic. vCAC has a huge amount…

In preparation for an upcoming project, I’m installing vCAC 5.2 in my home lab. Anyone who has installed vCAC will have used the vCAC pre-requisite checker tool. This tool is simply fantastic: vCAC has a huge number of pre-requisites that need to be configured, and this tool does a great job of capturing everything. I like that it provides instructions on how to resolve issues when components need attention, and there is also a ‘Fix Issue’ button which automates fixes for a handful of the requirements.

With the checker reporting I was good to go, I proceeded with the install. All was going well until I came to install the DEM Worker, when I was met with an error. I completed a few basic checks to ensure DNS was functioning correctly in the lab, but found no issues there. Upon further investigation, I looked to see what services the vCAC server setup had installed previously. There is only one, the “VMware vCloud Automation Center Service”, and it wasn’t started.

The vCAC server setup allows you to specify a service account to assign to this service, which I had ensured was a local admin on the server. When trying to start the service, it halted with a permission error. After granting the account the right to ‘Log on as a service’, the service started and I was able to finish the installation of the DEM Worker. It seems odd to me that the pre-requisite checker doesn’t check this, as it is otherwise such a comprehensive tool. Anyway, problem resolved. If you would like to talk to us about assisting your organisation with cloud automation solutions or their management with the vCenter Operations Management suite, please contact us.

VMware vCenter Operations Management Suite 5.8 Overview

At today’s VMworld Europe general session VMware announced the launch of the new Cloud Management suite of products. I run through a highlight summary here with some of my thoughts too.

VMworld Europe Cloud Management Launch

At today’s VMworld Europe general session VMware announced the launch of the new Cloud Management suite of products. Of the products launched, vCenter Operations Management Suite version 5.8 was discussed in detail, and I run through a highlight summary here with some of my thoughts too.

vCenter Operations Management Suite 5.8

The new version of vC Ops defines three key areas of focus in VMware's approach to simplifying and automating Operations Management.
Intelligent Operations
  •  Using patented analytics to provide better visibility into data centre operations
  • vCenter Operations analyses millions of metrics from vSphere and existing monitoring tools to learn the behavior of your infrastructure
  • Dynamic thresholds can be defined to trigger smart alerts so you can proactively address creeping performance problems
Policy-based Automation
  • Leveraging policies and thresholds to trigger orchestration workflows across a wide variety of tasks, rather than relying on manual intervention to kick off a script
  • Automated tasks include incident and problem remediation, policy enforcement for continuous compliance, and capacity analysis and planning to improve resource utilisation issues
Unified Management
  • Providing operations teams with a unified view of what is happening in their highly virtualised and cloud environments. This view is broken down into three areas:
    1. converged infrastructure management of network, storage and compute
    2. integration of the key disciplines of performance, capacity and configuration management
    3. a consistent management approach across virtual, physical and private/public cloud domains
Headline features aside, what does this mean? VMware have released a version with an abundance of new features. There's the capability to link into Microsoft applications such as SQL Server and Exchange with out-of-the-box (OOTB) dashboards, and support for monitoring Microsoft Cluster Services (MSCS) and Database Availability Group (DAG) clusters. There are additional OOTB storage dashboards providing visibility into physical storage infrastructure and data paths (HBA, fabric and arrays), Hyper-V support, and the ability to monitor and manage hybrid cloud deployments with Amazon AWS. In a nutshell, VMware are addressing the core services as well as the service provision.

The New Features

Let’s have a look at the features and overview how they look.

Intelligent Operations

Enhanced monitoring of Microsoft applications
The traditional Red/Amber/Green presentation is clear to see and easy to review. Out-of-the-box dashboards for Tier 1 applications are now available with the release of specific Management Packs for Microsoft applications. What's in a Management Pack?
  • Knowledge
    • Based upon research conducted with SMEs on application specific deployment and common issues
  • Discovery
    • Automatically discover application components, their inter-dependencies and the connection to their underlying infrastructure
  • Policies
    • Built-in monitoring policies for common applications that include default metrics, collection intervals, thresholds and alerts
  • Dashboards
    • Pre-configured and pre-defined application specific dashboards for visibility and troubleshooting
  • Supported Applications
    • Microsoft SQL Server
    • Microsoft Exchange
  • Application Visibility
    • Application health according to clusters (MSCS & DAG), servers & instances
  • Services & Topology
    • Display roles, relationships and relation to virtual infrastructure (vSphere & Hyper-V)
  • KPIs
    • Display key metrics related to the component
  • Alerts
    • Built-in alerts and thresholds
New storage analytics capabilities
Highlights in this area:
  • Out of the box storage dashboards
  • Visibility – physical storage infrastructure and data paths (HBA, Fabric, Arrays)
  • Provides a vSphere administrator with “good-enough” data to troubleshoot storage issues that are affecting a virtual environment to enable an efficient handoff to a storage administrator
  • Brings together topology, statistics and events from FC enabled Host Bus Adaptors, Fabric and Arrays by leveraging standard protocols such as CIM, SMI-S, VASA
Unified Management
There's an acceptance of other vendors' hypervisors; below I cover Hyper-V, with screenshots evidencing connection and statistics. What can you expect to see? Using vCenter Hyperic and the Hyperic Management Pack you'll be able to review results in a custom user interface:
  • Discovery
    • Hyperic agent deployed in Hyper-V Host
    • Discovery of Hyper-V hosts & associated VMs
  • Topology
    • Relationships created in vC Ops
    • Hyper-V Host -> Virtual Machine -> Operating System
  • Monitoring of critical levels
    • CPU
    • Storage
    • Network
    • Memory
  • Out of the box Hyper-V dashboards
    • Cluster, Host, VM Utilisation
    • Top 25 by CPU, Memory, DISK IOPS, Network, etc
    • Database Capacity and Performance
      • Disk Space Used, Usage by VM, Latency, Commands per Second
    • Load Heat maps
      • CPU, Memory, Disk, Network
  • Additional Items
    • Support for SCOM Maintenance
      • Identify items from SCOM that are in maintenance mode
    • Hyper-V Events
  • Two Options for getting Hyper-V Information
    • Through vCenter Hyperic and the Hyperic Management Pack for vCenter Operations
    • Through Microsoft SCOM and the SCOM Management Pack for vCenter Operations
    • Hyper-V Data and Dashboards are the same for each source

Amazon AWS Support

Amazon's web service management is now also accessible, and I'll run through a few highlights below.
  • AWS Management Pack
    • Results available in the Custom UI
    • Pulls data from AWS Cloudwatch
      • Leverages the REST API exposed by AWS
    • Supports multiple AWS services such as:
      • Elastic Compute Cloud
        • EC2 instances
        • Elastic Block Store (EBS) volumes
      • Elastic Map Reduce (EMR)
      • Elastic Load Balancing (ELB)
      • Auto Scaling Group (ASG)
    • Configurable by Service
      • Only bring in the services you wish to monitor
    • Out of Box Dashboards bring it all together
    • Monitoring of EC2 Instances
    • Pulls all default metrics from Cloudwatch
    • Imports AWS alarms as vC Ops Hard Threshold violations
  • Group by region
    • AWS currently has eight regions globally. You can subscribe to specific regions.
      • e.g. to subscribe to the Eastern USA region, use the region identifier us-east-1 in the region field
    • Regions drive dashboards
  • Visibility into relationships between AWS objects
    • EMR resources and EC2 instances
  • Auto Scale Grouping
    • Automatically aggregates instance metrics across groups
  • AWS Entity Status
    • Determine the power state of the AWS resources
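As an aside on what "leveraging the REST API exposed by AWS" looks like in practice: the sketch below builds the parameters for a CloudWatch GetMetricStatistics query for a single EC2 instance's CPU. This is a hedged illustration only; it assumes the modern boto3 Python client (the Management Pack has its own collector), and the instance ID, period and look-back window are purely illustrative.

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch: build the parameters for a CloudWatch
# GetMetricStatistics call for one EC2 instance. The instance ID and
# timings are assumptions for demonstration, not from the article.
def cpu_metric_params(instance_id, hours=1, period=300):
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": period,          # seconds between data points
        "Statistics": ["Average"],
    }

# With boto3 installed and credentials configured, the call would be:
#   import boto3
#   cw = boto3.client("cloudwatch", region_name="us-east-1")
#   data = cw.get_metric_statistics(**cpu_metric_params("i-0123456789abcdef0"))
params = cpu_metric_params("i-0123456789abcdef0")
print(params["Namespace"], params["Period"])
```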
I'm very excited about this announcement, as I have personally completed a number of customer rollouts of the management suite over the past couple of years with Xtravirt. We're a VMware Management Competency Partner, an official Consulting Partner of Amazon Web Services in the AWS Partner Network, and hold the Microsoft Silver Competency as a Midmarket Solution Provider for SMB customers, so the future looks great for the use of this new version across our customer base. General availability for vC Ops 5.8 has been cited as mid-December 2013, and from my experience with the beta version so far I can see there will be plenty of opportunity to introduce these features to our customers, both for their current data centre deployments and for cloud migration exercises. If you would like to talk to us about assisting your organisation with VMware vSphere, VMware vCloud, AWS or Microsoft Hyper-V based solutions and their management with the vCenter Operations Management suite, please contact us.

Quick tour of the Horizon Mirage web management console

VMware are continuing to evolve their Horizon Mirage product, regularly adding new features. One of these latest additions with the release of version 4.2 is the web management console.

VMware are continuing to evolve their Horizon Mirage product, regularly adding new features. One of the latest additions, with the release of version 4.2, is the web management console. Rather than being a traditional end user tool, the web management console is for helpdesk personnel to undertake a range of client-side tasks on selected devices, such as enforcing layers, rebooting systems or reverting a client to a previous snapshot. In my opinion, documentation for the web management console is limited, so I thought a guide would prove useful to anyone looking to use the feature; hence this blog post.

Plumbing it in

The installation is not too difficult; it uses a standard Microsoft Installer, and requires a server with Microsoft IIS 7 (or later) and Microsoft .NET framework v4.0 installed.  As the console is intended for use by support personnel it is unlikely to be under intensive load or require high levels of resilience, so co-existence with the Mirage Management Server role has proven to be acceptable.

Accessing the console

To log in and access the console you will need a web browser and Microsoft .NET v4.0 (or later) installed on your client.  The portal provides access to two functions. The first, and more important, is the Helpdesk Interface, where daily routine tasks can be completed. The second is the Protection Manager dashboard, where status reporting for Mirage can be viewed. If you are using Microsoft Internet Explorer, you will need to be on version 9 or above (this is referred to in VMware's documentation; anything less results in a 'Browser is not supported' message).  No other browsers are listed in the official documentation; however, when tested with Internet Explorer 8, the browser warning did state that Mozilla Firefox, Google Chrome and Apple Safari are supported (though nothing on which versions, unfortunately).
Figure 1: Don't use Internet Explorer 8!
The Helpdesk interface of the Horizon Mirage Web Manager can be accessed by browsing to http://(WebManagerServer)/HorizonMirage, while the Protection Manager dashboard can be accessed at http://(WebManagerServer)/HorizonMirage/Dashboard. In either case, you will be required to provide authorised credentials.
Figure 2: Mirage web management console
It is probably worth taking a look at role-based access in Horizon Mirage prior to letting your local helpdesk users onto the system. The Horizon Mirage console has a 'Users and Roles' section that permits administrators to set up and grant role-based access. A number of pre-defined roles are already available, which can be granted to Active Directory groups for ease of use.
Figure 3: Role Based Access

Web management console

Once a user is logged into the web management console it is possible to search by either User or Device. For example, in the screenshot below the client 'MXP' was entered:
Figure 4: Web management - searching for a device
Once found and selected, a console specific to that client is presented and available to work with.
Figure 5: Web management - client console
Reviewing the top toolbar (from left to right), the following action functions are available:
  • Enforce Layers - Enforces all layers (Base and Apps) assigned to the client. There is nothing to select here; it simply enforces what is already assigned, after a confirmation screen
  • Set Drivers - Sets the Driver Library for the client.  This is functionally similar to Enforce Layers and is used to update operating system drivers on a client
  • Reboot - Sends a reboot command to the client
  • Suspend - This suspends network operations, so pausing replication etc...
  • Synchronize - Tells the client to synchronize the device with its corresponding CVD (Centralised Virtual Desktop) image held within Mirage
  • Collect Logs - Collects System Logs from the client for diagnostics purposes
  • Restore - This is to restore data
  • Revert to Snapshot - Provides the means to roll-back the client to the last snapshot (This is not shown above but is accessed when scrolling past Restore)
  • Note - Allows notes to be stored against the CVD for admin purposes (This is not shown above but is accessed after scrolling past Revert to Snapshot)
Moving past the action buttons in the toolbar the next couple of items assist with the console layout:
  • Views - The next section provides a number of filters on the logging pane below (Steady state transactions, Snapshots, Events, Audit events, Tasks and Download transactions).
  • Grid/Timeline - Further view options.
Figure 6: Web management - timeline view
Clicking and expanding Device Properties (shown at the bottom left of the log view) reveals the device's properties to the administrator, including which base and app layers the client is subject to, its Active Directory information, installed drivers and so on. The screenshot below elaborates on this.
Figure 7: Web management - device properties

Protection Manager dashboard

The Protection Manager dashboard provides the administrator with a high-level view of the status of the estate. Once logged in, the administrator is presented with the following screen.
Figure 8: Dashboard - opening screen
Each of these sections is active and can be clicked on to drill into for further information.
Figure 9: Dashboard - report
Note the search button – this returns the administrator to the regular web management console view.


My exposure to and experience with this tool in real-world deployments has certainly proven it to be very useful and powerful. It is also clear that much of the work VMware are applying to the product at the moment is aimed at improving manageability and ease of use as well as adding functionality. Even from a deployment perspective there's little effort needed to get it packaged and distributed to client machines, as it does not have a dependency on an MMC. So far, it ticks a lot of boxes for me. If you would like to learn more about VMware's Mirage, other aspects of the Horizon suite, or require assistance with your End User Computing challenges, please contact us; we have a lot of experience to share.

Presenting at a VMUG: responding to the call

Working for Xtravirt, not only are you consistently working with cutting edge technologies in large-scale enterprise environments, but you’re…

Working for Xtravirt, not only are you consistently working with cutting-edge technologies in large-scale enterprise environments, but you're also encouraged to participate in and contribute to the online community. I've been to a few VMUGs now (London and the UK), and have always found the community sessions among the most enjoyable. While the vendor presentations are also good and provide a great opportunity to learn about new technology, they're there to deliver a marketing message. A community member stood in front of the group, talking about something they are obviously very passionate about and have real-world experience of, is what the user groups are all about. So, during the closing speech of the January 2013 London VMUG, Alaric Davies (one of the London VMUG steering committee) was seeking out community member presentations for the next VMUG. Whilst I didn't volunteer at the time, it got me thinking, and after a few days an idea sprang to mind: why not talk about the pain points of working on a 4,000-seat VDI deployment? VDI and EUC are still industry buzzwords, with every year being labelled as the 'Year of VDI'. I speak with people embarking on VDI projects; some are just finishing, others are struggling, and some have failed. Having just come off a successful 4,000-seat EMEA VDI project, surely that would get a few people in the room? My colleague, Grant Friend, and I prepared an overview of this session idea, submitted it to the VMUG committee, and were fortunate enough to be approved. I'm sure I'm not alone when I say I'm not a fan of death-by-PowerPoint presentations; personally, I find some of the best presentations are those that invite audience participation. With this in mind we kept the slide deck short and sweet, dropping in trigger points to explain how we had overcome project challenges, to see if others were experiencing the same issues and, if so, how they overcame them.
The presentation was well attended and we had some good audience participation, the hot topics were:
  • Application auditing - how was it done?
  • MS Windows 7 image - getting user buy-in
  • Stateful or stateless?
It seems that no matter how hard people try, and no matter what the flavour of product, things always get missed. There always seems to be a hidden application somewhere that only one person uses but which is critical to a business process. A common theme from the audience contribution was auditing web applications and the difficulty they present. The outcome of this conversation was that attention to detail, careful analysis and interviews with the business were critical areas for success. The next hot topic was locking down the Windows 7 image, and how far is too far? The key point I tried to get across here was that user buy-in is key in any VDI project: if your users aren't happy, your project has a high risk of failing. In my experience, I've seen many people lock down their Windows images so that they look like something out of the 1990s. Yes, they run fast, but who in their right mind wants to replace their existing desktop with a VDI desktop running at an 800x600 resolution? The trick is to leave some visuals in place while removing background items that consume resource; easy examples would be leaving the orb but disabling the 'show window contents while dragging' feature. The final talking point of the afternoon was the argument between stateless and stateful desktops. All too often people state which desktop model they want the project to use without thinking through whether it is right for every use case. Whilst stateless is usually the end goal, we found success in using stateless desktops for the quick-win use cases and stateful desktops for the more complex ones. We could then get user buy-in, get everyone working on the VDI solution, then work with business owners and vendors to get applications working in the new environment without impacting user performance, the end goal being that all users end up on a stateless desktop.
Following the success of the VMUG we were approached by the EMEA vBrownBag team to run through the same presentation again in one of their online sessions, which has since aired and can be viewed here. I had great fun delivering both of these sessions and felt a great sense of pride and achievement in being able to share some of the knowledge learned with other community members. I’ll certainly be putting myself forward to speak again at future events.   If you would like to talk to us about assisting your organisation with an End User Computing based solution, please contact us.

Using Netstat to inspect app dependencies

I was recently involved in a data centre transformation project collapsing and migrating smaller distributed IT solutions across EMEA to a central location.

I was recently involved in a data centre transformation project, collapsing and migrating smaller distributed IT solutions across EMEA to a central location. Part of my remit was to investigate and establish application dependencies. Some of that information was relatively easy to obtain, but for a few applications the original experts had left the organisation, leaving gaps in the existing knowledge. Another factor complicating my investigation was confirming what network communication actually existed between those applications, specifically at port-level detail. Establishing that reliably was important, as some applications were due to communicate over the WAN after migration. That's where "netstat" came in handy. I could have used TCPView, but some of the systems I was dealing with were quite old and running MS Windows NT/2000, on which TCPView is not supported. More importantly, the organisation was not entirely sure about the inner workings of these applications and therefore these systems were under strict change control. So, built-in tools were the way to go, and as netstat has always been part of MS Windows NT, it was my tool of choice. I approached the investigation in two ways. Firstly, where machines were identified as being part of an application, I'd review which connections remained persistent and active. Secondly, on the same machines, I'd observe and monitor ports that were open but only listening. This was the most interesting part, as it invariably revealed the machines were undertaking tasks the customer wasn't aware of; not surprisingly, I found a few of those! I used netstat with another built-in tool, "findstr", to filter out the unwanted entries like this:
netstat -an | findstr -i ESTABLISHED
This command lists all connections and ports from the local machine to remote machines for ESTABLISHED connections.
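If you need to repeat this check across many machines, the same filter is easy to script. Below is a minimal Python sketch (the sample netstat output is invented for illustration) that keeps only ESTABLISHED rows and collects the remote endpoints:

```python
# A sketch of the filtering described above, done in Python instead of
# findstr: parse "netstat -an" output and collect the remote endpoints
# of ESTABLISHED connections. The sample text is made up; on a real
# machine you would capture output via
# subprocess.run(["netstat", "-an"], capture_output=True, text=True).stdout
def established_remotes(netstat_output):
    remotes = set()
    for line in netstat_output.splitlines():
        fields = line.split()
        # Windows "netstat -an" rows: proto, local, remote, state
        if len(fields) == 4 and fields[3].upper() == "ESTABLISHED":
            remotes.add(fields[2])
    return sorted(remotes)

sample = """\
  TCP    10.0.0.5:49700     178.77.120.1:5938    ESTABLISHED
  TCP    10.0.0.5:49701     178.77.120.2:5938    ESTABLISHED
  TCP    0.0.0.0:135        0.0.0.0:0            LISTENING
"""
print(established_remotes(sample))
# → ['178.77.120.1:5938', '178.77.120.2:5938']
```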
These can change over time, or some might be missing, as the output is a point-in-time snapshot of connection state, but it does give a good idea of the conversations going on between machines. The process can be repeated every now and then to ensure connections are not missed. Depending on the system, there might be a large number of established connections. In a data centre migration investigation, the focus should be on machines connecting from a remote network; that said, the rest of the connections might also be of interest and might reveal unknown conversations. For example, on one of my own machines I could see a connection to a well-known service using port 5938 (the remote machine being the third column from the left). For listening connections, I simply changed the string to:
netstat -an | findstr -i LISTENING
As you can see, the command has only a minor change, but this time it lists all the ports the machine is listening on (for the local machine, the second column from the left). It's useful to run this to see exactly what is running, as checking the running services doesn't always provide an accurate picture. It's also a useful way to reveal whether an application is talking on non-standard TCP ports, e.g. someone having manually changed the SMTP port from 25 to 26. There are 1,024 well-known ports (0-1023) and we might be interested in others as well, but generally the focus is on ports below 4096 that are important to the role of the system. As before, in a data centre migration project, the ports of particular importance are those from which a service is provided and/or will have to be accessed from a remote location. Now let's take this one step further. If you have Windows XP/2003 or above you can add the switch "o" to the command, i.e.
netstat -ano | findstr -i ESTABLISHED
Doing that exposes PID (Process ID) information on the far right of the output. This is extremely useful when used in conjunction with Task Manager. Process ID information is generally switched off in Task Manager but can be switched on simply by:
  • Selecting the "Processes" tab
  • Clicking "View"
  • Clicking "Select Columns..."
  • Ticking the "Process ID" box
Sorting the resulting processes list by "PID" lets you match a process against the netstat output. In my example, the highlighted PID in Task Manager matched the PID in the command window captured previously: at the time of capture, SkyDrive had three connections made from my machine to the service. The remote IP addresses do indeed belong to Microsoft; how to verify that is left as an exercise for the reader. Where possible, this switch allowed me to extract the information required with even greater ease. Using this method, I was not only able to discover services running on machines that nobody knew about, but was also able to establish communication relationships between old distributed systems. As a result, I was able to migrate those services with greater confidence, having pre-staged the pre-requisite firewall changes. Most importantly, all of this was undertaken in accordance with the client's policy of not making any changes to the software environment of these machines.   If you would like to talk to us about assisting your organisation with data centre transformation, please contact us.
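As a postscript on the technique itself, the netstat-to-PID correlation can also be scripted. The sketch below (Python, with invented sample output) groups the remote endpoints of established connections by owning PID, which is essentially what the manual Task Manager lookup achieves:

```python
from collections import defaultdict

# Group ESTABLISHED remote endpoints by owning PID, mirroring the
# manual "netstat -ano" plus Task Manager correlation. The sample text
# is invented for illustration; on a live Windows machine feed in real
# output captured via
# subprocess.run(["netstat", "-ano"], capture_output=True, text=True).stdout
def established_by_pid(netstat_output):
    conns = defaultdict(set)
    for line in netstat_output.splitlines():
        fields = line.split()
        # Windows "netstat -ano" rows: proto, local, remote, state, PID
        if len(fields) == 5 and fields[3].upper() == "ESTABLISHED":
            conns[fields[4]].add(fields[2])
    return {pid: sorted(remotes) for pid, remotes in conns.items()}

sample = """\
  TCP    10.0.0.5:49700    157.56.194.1:443     ESTABLISHED     2044
  TCP    10.0.0.5:49701    157.56.194.2:443     ESTABLISHED     2044
  TCP    0.0.0.0:135       0.0.0.0:0            LISTENING       780
"""
print(established_by_pid(sample))
# → {'2044': ['157.56.194.1:443', '157.56.194.2:443']}
```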

vPi vBrownbag EMEA session

Last week I was lucky enough to be able to do a live demonstration of Xtravirt’s vPi, the free and open VMware integrated OS based on Raspbian for Raspberry Pi devices, on the weekly vBrownBag EMEA session…

Last week I was lucky enough to be able to do a live demonstration of Xtravirt's vPi, the free and open VMware integrated OS based on Raspbian for Raspberry Pi devices, on the weekly vBrownBag EMEA session hosted by Gregg Robertson and Arjan Timmerman. In this presentation I cover off the basics around Raspberry Pi, then move on to what vPi has to offer in terms of its default feature set. After this I then carry out the live demonstration showing what it is capable of, and how the various tools, SDKs, and scripting languages can be put to use together to create some impressive automation capability. You can watch the presentation and lab demonstration in the linked Vimeo recording below. http://vimeo.com/71875957   If you would like to find out more about vPi, or download a free copy, click this link.

Windows incorrectly displays classic menu

At a recent engagement I was involved with the design and deployment of a Virtual Desktop Infrastructure (VDI) and hit upon a problem where the MS Windows 7 computers appeared to always be using the ‘Classic’ theme.

At a recent engagement I was involved with the design and deployment of a Virtual Desktop Infrastructure (VDI) and hit upon a problem where the MS Windows 7 computers appeared to always be using the 'Classic' theme. Simply put, users were presented with the old 'Classic' Start menu rather than the Windows 7 one.

The problem

As users logged in they would be presented with the Windows 7 theme momentarily, only for it to be replaced by the 'Classic' theme. A little digging suggested this was happening when users logged in with credentials that traversed domains, with cross-domain policy processing applying a very restrictive policy. Through further investigation I established it was actually random: even users in the domain local to the computer were experiencing the same problem. There were a number of steps I went through before resolving the issue, and below I outline my tests and results. As my Maths teacher always said, "It's always best to show your working out".

The environment

This deployment consisted of:
  • MS Windows 7 32bit clients
  • Citrix XenDesktop 5.6 - Pooled and dedicated desktops
  • AppSense Environment Manager 8.2
  • MS Windows 2008 R2 domain hosting the end point computers known as the computer domain
  • Users are members of one Active Directory Forest with multiple MS Windows 2003 child domains at various functional levels from Windows 2000 to 2003, known as the user domain
  • Two way trust in place between all domains, including shortcut trusts

Group Policy

To begin with I opted to check the most obvious place: the restrictive GPO. One of its settings forces the 'Classic' Start menu, found on the User side of the policy under 'Administrative Templates\Start Menu and Taskbar'. I've read many articles stating that this setting isn't compatible with Windows 7 (even its 'supported on' list doesn't cover Windows 7), so it shouldn't be taking effect; this is covered under 'Changes to legacy Group Policy settings' in http://technet.microsoft.com/en-us/library/ee617162(v=ws.10).aspx
I configured a policy in the computer domain using loopback processing to reverse this setting to 'Disabled'. After many reboots, and having ensured the group policy was synchronised and being applied correctly, the problem still continued. Result: No change

Performance Options

As I delved deeper into this problem I looked toward the visual effects settings found in Performance Options under Advanced System Properties, in particular 'Use visual styles on windows and buttons'. When checked, you receive the updated Windows 7 Start button; left unchecked, the old 'Classic' Start button is shown. Unfortunately, in this deployment not all users had the ability to change this setting. Even if they did, the environment consisted of pooled desktops, which meant they wouldn't always be presented with the same desktop. However, as a test I forced this option within the master image and pushed out an update. Result: No change

Personalization Themes

This led me to think more about the pre-defined Windows themes and how they contain these configurations out of the box. The configuration is managed in Control Panel under Appearance and Personalization. The themes are simply files that set various visuals; they can be found in 'C:\Windows\Resources\Themes', with each theme having its own sub-directory. As I previously mentioned, users are not permitted to adjust or apply themes, and it would not be desirable to allow them to change the settings. I decided the simplest way to apply the configuration would be via a logon script, forcing the theme and the desired effects.

set wshShell = Wscript.CreateObject("WScript.Shell")
wshShell.Run "rundll32.exe %SystemRoot%\system32\shell32.dll,Control_RunDLL %SystemRoot%\system32\desk.cpl desk,@Themes /Action:OpenTheme /file:"""""
WScript.Sleep 10000
WshShell.AppActivate("Desktop Properties")
WshShell.Sendkeys "%{F4}"

The script calls the relevant API to set the theme and then, after 10 seconds, closes the console. This gives the theme enough time to take effect before the desktop is presented. For it to complete properly, the theme needs to be in the location detailed in the script above. AppSense Environment Manager formed part of this deployment, so I was able to use its scripting tools to execute the script when the user logged on. Result: It worked; however, it increased the logon time by 10 seconds. Watching the system run through its configuration, displaying a 'Please Wait' message while it applied the theme, wouldn't present a positive starting point for the user experience. So while this worked, it wasn't ideal.

Registry Keys

The registry is where I next chose to concentrate the bulk of my efforts and there were three areas I focused upon.

Set ‘Use visual styles on windows and buttons’

As detailed earlier in this article, I tried setting 'Use visual styles on windows and buttons' under Performance Options and it failed, but I was adamant this was the correct setting, so I looked at the registry key behind it:


Within this key there should be a string value of ThemeActive, set to 1 to enable the setting. However, the registry key alone is not enough: the API needs to be called to make the change, which in turn requires a reboot to take effect. I knew this wouldn't work on pooled desktops but wanted to see the outcome on dedicated desktops; alas, this didn't work. Result: No change
I then considered instigating a desktop refresh to try and force the setting to apply using a scripted action:

WshShell.Run "%windir%\System32\RUNDLL32.EXE user32.dll,UpdatePerUserSystemParameters", 1, True

I threw the key creation and the above line of script into AppSense and rebooted a client machine a number of times. Result: No change

Change Visual Effects through the Registry

A colleague pointed me in the direction of where the visual effects are configured via a hex value in the Desktop registry settings. I’m not going to go into the detail of how this key was configured as there is an excellent post on it here. The registry location is:

HKEY_CURRENT_USER\Control Panel\Desktop

The binary key in question is UserPreferencesMask and needs to be altered to include the correct hex value to configure custom performance options. Once again a restart is required, and AppSense was configured to write this registry key at logon so that the relevant API should be called. I could see the registry key was being applied but it made no change to the appearance. I checked the ‘Default User’ profile configured on the image to ensure it was configured as desired, and it had the default hex value of 9E3E03. Result: No change
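For reference, a change of this kind can be expressed as a registry export fragment. This is a sketch only: the full UserPreferencesMask is a multi-byte bitfield and the exact bytes depend on which effects you enable, so export your own value from a desktop configured the way you want rather than copying the bytes below.

```reg
Windows Registry Editor Version 5.00

; Illustrative bytes only - each bit toggles a different visual effect,
; so capture the real value from a correctly configured desktop
[HKEY_CURRENT_USER\Control Panel\Desktop]
"UserPreferencesMask"=hex:9e,3e,03,80,12,00,00,00
```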

Themes Registry Key

With my previous experience of VDI and visual effects I knew this could be applied in the registry. It then occurred to me the best course of action would be to configure a vanilla desktop with the theme I wanted and export the relevant registry keys. Once exported, I could import these into AppSense and have the system set them at logon. However, there are a lot of settings and I really wanted to keep the number of options to be configured to a minimum. I knew the Windows Aero Theme had the majority of the settings I required but wanted to keep a couple of features to provide users with the feel of Windows 7, e.g. ‘Font Smoothing’. Any features that would be detrimental to performance, e.g. ‘Drag Full Windows’, should be configured as disabled. With all these thoughts in mind I set about gathering the registry keys that were required. This involved using a registry comparison tool, such as Regshot, to compare the registry before and after changing the desktop appearance. The registry settings I required focused on three main areas:

HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced
HKCU\Software\Microsoft\Windows\CurrentVersion\ThemeManager
HKCU\Software\Microsoft\Windows\CurrentVersion\Themes
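Two of the individual effects called out above are also exposed as simple values under the Desktop key, which is a convenient way to sanity-check what a theme has applied. A sketch with illustrative values; "2" enables font smoothing, "0" disables dragging full window contents:

```reg
Windows Registry Editor Version 5.00

; Keep font smoothing for a Windows 7 feel; disable full-window drag
; to reduce redraw load on VDI desktops (values are illustrative)
[HKEY_CURRENT_USER\Control Panel\Desktop]
"FontSmoothing"="2"
"DragFullWindows"="0"
```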

In addition to the Windows Aero Theme I left the Basic Theme in place. I would recommend this because it allows Windows 7 to downscale desktop settings if there is a problem with visual effects, and allows easy deployment of a basic theme if one is required. Here’s a full extract of the registry keys: themesxls. I’d previously mentioned that I’d managed to get the AppSense actions to run under the System account; this was to ensure consistency and to stop any user restrictions hindering the settings. Attached is an output of the AppSense EM configuration (desktoptheme.aemp) which can be easily imported into another policy; just open it in an EM console and copy/paste to a production policy. Result: settings configured as expected, and a worthwhile dive into desktop appearance registry settings.


Although I managed to provide a workable and consistent solution, my investigation and evidence highlighted that the fault was caused by an incorrectly configured base image and/or an incorrectly configured default user profile. Even though I checked both of these and ensured they were configured as expected, I still firmly believe one of them was the culprit.

Windows Azure licensing, what can I move?

With the IT industry pressurising organisations to move their technology services into some form of cloud service, both technical and software licensing challenges present themselves….

Moving a Server from On-Premise to Windows Azure™

With the IT industry pressurising organisations to move their technology services into some form of cloud service, both technical and software licensing challenges present themselves. In this blog post I’m going to cover the considerations needed, and provoke thoughts on this topic, specifically using public cloud Infrastructure as a Service (IaaS) offerings with Microsoft Windows Azure™ as the reference example. It’s worth highlighting that these considerations will also apply to anyone running Microsoft Service Provider License Agreements.

I want to move an on premise workload to a Cloud Service Provider

To move a server workload to a cloud service provider you must ensure the associated operating system and application license(s) include “License Mobility” or have mobility features built into the license, and importantly you must have active Software Assurance.

What’s covered by License Mobility through Software Assurance?

At the time of publishing this blog post I reviewed the Microsoft Product Use Rights (MPUR) document; it provides a great level of detail around the License Mobility aspects and I’ve extracted the following key points relevant for this discussion:
  • Any aspects of Microsoft product licensing with respect to License Mobility require that products are licensed and their Software Assurance is up to date and active.
  • All products that are currently eligible for “License Mobility within Server Farms” and covered by Software Assurance are eligible for License Mobility.
  • These specifically defined products are also eligible for License Mobility through Software Assurance alone:
    • Microsoft SQL Server™ Standard – Per Processor and Server/CAL (processor and server licenses only)
    • Microsoft System Center™ – all Server Management Licenses (MLs), including SMSE and SMSD
    • Microsoft Dynamics™ ERP products are not available through Microsoft Volume Licensing and are not activated online but have mobility rules that allow for similar use as License Mobility through Software Assurance when deploying in shared environments.
    • Windows Server™, the Windows® client operating system, and desktop application products are not included in License Mobility through Software Assurance.
    • Customers can exercise License Mobility through Software Assurance rights only with Authorised Mobility Partners; the list of Authorised Mobility Partners is available here.
After reviewing the MPUR document I’ve included the more popular operating systems and applications here as having License Mobility incorporated in their license:
  • Windows Server™ 2012 Standard
  • Microsoft Windows Server™ 2012 Datacenter
  • Microsoft VEXCEL Server
  • Microsoft Exchange™ Server 2013 Enterprise
  • Microsoft Exchange™ Server 2013 Standard
  • Microsoft Forefront™ Identity Manager 2010 R2
  • Microsoft Forefront™ Unified Access Gateway 2010
  • Microsoft Lync™ Server 2013
  • Microsoft Dynamics™ AX 2012 R2
  • Microsoft Dynamics™ CRM 2011 Server
  • Microsoft Office™ Audit and Control Management Server 2013
  • Microsoft Project™ Server 2013
  • Microsoft SharePoint Server 2013
  • Microsoft SQL Server™ 2012 Business Intelligence
  • Microsoft Visual Studio™ Team Foundation Server 2012 with SQL Server 2012 Technology
  • Microsoft BizTalk™ Server 2013 Enterprise
  • Microsoft BizTalk™ Server 2013 Standard
  • Microsoft SQL Server™ 2012 Enterprise
  • Microsoft Data Protection Manager™ 2010 for System Center Essentials
  • Microsoft System Center™ Essentials 2010
  • Microsoft System Center™ Essentials 2010 with SQL Server 2008 Technology
  • Microsoft Groove™ Server 2010
I highly recommend you refer to the Microsoft Product Use Rights list to confirm the official statement prior to moving your operating system and/or application to a public cloud. Where 3rd party licensing is part of the application, ensure that the vendor provides an official statement of support too.

What can’t I move?

While moving workloads between on-premise and cloud services is technically not too complicated, it does have licensing implications, as I touched on above. While the list below is by no means definitive, it’s worth noting a few items that aren’t completely supported:
  • Microsoft Remote Desktop Services (RDS)
    • Microsoft only permits this in Remote Administration Mode, as the RDS CALs are not covered under License Mobility and cannot be allocated within Azure.
  • Citrix
    • XenApp and XenDesktop rely on RDS client access licenses.
  • VDI
    • RDS Client access licenses are not eligible.
  • Windows Client
    • RDS Client access licenses are not eligible.
    • Windows Client is not covered for License Mobility.
  • Any Microsoft product that does not have Software Assurance and does not include License Mobility in the Product Use Rights.


If you thought you could just pick up your current environment and drop it in the cloud, you may find that it’s not just technical issues that require consideration. I’ve touched on only a few areas above, and you’ll notice they have nothing to do with technology, choosing a service provider or an industry-standard hypervisor, but rather organisational readiness in terms of product version, product life-cycle and product licensing. At Xtravirt we assist organisations with these types of challenges, and no two projects are the same, so please contact us to learn how we can assist and support your journey from on-premise into the cloud.

Migrating Services to Microsoft Public Cloud

Introduction This blog post has been developed to give some insight into the technical aspects of migrating services to a public Microsoft cloud solution, and how to bring it back on premise. The focus in this article is on the … [More]


This blog post has been developed to give some insight into the technical aspects of migrating services to a public Microsoft cloud solution, and how to bring them back on premise. The focus in this article is on the Windows Azure™ Platform.

Source Architecture

For the purposes of this blog I am assuming that the source architecture comprises physical or virtual servers running a Microsoft Windows Server™ OS.


Aligning the IT strategy with the business strategy is key to providing IT services that meet the demands of the business. The use of Enterprise Architecture tools and methodologies provides a solid foundation for mapping out your target architectures.

Financial Analysis

It is important to understand the cost models involved in both the source and destination architectures. Financial modelling should be conducted on a per-service basis. Both CAPEX and OPEX models are typically explored. Everything from staff costs, training, power, cooling, hardware, software and a wide range of other facilities needs to be included to understand the current costs. Modelling likely usage costs for public cloud based resources is key to understanding whether moving a service to the public cloud makes good business sense.

Licensing in the Public Cloud Domain

It would be wrong to assume that what you are licensed for on-premise will carry over to the cloud. Any service that you want to move will need to be licensed for License Mobility, and the cloud provider will need to be either Microsoft or an authorised License Mobility partner. There are a number of on-premise systems that you can’t license for cloud usage; if an on-premise service contains one of these products then that is an immediate no-go for moving that particular service to the public cloud.


Depending upon the scale and complexity of the source environment, a combination of automated and manual discovery and assessment techniques may be required. It’s highly recommended to run the Microsoft Assessment and Planning (MAP) Toolkit to analyse the environment. (Note: While MAP provides a good level of information, it’s recommended that additional steps are taken to qualify licensing violations.)


To be able to fully utilise a public cloud and have confidence in its capability to deliver the services you need, connectivity is key. Redundant private and Internet based links are recommended. In addition, it is highly recommended to establish a site-to-site virtual private network. For this you will require a supported device (Microsoft Windows Server™ 2012 Remote Access Services, or a Cisco or Juniper device) and a free static public IP address.

Use Cases

One of the key points I would raise about IT in general is that there is rarely a single solution that fits all. The main areas where I would initially look at using Windows Azure™, from an Infrastructure-as-a-Service (IaaS) point of view, are the following:
  • Test and Development
  • Public Facing Systems that require agility and have localised data
  • Specific Services – e.g. Microsoft SharePoint™, Active Directory Domain Services, etc...
  • Disaster Recovery and Business Continuity Services
Microsoft’s IaaS offering on Windows Azure™ is in continuous development and can be considered for critical line-of-business systems. Transitioning workloads to Windows Azure™ should be staged by environment, as with any workload, server and/or data centre transformation initiative.

Technical Feasibility

Once we have established that the service is suitable from a business, cost and licensing point of view, we must also establish whether the current service is in a supported configuration to be moved.

Supported Source Operating Systems

  • Windows Server™ 2003 SP2 and above (for a full list please click here)

What about Linux?

Migrating Linux services using a 3rd party tool such as PlateSpin Migrate is supported by Microsoft. Alternatively, click here for a link to a (Microsoft) unsupported method of migrating Linux services.

Technical Checklist

  • Windows Azure™ supports Windows Server™ 2008 R2 or Windows Server™ 2012.
  • Before moving a server into the public cloud ensure only a single network card on the virtual machine exists and is set to use DHCP.
  • The maximum data disk size in Windows Azure™ is 999GB.
  • Where possible make sure your on-premise disks are VHD format not VHDX otherwise they will need to be converted before storing.
  • The OS disk in Windows Azure™ has a maximum supported size of 127GB, article here.
  • Template virtual machines must be SYSPREP’d prior to upload.
  • If the on-premise application uses the drive letter D:, it needs to be re-assigned, as Windows Azure™ assigns this drive for non-persistent storage.
  • You cannot migrate virtual machines with snapshots.
  • If you’re planning on moving domain controllers or creating new ones, the NTDS and SYSVOL directories need to be placed onto a Data drive (not D:) and Windows Azure™ Disk Caching must be disabled (further reading here)
  • Using System Center™ Application Controller is the easiest method.
  • As with any virtualisation of service the same rules apply, any physical dependency that can’t be virtualised will strike the service off the list (Dongles, HBAs, Smart Card Readers etc…).
  • Consider your page file locations – within Azure the default location is the non-persistent D: volume.
  • Remote Desktop must be enabled and firewall ports must be open.
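One checklist item above, converting VHDX disks to the VHD format, can be done with the Hyper-V PowerShell module’s Convert-VHD cmdlet on a host with the Hyper-V role installed. A sketch only; the paths are hypothetical, and -VHDType Fixed matches Windows Azure™’s requirement for fixed-format disks:

```powershell
# Convert a VHDX to the fixed-format VHD that Windows Azure expects
# (paths are hypothetical; requires the Hyper-V PowerShell module)
Convert-VHD -Path 'D:\VHDs\server01.vhdx' `
            -DestinationPath 'D:\VHDs\server01.vhd' `
            -VHDType Fixed
```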

Technical Process

In-place service migration

There are two main methods for in-place migration of an on-premise service to the public cloud:
  1. Utilise System Center™ Virtual Machine Manager to import the service into a virtual workload, move the data into the Virtual Machine Manager library, then utilise System Center™ Application Controller to copy the data to Windows Azure™.
  2. Convert the source service using Disk2vhd (or another 3rd party tool), then upload using PowerShell 3.0/SCVMM/SCAC/the CSUpload command-line tool (further reading)
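The upload step in the second method can be sketched with the Windows Azure PowerShell module’s Add-AzureVhd cmdlet, which pushes a local VHD into a storage account blob. The storage account, container and file names below are hypothetical:

```powershell
# Upload a locally converted VHD into a Windows Azure storage blob
# (URL and path are hypothetical; requires the Azure PowerShell module
# and an active subscription already selected in the session)
Add-AzureVhd -LocalFilePath 'D:\VHDs\server01.vhd' `
             -Destination 'https://mystorageacct.blob.core.windows.net/vhds/server01.vhd'
```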


Pros:
  • Reduced time to transition
  • Exact like-for-like copy
  • Agile deployment
  • Low cost
Cons:
  • Legacy data/configuration will be copied
  • System configuration may not be suitable for the public cloud, e.g. disk size


Side-by-Side Service Migration

Outside of the in-place migration the other method is to build out the service architecture onto a new platform and migrate the service data/configuration and connectivity.


Pros:
  • Clean environment
  • Only required data is copied across
  • Running two systems side-by-side can allow for a shorter service outage window
Cons:
  • Possible higher cost due to running two systems side-by-side
  • Potentially greater complexity
  • Possible licensing implications


End to End Process for converting a physical server and moving to Windows Azure™

The steps in summary are captured in the ‘windowsazuresteps’ timeline, which provides a high level representation of the technical steps taken if a service has been deemed suitable for migration.

What if I want to go the other way?

Moving a virtual machine from Windows Azure™ to on-premise requires some consideration, but is achievable. You can’t directly move the virtual machine; you can, however, download the Virtual Hard Drives (VHDs) contained within the blobs in Windows Azure™ and attach them to newly created on-premise virtual machines. You will either need to know some PowerShell 3.0 (the Save-AzureVhd cmdlet) or use a 3rd party tool to achieve this. Another way would be to use Windows Server™ Backup to back up the system and data volumes to another data volume, then copy that data down; however, this is rather convoluted, and using the PowerShell cmdlet is much simpler.
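The download route can be sketched with Save-AzureVhd from the same Azure PowerShell module; the blob URL and local path below are hypothetical:

```powershell
# Download a VHD blob from Windows Azure to local disk so it can be
# attached to a new on-premise virtual machine (names are hypothetical)
Save-AzureVhd -Source 'https://mystorageacct.blob.core.windows.net/vhds/server01.vhd' `
              -LocalFilePath 'D:\VHDs\server01.vhd'
```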

Making life easier moving forward

One of the cool features of System Center™ Virtual Machine Manager 2012 SP1 is the ability to create capability profiles. Out of the box three are provided:
  • Microsoft Hyper-V™
  • Citrix XenServer™
  • VMware ESX Server™
If you want to include governance within a hybrid cloud it would be wise to create a capability profile for your cloud provider. The following is an example System Center™ Virtual Machine Manager profile that will ensure all virtual machines fit within the Windows Azure™ specification.
$capabilityProfile = Get-SCCapabilityProfile -Name "Windows Azure"
Set-SCCapabilityProfile -CapabilityProfile $capabilityProfile `
    -Description "This is a custom profile to ensure Windows Azure compatibility is maintained" `
    -CPUCountMinimum 1 -CPUCountMaximum 64 `
    -CPUCompatibilityModeValueCanChange $true -CPUCompatibilityModeValue $false `
    -MemoryMBMinimum 768 -MemoryMBMaximum 14336 `
    -DynamicMemoryValueCanChange $false -DynamicMemoryValue $false `
    -VirtualDVDDriveCountMinimum 0 -VirtualDVDDriveCountMaximum 0 `
    -SharedDVDImageFileValueCanChange $true -SharedDVDImageFileValue $false `
    -VirtualHardDiskCountMinimum 0 -VirtualHardDiskCountMaximum 16 `
    -VirtualHardDiskSizeMBMinimum 0 -VirtualHardDiskSizeMBMaximum 1048576 `
    -FixedVirtualHardDiskValue $true -FixedVirtualHardDiskValueCanChange $true `
    -DynamicVirtualHardDiskValue $false -DynamicVirtualHardDiskValueCanChange $false `
    -DifferencingVirtualHardDiskValue $false -DifferencingVirtualHardDiskValueCanChange $false `
    -VirtualNetworkAdapterCountMinimum 1 -VirtualNetworkAdapterCountMaximum 1 `
    -NetworkOptimizationValueCanChange $true -NetworkOptimizationValue $false `
    -VMHighlyAvailableValueCanChange $true -VMHighlyAvailableValue $false
Having the capability mapped between Azure and your private cloud platform will provide a greater degree of flexibility when moving workloads between the two clouds. To streamline this process, hardware templates corresponding to the Windows Azure™ VM options should also be created. For example, if your test and development functions are utilising Windows Azure™, when a system is ready to move into production the virtual machine can be downloaded from Azure and placed into the private cloud.


“To get through the hardest journey we need take only one step at a time, but we must keep on stepping.” As this Chinese proverb states, the main aim here is to keep working on continual service improvement. As with many transformation activities, not everything can ascend to a modern state: some services will need to remain, some will be due for retirement, others will need to be upgraded and a number won’t make sense to move. Cloud IaaS features provide many advantages over CAPEX-heavy data centre builds, but it’s not necessarily a click of the heels to get you there. Hopefully this article has helped shine some light on areas that were hidden or confusing.

Upgrading from VMware vSphere 4.1 to 5.1

During a recent client engagement I was presented with the opportunity to upgrade a client’s VMware environment from vSphere 4.1 to vSphere 5.1, including their SRM estate, also from v4.1 to 5.1.

During a recent client engagement I was presented with the opportunity to upgrade a client’s VMware environment from vSphere 4.1 to vSphere 5.1, including their SRM estate, also from v4.1 to 5.1. As part of the due diligence of planning I collected and compiled a fair amount of resource, which can be found in a separate blog posting here. While the information in that blog posting is very helpful, I wanted to build this article around the experience rather than make it a step by step update guide. For those of you who are not aware how the new vSphere 5.1 management infrastructure is laid out, VMware has introduced two new core components in this release: Single Sign On and the VMware Web Client. If you’ve been involved with VMware products in recent years you’ll recall the Web Client was available previously, but it has been substantially upgraded (hence why I’m referring to it as new) and is likely to be the only access method in the next version of vSphere.

The Upgrade Process

As part of my initial discovery I established there were 4 individual environments requiring upgrade; these were consistent in their build revision at vSphere 4.1 Update 2. The steps I followed:
  1. Upgrade vCenter 4.1 to vCenter 5.0 on the primary site. This is quite straight forward and I previously documented the process on my personal blog here.
  2. Upgrade SRM 4.1 to SRM 5.0 on the primary site. The upgrade is extremely simple and the only advice I would give is to ensure you take a database backup prior to the upgrade.
  3. Update the SRA software to the SRM 5.0 version for the primary site.
  4. Upgrade vCenter 4.1 to 5.0 on the recovery site.
  5. Upgrade SRM 4.1 to 5.0 on the recovery site.
  6. Update the SRA software to SRM 5.0 for the recovery site.
  7. Use the Test Failover feature within SRM to ensure all the components are communicating and functioning correctly between the two sites.
  8. Install the SSO service on the primary site. In this deployment a separate virtual machine was dedicated to cover role separation and allow for future growth.
  9. Make a note of the Lookup Service URL as this will be needed in the next steps.
  10. Upgrade the Inventory Service from version 5.0 to 5.1 on the primary site, this is an extremely straight forward process and you will be asked to insert the Lookup Service URL, mentioned in the previous step.
  11. Upgrade vCenter from 5.0 to 5.1 on the primary site. Again, the upgrade is very straight forward and you will be requested to provide the Lookup Service URL during the installation.
  12. Upgrade SRM from 5.0 to 5.1 on the primary site.
  13. Update the SRA software to SRM 5.1 for the primary site.
  14. Install the SSO service on the recovery site.
  15. Upgrade the Inventory Service from version 5.0 to 5.1 on the recovery site.
  16. Upgrade vCenter 5.0 to 5.1 on the recovery site.
  17. Upgrade SRM 5.0 to 5.1 on the recovery site.
  18. Update the SRA software to SRM 5.1 for the recovery site.
  19. Use the Test Failover feature within SRM to ensure all the components are communicating and functioning correctly between the two sites.
  20. Install the Web Client software on the Primary and Recovery sites.
At the time of completing this engagement the Web Client was not able to manage the SRM component so the last step was more in readiness for future compatibility.
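Throughout a staged process like the one above it’s worth confirming build revisions at each step. A PowerCLI sketch (the vCenter name is hypothetical, and credentials are prompted for at connection time):

```powershell
# Report vCenter and ESXi host build numbers before and after each
# upgrade stage (server name is hypothetical)
Connect-VIServer -Server 'vc.example.com'
$global:DefaultVIServer | Select-Object Name, Version, Build
Get-VMHost | Select-Object Name, Version, Build
```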

Why 4.1, to 5.0 then 5.1?

The primary reason for needing to stagger the upgrade process from 4.1 to 5.0 and then 5.0 to 5.1 was SRM. It’s possible to upgrade vCenter straight from 4.1 to 5.1, but doing so prevents the SRM component being upgraded to 5.0 or 5.1. This limitation is recorded in the SRM 5.1 Release Notes; the applicable excerpt is below:

Upgrade an Existing SRM 4.1.x Installation to SRM 5.1

Upgrade versions of SRM earlier than 5.0 to SRM 5.0.x before you upgrade to SRM 5.1.

“IMPORTANT: Upgrading vCenter Server directly from 4.1.x to 5.1 is a supported upgrade path. However, upgrading SRM directly from 4.1.x to 5.1 is not a supported upgrade path. When upgrading a vCenter Server 4.1.x instance that includes an SRM 4.1.x installation, you must upgrade vCenter Server to version 5.0 or 5.0 u1 before you upgrade SRM to 5.0 or 5.0.1. If you upgrade vCenter Server from 4.1.x to 5.1 directly, when you attempt to upgrade SRM from 4.1.x to 5.0 or 5.0.1, the SRM upgrade fails. SRM 5.0.x cannot connect to a vCenter Server 5.1 instance.”


In this engagement I found the upgrade process relatively straightforward, and I was fortunate that I did not have to utilise internally or externally signed certificates (apologies to those of you who have to use this process!). The success of the upgrade was very much down to the planning, and I cannot emphasise enough how much time should be spent investigating the current environment, the build revisions, its dependencies and which components depend on each other. VMware update their release notes with Knowledge Base articles as ‘known issues’ are discovered, so always check the text on the website rather than the bundled versions within a download. If you would like to talk to us about assisting your organisation with VMware vSphere 5.1 or VMware vCloud 5.1 based solutions, please contact us.

Atlantis ILIO perf tips & migration script

In this blog I’m not going to be revealing performance metrics and comparing it to SANs at the same price point; as that has been done by many others, however it is possible to get approximately 30-35k IOPs when this

Recently during a customer engagement I was involved with deploying a technology from one of our partners, Atlantis Computing. Their diskless storage appliance, called ILIO, presents the RAM from a virtual machine to the host as an NFS Datastore. One aspect of the project required a little script intervention to assist with the migration of the ILIO controller and supporting virtual machines, which I wanted to share. Now in this blog I’m not going to be revealing performance metrics and comparing it to SANs at the same price point; as that has been done by many others including the venerable Brian Madden in this blog post, however it is possible to get approximately 30-35k IOPs when this storage acceleration is supported by fast DDR3 RAM – this isn’t to be sniffed at.

Configure for best Performance

Before jumping straight to the script I wanted to highlight a few items which can easily be overlooked but prove incredibly detrimental to the overall performance of the ILIO appliance if not correctly configured. Out of the box the ILIO controller needs a few small tweaks to enable it to reach those speeds but before you open the ILIO Center management application, check your physical and virtual hardware configurations first. These are by no means definitive but core items to consider:
  • In the BIOS of the host hardware, make sure that Power Management is set to “Max performance”
  • Change the ILIO virtual machine appliance NICs to VMXnet3. A large performance increase can be observed using these over the E1000 NICs
  • Set a CPU reservation for 2x host CPU speed
  • Set a Memory reservation for the whole amount of memory presented to ILIO
  • Set CPU Hyperthreaded Core Sharing to “None”
These settings ensure that ILIO will always have the resources it requires without any concern for contention, delivering a fast local NFS Datastore that will outperform anything else at a similar price point. However, notice that word ‘local’. ILIO can theoretically be configured as a top-of-rack storage array, but Atlantis advise this is no longer a supported deployment method, and you will not see anywhere near the performance maximums without using multiple 10Gb/s NICs; in this deployment the environment was configured to use a 1Gb/s network. Locally presented storage will therefore be used, but this presents a couple of concerns that must be understood and mitigated:
  • Disaster Recovery – how do you recover from a host failure?
  • Maintenance – how do you perform a host upgrade with minimum downtime?

Disaster Recovery

Locating the ILIO Controller and virtual machines on shared storage is imperative to mitigate host failure and take full advantage of VMware’s HA (High Availability) feature. Luckily, as we are using “Diskless” ILIO, the shared storage doesn’t need to be particularly fast, as ILIO will only read from it when starting up and restoring a SAN snapshot. Once invoked, VMware HA will only return the running virtual machines (VMs) to service as at the point when the host failed. As it’s likely you’ll only have a subset of your total VMware View desktop estate assigned to the failed host, you’d observe a number of orphaned VMs within VMware vCenter and View Manager, but at least there’d be enough VMs immediately available for those users who were already logged on to re-connect. Dealing with the migration of orphaned VMs is discussed further down. At this point you may be wondering just how you recover data that sits on non-persistent storage. The answer is a feature called SnapClone, which is similar to a SAN snapshot. To use this feature you require a disk to be attached to the ILIO appliance, either VMDK or vRDM. The idea is that you deploy all the VMs you require (or are licensed for), shut them all down and perform a backup; this writes a copy of the data kept in RAM to the disk. When the appliance starts, it copies this data from the disk back to memory. Using this feature means that you don’t have to manually clean up your ADAM database and hosts every time ILIO is shut down.


By following Atlantis recommendations, you will have assigned the second NIC of the ILIO appliance to an internal vSwitch. This means that if you want to put the ILIO controller and associated VMs onto another host, there is a fair amount of work required. Having run through this several times and deciding it would be much more efficient to just script it, I have done just that. This script will unprotect the replica, vMotion the ILIO controller, de-register and re-register the VMs on the new host, and finally clean up after itself. Before this script can be used, you will need to size your ILIO controllers so that you can comfortably run two of them on a single host. Atlantis provide a calculator with the ILIO deployment tool to help you estimate the storage requirements of your desktops; however, with the use of the ‘Floating Pools’, ‘Redirected Profiles’ and ‘Refresh on Logoff’ features, you can keep the storage requirements down even further. To allow the ILIO appliance to automatically connect to an NFS Datastore on any host within the VMware HA cluster, each appliance will need to be manually migrated and the NFS mount performed. However, if you would prefer not to have disconnected Datastores cluttering up your host(s), then the script below could be amended to dismount and mount the NFS Datastore(s).

The script

#Load VMware PowerCLI command set

if(-not (Get-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue))
{
   Add-PSSnapin VMware.VimAutomation.Core
}



#Link my functions

. .\Register-VMX.ps1

. .\ConvertFrom-SecureToPlain.ps1



#---------------------------- CHANGE THE SETTINGS BELOW --------------------------------




$VC = "vc.virtlab.co.uk"

$ILIO = "ILIO01"

$OrgHost = "esxi02.virtlab.co.uk"

$DestHost = "esxi01.virtlab.co.uk"




#----------------------------- DO NOT CHANGE BELOW HERE --------------------------------



Write-Host ' '

Write-Host '---------------------------------------------------------------------'

Write-Host '             Have you Disabled the VMware View Pool ?'

Write-Host '---------------------------------------------------------------------'

Write-Host ' '


$yes = New-Object System.Management.Automation.Host.ChoiceDescription "&Yes",""

$no = New-Object System.Management.Automation.Host.ChoiceDescription "&No",""

$choices = [System.Management.Automation.Host.ChoiceDescription[]]($yes,$no)

$caption = "Warning!"

$message = "Have you disabled the View Pool?"

$result = $Host.UI.PromptForChoice($caption,$message,$choices,0)


if($result -eq 1) {

       Write-Host "Please Disable the Pool and run the script again"

       exit
}




$DomUser = Read-Host 'vCenter Administrator user name? (e.g. user@dom.com)'

$sDomPass = Read-Host 'vCenter Administrator password?' -AsSecureString

$DomPass = ConvertFrom-SecureToPlain($sDomPass)

$sHostPass = Read-Host 'ESXi host root password?' -AsSecureString

$HostPass = ConvertFrom-SecureToPlain($sHostPass)

$sDsnPass = Read-Host 'DSN User VDISAP_vCOMPOSER_USER Password?' -AsSecureString

$DsnPass = ConvertFrom-SecureToPlain($sDsnPass)


Write-Host ' '

Write-Host ' '


#NOTE: these substring offsets assume the naming convention used in the
#original environment; as written they require a longer name than the example
#$ILIO value above, so adjust them to match your own ILIO controller names.
$ILIONFS = $ILIO.substring(7,8) +"-NFS"

$ViewVM = "View" + $ILIO.substring(14,1) + "*"


$exe = "c:\Program Files (x86)\VMware\VMware View Composer\SviConfig.exe"

&$exe -Operation=UnprotectEntity -DsnName=VDISAP_vCOMPOSER_DSN -DbUsername=VDISAP_vCOMPOSER_USER "-DbPassword=$DsnPass" "-VcUrl=https://$vc/sdk" "-VcUsername=$DomUser" "-VcPassword=$DomPass" -InventoryPath="/VDISAP-DCTR/vm/VMwareViewComposerReplicaFolder" -Recursive=True


Write-Host ' '

Write-Host '-----------------------------'

Write-Host 'Connecting to vCenter Server : '$VC

Write-Host '-----------------------------'


#Connect to vCenter

Connect-VIServer $VC -user $DomUser -password $DomPass


Write-Host '-----------------------------------------------'

Write-Host 'Getting a list of Powered On VMs to be Shutdown'

Write-Host '-----------------------------------------------'


#Get List of Powered on VMs in Remote Pool to be shutdown

$VMs = get-vm | Where-Object{$_.PowerState -eq "PoweredOn" -and $_.name -like $ViewVM}


$VMsCount = $VMs.Count


#Shutdown VMs in Remote Pool

if ($VMsCount -gt 0) {
    foreach ($vm in $VMs ){

        Write-Host 'Graceful Shutdown of VM: '$vm.name

        Shutdown-VMGuest -VM $vm.name -confirm:$false
    }
}




#Wait 30 seconds for VMs to Shutdown

Start-Sleep -s 30


Write-Host '--------------------------'

Write-Host 'Disconnecting ILIO-NFS NIC'

Write-Host '--------------------------'


#Disconnect ILIO-NFS NIC

Get-VM -Name $ILIO | Get-NetworkAdapter | select -last 1 | Set-NetworkAdapter -Connected:$false -Confirm:$false


Write-Host '-------------------------------------------------'

Write-Host 'Migrating ILIO Controller to the Destination ESXi: '$DestHost

Write-Host '-------------------------------------------------'


#Migrate ILIO to $DestHost

Get-VM -Name $ILIO | Move-VM -Destination $DestHost


Write-Host '--------------------------'

Write-Host 'Re-connecting ILIO-NFS NIC'

Write-Host '--------------------------'


#Reconnect ILIO-NFS NIC

Get-VM -Name $ILIO | Get-NetworkAdapter | select -last 1 | Set-NetworkAdapter -Connected:$true -Confirm:$false


#Wait 30 seconds for ILIO NIC to connect

Start-Sleep -s 30


#Rescan Datastores

Get-Cluster | Get-VMHost | Get-VMHostStorage -RescanAllHBA


Write-Host '---------------------------------'

Write-Host 'Disconnecting from vCenter Server: '$VC

Write-Host '---------------------------------'


#Disconnect from vCenter

Disconnect-VIServer -Server * -Force -Confirm:$false


Write-Host '------------------------------'

Write-Host 'Connecting to Originating ESXi: '$OrgHost

Write-Host '------------------------------'


#Connect to Org Host

Connect-VIServer $OrgHost -user root -password $HostPass


#Get Inaccessible Virtual Machines

$VMs = Get-View -ViewType VirtualMachine | ?{$_.Runtime.ConnectionState -eq "orphaned" -or $_.Runtime.ConnectionState -eq "inaccessible"} | select name,@{Name="GuestConnectionState";E={$_.Runtime.ConnectionState}}


Write-Host '-----------------------------------------------'

Write-Host 'Removing Inaccessible VMs from Originating ESXi'

Write-Host '-----------------------------------------------'


#Remove VMs from Inventory

foreach ($vm in $VMs ){

                Remove-VM -VM $vm.name -confirm:$false
}



Write-Host '-----------------------------------'

Write-Host 'Disconnecting from Originating ESXi: '$OrgHost

Write-Host '-----------------------------------'


#Disconnect from Org Host

Disconnect-VIServer -Server * -Force -Confirm:$false


Write-Host '------------------------------'

Write-Host 'Connecting to Destination ESXi: '$DestHost

Write-Host '------------------------------'


#Connect to Destination Host

Connect-VIServer $DestHost -user root -password $HostPass


Write-Host '-------------------------------------'

Write-Host 'Registering VMs onto Destination ESXi'

Write-Host '-------------------------------------'


#Register VMs from Datastore

Register-VMX -dsName $ILIONFS -CheckNFS:$true


Write-Host '-----------------------------------'

Write-Host 'Disconnecting from Destination ESXi: '$DestHost

Write-Host '-----------------------------------'


#Disconnect from Destination Host

Disconnect-VIServer -Server * -Force -Confirm:$false


Write-Host '----------------------------'

Write-Host 'Connecting to vCenter Server: '$VC

Write-Host '----------------------------'


#Connect to vCenter

Connect-VIServer $VC -user $DomUser -password $DomPass


#Get List of Replica VMs

$ReplicaVMs = get-vm | Where-Object{$_.name -like "replica*"}


#Move Replica VMs to the Protected View Composer Folder

foreach ($vm in $ReplicaVMs ){

                Move-VM -VM $vm.name -Destination "VMwareViewComposerReplicaFolder"
}



Write-Host '---------------------------------'

Write-Host 'Disconnecting from vCenter Server: '$VC

Write-Host '---------------------------------'

Write-Host ' '


#Disconnect from vCenter

Disconnect-VIServer -Server * -Force -Confirm:$false


Write-Host '----------------------'

Write-Host 'Re-Protecting Replicas'

Write-Host '----------------------'


#Protect Replicas

$exe = "c:\Program Files (x86)\VMware\VMware View Composer\SviConfig.exe"

&$exe -Operation=ProtectEntity -DsnName=VDISAP_vCOMPOSER_DSN -DbUsername=VDISAP_vCOMPOSER_USER "-DbPassword=$DsnPass" "-VcUrl=https://$vc/sdk" "-VcUsername=$DomUser" "-VcPassword=$DomPass" -InventoryPath="/VDISAP-DCTR/vm/VMwareViewComposerReplicaFolder" -Recursive=True


Write-Host ' '

Write-Host ' '

Write-Host '-----------------------------------------------------------------------------------------'

Write-Host 'End of Script run.'

Write-Host 'Check everything has completed successfully and make sure to Enable the VMware View Pool.'

Write-Host '-----------------------------------------------------------------------------------------'



If you’d like any assistance with an Atlantis ILIO project, please contact us, and we’d be more than happy to assist you.

Horizon Mirage: Adventures with App Layers

I’m often accused of being easily pleased by shiny buttons and new features, but the folks at VMware’s Horizon Mirage team have added some genuinely nice extras to the latest flavour of Mirage, now called Horizon Mirage…


I’m often accused of being easily pleased by shiny buttons and new features, but the folks on VMware’s Horizon Mirage team have added some genuinely nice extras to the latest flavour of Mirage, now called Horizon Mirage to keep it consistent with the now-bundled Horizon Suite. In terms of version numbering, we’ve reached the heady heights of version 4.0.

For those of you kind readers who took the time to read my previous blog item "Mirage: VMware reaches out to the PC…", this is something of a follow-up piece, as the last one was primarily based around version 3.x. Version 4.0 added some nice tweaks to the existing functionality, such as improvements in the Windows 7 migration wizard, but the most important feature for me is the introduction of Application Layers.

Much had been made of the concept of layering in Mirage prior to version 4.0, but for me there was a key element missing that in some ways made Mirage a little cumbersome. Functionally, Mirage provided Base Layers: a template comprising the base operating system and a set of core applications, plus a post-application script. This was (and still is) a great feature, providing a base standard that could be deployed, used for conformance checking and so on. Where it became a little fuzzy was application handling, in that Base Layers alone lacked flexibility. If you wanted any flexibility for applications, the choice was either multiple Base Layers or a third-party application delivery mechanism, such as Horizon Workspace (using ThinApp packages) or Microsoft SCCM.

Version 4.0 brings a further, somewhat different approach to the party: Application Layers. Application Layers are applied to client endpoints in addition to Base Layers. Essentially, they provide a means to deploy applications to clients as discrete components, separate from the OS-centric Base Layers.
From a manageability perspective, this is great: it now means that only a few Base Layers are really necessary, as departmental or user variations can be dealt with in Application Layers. So, how does this work?

Capturing an Application Layer

Fundamentally, capturing an Application Layer is not too dissimilar to ThinApp or other software packagers. You provide a basic operating system installation and put the packaging tool (in this case, the Mirage Agent) onto the client. Hit ‘Record’, install the application (or applications, if you want to capture more than one in the layer), then hit ‘Stop’. Mirage then scoops up all the differences between the client before and after the installation.

Looked at in more depth, this takes a little more thought (doesn’t it always?). Firstly, you need to capture on the operating system that you plan to run the application on when deployed (so an application captured on Windows 7 can’t be deployed to Windows XP clients). You also need to be aware of the application’s requirements:
  • Will it register unique identifiers ‘per installation’ that can affect licensing or use?  A good example is the McAfee ePO Agent, which has a unique GUID per client in the registry – you don’t want a hundred PCs registering the same GUID to the ePO server!
  • Are there application dependencies, such as Java?  If so, do you want these in your Application Layer, or are they already in your Base Layer?  In some cases, the latter may be more appropriate, but the recommendation is that the Client you’re packaging on should adhere to the Base Layer configuration as much as possible.
  • Whether the application is 64-bit or 32-bit matters only for compatibility with the endpoint OS: a 64-bit OS can accept either, while a 32-bit OS can’t take a 64-bit layer.
  • Application Layers can handle the installation of drivers and Windows Services – often an issue (or at least not terribly convenient) with Application Virtualisation methods.  So you CAN package iTunes…
  • It can’t deliver Windows OS components – such as .Net Framework, Windows Updates, Windows licenses, user accounts etc.  In most cases, these can be covered through Base Layers though.
  • Disk Encryption software and applications that change the boot record are only partially supported.  It should be pointed out, though, that I’ve deployed applications onto machines with encrypted disks without issue.
So, once you’ve battered your way through this, you can go ahead!  Generate a Windows machine with the Mirage Client installed, but do nothing else: don’t centralise it or anything.  Instead, just confirm that the client is visible in the Mirage Admin console as ‘Pending’. Next, go to Common Wizards and select Capture App Layer. It’s a pretty straightforward wizard (selecting the client you want to capture the application on, selecting an upload policy, and where you want to put it).  One thing that is quite nice is that it carries out a validation, so if the PC has any pending reboots, for example, it’ll tell you to do them first.  Once the wizard is complete, the job is visible in the Console’s Task Monitoring screen.  This is important, as you’ll need it later. Meanwhile, the Mirage Client audits the endpoint’s current state.  This takes a little while, but then the fun can begin.

For this example, I’ve installed two simple applications with default settings (the VMware View Client and the VMware Horizon Agent).  If there are client-specific operations that need to run after the layer is applied, such as an executable or script that generates something unique on a specific client, batch scripts following the naming convention post_layer_update_*.bat can be placed in the capture machine’s “%programdata%\Wanova\Mirage Service” path. I’d advise a reboot of the endpoint after everything is installed, even if one isn’t requested, just to ensure all necessary files are in place.  Once these applications are installed, the capture process can be ended.

Remember my point above about the Task Monitoring screen in the Management Console?  This is where we end our capture.  Right-clicking the task and selecting the ‘Finalize…’ option launches a final wizard.  It summarises which applications (and components) are installed, then allows you to apply a name and version to the application layer.  It’s also possible to update an existing layer here.
Once complete, the final state is captured from the client, completing the creation of the layer. One thing to note is that the Mirage Client returns to ‘Pending’ after this is complete, leaving the client available for further use as required. Next, we deploy our layer…

Deploying an Application Layer

This is pretty straightforward.  Find the Update App Layers wizard on the Common Wizards page in the Management Console.  This will ask you what you want to apply and what to apply it to.  The target can be either a specific client (CVD – Centralised Virtual Desktop) or a collection.  My choice would be to create collections for each application layer, similar to SCCM.  Collection membership can be assigned using a variety of rules, including the user’s AD group memberships or physical attributes. Once applied, clients will immediately start applying the layer in the background, in the same manner as all other Mirage tasks. The client will then prompt for a reboot, which can be delayed, but the next reboot will apply the layer. After the reboot, the applications will be present and ready for use.  Mirage, in the background, will run a further conformance check to make sure that all is well.

A Few Thoughts….

One common question is: how is this different from application virtualisation such as ThinApp, or traditional thick application installations such as MSI packages delivered via SCCM?

When compared to ThinApp (or similar mechanisms), the intent there is to keep the application distinct from the endpoint’s operating system by encapsulating it in its own bubble.  This is a great approach for a large number of applications, but poses issues with others.  If an application requires deeper integration into the parent operating system, or even the hardware, this is not straightforward (and in many cases not possible) due to that separation.

When an MSI package is deployed to a PC, regardless of the mechanism (from CD or via a delivery system such as SCCM), the installation (or removal) of the application is left purely to the control of the local Windows Installer service on the PC, not always 100% successfully.  In general, traditional application installation on the PC provides the greatest integration into the operating system, avoiding the problems associated with application virtualisation.  However, if the application has a complicated installation routine, the process can be fraught with problems and points of failure, possibly limiting how easily such a package installation can be automated.

Mirage provides a middle ground.  The net result is similar to an MSI package installation in that the application layer deposits the binaries, drivers and registry settings into the operating system natively; applications can even be removed using the Windows Control Panel applet.  Because the layer is inserted into the endpoint rather than run as a scripted installation, collating a layer for a complex, multi-package bespoke application is much easier than manual disk swapping or horrendously complex nested scripts.
There is also a degree of separation reminiscent of application virtualisation, in that layers can be removed or added independently of the Windows stack via the Mirage framework. From a repair perspective, application layers, being clearly denoted in this way, can be repaired more easily than traditional methods: simply tell the client to Enforce Layers from the Management Console, returning it to the original state.

All this is not without its caveats.  For example, there are concerns around conflicting file versions, where a DLL in one layer is replaced by an incompatible version from another.  Equally, some applications still won’t work with this method (MS SQL is mentioned in the VMware documentation, for example), so other options are recommended, VMware ThinApp being complementary in this case.  It’s notable that VMware markets ThinApp alongside Mirage, much in the same way Microsoft markets App-V as one of numerous application delivery options.  In this game, there is seldom one answer.

So, Mirage Application Layering is pretty straightforward to implement and use.  It offers a great enhancement to Mirage, providing a more focused way of deploying applications than the product was capable of previously.   If you’d like any assistance with a Horizon Mirage project, or simply want to learn more about it or any aspect of VMware Horizon, please contact us, and we’d be more than happy to use our real-world experiences to support you.

Recover VMs with corrupt snapshots

Consulting throws up many challenges during the design and implementation stages but none more than the actual environment integration. Being at the ‘coal face’ invariably provides a point at which things don’t always go to plan…

Consulting throws up many challenges during the design and implementation stages, but none more than the actual environment integration. Being at the ‘coal face’ invariably provides a point at which things don’t always go to plan, and it’s this real-world experience at which we at Xtravirt excel. In this, my first blog posting, I’m going to discuss VMware snapshots and the possibility of recovering from corrupted ones.

Particular events can create situations where a VM might start rebooting or shut down completely, and during this unplanned process one or more snapshots for that machine may become corrupted. A common scenario for this kind of corruption is when:
  •  A VM starts displaying the message in the console:

 “The redo log of <Machine Name>.vmdk is corrupted.  Power off the virtual machine.  If the problem still persists, discard the redo log.”

  • Pressing OK to the message mentioned above, causes the machine to display the message again
  • Powering-off the VM might not be possible and could be displaying the message in the console:

 “The attempted operation cannot be performed in the current state”

Depending on the type of failure, recovery from such a situation is possible, at times with all data intact.  The latter is especially true for backup solutions that utilise the snapshot feature as part of their process where the snapshot becomes corrupt just after it’s taken; there isn’t a lot of changed data at that point, so a complete recovery is achievable. I’ve recovered from such scenarios a few times and thought the process should be documented to help others.  This blog posting came about because, while different KB articles document parts of the process, I couldn’t find one that guides someone through the whole recovery. Some of the assumptions that I am making here are:
  • The failure is occurring on VM(s) with one or more snapshots, created either manually or via an automated mechanism eg: a backup solution
  • The virtual machine is displaying errors about inconsistent, corrupt or invalid snapshots
  • The person working through the issue is familiar with VMware operations and can deal with minor variations in the discussed scenario
  • The process to force shutdown of a VM is required for ESXi 5.x hosts (while syntax for other versions will be different, the process remains the same)

Virtual Machine Restore Process

Step 1: Save Virtual Machine Logs

The first action is to save the logs for this VM; these can be found in the virtual machine folder on the datastore.  This is to avoid losing potentially valuable diagnostic data in the event of a catastrophic failure.  Due to the state the virtual machine is in, it might not be possible to save vmware.log, but the other log files should be copied directly from the datastore to a safe location.

Step 2: Shutdown Virtual Machine

This is to avoid having any further damage to the current snapshots before a copy of the machine is made.  It’s possible for vCenter to lose control of the virtual machine in such situations and power operations might not work from the VI Client.  If that happens, refer to “Force Virtual Machine Shutdown Process” section near the end of this posting for techniques to force the shutdown of the machine.

Step 3: Make a copy of the Virtual Machine folder

Once the virtual machine is shut down, make a copy of the virtual machine folder to another location on the same or another datastore.  Name the folder something appropriate eg: <Machine Name>-Backup. Note: A clone is not what is required and it probably won’t work in such a situation.

Step 4: Attempt to fix the snapshots

First check if the datastore has enough space remaining; snapshots do become corrupted if there isn’t enough space available.  As there might be other snapshots in the background, estimate generously and if there isn’t enough space, use Storage vMotion to migrate machines off that datastore, to have a safe level of headroom available. Once there is enough space available, try taking another snapshot, and if successful, try committing it.  This operation might fix the snapshot chain and consolidate all data into the disks.  If this process fails, then follow the remainder of the process to manually restore the machine from remaining snapshots.

Step 5: Confirmation of existing virtual disk configuration

Go into the VM settings and confirm the number and names of the existing virtual disks.  As there are snapshots present, the disk(s) will be pointing to the last-known snapshot(s).  Also, make note of the datastore the machine resides on.

Step 6: Command-Line access to ESXi server

Gain shell access to an ESXi server in the cluster which can see the datastore with the virtual machine in question.  The ESXi server should also have access to the datastore where the repair will be carried out.  As SSH may be disabled (the default), you may have to start the service manually. Note: seek approval first if security policy requires it. Once SSH is enabled, use PuTTY (or a similar tool) to connect and log in with “root” credentials.

Step 7: Confirmation of snapshots present

Once logged in, change directory to:

/vmfs/volumes/<Datastore Name>/<Machine Name>


and run:

ls -lrt *.vmdk

to display all virtual disk components. Make note of what “Flat” and “Delta” disks are present.  While it can vary in certain situations, the virtual machine’s original disks will be named the same as the virtual machine name by default.  If there is more than one virtual disk present, it should have “_1” appended to the base name and so on.  If there are snapshots present, they will have “-000001” appended to each disk name for the first snapshot and “-000002” for the second and so on, by default.  Make note of all this information.
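To make the convention concrete, here is a small sketch that simply prints the file names you would expect to see for a hypothetical machine called MyVM with two virtual disks and three snapshots (all names below are invented for illustration):

```shell
# Illustrative only: the default disk naming described above, for a
# hypothetical machine "MyVM" with two virtual disks and three snapshots.
vm="MyVM"
echo "${vm}.vmdk ${vm}_1.vmdk"                     # original base disks
for snap in 000001 000002 000003; do
  echo "${vm}-${snap}.vmdk ${vm}_1-${snap}.vmdk"   # snapshot sets
done
```

Comparing the `ls` output against a pattern like this makes it easier to identify the base disks and each snapshot set.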

Step 8: Repair of the virtual disks

Start with the highest set of snapshots and for each disk in that set run the following command, where <Source Disk> is the source snapshot:

vmkfstools -i <Source Disk> <Destination Disk>

Please note: <Source Disk> is the base .vmdk name, i.e. not the one with -flat, -delta or -ctk in the name.  <Destination Disk> is the new disk, where all disk changes need to be consolidated.  The new name should be similar to the source but not identical; <Machine Name>-Recovered.vmdk is one example for the first disk.  Keep the same naming convention throughout for all disk names, e.g. <Machine Name>-Recovered_1.vmdk, <Machine Name>-Recovered_2.vmdk and so on. For example:

vmkfstools -i <Machine Name>-000003.vmdk <Machine Name>-Recovered.vmdk

for the first disk from the third snapshot set.

vmkfstools -i <Machine Name>_1-000003.vmdk <Machine Name>-Recovered_1.vmdk

for the second disk in the same set, and so on. Repeat the process for all disks in the snapshot set identified earlier in Step 7.  If the process is successful, move on to Step 9. If there is a failure on one or more disks in the set, the following error message may be displayed:

Failed to clone disk: Bad File descriptor (589833)

If that error occurs, skip that disk and keep running the process for the other disks, as they might still be useful.  However, the set will probably have to be rejected for production use, so the next most recent snapshot set should be tried.  Follow the same process until all disks in a snapshot set are successfully consolidated into a new disk set.  If this is an investigation into the events leading up to the failure, additional sets might have to be consolidated in the same way.  Eventually, a complete set should consolidate successfully.
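The consolidation commands in this step can be sketched as a loop. This example assumes the default naming and a hypothetical two-disk machine called MyVM; it only echoes the `vmkfstools` commands so the list can be reviewed before running it on the ESXi shell:

```shell
# Sketch of Step 8: build the vmkfstools consolidation command for every
# disk in the chosen snapshot set. Commands are echoed for review, not run.
vm="MyVM"       # hypothetical machine name
snap="000003"   # highest snapshot set identified in Step 7
for suffix in "" "_1"; do
  echo vmkfstools -i "${vm}${suffix}-${snap}.vmdk" "${vm}-Recovered${suffix}.vmdk"
done
```

Removing the `echo` would run the clones for real; only do that on the ESXi shell, and only after the backup copy from Step 3 has been taken.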

Step 9: Restoration of the virtual machine

Using the “Datastore Browser”, create a new folder called “<Machine Name>-Recovered”, either on the same datastore or another.  Move the newly-created “Recovered” vmdk file(s) to the new folder.  Also copy <Machine Name>.vmx and <Machine Name>.nvram to the new folder and rename both files to <Machine Name>-Recovered.*

Download <Machine Name>-Recovered.vmx to the local machine and edit it in WordPad or similar.  Replace all instances of <Machine Name>-00000x (where “x” is the last snapshot the machine’s disks are pointing to) with <Machine Name>-Recovered.  Repeat for the other disks if present (e.g. _1, _2) and save the file.  This should make the .vmx match all the newly-consolidated disks.  Rename the original vmx file in the datastore to <Machine Name>.vmx.bak and upload the edited file back into the same location.  Once uploaded, go to the “Datastore Browser”, right-click the vmx file and follow the standard process of adding a virtual machine to the inventory, possibly naming it “<Machine Name>-Recovered”.

Once it appears in the list, edit the VM settings and disconnect the network adapter.  It might require connecting to a valid VM network first, but the main thing is that the network adapter should be disconnected.  Once done, take a snapshot of the VM and power the machine up.  At this point, a “Virtual Machine Question” will come up; answer it by selecting “I copied it”.  If the disk consolidation operation was successful for all disks, the machine will come up successfully.  The machine can now be inspected and either put into service or investigated for the cause of the problem.

Once operation of the machine has been tested and the decision has been made to bring it into service, shut down the virtual machine, reconnect the virtual network adapter to the correct network and power it back up.  After boot is complete, log in to the machine to confirm service status, network connectivity, domain membership and other operations. If all operations are as expected, the restore process is complete and the snapshot can be deleted.
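If you prefer to stay on the command line, the .vmx edit can be sketched with `sed` instead of WordPad. The two-disk configuration generated below is entirely made up; on a real recovery you would run the `sed` line against the copied .vmx file rather than a generated sample:

```shell
# Sketch: rewrite the snapshot disk references in a .vmx so they point at
# the consolidated "-Recovered" disks. The sample input here is invented.
vm="MyVM"; snap="000003"
cat > "${vm}.vmx" <<EOF
scsi0:0.fileName = "${vm}-${snap}.vmdk"
scsi0:1.fileName = "${vm}_1-${snap}.vmdk"
EOF
sed -e "s/${vm}_1-${snap}/${vm}-Recovered_1/g" \
    -e "s/${vm}-${snap}/${vm}-Recovered/g" \
    "${vm}.vmx" > "${vm}-Recovered.vmx"
cat "${vm}-Recovered.vmx"
```

The _1 substitution is listed first as a precaution so the plainer pattern cannot interfere with the second disk's name; add further expressions for _2 and so on if more disks are present.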

Force Virtual Machine Shutdown Process

First Technique: Using vim-cmd to identify and shutdown the VM

While connected to the ESXi shell and logged in as “root”, run the following command to get a list of all VMs registered on the target host:

vim-cmd vmsvc/getallvms

The command will return all the VMs registered on the host.  Note the Vmid of the VM in question.  First get the current state of that VM as seen by the host, by running:

vim-cmd vmsvc/power.getstate <Vmid>

If the VM is still running, try to shut it down gracefully using:

vim-cmd vmsvc/power.shutdown <Vmid>

If the graceful shutdown fails, try the power.off option:

vim-cmd vmsvc/power.off <Vmid>

Second Technique: Using ps to identify and kill the VM

Warning: Only use the following process as a last resort.  Terminating the wrong process could render the host non-responsive. While connected to the ESXi shell and logged in as “root”, list all processes for target virtual machine on the current host by running:

ps | grep vmx

That will return a number of lines.  Identify entries containing vmx-vcpu-0:<Machine Name> and others.  Make note of the number in the second column of numbers, which represents the Parent Process ID.  For most of the lines returned for that machine, this number should be the same in the second column.  One line belonging to “vmx” will contain that number in both first and second columns.  That is the ProcessID of the target virtual machine. Once identified, terminate the process using the following command:

kill <ProcessID>

Wait for a minute or so as it might take some time.  If after that, the VM hasn’t powered-off, then run the following command:

kill -9 <ProcessID>

The method in this section will not result in a graceful shutdown, but it should terminate the machine, allowing the recovery to take place.  If the machine still cannot be terminated, further investigation will be required on the host, and the only option left will be to vMotion the other virtual machines off and reboot the host in question.
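The column-matching step from this second technique can be sketched with `awk`. The sample `ps` output below is invented purely to mirror the layout described above; on a live host you would pipe `ps | grep vmx` into the same `awk` expression:

```shell
# Illustrative only: find the line whose first and second columns match,
# i.e. the master vmx process. The PIDs and VM name below are invented.
awk '$1 == $2 { print "vmx ProcessID:", $1 }' <<'EOF'
1001234 1001234 vmx
1001235 1001234 vmx-vthread-5:MyVM
1001236 1001234 vmx-mks:MyVM
1001237 1001234 vmx-vcpu-0:MyVM
EOF
```

The printed ProcessID is the one to pass to `kill`, exactly as described in the manual steps above.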

Final Words

The beauty of virtualisation is that one can test most service scenarios without actually impacting service, and this process is no exception.  For that reason, I would strongly recommend practising this process in your lab environment so that you are well prepared in case disaster strikes.   If you would like to talk to us about assisting your organisation with VMware vSphere troubleshooting, please contact us.

VMware ESXi and NIC enumeration

Working within a Consulting practice presents new challenges with every project or engagement you’re involved with. Of course, the challenges aren’t always technical…

Working within a Consulting practice presents new challenges with every project or engagement you’re involved in. Of course, the challenges aren’t always technical, and as an outsider the learning points around a business hierarchy, internal processes or people can be equally absorbing. In this post it’s a technical challenge I’d like to share from a recent engagement, one I found to be a real head scratcher; as usual, the answer was obvious once I’d managed to fathom it out.

Setting the scene

Our customer had procured additional IBM HS22 blades to increase their compute capability, following a data centre project that we at Xtravirt had delivered last year. These extra blade servers were installed (by a third party) into their existing ‘H Series’ BladeCenter chassis, and I was brought in to assist with the ESXi builds, configuration and environment assurance during the expansion. For this article there’s no need to divulge the entire equipment specification other than the networking hardware: the BladeCenter had 2 x BNT (Blade Network Technologies) switch modules installed, each with 4 external ports connected, overviewed in a diagram later in this blog post.

The environment

The ESXi configuration took next to no time to apply as we’d previously introduced the concept of Host Profiles which, in this type of environment where high density compute is concerned, is ideally suited. VMware’s Update Manager and pre-defined baselines ensured the build and patching levels mirrored that of the current live environment. The new hosts were kept outside of the live production cluster so as not to disrupt any service provision and also to allow the customer to review and accept the new hosts before expanding out the cluster. All was ticking over very well until the testing started…

The head scratching moment

A test virtual machine was introduced to one of the new ESXi hosts to facilitate a pre-defined test schedule and report; a few Command Prompt windows were opened with a continuous ‘PING’ issued to different IP subnets to evidence the functionality of the network. Using vMotion, the virtual machine was migrated between the new hosts to ensure no loss of service; however, the testing revealed an actual loss of network connectivity, but only on some blades. Starting with the simple things first, I checked the status of the NICs: were they up or down? Realistically I had expected to see a uniform outcome, given that a converged network infrastructure should be presented consistently to all blade servers within a single blade chassis. In VMware vCenter this is what I observed for a server that was working (ESX61): For a server that wasn’t working (ESX62): Note: In these screen grabs it can be seen that the Observed IP ranges differ, but I can qualify that the VLAN presentations were consistent on both servers. The 4 active networks had ‘shifted’ up 2 vmnics, and looking at the MAC Address order I realised these too were out of line. The basic diagram below shows the rear of the BladeCenter chassis with 2 x I/O modules populated, each with a BNT (Blade Network Technologies) switch module.  This physical presentation translates to a logical presentation within VMware ESXi; the table below elaborates on this. The working host, ESX61, conformed to the table. The ESXi host with the shifted NIC presentation, ESX62, clearly showed a difference.

Further investigation

Taking a step back from the console I sketched out the rough path of how I understood the physical to logical transition took place. Aspects I felt would be an instant win related to the IBM Blade BIOS and the BladeCenter’s BNT configuration for the blade slots. Before charging down either of those routes I simply swapped a working and a non-working blade server between slots; the purpose was to prove whether the ‘fault’ remained with the slot or followed the blade. What happened? It followed the blade, which meant the configuration of the blade was ‘suspect’. So with the fault clearly related to the blade configuration I compared the BIOS of a working ESXi host against one of the troublesome ones, and both blades were identical. I took a further step back to the hardware procurement and installation, and soon established from the 3rd party installer that some of the blades had their 8 x NIC provision and configuration enabled using a local installation of the Emulex OneCommand software on a USB key, whereas some of the other blades had this applied using the Emulex OneCommand VMware vCenter plug-in. This was the key differentiator and highlighted that the blades configured within vCenter were reporting their NIC orders incorrectly. If I explain the requirement and need for the Emulex OneCommand software it’ll start to pave a route toward the resolution, and you’ll start to see why the difference followed the NIC provision.

IBM HS22 Blade NIC roles

The introduction of the Emulex 10GbE Virtual Fabric Adapter Advanced II (Part 90Y3566) daughter board to a blade not only increases the NIC quantity but also assigns 2 of them (vmnic4 & vmnic6) a personality of ‘iSCSI’, a minor inconvenience especially for a large blade installation. To change this default state the Emulex OneCommand software is required, it’s available as a standalone MS Windows application and also as a VMware vCenter plug-in direct from Emulex’s website.

Emulex OneCommand software

The MS Windows application can be installed locally on the blade (if it’s running a Microsoft operating system), on another networked computer (blades must be participating on the network to be administered), on a Windows PE disk or bootable USB key. The VMware vCenter plug-in has to be installed on the vCenter server and then ‘Enabled’ within the vSphere Client. This plug-in will only be able to provide adapter information and the option to configure a blade’s NIC once the ESXi host has been joined to the network.

Why were the NICs out of order?

The issue with changing the NIC personality from ‘iSCSI’ to ‘NIC-only’ after a server has been configured to participate on the network is that the presentation of the new NICs will not shuffle the existing NICs and re-order them based on their MAC Address. So the 2 new NICs appear at the end of the list rather than in the positions you’d expect; this is to preserve the first-time NIC enumeration. This VMware KB article describes the changing of vmnic numbers post PCI card installation; this was in effect what was happening – new hardware being introduced. So you see, the centralised management through the application is great, assuming all the devices are configured that way. In my scenario I had a mixture, which is why the blades were consistently inconsistent.
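A quick way to spot this kind of shifted enumeration is to compare the vmnic order against the MAC address order, as was done above. A hypothetical sketch (the MAC values are invented, and the "ascending MAC order" assumption only holds for same-generation adapters that were enumerated together):

```python
# Heuristic check: for adapters enumerated together, vmnic order usually
# tracks ascending MAC address order, so a vmnic list whose MACs are not
# sorted hints that devices were added or re-personalised after the
# first-time enumeration.

def macs_in_order(vmnics):
    """vmnics: list of (name, mac) tuples in vmnic0..vmnicN order."""
    macs = [int(mac.replace(":", ""), 16) for _, mac in vmnics]
    return macs == sorted(macs)

good = [("vmnic0", "00:1a:64:aa:00:00"), ("vmnic1", "00:1a:64:aa:00:02")]
shifted = [("vmnic0", "00:1a:64:aa:00:02"), ("vmnic1", "00:1a:64:aa:00:00")]
print(macs_in_order(good))     # -> True
print(macs_in_order(shifted))  # -> False
```

On a real host the (name, MAC) pairs could be collected from `esxcfg-nics -l` or the vSphere Client before feeding them to a check like this.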

The solution?

Initially I considered creating a new Host Profile as a workaround, but soon realised the use of a Distributed Virtual Switch (DVS) meant I couldn’t leave the NICs in their mismatched state. The DVS Uplinks were already defined and active in the live environment, so I had no option other than to re-install IBM’s OEM VMware ESXi on each of the blades where their NICs were incorrectly assigned. With all the NICs present, the discovery during installation worked perfectly and the application of the Host Profile pulled them back into line without too much effort.  If you would like to talk to us about assisting your organisation with resolving an issue or providing a solution, please contact us.

Win 7 – The default profile for VDI (part 3)

This is the third and final post in a three part series discussing the default user profile in Windows 7 for VDI. If you haven’t already read my first posts, I’d recommend doing so first. In the first article I … [More]

This is the third and final post in a three part series discussing the default user profile in Windows 7 for VDI. If you haven’t already read my first posts, I’d recommend doing so first. In the first article I covered off the default configuration options along with some battlefield tips, and also some food for thought discussion topics for consideration when preparing your ‘Master’ image. The second article discussed and walked through the creation process of the unattend.xml answer file, utilising the Windows Automated Installation Kit (AIK). This final post covers the use of the unattend.xml file created in Part 2 to copy the default profile in your Windows 7 build, and preparing the image for Sysprep and deployment within a VDI infrastructure. Microsoft have published a knowledge base article describing the process of customising the default profile, which can be referenced here. So we have a Windows 7 virtual machine, built, configured and tweaked to our own/company specification(s). Now we need to convert this virtual machine to be our ‘Master’ image using the unattend.xml file. From the template machine, create a folder on C:\ named deploy and place your unattend.xml file in this folder. The screenshot below shows this. Once done, locate the Command Prompt shortcut in your MS Windows Start menu, right-click it to open the context menu and choose Run As Administrator. From here, we need to issue the command that will launch Sysprep, shut down and generalize our machine, and call the unattend.xml file with the ‘CopyProfile’ parameter set. Enter the following:

C:\Windows\System32\sysprep\sysprep.exe /oobe /shutdown /generalize /unattend:C:\Deploy\unattend.xml

Note: This command assumes that your system drive is labeled ‘C’, you have created and placed your unattend.xml file in a folder named ‘Deploy’ and your xml file is called ‘unattend’. Once the command is executed, Windows will start the Sysprep process and, when complete, the virtual machine will shut down. This process has created a virtual machine that will appear like an ‘out of the box’ build with our customizations; however, we need to be sure the process has worked before finalizing the image. Power on the machine once again. You will notice that the VM console shows the computer being prepared for first use and you are presented with the Windows Setup screens again (Sysprep generated these), so run through them as you have done previously. Once Windows has finished running through the setup process, log on to your desktop. You should notice all your customizations are still in place. To ensure the CopyProfile command has completed successfully, open the following file:


Search for the following:

[shell unattend] CopyProfile from C:\Users\%username% succeeded.

[shell unattend] CopyProfile succeeded.
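Checking for these lines can be scripted when validating many images. A small sketch (the sample log content mirrors the lines above; the actual log file location on your build will be wherever Windows Setup wrote its Sysprep logs):

```python
# Hypothetical sketch: scan setup/Sysprep log text for the CopyProfile result.
import re

def copyprofile_succeeded(log_text):
    """True if a '[shell unattend] CopyProfile ... succeeded' line is present."""
    pattern = re.compile(r"\[shell unattend\] CopyProfile.*succeeded",
                         re.IGNORECASE)
    return any(pattern.search(line) for line in log_text.splitlines())

sample = "[shell unattend] CopyProfile from C:\\Users\\admin succeeded.\n"
print(copyprofile_succeeded(sample))          # -> True
print(copyprofile_succeeded("no such line"))  # -> False
```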

The image is now ready to be shut down and have a snapshot taken, to be used as a master image within your desktop broker.  If you would like to talk to us about assisting your organisation with an End User Computing based solution, please contact us.

Coming soon: vPi - a Raspberry Pi initiative

You may well have been living under a rock if you have not yet heard about the Raspberry Pi. For those of you who are unfamiliar with the Raspberry…

[Update] 5 July - vPi has now been released: http://xtravirt.com/product-information/vpi/ You may well have been living under a rock if you have not yet heard about the Raspberry Pi. For those of you who are unfamiliar with the Raspberry Pi, it is a slightly bigger-than-credit-card sized computer that is both cheap, and capable, developed by the Raspberry Pi Foundation. It runs a 700MHz ARM CPU and the latest revisions come with 512MB RAM. Storage is handled by an SD card, and you even get HDMI output as well as network connectivity right out of the box. So, if you are a VMware evangelist and love the Raspberry Pi, what do you do? You create vPi of course! That was Xtravirt's co-founder Alex Mittell's thought when he decided to put together the vPi project. vPi is a modified version of the Raspbian distribution (which is based on Debian). It aims to provide a "plug and play" platform for administrators, consultants, or anyone else really, to use to connect up to any VMware vSphere virtual infrastructure and quickly and easily perform administration tasks, gather information, or run scripts against it. You could even think of it as a beefed up mobile vMA appliance, with a huge scope for customisation. Here is a quick list of some of the features and utilities that are included with vPi:
  • Perl 5.14.2 + CPAN 1.98
  • VMware vSphere Perl SDK 5.1.0 build 780721
  • Python 2.7.3
  • Ruby 1.8.7 & 1.9.3 + Rubygems 1.8.24
  • Ruby vSphere Console 1.1.0 (http://labs.vmware.com/flings/rvc)
  • vGhetto Scripts - built as DEB package from http://vghetto.svn.sourceforge.net/viewvc/vghetto/build/DEB/ - thanks to William Lam for these! Includes a custom vGhetto update script to easily update the scripts from William's Sourceforge repository
  • ESXCLI 5.1 (a special ARM compiled version allowing it to run on the Raspberry Pi hardware)
  • vmkfstools 5.1.0
  • Misc. software: nginx-light, PPTP Client, ifstat, iftop, subversion client, tmux
  • Desktop / GUI
Xtravirt are currently working on the vPi project with the aim to release it out into the wild as a community supported project. The hope is to get everyone involved and using vPi. We would love to see other virtualization evangelists, script writers and automation experts writing content and improving on the vPi project, hence the reason we want to keep it as a free and open initiative. Join us on the Xtravirt forums to discuss the vPi project, whether it be ideas, questions or anything else related! For now, here are a few screenshots showing example usage of some of the utilities to whet your appetite.
Screenshot captions:
  • Some of the VMware utilities built in to the vPi image
  • Some more utilities and examples run on the vPi
  • The Ruby vSphere Console fling being demonstrated
  • The excellent vGhetto script collection on the vPi and demo of the updater script
  • ESXCLI - the ARM compiled version in action on the vPi
  • Running the vGhetto perl health check script
  • vGhetto health check script report output after running from the vPi
We are working hard to get the image ready for release, so keep your eyes on the Xtravirt blog and Twitter for updates. Otherwise, feel free to create a new thread or join an existing discussion on the forums!

Win 7 – The default profile for VDI (part 2)

This is the second in a three part series discussing the default user profile in Windows 7 for VDI. If you have not already read my first post, I would recommend doing so, it can be found here. In my … [More]

This is the second in a three part series discussing the default user profile in Windows 7 for VDI. If you have not already read my first post, I would recommend doing so, it can be found here. In my first article, I covered off the default configuration options along with some battlefield tips and some food for thought discussion topics. This article will discuss and walk through the creation process of the answer (‘unattend.xml’) file, utilising the Windows Automated Installation Kit (AIK). Microsoft has published a knowledge base article describing the process of customising the default profile, which can be referenced here.

So what is the answer file?

The answer file configures settings during the installation of Windows. In this scenario, the answer file will be used to create the master image that will then be used to deploy virtual desktops: it feeds the configured parameters to the Windows installer through Sysprep. I would like to point out that the answer file is a very powerful tool, able to set a huge number of configuration options for a Windows 7 deployment; however, we will simply be concentrating on the default profile scenario.

Let’s get started

I would suggest having a (virtual) machine with a clean install of Windows 7 available for this process that can be used as our build machine. It can be deleted afterwards, well within the evaluation time allowed by Microsoft. If you do not already have the Windows AIK for Windows 7, you need to download it. The AIK can be downloaded from here. Once the AIK is downloaded and the ISO is mounted, the AIK splash screen should launch as shown in Figure 1. Figure 1: AIK Splash Screen From the options menu, select Windows AIK Setup. The Setup Wizard will then start as shown in Figure 2. Figure 2: AIK Setup Wizard Run through the wizard to complete the installation. Once installed, launch Windows System Image Manager (SIM) from the Start Menu as shown in Figure 3. Figure 3: Launch System Image Manager Once SIM is launched, the screen shown in Figure 4 will appear. Figure 4: System Image Manager Before creating the answer file, we need to copy a Windows Image file (WIM) to the computer running SIM from the installation media of the chosen flavour of Windows 7. Unmount the Windows AIK ISO file and replace it with the Windows 7 media that will be used for our Master image for VDI. Locate the install.wim file and copy it to the machine you are working on. The WIM file can be found in the following location:

<Media_Source> > Sources > install.wim

Open the image file: from SIM, select the File menu and click Select Windows Image. Figure 5: SIM File Menu If your media has multiple versions of Windows available, select the version relevant for the VDI deployment. Figure 6: Image Selection When prompted to create a new catalogue file, click Yes. Figure 7: Create New Catalog Once complete, we need to create a new answer file. From the File menu, click New Answer File and you’ll see the Answer File pane populate similar to that shown in Figure 8. Figure 8: Answer File Pane You’ll see from the answer file layout that it comprises the different phases of the Windows setup process, called configuration passes. From the Windows Image pane, you can expand Windows Components, right-click the required component and add it to your answer file. Expand Components, navigate to amd64_Microsoft-Windows-Shell-Setup_6.1.7600.16385_neutral, right-click and select Add Setting to Pass 4 specialize as shown in Figure 9. Figure 9: Windows Shell Setup Section 4 of the answer file will now be populated with our chosen configuration option. Click on the configuration pass just added and the configuration options will appear in the Properties window. From the dropdown menu, select true as the value for the CopyProfile field. Figure 10: CopyProfile Configuration This is the only change required to ensure that the currently logged-on user profile is copied to the default user profile during Sysprep. Save the answer file by clicking the File menu, selecting Save Answer File As, and changing the name to unattend. Once the file is saved, open it in a web browser and notice that CopyProfile is set to true. Figure 11: XML Output In summary, this article has walked through the process of creating an unattend.xml file that can be used with Sysprep in a Windows 7 image deployment to copy a configured user's profile as the default profile for that image. 
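For reference, the saved file boils down to a small XML fragment. The sketch below is trimmed and illustrative only: the component attributes (architecture, version, public key token) are written by SIM from your particular image and may differ.

```xml
<!-- Illustrative fragment of the SIM-generated unattend.xml;
     attribute values come from the image and are assumptions here -->
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="specialize">
    <component name="Microsoft-Windows-Shell-Setup"
               processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35"
               language="neutral"
               versionScope="nonSxS">
      <CopyProfile>true</CopyProfile>
    </component>
  </settings>
</unattend>
```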
In Part 3 of this series, I walk through the process of finalising the image using Sysprep and ensuring it is ready to be provisioned by your virtual desktop broker of choice. If you would like to talk to us about assisting your organisation with an End User Computing based solution, please contact us.

Mirage: VMware reaches out to the PC…

One of the great things about being an Xtravirt consultant is that we’re generally frontrunners for the latest and greatest technologies, and get to apply them in real world use-cases – and this is an excellent case in point.

One of the great things about being an Xtravirt consultant is that we're generally frontrunners for the latest and greatest technologies, and get to apply them in real world use-cases - and this is an excellent case in point. Last year, VMware acquired California-based company Wanova.  Wanova’s key product is Mirage.  This is a client-server software stack that provides the management, standardisation and protective measures usually found in a VDI solution to the thick-client Windows PC.  Mirage provides a solution that complements VDI approaches or, in some circumstances, provides a better alternative.

How does it work?

It’s probably easiest to first discuss the components of the solution.  It’s a very straight forward client-server affair made up of the following components:
  • Mirage servers.  These provide the processing muscle for the Mirage solution, where data is transferred to and from.  Where multiple servers are used, they can use shared NAS storage and be presented as a load balance cluster using a load balancer (such as Windows NLB).
  • Mirage Management Servers.   Where the Mirage Servers provide the muscle, the brains for the solution are provided using the Mirage Management server.
  • Mirage Management Console.  This is an MMC based application provided for administration.
  • Mirage Web Server.  The Web server provides end-users the ability to recover data from the Mirage solution in the event of loss on the client.
  • Mirage Client. The end point device requires the Mirage client – this provides all the functionality required in about 5MB.
  • Branch Reflector.  This is a PC with the Mirage Client installed (usually on a branch site, hence the name) that is ‘promoted’ to act as a local cache for drivers and layers to be downloaded to clients.
So far, so good, and as you can see it’s quite a simple structure.  But what are the mechanics? The concept is pretty straight forward – Mirage works on the basis that a client is the sum of a set of layers.  These boil down to:
  • User settings and data
  • Applications
  • Operating system
The Mirage Client is installed on an installation of a Windows desktop operating system (XP, Vista or 7, in either 32 or 64 bit flavours) and registered via the management server – this is referred to as ‘Centralising the Endpoint’.  In English, this means Mirage takes an audit of the PC, processes what it needs to protect and ships it back to the Mirage Server estate for storage. The auditing and processing phase allows Mirage to apply file and block level de-duplication to data both locally held and at the Mirage Server estate to reduce the amount of data to be transferred.  The data is then compressed and transferred back.  The de-duplication/compression is the special sauce. Once there are a number of clients registered in Mirage, the centralisation drops to essentially just the user data.  The first client will upload practically the whole PC, whereas subsequent clients will not need to upload common operating system and application files, and even some data files, e.g. where two users hold the same Excel document. Once uploaded, Mirage essentially becomes an incremental backup tool, uploading changes subject to policy (schedule, file types etc.).  This backup capability leads, inevitably, to the ability to recover.  It’s possible to rescue the entire installation, or just recover the application settings and data to another device, including virtual machines.  It’s even possible to recover to a client running a newer operating system, opening up avenues for VDI migrations and hardware refreshes. But were this merely a backup tool, it wouldn’t be nearly so interesting to VMware.  Mirage is able to define the differences between the operating system, applications and data, and this capability allows it to be used to manage these abstract layers.
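The de-duplication idea can be illustrated with a toy sketch: split the data into fixed-size chunks, hash each one, and upload only chunks the server has not seen before. This is a generic illustration of block-level de-duplication, not Mirage's actual (proprietary) algorithm:

```python
# Toy block-level de-duplication: only chunks whose hash the server doesn't
# already hold need to be transferred.
import hashlib

def chunks_to_upload(data, server_hashes, chunk_size=4096):
    """Return the chunks not already stored server-side; update the hash set."""
    new_chunks = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in server_hashes:
            server_hashes.add(digest)
            new_chunks.append(chunk)
    return new_chunks

server = set()
first = chunks_to_upload(b"A" * 8192, server)   # two identical chunks -> 1 upload
second = chunks_to_upload(b"A" * 8192, server)  # everything already known -> 0
print(len(first), len(second))  # -> 1 0
```

This is why the first centralised client uploads "practically the whole PC" while subsequent, similar clients transfer far less.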
By taking a client PC with the Mirage Client software and registering it instead as a Reference Client, it is possible to establish standardised layers; base Layers containing operating system and additional software that might be required.  These layers can serve a number of purposes:
  • Ensuring a Client is returned to a standard is a useful measure for ensuring a PC maintains a baseline configuration, not to mention fixing faults on clients.
  • Upgrades: base layers can be version controlled, so it is possible to use them for deploying patches, additional applications, or the biggest item of all, an in-place migration from Windows XP or Vista to Windows 7.
It should be noted that Mirage Servers can be provisioned with hardware drivers; this is to provide support when applying an upgrade to a client. As mentioned above, in a multiple site scenario, Branch Reflectors can be used to further lighten the load on WAN links.  While these machines don’t cache uploads from the client, they are able to act as a cache for layers and drivers, so when a client requires these, they don’t need to pull them over the WAN link.  Consider them in the same light as, for example, an SCCM distribution point.

Windows 7 Migration

With Mirage, it’s possible to essentially ‘slide out’ an old operating system and ‘slide in’ a Windows 7 installation.  Again, it boils down to the layers. By defining OS/application layers, Mirage can substitute one for another. The Migration wizard takes the audit of a given target PC, packages up the files required and downloads them to the PC, again with de-duplication and compression.  All this happens in the background.  The client is smart enough to throttle bandwidth to reduce impact on the user. When the data is in place, the user is prompted to reboot the PC and Mirage engages its Pivot process. The Pivot process basically takes the legacy operating system, its associated applications and the boot loader, and swaps them for the downloaded layer.  The legacy install gets dropped into a Windows.old folder. During the reboot process, the PC starts up with a splash screen explaining what’s going on to the end user.  Under the hood, it’s loading the correct device drivers and running the Microsoft User State Migration Tool (USMT); this is installed on the server and fed down during the migration to switch the user profile and data to the Windows 7 install and re-connect the PC to Active Directory.  Another reboot and the client is ready for use.  Between the prompt and the last reboot, it’s around 20-30 minutes.

Migration Observations

There are a number of things to consider when planning an OS upgrade using Mirage.
  • Base Layers can include more than just the operating system; they can include applications, so it’s possible to establish multiple Base Layers to cover different departments, for example
  • When developing Base Layers, consider them in the same way as would be the case with any imaging approach.  For example, applications that hard-code local identifiers (application GUIDs) should be avoided or at least managed.  Fortunately, Mirage can be configured with a post-migration script that can be used to install such applications cleanly
  • Some device drivers include associated software, for example, SoundMax audio adapters leave legacy software components that tend to generate a ‘missing hardware’ warning when the generated layer is deployed to different hardware.  It can be worth removing the driver/software prior to recording the Base Layer
  • Disk partitioning. Separate boot/system partitions aren’t supported, at least in terms of upgrading operating system or recovering the whole system
  • Disk encryption.  The Mirage agent can be installed and will protect a system with encrypted disks as the client software runs within the operating system layer.  However, the disk must be decrypted in order to do any operating system layer work

So where does Mirage sit in a VMware End User Compute world?

So, given it’s a VMware product, where does Mirage fit in VMware’s end user suite?  The answer is not quite straight forward at this time. If you’re looking at an estate which is predominantly formed of roaming users, this can be a better fit than VDI, even using roaming virtual desktops.  Obviously, straight forward VDI needs a connection, but even using View Local Mode has limitations that make Mirage attractive, mainly from a performance as well as a licensing efficiency standpoint; for example, a laptop isn’t running two operating systems.  There are also the practicalities of checking roaming desktops in and out to consider.  Mirage is very WAN tolerant due to its ability to de-duplicate and compress data, not to mention it can stop and resume transfers, so it suits road-warriors quite well. Within an estate where roaming users aren’t a major consideration, Mirage is complementary.  It can be used to ensure that thick clients adhere to a standardised software stack, while also being used as a means to migrate users in and out of a virtual environment.  It’s also useful in legacy environments where users have had carte-blanche rights to keep data on ‘their’ PC, even beyond ‘My Documents’ – it can handle protecting and restoring user files even in this circumstance. With respect to ThinApp and Mirage, the layering approach fits nicely, especially with Mirage 4.0’s improvements to application-level layering.  A layer could include ThinApp packages, so providing an alternative means of deploying them. Overall, Wanova was a crafty acquisition on VMware’s part.  Mirage is a product that gives them footprint right down to the client device as a management and protective tool, but one that plays well both with a VDI implementation, or as an alternative in some circumstances. If you would like to talk to us about assisting your organisation with a VMware Mirage based solution, please contact us.

Thin client USB re-direction issue

Recently at a customer site I worked on deploying a XenDesktop virtual infrastructure utilising Wyse T10 Thin Clients. One issue I discovered was locally attached USB printers. You’d think it would be a straight…

Recently at a customer site I worked on deploying a XenDesktop virtual infrastructure utilising Wyse T10 Thin Clients. One issue I discovered was locally attached USB printers. You'd think it would be a straight forward enough task - install the drivers in the Windows image and attach the USB printer. This is certainly what Wyse advise you should do, and as long as it’s supported by Microsoft and Citrix then you should be good to go. However, this certainly wasn’t happening, no matter what USB printer I connected. Before I dive into how I got this working, it’s worth explaining a bit of history of USB devices and in particular what a VID/PID is, because we’ll need to understand it for portions covered later. All USB devices have a Vendor ID (VID) and a Product ID (PID) as their unique identifier, much like a MAC address for a network card. A VID is a 16 bit value that identifies the manufacturer of a USB device. A PID is also a 16 bit value and is used to identify the particular product from the manufacturer. Together these form a 32 bit unique code for each and every USB product. When I connected a USB key to the Wyse client it would be identified and passed through to the endpoint, but when I connected a printer it wouldn’t. This initially led me to believe that USB printer redirection on the Windows client wasn’t enabled, in this case the USB printer class. This is handled by the Citrix VDA, and USB classes can be added or removed via the Citrix XenDesktop GPO settings. So I checked the registry of the endpoint, specifically in two locations:


Both entries had the following settings:

# Syntax is an ordered list of case insensitive rules where # is line comment
# and each rule is (ALLOW | DENY) : ( match ( match )* )?
# and each match is (class|subclass|prot|vid|pid|rel) = hex-number
# Maximum hex value for class/subclass/prot is FF, and for vid/pid/rel is FFFF
DENY: vid=17e9 # All DisplayLink USB displays
DENY: class=02 # Communications and CDC-Control
DENY: class=09 # Hub devices
DENY: class=0a # CDC-Data
DENY: class=0b # Smartcard
DENY: class=e0 # Wireless controller
ALLOW: # Otherwise allow everything else
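The rule list above can be evaluated mechanically. A hypothetical evaluator (it assumes rules are checked top to bottom with the first match winning, and an empty match expression matching everything; it only models the class/vid matches used here):

```python
# Hypothetical evaluator for the USB policy rules above.
# Assumption: ordered rules, first match wins; a rule with no match
# expression matches everything.

RULES = [
    ("DENY",  {"vid": 0x17e9}),   # DisplayLink USB displays
    ("DENY",  {"class": 0x02}),   # Communications and CDC-Control
    ("DENY",  {"class": 0x09}),   # Hub devices
    ("DENY",  {"class": 0x0a}),   # CDC-Data
    ("DENY",  {"class": 0x0b}),   # Smartcard
    ("DENY",  {"class": 0xe0}),   # Wireless controller
    ("ALLOW", {}),                # Otherwise allow everything else
]

def is_allowed(device):
    """device: dict with keys like 'class', 'vid', 'pid' (integers)."""
    for action, match in RULES:
        if all(device.get(k) == v for k, v in match.items()):
            return action == "ALLOW"
    return False  # unreachable with a trailing catch-all ALLOW

printer = {"class": 0x08, "vid": 0x090c, "pid": 0x1000}
hub = {"class": 0x09}
print(is_allowed(printer))  # -> True  (falls through to the final ALLOW)
print(is_allowed(hub))      # -> False (denied by class=09)
```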

The class for printers is 08h, which would fall under ‘ALLOW: # Otherwise allow everything else‘. In theory, then, this should work. Citrix have published a knowledgebase article discussing USB configuration here, CTX132716. In addition to the registry settings there is the Citrix GPO setting ‘Client USB device redirection’, found under User Configuration in the GPO. When I checked, it was explicitly set to ’Allowed’ as per the screenshot below. So with the endpoint configuration all checked and correct, the next logical step was the Wyse client itself. I considered that maybe the Wyse client wasn’t passing the USB printer device through to the endpoint. Thankfully the Wyse system event log provides some good detail, and after plugging the printer in, the log provided the VID/PID of the printer. Once the device is connected the Wyse client reports that the USB device has been found, along with the complete ID of the device. The first hex block is the VID, in the example ‘090c’, and the second hex block is the PID, in the example ‘1000’. If you’re unable to get this information from the Wyse client you can also obtain it from Device Manager on a Windows client. In the example below the USB mouse attached to my laptop is used. Now, having all the relevant information to hand, I needed to switch my focus to the Wyse client and force redirection of the printer on it. This is completed via the Wyse Device Manager (WDM), specifically within the INI file used to configure the device. As I didn’t want to apply this change to all devices, I could create a MAC INI file to tie to the Wyse client or a USER INI file to tie to a specific user. As the printer is locally attached, I created a MAC INI file with the following line:

Device=vusb ForceRedirect=0x04f2,0x0112,0x03,0x01,0x01

(The hex string after ForceRedirect= is the exact VID/PID of the device.) The Wyse device was rebooted to force a re-negotiation with the WDM and discovery of the INI file. Remember, if you use a MAC-based INI file and the Wyse terminal is swapped out due to failure, you'll need to create a new MAC INI file for the new terminal. If you're using a global wnos.ini file for your Wyse devices, you'll need to use Include=$mac.ini for a MAC-based INI file or Include=$un.ini for a user-based INI file. In addition, the files need to be in the 'inc' folder for MAC INI files and the 'ini' folder for user INI files, both under the wnos folder. Now when I plugged the printer into the Wyse terminal it was discovered as you would expect, via the usual Windows process, with the drivers installed as normal. If I wanted to use a pooled/stateless image I would have to ensure the drivers were already installed in the master image for the printer driver installation to complete successfully.
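Pulling those pieces together, the WDM file layout might look something like the sketch below. The MAC address filename and the trailing class/subclass/protocol values are illustrative only; the VID/PID shown is the 090c/1000 printer from the event log example above, and your values must match your own device:

    wnos\wnos.ini (global configuration, pulling in per-device files):
        Include=$mac.ini
    wnos\inc\008064AABBCC.ini (MAC-based INI for one specific terminal):
        Device=vusb ForceRedirect=0x090c,0x1000,0x07,0x01,0x02

For a user-based file instead, the global wnos.ini would carry Include=$un.ini and the per-user INI would sit in the wnos\ini folder.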

XenDesktop Host Connections

If you deploy Citrix XenDesktop and use Machine Creation Services (MCS), you’ll need to create a Host Connection so the Broker can access the hypervisor. The connection is made using either Microsoft’s SCVMM for Hyper-V, the SDK for vSphere, or a direct connection to XenServer. Host Connections are used by MCS to provision machines.

A Quick word on PVS vs. MCS

Choosing whether to use MCS or Provisioning Services (PVS) is a topic for another day, as there are benefits to both. I think Citrix did not do MCS any favours in the XenDesktop 5 FAQ by stating, “Until additional scalability information is available, Machine Creation Services should be used only for small to medium size VDI deployments”. This caused a poor initial perception of MCS and led many to believe that PVS was the only viable solution for large VDI deployments. I’ve since heard from Citrix Professional Services that MCS has been, or is being, tested to the same limits as PVS. MCS scalability is related to the scalability of:
  1. Storage
  2. Hypervisor
  3. XenDesktop Controllers
And, of course, user perception. Check out the planning guide here: https://support.citrix.com/servlet/KbServlet/download/26479-102-665155/Planning%20Guide%20-%20XenDesktop%20Site%20Capacity.pdf

Host Connections

Should you decide that MCS fits your requirements, creating Host Connections allows you to define the network the virtual desktops will be placed on and the storage the machines will sit on. Tying networks and disk together in this way has mixed benefits. For example, in VMware View you have to create multiple snapshots of your master image to provision machines to different networks; in XenDesktop you just define the Host Connection to use when deploying your catalogue. However, in VMware View you can dynamically select the storage when provisioning a pool, whereas in XenDesktop you have to select the appropriate Host Connection and ensure it has the correct network assigned. You can define a Host Connection to use local or shared storage; which way you go depends on your configuration. But beware: it's possible to reuse storage and networks across multiple Host Connections, so you'll need to make sure your capacity plan / design is adhered to. Take care to ensure that Host Connections are balanced and don't conflict with other Host Connections and thereby compete for resources. For example, the number of desktops supported on a Host Connection should not exceed the number of IP addresses available, or exceed the performance of the disks assigned to it; if you do reuse a network or a set of disks between multiple Host Connections, the capacity of those Host Connections will be contended.

A final few words

Give your Host Connections meaningful but short names. The Host Connection name is used when deploying base disk images; if it's too long for your file system, provisioning may fail when multiple pools are deployed, as the truncated names will conflict. I use a format of “Host-Cluster-VLAN”, e.g. “VC01-CLU1-1234”.

Windows 7 – The default profile for VDI

I’ve managed to get a few VDI projects under my belt now, ranging from small 50-seat deployments up to 4,000-seat cross-country deployments. Some have been from cradle to grave, whilst others have been bit-part roles. I’ve seen multiple issues in some deployments, many of which generally point back to the default profile. One common theme I’ve noticed (especially with Windows 7) is that people often underestimate the importance of the Windows 7 default profile in their master image. Now don’t get me wrong, there are some brilliant user persona products on the market, whether built-in solutions or standalone products, but I see these predominantly as 'enhancers'.

Over my next three blog posts I’m going to describe how best to create a default user profile. Starting with this post, I’ll discuss the initial configuration options: what should be done in the default profile and how it should be done. The second post will concentrate on creating the unattend.xml file that contains the ‘CopyProfile’ parameter, utilising the Windows Automated Installation Kit. The third and final part of this series will cover customising the default user profile via the unattend.xml file. Microsoft have published a knowledge base article describing the process of customising the default profile, which can be referenced here. However, many of my customers still experience issues getting this right first time, so the aim of these posts is to break down each step and walk you through the process.

Firstly, I must start by reiterating a point Microsoft state in their KB article: the only supported method for customising the default user profile is by using the Microsoft-Windows-Shell-Setup\CopyProfile parameter in the Unattend.xml answer file. The Unattend.xml answer file is passed to the System Preparation Tool (Sysprep.exe).
From Vista onwards this is the only supported method, unlike in the days of Windows XP, where it was acceptable (and supported) to simply copy a temporary profile over the default.
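To preview where this is heading, a minimal, hypothetical fragment of an Unattend.xml for an x64 image with the CopyProfile parameter set looks something like the following (a real answer file will contain many more components and settings):

    <unattend xmlns="urn:schemas-microsoft-com:unattend">
      <settings pass="specialize">
        <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
          <CopyProfile>true</CopyProfile>
        </component>
      </settings>
    </unattend>

Sysprep then consumes it via sysprep /generalize /oobe /shutdown /unattend:unattend.xml.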

Step 1: Configuring the default profile

You should ensure that when configuring the default profile you always use a local Administrator user account; the process will not work with a domain user account. Remove all user accounts except the built-in Administrator account from the template machine. Note that any service accounts can be added back via GPO at a later stage. Start to configure any settings you want managed in the default profile. I’m not going to go into too much detail here, as each use case is different. It’s worth pointing out that VMware, Citrix and Quest each have their own best practice guides, scripts and tools for customisation, which should be followed where applicable on that platform.
  • The VMware guide for View can be found here
  • The Citrix guide for XenDesktop can be found here
  • The Quest Guide can be found here
I would highly recommend reading each guide and understanding what each change does, rather than just applying all the changes or running the recommended scripts; decide whether each change is applicable to your business case. For example, one of the changes the VMware View script makes is disabling the Themes service within the image. This is fine if your requirements are for the classic interface, but in my experience most companies want the Aero/Orb theme for their users. A cut-back version, yes, but the whole concept of pushing VDI is bringing enterprise desktop environments into the 21st century, not pinning users back with a classic Windows theme not dissimilar to Windows 98.

If utilising a persona management application, you may want to hold back some changes and apply them at that level, so they are easier to manage even if the master image changes or different business units require different optimisations. Consideration needs to be given here to ensure that you are not applying too many changes at logon or startup, which could have a negative impact on the performance of your VDI infrastructure.

Finally, I would strongly suggest making use of snapshots throughout your image creation for failback purposes. So often people rush through making a number of configuration changes, find something doesn’t work further down the line, and have to roll back a whole heap of changes to discover the issue. Document each configuration change/snapshot so you can quickly and easily roll back to a point at a later stage. I tend to get an image to a production-ready state and then clone off the final snapshot as a new virtual machine; this gives a clean master image while still letting me go back and reuse my original template if required. So, to close: decide what your master image contains, discuss with each use case owner, and determine at what level the persona management works and, importantly, that you are meeting the customer requirement.
After all, getting the base image architecture wrong will pave the way for all manner of issues as the applications are deployed. Read Part 2 in the series. If you would like to talk to us about assisting your organisation with an End User Computing based solution, please contact us.

It's not Post-PC Era, it's Multi-device Era

The title of this post is a quote delivered by Steve Herrod, CTO R&D of VMware during his keynote speech in the General Session at VMworld Barcelona. To put this quote into context the area he was discussing focused around VMware's Horizon product suite.

What is the Horizon suite?

Broadly speaking, VMware have created an architecture that attempts to deal with user applications and user data while at the same time allowing it to be presented through any device.

What’s the point of creating this architecture?

Organisations are now faced with many service provisioning dilemmas. Do they:

1. Provide all users with a standard desktop and/or laptop with internal security and compliance?

Problem/Disadvantage: This presents internal IT departments with a headache; device management (by which I mean hardware, OS and applications) is only part of it. Today’s users are far more technically literate and want to be able to access social networks, shop online and watch television feeds, even while at work. If they can’t do this they’ll certainly try to find a way or workaround to do so.

2. Allow users to bring their own device to an organisation as well as still being able to use a standard desktop and/or laptop.

Problem/Disadvantage: This includes everything from Point 1 (above), plus users’ devices introduced to the network are not managed by IT, have an unknown security policy and an unknown configuration, and still need to function on the corporate network. The corporate network mustn’t be compromised by these devices, but nor must it be a barrier to them functioning. These are incredibly high risks.

VMware’s approach

Quite simply, the Horizon suite takes on the role of a mediator or broker. Users connect into their corporate infrastructure and are presented with applications that suit their device and need. Governed by management policies, applications are only presented to the relevant users and groups, making use of single sign-on via directory service pass-through authentication. The diagram here, a cut-down version of a VMware original, shows users and their devices meeting the broker; the broker reviews the request and provides the service(s) defined by the pre-defined user rule sets. During the VMworld keynote presentation, the idea that applications were the sole purpose of this device enablement was dispelled. A demonstration showed that a broken laptop didn’t mean a user was out of action until it was fixed: a user managed through Horizon is able to use another device immediately and continue to work. Admittedly, the choice of device could limit productivity, but in the example shown, the user lost their MS Windows laptop yet was able to continue working on their Apple MacBook. This may seem too good to be true, but VMware have many multi-platform ‘type 2’ hypervisors and application virtualisation techniques, so the groundwork had already been completed. Another demonstration was aimed specifically at the use of corporate-managed applications on an Apple iPhone; until now only VMware Mobile offered this feature. The audience witnessed a corporate application on a personal mobile device, executing in isolation from other running applications. The isolation, as demonstrated, prevented sensitive data being copied to non-corporate applications. Even non-Windows tablet devices feature in the Horizon solution, using the VMware View Client for an MS Windows VDI session.
The View Client isn’t new; the challenge here is dealing with device-native gestures, passing them through to the View Client and preventing them from being cumbersome in navigation or within applications. Rather than battle against them, VMware have introduced their own gesture layer through User Interface Virtualisation. This feature allows typical swipe, tap and tap-and-hold gestures, but they’re controlled by the interface and pass directly through to the VDI session operating system. Additional features on top of this provide quick access to application switching and tricky tasks such as selecting / copying / pasting text. Ideal if a user were to swap between device manufacturers.

Wrapping up

While I’ve only lightly touched on the Horizon product suite, it’s clear to see that VMware’s direction is very much towards End User Computing (EUC), as opposed to dealing with just Virtual Desktop Infrastructure (VDI). These technologies are very often muddled and considered one and the same but, as I hope you can see, they’re clearly not. For those of you familiar with Brian Madden (http://www.brianmadden.com) and his prolific blog posts, you’re probably aware of his book, “The VDI Delusion”; if you’re not, it’s worth a read. It reminds you of technologies past, market statements of intent and vendors promising their utopian solution. Of course, a utopian solution doesn’t exist; there are many possibilities and vendor technologies to assist with EUC, and it’s all about understanding what is best for the requirement.

Auto-Deploy Tips and Tricks

Over the past few months I have been a part of a number of Auto Deploy designs and Proofs of Concept. This has allowed me to really learn the feature that was introduced as part of vSphere 5 and is now updated with vSphere 5.1. I also spent a fair amount of time learning and practising with the feature in preparation for my VCAP5-DCA, which I sat recently. From these engagements and my studies, I have picked up a fair amount of tips and tricks for Auto Deploy and accumulated quite a few great resources to help people looking to learn it and deploy it within their environment. The tips and tricks covered in this article are applicable to versions 5.0 and 5.1 of vSphere.

Tips and Tricks

Host Profiles

I have made this the first of the tips mainly because Auto Deploy is a relatively simple solution, but there are advanced settings that are required for host profiles to ensure your stateless ESXi hosts connect to the network. Host Profiles applied to the hosts/cluster are extremely important to ensure your hosts are available for use in the shortest possible time. Below I have listed some of the advanced settings I had to apply for my stateless Auto Deployed hosts to enable them to work. The settings below are over and above the obvious settings of configuring a syslog server, a scratch location and pointing your hosts to the network core dump collector.
  • Option: Security Configuration -> Administrator Password
    Description: Host profiles must be fully specified; they cannot include settings that prompt the user for information.
    Setting: Choose "Configure a fixed administrator password" from the drop-down, enter the password and click OK.
  • Option: Networking Configuration -> DNS Configuration -> DNS Settings
    Description: Configures the Auto Deployed ESXi hosts with the DNS servers that contain the static DNS entries.
    Setting: Insert the DNS servers that contain the static DNS entries for the hosts, and add the domain name portion of the DNS name for the AD domain the ESXi hosts are being deployed into.
  • Option: Networking Configuration -> DNS Configuration -> DNS Settings -> Host Name
    Description: Allows you to configure where the Auto Deployed ESXi host retrieves its host name from.
    Setting: Set to "Obtain hostname from DHCP" to ensure the ESXi hosts obtain their names via the static DHCP entries.

VMware-FDM driver

This should not need to be a tip or trick, as it should be obvious, but it seems many people forget to add this to their Image Profile. If you do not, when you add the Auto Deployed ESXi host to an HA cluster the host will not have the HA agent installed and therefore cannot participate in HA failovers. Adding it is relatively simple, as shown below:

Connect-VIServer MyVIServer

Add-EsxSoftwareDepot C:\update-from-esxi5.0-5.0_update01.zip

Add-EsxSoftwareDepot http://ip-of-vcenter/vSphere-HA-Depot

New-EsxImageProfile -Name Stateless -CloneProfile ESXi-5.0.0-20120302001-standard

Add-EsxSoftwarePackage -ImageProfile Stateless -SoftwarePackage VMware-FDM

With the commands above you have created an EsxImageProfile and added the VMware-FDM package to it. Simple but very important. These steps are just an excerpt of creating an EsxImageProfile and are not all the steps you need to follow to create a complete EsxImageProfile for use by Auto Deploy.
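For context, once the image profile is complete you would typically attach it to hosts with a deploy rule and activate that rule. A hedged sketch, where the rule name and IP range are placeholders for your own environment:

    New-DeployRule -Name StatelessRule -Item Stateless -Pattern "ipv4=192.168.1.10-192.168.1.50"

    Add-DeployRule -DeployRule StatelessRule

The first command creates a rule assigning the Stateless image profile to hosts booting from that address range; the second adds it to the active rule set so Auto Deploy starts using it.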

Saving your EsxImageProfile

After you have spent a fair amount of time creating an EsxImageProfile, it is good practice to save it, because the Image Profile is held only in your PowerCLI session and is lost when you exit the command line. There are two formats you can export your EsxImageProfile to: an ISO, or an offline bundle. You will also need to specify where to save the exported Image Profile, so you can use it for another deploy rule or distribute it to another location to ensure consistency of all your images. Continuing from the steps above where we added the VMware-FDM package to the image:

Export-EsxImageProfile -ImageProfile Stateless -ExportToBundle -FilePath c:\StatelessBundle.zip

Export-EsxImageProfile -ImageProfile Stateless -ExportToIso -FilePath c:\Stateless.iso

With these two commands, you have exported your custom-built Image Profile to a .zip bundle and an .iso file.

Setting the Execution Policy

This is another basic piece, especially if you use PowerCLI daily, but if you are new to PowerCLI it may not be so obvious. If you don’t set your execution policy, none of the Auto Deploy cmdlets will be available for you to run in PowerCLI; when you open PowerCLI, you will most likely see a warning / failure message. Setting the execution policy is very simple. Open PowerCLI and type:

Set-ExecutionPolicy RemoteSigned

When it asks if you want to change the execution policy, type Y to confirm, as shown below. The Auto Deploy cmdlets will now be available to use.

Converged Networking

An interesting point to note, a warning perhaps, is the use of Auto Deploy in a converged networking configuration, as there is a limitation on which network card types are supported. VMware have published a KB article about this, although it doesn't appear to be widely known and publicised. VMware's KB article states, "You cannot provision EFI hosts with Auto Deploy unless you switch the EFI system to BIOS compatibility mode." The statement refers to the requirement for 'legacy' NICs: traditional 1GbE NICs that are controlled by the server BIOS rather than the dedicated converged networking hardware. How this is enabled varies greatly between manufacturers; it can be a BIOS configuration, or additional hardware may be required.

Top Resources

There are lots of great resources, both official from VMware and from virtualisation community blog sites; the ones that helped me the most are listed below. I hope the above tips, tricks and resources prove helpful if your boss asks you to evaluate Auto Deploy for your environment, or if you're just learning to keep your knowledge up to date. Gregg

VMworld 2012: My Viewpoint

I was very fortunate to attend VMworld yet again, and with this being my third attendance in a row; it allowed me to see the differences in both my personal interest and growing expertise. So I thought I would give my perspective on this year’s VMworld and the areas that caught my interest.

Day 1 (Partner Day)

Monday of VMworld is dedicated to VMware Partners, with session content focused directly on partner relationships and the partner ecosystem, and as Xtravirt are a VMware Solutions Partner I was able to attend. This proved highly beneficial, as a number of the sessions and discussions showed how dedicated VMware are to their partners and how much they are willing to help even SMB partners grow their sales and market share. If you work for a partner and are thinking of coming to VMworld US or EU next year, I would highly recommend signing up for the Partner Day and attending the partner tracks, as a number of great announcements and tips were shared.

Day 2

Day 2 was the first full day of VMworld for everyone, no matter whether you were a blogger, a partner or an attendee. The day started early with the VMworld keynote; I was fortunate enough to get a great spot in the bloggers' area and found the keynote and its announcements interesting. I won't go into too much detail around the keynote as you can watch the recordings on the VMworldTV YouTube channel here. One of the big announcements of the day that caught my interest was the release of the new VMware Cloud Management suite.

After the keynote I attended a session on vSphere 5 design and then hit the Solutions Exchange, where I was able to get a number of my questions answered around a couple of products I had my eye on. The Solutions Exchange was unfortunately in an adjacent building, which meant you had to factor a 10-minute walk into your planning if you were moving between buildings; in my opinion it wasn't situated as well as in previous years. Mind you, the walking helped to burn off the calories on offer from the abundance of cakes.

Recently I have been deploying vCenter Operations Manager and vCenter Configuration Manager for customers, and there have been a number of questions around the metrics and thresholds of vCenter Operations Manager and how to customise them, not forgetting the reporting thresholds and how to prevent an information overload from being bombarded by alerts. vCenter Operations Manager 5.6 addresses this with the addition of intelligent alerts and thresholds based on group management policies. I am really looking forward to using the updated product and was able to complete a Hands-On Lab using vCenter Operations Manager 5.6 with the new 'root cause' description feature. The screenshot below shows this.
Add to that the ability to click on the link, find out what the error means and gain guidance from VMware on how to fix it; this is another step in ensuring you can manage your vSphere Private Cloud and Public Cloud all in one place. In the evening I attended the combined vExpert/VCDX/Office of the CTO party, to which fellow Xtravirt colleague Darren Woollard and I were invited as we are both VMware vExperts. The event was amazing to say the least, as we were able to chat to like-minded people from the vExpert group, as well as to VCDXs and Office of the CTO employees. Darren and I made sure we introduced ourselves to as many people as possible and spoke with Kit Colbert, Damian Karlson, Josh Atwell, Andrea Mauro and even VMware's CTO Steve Herrod. We snuck in a sneaky invite for him to attend one of the London VMware User Group meetings; he said he would try to make one next May. Fingers crossed. L-R: John Troyer (VMware Communities); Erik Ullanderson (Director Global Certifications); Steve Herrod (VMware CTO)

Day 3

Day 3 started with the second keynote, focused on End User Computing (EUC) and setting out to prove that End User Computing doesn't equal VDI. The sessions were really great, and there was yet another demo, this time of Mirage, from Vittorio Viarengo, VP of Marketing for EUC at VMware. This keynote really showed how even mobile phones and tablets are being targeted by VMware as tools for the enterprise. He detailed how users can utilise these in their daily jobs and how easy it will be to do your work from these devices whilst corporate data remains safe and secure. I would recommend watching the keynote here as it gives some great insight into VMware's vision for the future of EUC. The rest of my morning was spent in the Hands-on Labs doing the vCenter Configuration Manager lab and the vCloud Automation Center 5.1 lab, which were both really good. The remainder of my day was booked up: I had been asked to participate in a customer reference video around my deployment of VMware vCenter Configuration Manager at a client of Xtravirt's, and after that session I contributed to another interview with the VMware UK social media crew about my experience of VMworld 2012 and the London VMUG. The VMware UK social media video is below, which also includes fellow LonVMUG attendee Barry Coombs http://www.youtube.com/watch In the evening it was the VMworld party, which was fairly good, although yet again a headline band wasn't booked, unlike in the US; instead the crowd was subjected to a couple of cover bands and some Spanish dancing meshed with street dance.

Day 4

Day 4 was my last day at VMworld and, as I was flying out in the afternoon, quite short. I attended session INF-VSP1475, VMware vSphere 5 Design Discussions, which was really informative and fed my interest in design, both for my day job and for my planned attempt at the VCDX. The remainder of the day was spent watching the highly interesting TechTalk vBrownbags in the VMworld hang space / bloggers' area and chatting about all the projects and technologies everyone is currently undertaking. For me this part of VMworld is less understood by some people in management (fortunately Xtravirt's management is not among them), but it's so valuable: being part of the community and knowing who is doing what can help you in future deployments, especially as you may need to call on the assistance of your peers. This aspect is really worth its weight in gold. The day was now over and this year's VMworld was finished for me, so I made my way to the airport. I really enjoyed this VMworld and I am very grateful I was able to attend again. The direction VMware is heading with all their tools and new solutions makes keeping up to date a tall order whilst doing your day job. However, a week like VMworld gives you the opportunity to refresh that knowledge and, as previously stated, allows you to make connections in the community and within VMware that will help you deliver bigger and better solutions for your company and customers. Gregg

VMworld 2012: Making Connections

In this article I want to overview my perception of a VMworld conference, what the marketing engine offers and what the few days in Barcelona meant for me.

What's all the fuss about?

The VMworld conference is the de facto 'must-go-to' event in the world of virtualisation. Not only are attendees treated to product launches and new technologies appearing on the horizon (no pun intended), but the conference is buzzing with like-minded techies rubbing shoulders with each other. A dedicated area for vendors, the Solutions Exchange, is provided, where all manner of free gifts, software demonstrations and business contacts are there to be extracted. As a veteran attendee I've built up many contacts over time, learned how to survive the Solutions Exchange and the evening gatherings and, surprisingly, still manage to learn about new technologies. Attending this conference shouldn't be underestimated though; the days are long and tiring but phenomenally rewarding.


While delegates are expected to pay a fee to gain access to the conference, it's certainly not enough to cover the event. In the US, the Moscone Center in San Francisco or The Venetian hotel in Las Vegas has to cater for upwards of 17,000 eager attendees; in Europe, over 8,000 people now sign up. So VMware partners with the global giants to fund aspects of the conference, and in return they are given air time at General Sessions, plus many opportunities to brandish their logos on every serviette, sign and local piece of transport. For the more subtle vendors, or those with a smaller marketing budget, the Solutions Exchange is where they make their mark. You'd expect to find most, if not all, of the market players that plug into virtualisation in one way or another. Booths are manned by keen account managers, technical SMEs and sales teams, all vying for your attention. Vendors will typically squeeze as much as they can into their allotted space; it's not uncommon to see fully populated SANs or a blade chassis, as well as software demonstrations in a lab environment. Remarkably, I've even seen Microsoft attending and touting their hypervisor; their hook-in was the chance to win Xboxes, which gets most techies excited.


The famous and the best within the virtualisation industry are usually in attendance, maybe not at both conferences but at least at one of them. Whether at a technical, account management or architectural level, you'll find many of these people in and around the conference. The popularity of social networking tools and blogging now promotes their profiles, which in turn promotes more reasons to attend. It's not just technical content that makes VMworld a success; it's the people networking too. This is everybody's opportunity to meet, greet and engage. Many attendees already know their networking peers electronically, and it's the conference that helps to cement the associations.


There are numerous routes to learn more about VMware's product suite; it's not just about sitting in a darkened hall listening to a presenter hiding behind a lectern. Of course, throughout the week you can knock yourself out attending session after session, scribbling notes and overloading on information (if that's your thing). Alternatively, take time out to attend the Hands-on Labs, a dedicated area where you'll find hundreds of VDI sessions offering the opportunity to explore literally every aspect of VMware's suite of products. This avenue to access the latest technology is incredibly popular; after all, it's not every day you can deploy an entire Private Cloud, break it, and then fix it. After this, why not walk around the Solutions Exchange, find the VMware area and pick the brains of the product suite experts? Demonstrations are always on hand for all the technologies. If you've exhausted that route, schedule a session at 'Meet the Experts', where VMware subject matter experts are available to discuss existing and new technologies on a one-to-one or one-to-few basis. Finally, you mustn't forget the vendors: hunt down the ones you're already aligned with, or potentially planning to be, and quiz them; it makes their day go quicker if they're occupied.

The evening events

There's no point denying that attending a conference of this size comes with some perks too. If your people networking is working well, you'll soon find yourself receiving invites from vendors and solutions partners to post-conference drinks and nibbles. Some vendors go mad and hire an entire nightclub or a local brewery, whereas others just provide champagne and canapés. The VMware User Group (VMUG) typically arranges a meet-up to bring the user group community together. There's a vExpert meet-up, VMware Customer and Partner recognition dinners, and so on; you get the idea. During the three-day conference VMware provides a party for all attendees. In recent years the US VMworld has offered party headliners the likes of the Foo Fighters and Bon Jovi, whereas Europe tends to have just a themed event. Either way, there's free food and drink on offer.

This year for me?

Well, it was a whirlwind of people networking and partner conversations. I notched up more contacts and linked Twitter IDs to physical people, which in turn leads to more connections. I attended an NDA session for a partner, Nutanix, and through this added more people to my contacts, both VMware and community based. You can see how the people networking perpetuates so quickly. I had the added incentive this year of blogging too, on my own website. Through my extracurricular activities outside of my daily consulting role I've earned the VMware vExpert accolade in 2011 and 2012. As part of this community there's an opportunity to apply for a blogger's pass for either of the VMworld conferences; this year I applied and was accepted for the event in Barcelona. The challenge I then set myself was to bring something different to my blog postings, as there are already many bloggers in attendance dissecting technical sessions and business direction. So, I went for something that traced the day of a delegate; quite simple and easy to do with a pocket camera. I snapped a shot wherever I happened to be, starting from the Sunday morning when I set off to the airport through to the end of the conference on Thursday. Each day I extracted the images, plugged them into iMovie, set a little music for background noise and, hey presto, a one- to two-minute video appeared on my site every day. You can see the videos here:
Sunday http://vexpert.me/q
Monday http://vexpert.me/s
Tuesday http://vexpert.me/t
Wednesday http://vexpert.me/y
Thursday http://vexpert.me/2A
While I attended a couple of End User Computing technical sessions, I found learning from others far more beneficial; chatting with other delegates is the only time you'll hear the real-world stories of what has (or hasn't) worked. The most memorable moments of the week, though, were meeting and chatting with Steve Herrod (CTO & Senior Vice President of R&D, VMware) and, very briefly, Pat Gelsinger (CEO, VMware). That's name dropping for you. After reading all this, I certainly hope to see you at one of the future VMworld conferences. Darren

XenDesktop, XenApp or View - which one?

Having worked with Citrix’s long standing XenApp technology in its many forms for 18 years, and now VMware View for the last three, I keep coming up against the question “which one is right for my organisation?”.  It used to … [More]

Having worked with Citrix’s long standing XenApp technology in its many forms for 18 years, and now VMware View for the last three, I keep coming up against the question “which one is right for my organisation?”.  It used to be a much simpler decision with Citrix the clear leader in the market on both features and performance. But do IT decision makers have enough independent information to make the technology decision today? Before I begin, I’d just like to caveat that there are other good solutions in the marketplace, and each has their own use cases, but for this post I’m keeping my focus limited to the top two established market leaders.

What are the differences?

Firstly, let's look at the primary differences and bring XenDesktop into the picture. It is much easier to compare Citrix XenDesktop and VMware View as they both work in a similar way: they deliver desktop operating systems and applications hosted on a hypervisor from the data centre. Citrix XenApp, however, shares a server operating system across many users, providing potentially greater user density. This comes partly from reducing the number of operating system instances required to support the connecting users. The comparison becomes more challenging when developing a technology-influenced desktop strategy because XenApp is bundled with XenDesktop. Most XenDesktop deployments I have worked on have a mixture of XenDesktop and XenApp. The general rule of thumb applied when developing the business case is the 80/20 rule: 80% of users delivered by XenApp and 20% with a full Windows 7 virtual desktop delivered by XenDesktop. However, use case analysis that includes applications may further influence the actual design and create a different split.

A simple decision?

Given the potential for increased user density per physical host, and other costs such as infrastructure and licensing (with Microsoft licensing being a significant influence), it initially appears the decision should be fairly straightforward. However, it's not as simple as it seems. To understand why, the decision needs to be taken in the context of what the business is trying to achieve, and some of the realities these solutions and end-to-end architectures bring with them. Assuming that budget is available based on a realistic business case, the key influences on the decision can be summarised in three points:
  1. What the business wants to achieve
  2. What the use cases are
  3. Scope of applications
VMware's View solution has matured a lot over the last few years, and the difference in user experience and device support between it and Citrix has narrowed greatly. Both vendors' solutions now provide a realistic way to support most use cases and broadly used client devices, even in businesses considering BYOD. The difference is in the effort to implement and manage each solution, with applications being the biggest challenge. Application virtualisation has helped to reduce application deployment challenges, but there are still many hurdles to overcome. One difference between XenApp and dedicated virtual desktops is the risk that a single application can negatively affect all logged-on users. This creates the need to silo applications onto dedicated hardware, effectively increasing the compute resources needed to support a set number of users. Application silos occur for a number of reasons, including:
  • High resource utilisation
  • Memory leaking
  • Application compatibility
  • Multiple application versions
So while you can achieve greater user density with XenApp, application silos are often required, which reduces the average density achieved across the entire estate. There is also the need for increased testing: applications both purchased and developed in-house may need additional testing and optimisation to work effectively in the XenApp environment.

Reduce complexity

By adopting a virtual desktop strategy, many of these complexities are reduced. Applications need to be proven on the required operating system, such as Windows 7, but for many organisations this activity is happening, if not completed already. Each virtual desktop has its own operating system and allocated resources, greatly reducing the impact on other users when there is an issue. It is also easier to cater for the requirements of web-based applications, with more control over individuals' web-based settings and browsers. This reduced complexity when dealing with applications comes at a price, including greater infrastructure requirements, but with advances in hardware and software, virtual desktop infrastructures are becoming simpler and more cost effective. The more applications and complexity you have in an environment, the more compelling virtual desktops become over XenApp hosted desktops; where the tipping point occurs will vary from one organisation to another.

There is then the choice between Citrix XenDesktop and VMware's View. For many organisations the decision is influenced by experience with a given vendor's technology and internal skills. However, the decision should not be made on this alone; rather, it should be weighed up in a wider decision-making process. There will also be instances where the decision comes down to a single essential capability or feature of a particular vendor's solution. All these points demonstrate the need to fully understand the business requirements, use cases and application landscape. With applications moving to more web-based architectures, and HTML providing more functionality as a user interface, is investing in complex desktop environments the right thing to do? For many there is a compelling reason to do so, but with the pace of change it is worth striving to keep complexity and cost to a minimum. Looking at vendors' roadmaps and aligning these to the business vision will help future-proof any investment made now.
Where there may not be a single solution that meets all requirements, many organisations will find that the ecosystem of tools in the virtualisation market place will help meet short term business goals while providing a credible step forward towards truly flexible, device independent application access.

In summary

  • Prioritise getting your requirements and information gathering completed before fully reviewing technology
  • Start with a high level vision and work methodically towards a detailed requirements and business case from which all decisions can be validated.
  • Look outwards from business objectives and current resources, processes and technical capabilities to help validate technology selection at all levels.

Trust in Virtualizing Bus. Critical Apps: Part I

Many organisations have added a virtualization capability to fulfil infrastructure needs, which in turn, has led to many running critical backend system workloads. The level of dependency and trust in virtualization is forever growing, particularly as multiple vendors provide wider … [More]

Many organisations have added a virtualization capability to fulfil infrastructure needs, which in turn has led to many running critical backend system workloads. The level of dependency and trust in virtualization is ever growing, particularly as multiple vendors provide wider-reaching interoperability as well as compatibility across competing vendors. The level of trust in delivering business critical applications using virtualization varies based on a number of factors:
  • Size of organisation
  • Security
  • Compliance
  • Performance
  • Needs
  • Country laws etc.
Product developments

Vendors have been focusing on developing new products, and feature improvements to existing products, targeting the needs of delivering business critical applications using virtualization. The primary focus is to provide a seamless, configurable, and centralised capability to deliver business critical applications. Integrated features such as high availability protecting networking, storage, and compute resources are a few technology-focused examples. Tooling is also playing a key role, with monitoring and management that, given the right business requirements as input, can provide meaningful KPI data. Reporting on-target availability and performance, for example, will strengthen trust and prove the capability of virtualization to deliver business critical applications.

The use case approach

As with any technology project, approaching each scenario with a use case approach will lead to the right technology decisions and design. If we use the example of a financial trading company, it will produce many use cases, but it's also likely that there will be common use cases throughout, e.g.:
  • Low latency
  • Highly available
  • Auditable
  • Any device, anywhere
Each key virtualization vendor has business critical application focused virtualization products and a supporting roadmap that fit these use cases. In some cases, organisations may need to employ a mix to arrive at the right solution for them.

Five steps

So, one of the early activities is to build those use cases. The level of depth will vary, but here are five suggested steps to follow:
  1. Catalogue your app landscape; achieved using the following:
    • Discovery software tooling, via agents deployed to workstations (thick, thin and/or virtual)
    • Interviewing app owners
    • Existing dynamic & static CMDB data
  2. Group users into clusters, eg:
    • Office & Home Based Traders
    • Operating up to 24/7
    • International clients
  3. Measure application usage, again using software tooling
  4. Map users and user groups to apps, grading them as you go
  5. Build standard and edge use cases (edge cases being one-offs or unusual scenarios)
With the app landscape catalogued, usage statistics collected, users defined, and use cases built, you are now ready to move forward with a technology selection phase. I'll cover the next phase in an upcoming Part II.

Updating Persistent Desktops in Citrix MCS

This post covers a recent experience I had when updating persistent desktops in Citrix Machine Creation Services. If you’ve got a deployment of dedicated virtual desktops in Citrix XenDesktop, you may have a requirement to update the master image.  This may … [More]

This post covers a recent experience I had when updating persistent desktops with Citrix Machine Creation Services. If you've got a deployment of dedicated virtual desktops in Citrix XenDesktop, you may have a requirement to update the master image. This may be when the number or type of changes made to the master image is large (this could be patches or applications), meaning that newly provisioned machines take a long time to apply updates when first used. Updating pooled desktops is easy and can be done through the GUI, but dedicated desktops need to be updated via PowerShell, and in all cases it can have an impact on your storage. It's worth noting that once you change the master image for a dedicated Desktop Group, the existing desktops will not be affected, as the updated master image only applies to new desktops. This is fine, though, as the existing desktops have probably had the updates applied already through enterprise management tools, and this, I believe, is the best way to manage dedicated desktops once they've been deployed. If you need to update your master image, you will first need to load the Citrix PowerShell snap-in; you can run this from PowerShell on the Desktop Delivery Controller:

Add-PSSnapin Citrix*

A quick note on PowerShell: while some people like to craft PowerShell "one liners", for important tasks such as this I prefer to write all my commands out in a script and run each line in turn, judging the output and the value of variables before progressing to the next task. You can use any script editor; the built-in PowerShell Integrated Scripting Environment (ISE) within Windows is a useful free tool. Just press F8 to run the selected line of code.

Once you've loaded the Citrix snap-in you need to get the provisioning scheme details. You can just run the "provscheme" command (which PowerShell resolves to Get-ProvScheme), but as this returns a lot of information, it's best to capture it all in a variable and then loop through and display the relevant pieces of information:

#Get Provisioning Scheme

$ProvisioningScheme = provscheme

#Loop Each Desktop Group and Get Master Image

ForEach ($Group in $ProvisioningScheme)
{
    Write-Host "######"
    $Group.ProvisioningSchemeName # This is the Desktop Group
    $Group.MasterImageVM # This is the current snapshot
}
So now we have a list of Desktop Groups and their associated master images, each separated by the hash marks. The first value is the Desktop Group, the second value is the master image and shows the current snapshot:


VDI Test Group 1_vCenterSharedStorage

XDHyp:\HostingUnits\vCenterSharedStorage\Win7XDMaster.vm\Initial Build.snapshot\Production Release.snapshot


Now that the master image in use is known, you can use that information to run the following command, which will return the new/current snapshot of your image (note that you only need to specify the VM name, not the full snapshot path), e.g.:

Get-ChildItem -Recurse -Path 'XDHyp:\HostingUnits\vCenterSharedStorage\Win7XDMaster.vm'

In the output the new snapshot is the value in the Full Path field, e.g:

FullPath : XDHyp:\HostingUnits\vCenterSharedStorage\Win7XDMaster.vm\Initial Build.snapshot\Production Release.snapshot\Master Image Update.snapshot

We now have the two key pieces of information that we require: the Desktop Group name and the new snapshot name. Let's store those in a couple of variables:

$ProvSchemeName = "VDI Test Group 1_vCenterSharedStorage"

$NewMasterImage = "XDHyp:\HostingUnits\vCenterSharedStorage\Win7XDMaster.vm\Initial Build.snapshot\Production Release.snapshot\Master Image Update.snapshot"

The final step is to run this last command; this will update the provisioning scheme for the group to use the new snapshot:

publish-ProvMasterVMImage -ProvisioningSchemeName $ProvSchemeName -MasterImageVM $NewMasterImage

Once the command is running, you can check the progress as normal in your hypervisor and watch the provisioning process. Finally, within the Citrix Desktop Studio Actions tab you'll see confirmation that the task was successful; you can also run the "provscheme" command again to check that the Desktop Group is using the new image. With the PowerShell console still open, now is the ideal time to set new parameters for the Desktop Group, such as the memory, CPUs or disk size, if required. For example, to alter the amount of memory new VMs are deployed with, run this command:

Set-ProvScheme -ProvisioningSchemeName $ProvSchemeName -VMMemoryMB 2048

You're now resting on your laurels after successfully updating your desktops, but watch out: before you leave for the day there are a couple of gotchas. In XenDesktop 5.5, you will soon start to notice that when users log off, their machines shut down and don't restart. This issue is fixed in XenDesktop 5.6, but it is caused by XenDesktop marking the existing machines as out of date and pending an image update; in fact, you can see this out-of-date message in Desktop Director. Remember that updating the master image does not affect existing machines, so I'm assuming this code is sitting there for a future purpose or has been reused from updating pooled desktops; the fix for those affected is in CTX132211. The second gotcha is a matter of space: for each update you perform, a new master image must sit on each datastore/storage repository defined in the host connection. This can soon start to consume a large amount of space and can be cumbersome to manage; space is one of the main considerations when updating dedicated images and planning your storage requirements. You can mitigate this with de-duplication, but that's a blog for another day. Finally, it's worth noting that there is currently no function in Citrix XenDesktop similar to VMware View's "recompose" to update an existing dedicated desktop to use a new master image; if you want to do that, you need to delete the user's machine and issue them a new desktop provisioned from the new image. Remember that you'll potentially lose some user settings if you re-issue new desktops unless you are completely managing the user's persona and application delivery. However, issuing new desktops may be a viable action in certain circumstances, such as a broken desktop, or when the user's difference/delta disk has grown in size due to installed applications and those applications are now in the base image.
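For reference, the whole update sequence covered above can be sketched as one short script. This is only a recap of the commands already shown in this post, run on the Desktop Delivery Controller; the scheme name and snapshot path are the example values used earlier and must be substituted with your own:

```powershell
# Load the Citrix snap-in
Add-PSSnapin Citrix*

# 1. List provisioning schemes and their current master images
Get-ProvScheme | ForEach-Object {
    Write-Host "######"
    Write-Host $_.ProvisioningSchemeName  # Desktop Group
    Write-Host $_.MasterImageVM           # Current snapshot
}

# 2. Find the new snapshot under the master VM (example path)
Get-ChildItem -Recurse -Path 'XDHyp:\HostingUnits\vCenterSharedStorage\Win7XDMaster.vm'

# 3. Point the scheme at the new snapshot (example values)
$ProvSchemeName = "VDI Test Group 1_vCenterSharedStorage"
$NewMasterImage = "XDHyp:\HostingUnits\vCenterSharedStorage\Win7XDMaster.vm\Initial Build.snapshot\Production Release.snapshot\Master Image Update.snapshot"
Publish-ProvMasterVMImage -ProvisioningSchemeName $ProvSchemeName -MasterImageVM $NewMasterImage

# 4. Optionally adjust resources for newly provisioned VMs
Set-ProvScheme -ProvisioningSchemeName $ProvSchemeName -VMMemoryMB 2048
```

Run it line by line rather than as a whole, checking the output of each step before moving on, as suggested in the PowerShell note earlier.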

SRM v5 and EMC Recoverpoint

VMware SRM v5 with an EMC RecoverPoint SRA v2 – Array Pair missing In a recent deployment of VMware’s SRM v5 I was unable to successfully create a Protection Group. The first step in the Protection Group wizard presented an … [More]

VMware SRM v5 with an EMC RecoverPoint SRA v2 – Array Pair missing

In a recent deployment of VMware’s SRM v5 I was unable to successfully create a Protection Group. The first step in the Protection Group wizard presented an empty Array Pair pane. The screenshot below shows this empty pane.

 Figure 1: Empty Array Pair

At this point you would expect the pane to be populated with the SRA previously configured for the selected site; from here you cannot continue. I naturally assumed the SRA Array Pair was Disabled, but upon reviewing the status of the SRAs they were indeed Enabled. Puzzled, I set about checking the status of the VMware SRM service on both dedicated SRM servers; these were both running. The MS Windows event logs revealed no errors from either the VMware SRM service or the EMC SRA. Using the bountiful resources available on the trusty internet, I couldn't locate an article, VMware or EMC forum post, or blog entry that referred to this exact problem. At this point I began to wonder if the problem wasn't at the VMware layer but at the underlying storage presentation, or even the communication to the EMC RecoverPoint appliance(s). The investigation now continued on the EMC RecoverPoint installation. Initially, checking through each of the categories within the console revealed many green ticks. All the references to status, replication, pairing and VMware vCenter connectivity were reporting correctly.

Figure 2: RecoverPoint appliance configuration

The Consistency Group Status showed the animation of data traffic, the storage access reported correctly as either Direct Access or No Access.

Figure 3: Consistency Group Status

It was actually from this screen that the answer presented itself, specifically in the Policy tab. By default a list of categories is shown, with all of them collapsed. Expand the category:

“Stretch Cluster / VMware SRM Support / Replication Enabler for Microsoft Exchange Server 2010”

The section for VMware Site Recovery Manager (SRM) is configured by default to:

“Group is in maintenance mode. It is managed by RecoverPoint. SRM can only monitor.”

This must be changed to reverse the ownership roles. Change it to:

“Group is managed by SRM, RecoverPoint can only monitor.”

The screenshot below shows this:

Figure 4: Consistency Group Policy

Once the policy was successfully applied the Protection Group wizard could now access the replication sets and display the Array Pairs. Here’s the proof.

Figure 5: Array Pair pane populated

The Array Pair is now presented and the wizard can now be completed.

vCenter Config. Manager 5.5 Experiences

Recently I deployed VMware vCenter Configuration Manager 5.5 and came across a number of hurdles and pain points along the way. Some of them were due to the configurations of the SQL servers in that they weren’t configured as requested, … [More]

Recently I deployed VMware vCenter Configuration Manager 5.5 and came across a number of hurdles and pain points along the way. Some of them were due to the configuration of the SQL servers, in that they weren't configured as requested, and others were down to the unusual way VCM is configured and how problematic it can be to reinstall if you make a mistake and need to start again. So, to save other administrators from burning valuable time figuring out how to fix the various hurdles I experienced, I thought I would do a posting about how I worked around each problem.

Uninstalling and Reinstalling VCM

Due to my VCM server having a problem with the SQL Server Reporting Services database, I had to stop the installation halfway through. The application attempted a rollback but returned a number of errors stating "INSTALL.LOG not found". I had to accept these errors and allow the rollback to complete, but when I tried to run the installation again I received the error below.

To remove this error and allow the installation to proceed, you need to go to the location where VCM is installed, then into the Uninstall folder, then Packages, and then each folder underneath, and run the uninstall agent for each piece. You will get the INSTALL.LOG not found error, but the workaround I found is to cut and paste the log file to the machine's desktop; when the uninstaller asks for the location of the log file, you point it there.

Make sure you remove every single package before running the installation again.

CM Agent won’t uninstall

I hit a very strange problem where, even though I ran the CMAgent uninstaller as shown in the screenshot above, when I went through the VCM Checker I would receive the error below, stating the CM Agent was already installed.

The relatively simple but effective way of getting this uninstalled is to mount the ISO for VCM and run the CMAgent installer.

Once installed, uninstall it from the Packages folder location as detailed earlier in this posting and it will uninstall completely and successfully, allowing the Checker to pass all its checks.

SSRS Insecure State

During the installation of VCM, specifically the SSRS portion, I pointed the installation at the SSRS database and instance. I received the error: “Insecure state detected while validating SSRS Instance MSSQLSERVER. The instance is not configured for HTTPS, please consult documentation before continuing”. The error isn't a showstopper and you can continue, but for me this wasn't an option as I wanted the SSRS instance secured correctly. After a fair bit of research and trying a few different options (this is where I had to reinstall VCM, as mentioned in my first hurdle above), I found the solution: add a certificate to the web server URL to create and allow SSL connectivity to the Reporting Server.
  • You will need to get an internally signed certificate from your internal CA or an externally signed one
  • You will then need to follow this article to add the certificate to the SSRS instance http://msdn.microsoft.com/en-us/library/ms345223.aspx (You can get into SQL Reporting Services Configuration Manager by running rsconfigtool.exe)
  • Go to Web Service URL and add the certificate you have added to your machine in the SSL Certificate drop down, click Apply, and ensure it does not give you any errors in the results panel at the bottom of the page

Now when you go through the installation you will not receive the error because HTTPS / SSL is enabled.

SQL Integrity Instance Error

I received the error below when VCM was running its VCM Checker utility; it kept failing on the SQL checks.

For this problem I spent ages trying to get it to work, and even completed a whole rebuild including the deletion and recreation of all the VCM databases, but to no avail. Only after stepping line by line through the installation document, specifically around the SQL components VCM requires, did I find the solution: the local language of the SQL server was incorrect, even though the languages of the SQL instances were all correct and the collation was correct.

The local language was corrected and the SQL instances were recreated to include the now-correct language in the collation, after which it passed all the checks.

Dashboard Reports Fail in VCM

After VCM was installed I thought all my hurdles were behind me, but unfortunately, after running a template collection and then trying to view the report, I received the error shown in the panel below: “You must use Internet Explorer with the Run as administrator option to view dashboard reports when working locally on the Collector.”

I followed the KB article (listed below) to fix the problem. Even though it's a documented fix, I thought I would show the steps to make it easier for someone trying to fix the problem, as my folders were under different paths from those listed in the KB article. http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2008584

1.  In the collector server, browse to http://localhost/reports.

2.  If you do not see the folder ECM Reports on the screen, run this command:


where <InstallPath> is the base path of your VCM installation. For me, the Files folder was in the L1033 folder within the WebConsole folder.

3.  Navigate to ECM Reports > ECM.

4.  Click the Security tab.

5.  Click New Role Assignment.

6.  For the group or user name, type:


where <ServerName> is the short name of your server

7.  Select the Content Manager role.

8.  Click OK.

9.  Restart the SQL Server Reporting Services service.

After completing the steps above, I was able to view my reports after running a collection against one of the VCM Templates.

No Agent Proxy Machine Found

The next problem I encountered was that every time I tried to configure the settings of my vCenter servers under Licensed Virtual Environments, I would receive the error below. My vCenter server was showing under Agent Proxies, the agent was the most current, and it wasn't showing any errors.

My problem came about because the servers had not been trusted and the managing agent status had not been enabled. I browsed to Administration > Certificates in the VCM administration portal and followed the steps detailed from page 27 of the VCM Administration Guide to set the managing agents as enabled and trusted. Even though there were a few errors along the way, and there is a fair amount of configuring to do to get vCenter Configuration Manager 5.5 working, when it is working it's an amazing tool and one I would highly recommend to anyone looking for PCI, ISO and vSphere 5 Hardening Guide compliance, to name a few.

Citrix ICA Piggyback

During a recent customer engagement deploying Citrix XenDesktop 5.6 to 4000 users across EMEA, we came across an issue for a group of users that had a requirement to access a Citrix Metaframe XP farm hosted on a W2K server … [More]

During a recent customer engagement deploying Citrix XenDesktop 5.6 to 4,000 users across EMEA, we came across an issue for a group of users that had a requirement to access a Citrix MetaFrame XP farm hosted on a W2K server, to access some legacy apps hosted by a third party. To meet this requirement we used something called ICA Piggyback. So what is ICA Piggyback? Essentially, it allows us to use a double-hop process to bounce from one farm to another, letting us make use of different ICA client versions. I published an ICA file from our XenApp 6.5 farm to the specific group of users; however, we had complaints that when the published desktop was maximised to a full window the session would disconnect. Also, if the user left the desktop as a window, the session would disconnect at random intervals, even when the desktop was in use. The desktops published from Citrix XenDesktop were running Citrix Receiver 3.3, so I started to investigate the issue by checking we had met currently supported levels. A quick look at the Receiver documentation showed the target destination was unsupported; not surprising really…

Figure 1: Citrix Receiver 3.3 System Requirements

My first approach was to contact the third party to see if it was possible to access the legacy applications on a supported platform, to no avail. Therefore, it was back to the drawing board. I started searching through the Citrix documentation looking for a client that supported both W2K and MetaFrame XP, eventually finding the XenApp Plugin for Hosted Apps version

Figure 2: Citrix XenApp Plugin System Requirements

After finding what was going to be the client for the job, I had to figure out how to get it to the users. After some head scratching, I figured I had two options: package the application, or deploy it from a XenApp server and use a piggyback method. We had a number of Citrix XenApp 6.5 servers available in a farm, yet I didn't want to install such an old client on these servers in case I lost any functionality going forward. We had a separate requirement to host some legacy IE6 applications, so I deployed a small Citrix XenApp 5.0 farm hosted on W2K3 R2. After a number of functional tests for performance and stability, this allowed for smooth connections from a hosted desktop running Citrix Receiver 3.1, passing through a XenApp 5 farm, into a MetaFrame XP farm. Whilst this is not a permanent solution, it does provide a handy workaround and functionality to a number of users whilst the final legacy apps are retired and replaced.

Recent posts

VMware Horizon - What am I buying?

These days, Horizon is much more than simply a means to deliver a desktop to a user. With the modern realisation that the user isn’t really after a desktop (they just want their applications), Horizon is now, more than ever, … [More]

Single Sign On: The VDI Challenge

As more and more applications become used by a business, the more authentication turns into a headache for both support and end users. Fortunately, there are mechanisms that, when implemented, allow for the exchange of security tokens.

Horizon View App Publishing & Custom Icons

The Problem… So, here’s the problem -  you’re presenting applications in Horizon View and instead of using a generic icon you want to use your own unique application icons.  For example, you’ve got an application that runs from a batch … [More]