This blog is less about the technical and more about the importance of discovery, a task that should be embarked upon at the start of any project, be it End User Compute (my focus), infrastructure or, for that matter, any other discipline.
There are a number of reasons why we jump straight into a project rather than doing discovery: the pressure to deliver, the excitement of getting things underway, or convincing ourselves that we already know enough to get started. Before diving in, however, it’s worth taking a moment to ask yourself whether you and the whole team know what’s required. If you don’t, then you need to start the project with discovery.
What, I hear you cry, am I talking about when I say “Discovery”?
The simple answer is defining what is required and where you’re starting from. Much of it involves a bit of detective work.
What do we want?
In order to deliver a project, the first thing we need to do is define a set of clear requirements that provide measurable objectives. These usually fall into the following categories:
- Functional requirements – These describe what function, service or task is required. They can usually be broken down into business needs, such as a reduction in operational costs, or technical needs, such as “we need Application X”.
- Non-functional requirements – These define how something should be done or perform. We might have a functional requirement to deploy Application X, but a non-functional requirement might be that it must be deployed to 100 users in a day.
Requirements are commonly broken down into business and technical categories. A business driver might be to reduce operational costs, while a technical one might be to support an existing directory services infrastructure. While defining these requirements, applying a level of prioritisation is also important. A common approach is MoSCoW: Must have, Should have, Could have and Won’t have.
Let me be clear – the importance of a clear set of deliverable objectives cannot be overstated. Vague, infeasible requirements that are not pinned down at the beginning have been the cause of many a failed project. While a change-control process can provide mitigation, such a process is often an Elastoplast over a poorly defined set of requirements.
Ultimately, the requirements describe where we wish to be at the end of a project, but as with any journey there’s likely to be the odd distraction along the way, and we need to know where we’re starting from.
Where are we now?
Have you ever tried following a map to a destination without knowing where you are to begin with? It’s the same with any project. Having defined our destination in the form of requirements, we now need to know where the starting point is in terms of the current state.
For a technology solution in a greenfield context, this is very simple, as there is little existing estate to capture (though rarely nothing at all). For a brownfield solution where you’re upgrading or replacing something, however, there’s a whole train full of baggage.
We need to capture a picture of the current estate. If we use an end user compute project as an example, we will typically need to capture:
- The number of users that require the function defined by the project, including how many use it concurrently. For example, a hospital may have 1000 staff with access to a VDI solution, but as they work 24/7, there may only be a concurrency of 600 users. Designing for “1000 users” would therefore lead to an expensive, over-engineered solution.
- Applications – A list of applications in use is a starting point, but more information is needed – typically the requirements of each application, who uses it, and how it reaches the device (installation method, deployment tools).
- Endpoint devices – What hardware is in use? This is no longer limited to laptops and desktops, as tablets, thin clients and mobile phones are increasingly common. Capture the specifications too, including the client operating system.
- Infrastructure – What does the surrounding estate look like? Items such as network services (including DHCP, DNS and VPN solutions) and file and print services are important factors.
- Access and authentication – What directory services are present, and are there authentication solutions that will need to be leveraged (such as RSA SecurID or a RADIUS server)?
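The concurrency point above can be made concrete with a small sizing sketch. This is a hypothetical illustration, not a vendor sizing method: the per-host density and HA spare figures are assumptions invented for the example, and only the 1000-staff/600-concurrent numbers come from the hospital scenario.

```python
import math

# Hypothetical VDI sizing sketch. All figures are illustrative assumptions;
# real sizing depends on measured workload profiles, not headcount alone.

def hosts_required(concurrent_users: int, vms_per_host: int, ha_spare: int = 1) -> int:
    """Hosts needed to serve the concurrent workload, plus spare capacity for HA."""
    return math.ceil(concurrent_users / vms_per_host) + ha_spare

TOTAL_STAFF = 1000       # everyone with access to the VDI solution
PEAK_CONCURRENCY = 600   # peak simultaneous users, e.g. from monitoring data
VMS_PER_HOST = 100       # assumed desktop density per host (workload-dependent)

# Sizing on headcount over-provisions; sizing on measured concurrency does not.
print(hosts_required(TOTAL_STAFF, VMS_PER_HOST))       # 11 hosts
print(hosts_required(PEAK_CONCURRENCY, VMS_PER_HOST))  # 7 hosts
```

Four fewer hosts from the same data is exactly the kind of saving that justifies spending time on discovery before committing to a design.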
It’s also worth noting that tooling is useful here. This might be as simple as exporting data from existing management and monitoring tools, or it might mean deploying tooling into the estate. For example, Lakeside SysTrack and Liquidware Stratusphere can both capture an inventory of an existing physical (or virtual) estate, including applications, as well as gather resource utilisation metrics that aid in sizing a solution. Be careful though – tooling is not the be-all and end-all. Remember that you’re measuring an estate that may carry legacy baggage: old operating systems and applications running on old hardware that might not be 100% representative of the new world.
So, armed with the starting point and destination, we can now take this analogy a step further by creating our ‘map’ – the design for the given solution – and with that, we’re off racing to the finish line.
Getting the discovery phase right leads to better solutions: it focuses the project on objectives and not just deliverables, provides context, and highlights user needs. Being fully abreast of the requirements and of what already exists means the project is less likely to hit issues or need to be adapted as a forgotten requirement reveals itself. As they say, “forewarned is forearmed”.
Xtravirt have vast experience in the planning, design and deployment of many and varied solutions across a wide range of organisations and industries. If you’re embarking on an IT project or need help with a project already in-flight, contact us to see how we can help.