Categories
Blog

Azure Migrate – Part 1

Diaxion’s history and heritage were born in the data centre. We have been involved in many data centre migrations, both large and small; however, we are now seeing more and more data centre migrations to public cloud. For many organisations, the adoption of and migration to public cloud offers the opportunity to transform, yet there are still perfectly acceptable situations where a direct server migration to an IaaS platform is the answer. For example, the migration of Windows Server 2008 R2 workloads to Azure offered customers extended support for the 2008 operating system, allowing further time to transform to newer operating systems.

Where a server migration to IaaS is required, the major public cloud providers offer great tools to assist with the migration. Diaxion recently worked with a customer where Azure Migrate was used to migrate many servers from an on-premises location to their new Azure tenancy. Azure Migrate uses Azure Site Recovery (ASR) technology for many of its features, which many customers are already using and familiar with. The ASR technology offers an easy path for the replication and migration of workloads, but what Azure Migrate adds is a business-friendly, analytical view of the expected Azure footprint for the servers, and it is the recommended tool of choice for migrating workloads to Azure.

This blog series will walk through the process of using Azure Migrate to Discover, Assess, Replicate and Migrate workloads from an on-premises location to Azure with some handy tips and tricks we’ve experienced.

Azure Migrate is compatible with on-premises VMware and Hyper-V virtualised environments, and even physical servers. For VMware and Hyper-V environments, after an Azure Migrate project is created in the Azure Portal, a lightweight virtual appliance is available for deployment into the on-premises environment. This appliance can be deployed with no impact on the existing environment, though the deployment should still be performed under ITIL change controls. The Azure Migrate project is created within a valid Azure subscription, and the deployed appliance is registered to Azure Migrate with unique keys specific to the project. The appliance can be configured to auto-discover the on-premises environment with no outage or impact. The hosts and clusters running the virtual workloads must be listed in the appliance, with valid credentials, so it can locate the servers within the environment. Once the initial Discovery is completed, the discovered servers are listed in the Azure Migrate project with a set of specific information.

With the on-premises environment now set as Discovered in Azure Migrate, the features of Azure Migrate such as Migration Groups and Server Assessments come into play and this is what sets Azure Migrate apart from ASR.

Migration Groups are a construct to group servers logically. These groups can be created manually, allowing servers that will be migrated together to be collected in one place. These are typically servers that share a common workload or service, such as the tiers of a multi-tiered application, or servers that share a similar business function. If the on-premises information is not detailed enough to build comprehensive groupings, an Azure Migrate feature called “Dependency Visualisation” can be used. Dependency Visualisation has its own deployment requirements and can run in an agent-based or agentless mode, depending on the on-premises environment. In agent-based mode, agents are installed on each VM that requires dependency mapping, with specific agents for Windows and Linux clients. Dependency mapping can also use data from System Center Operations Manager 2012 R2 or later; if that is already running in the environment you are working with, the MMA (Microsoft Monitoring Agent) does not need to be installed. The agents for dependency mapping can all be installed with no impact to the operating system.
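To make the grouping idea concrete, here is a minimal Python sketch (not part of Azure Migrate itself) that turns discovered dependencies into candidate migration groups by treating each observed connection as an edge in a graph and taking the connected components. The server names and dependency pairs are hypothetical.

```python
from collections import defaultdict

def build_migration_groups(dependencies):
    """Group servers into candidate migration groups: every server that is
    directly or indirectly connected to another ends up in the same group."""
    adjacency = defaultdict(set)
    servers = set()
    for source, target in dependencies:
        servers.update((source, target))
        adjacency[source].add(target)
        adjacency[target].add(source)

    groups, seen = [], set()
    for server in sorted(servers):
        if server in seen:
            continue
        # Walk the graph to collect every server reachable from this one.
        group, queue = set(), [server]
        while queue:
            current = queue.pop()
            if current in group:
                continue
            group.add(current)
            queue.extend(adjacency[current] - group)
        seen |= group
        groups.append(sorted(group))
    return groups

# Hypothetical dependency pairs discovered by dependency visualisation.
deps = [("web01", "app01"), ("app01", "db01"), ("rpt01", "db02")]
print(build_migration_groups(deps))
# → [['app01', 'db01', 'web01'], ['db02', 'rpt01']]
```

The first three servers form one multi-tier application and migrate together; the reporting pair forms a second, independent group.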

The migration group that is created must be linked to a specific discovery source (e.g. the Azure Migrate appliance), each group should have a unique name within the project, and each group will contain the servers that you intend to migrate together.

Server Assessments are another feature of Azure Migrate. A Server Assessment uses discovered data and migration groups to provide analytical data that helps inform the choices available for the migration of servers. A Server Assessment can use one of two types of sizing criteria: “Performance based” or “As on-premises”. “Performance based” sizing uses collected performance data for CPU and memory utilisation and disk data such as IOPS and throughput. “As on-premises” sizing takes the on-premises VM size and aligns it to the closest matching Azure VM size.
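As a rough illustration of the difference between the two sizing criteria, the following Python sketch right-sizes a VM against a tiny, illustrative size catalogue. The catalogue, the comfort factor and the selection logic are simplifying assumptions for illustration, not Azure Migrate's actual algorithm.

```python
# Illustrative catalogue only (ordered smallest to largest); real Azure
# VM series contain many more sizes: (name, vCPUs, memory in GiB).
VM_SIZES = [
    ("Standard_D2s_v3", 2, 8),
    ("Standard_D4s_v3", 4, 16),
    ("Standard_D8s_v3", 8, 32),
]

def right_size(cpu_cores_needed, memory_gb_needed):
    """Return the smallest catalogue size that satisfies the requirement."""
    for name, cores, memory_gb in VM_SIZES:
        if cores >= cpu_cores_needed and memory_gb >= memory_gb_needed:
            return name
    return None  # nothing in the catalogue is large enough

def performance_based(cores, memory_gb, peak_cpu_util, peak_mem_util, comfort=1.3):
    """'Performance based': size on observed peaks plus a comfort factor."""
    return right_size(cores * peak_cpu_util * comfort,
                      memory_gb * peak_mem_util * comfort)

def as_on_premises(cores, memory_gb):
    """'As on-premises': match the provisioned allocation directly."""
    return right_size(cores, memory_gb)

# An 8-core / 32 GB VM that only peaks at 20% CPU and 30% memory:
print(as_on_premises(8, 32))               # → Standard_D8s_v3
print(performance_based(8, 32, 0.2, 0.3))  # → Standard_D4s_v3
```

The point is the difference in outcome: a lightly loaded VM can land on a smaller, cheaper size under performance-based sizing than under a straight on-premises match.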

When creating a Server Assessment, there are several properties that can be populated depending on your own requirements; these properties influence the outcome of the assessment. The properties include:

  • The target location where you want to migrate the virtual machines to
  • Storage information, such as the type of disks (automatic, premium SSD or standard HDD)
  • Azure Reserved Instance usage
  • Sizing criteria to right-size the VMs
  • VM series that can be utilised
  • Cost factors such as hybrid benefit, EA licensing, Azure offers applicable to your subscription, the currency and any discounts that may apply

Depending on your individual use case, many of these properties will be the same for all server assessments across your set of machines to be migrated.

After the assessment has been created, the output from the assessment describes:

  • The Azure readiness of each assessed server, i.e. whether the VM is suitable for migration to Azure
  • A monthly cost estimation based on the compute (e.g. VM series and size) and storage (disk sizing and performance level)

The Azure readiness describes each VM as ready for migration, ready for migration with conditions, not ready for migration or unknown (if there are issues with data collection). These readiness states are explained in detail with remediation steps where applicable, most of which are easily achieved.

The cost estimation is very useful information that can assist with budgeting and forecasting future Azure spend. The compute and storage cost estimations are aggregated for all VMs in the assessment.

The assessment can be exported in CSV format to be kept as a point-in-time record and distributed to the teams involved in the Azure migration process, e.g. operational and finance teams.

And that’s it for Part 1 of this Azure Migrate blog series. The next blog will look at the replication of data using the Azure Migrate service. Hopefully this has been helpful; if you have any questions about the migration of workloads to Azure, please contact the team at Diaxion and we’ll be happy to assist.

Part 2 – Replication
Part 3 – Migration
Part 4 – Tips, Tricks and Troubleshooting

Categories
Blog

Changing your Operating Model

In previous operating model articles, I have talked about what an operating model is and about how to implement one. This time around I want to talk about ways of changing your operating model. Most of us already have an operating model, whether we like it or not, even if it is not explicitly defined. Changing that operating model can be complex and time consuming, as can deciding which operating model to move to, since there are so many to choose from. The maturity of your current operating model depends on your group’s maturity, the organisational maturity as a whole, and the awareness of operating models, business architecture and turning strategy into actionable plans and programs. Like most organisations, you probably have a highly mixed level of maturity across the organisation. Some areas will be advanced, or at least fairly well advanced; others will be at a basic level. There is no getting around this, and creating a uniform level across the organisation is not desirable or even possible. As you gain more maturity, it becomes easier to understand that you need multiple operating models across the organisation; this is not a one-size-fits-all approach. Some parts of the organisation will retain a more traditional view and have an operating model appropriate to them. Other parts of the organisation will be more experimental and require an operating model suited to that style of work. These ends of the spectrum are not the same and should not be treated as such. As we go through this article, I’ll point out what could change and potential ways of changing.

What operating model do you have now?

The starting point is to understand what operating model you have currently or, more likely, what operating models. Much of this depends on your current level of maturity. It may sound trite, but the operating model is a journey rather than a destination, as your operating model will change over time depending on the organisation's needs and strategy. I would expect that most organisations have a “plan, build, run” model; this is probably the most common model in use today, especially in IT. Other models that could be used are a siloed approach, or a very horizontal view based on technology towers. More advanced options include well-defined models such as IT4IT, the Service Operating Model Skills (SOMS) framework and the set of MIT Sloan models:
  • Diversification (low standardisation, low integration)
  • Coordination (low standardisation, high integration)
  • Replication (high standardisation, low integration)
  • Unification (high standardisation, high integration)
There are also industry-specific models such as eTOM (telecommunications), IAA (insurance), BIAN (banking) and IFW (banking, but used elsewhere as well). Please contact us if you want information on these models. My personal view is that some of these models are ageing and are not quite as suitable going forward with the impact of digital disruption. Make sure that you don’t confuse the business model (how the business delivers value) with the operating model (how the business runs itself).

Why do you need to change your operating model?

Most organisations know where they want to go, even if they cannot fully express it. Communicating strategy to the entire organisation (top to bottom) and getting all staff to understand it is, in my opinion, a key failing in Australian businesses. Large organisations typically have a strong strategy, architecture and business planning function, and have the resources and capabilities to define the strategy and then execute on it. Smaller organisations, those at the bottom of enterprise level and smaller, often struggle to turn their strategy into execution. If you want your current strategy to have an enterprise-wide impact, then you need a few things. These include, but are not limited to:
  • changing your operating model
  • having the investment capability to implement
  • having the execution capability.
Since we’re only talking about the operating model here, I’ll ignore the other two parts. Many organisations run multiple, often independent initiatives within business units to effect their strategy. The key issue is that these initiatives are often independent and not coordinated. Changing your operating model brings these independent efforts together, usually (nothing is perfect!). But you may still be wondering why the operating model changes. The key reason is business change. Business change happens all the time, as we all know, but the operating model changes when the business change is significant. External industry disruption, such as digitisation or new entrants into the market, often requires significant realignment of the business to either respond with competing capability or protect current market share and customer base. At the same time, the organisation needs to maintain its existing capability, and this is where a dichotomy often arises between the old way of doing things and the new. From an IT perspective, we can respond in several ways, some specific and some more generic that require deeper dives to make the appropriate changes for your organisation.

Different models required

As I stated before, you’re going to need different operating models to support everything from older to newer ways of working. Some parts of the organisation require stability, high governance and high structure; other parts require low governance, high change and high flexibility. There will be other areas that require a mixture of these aspects. Gartner is fond of its two-speed model; I’m not a big proponent of this, as I believe two models are not nuanced enough. I agree that it is simple to communicate, but it often does not allow for enough flexibility within the organisation. The diagram below shows a view of multiple operating model attributes. It is important to understand that this represents a continuum and not discrete steps. A different view is based on what type of organisation and level of maturity you may have. I will discuss assessment around maturity and operating models in a different discussion. I am using the metaphor of types of Japanese fighter, mainly because I like it and the feedback so far has been good; it makes what can at times be a dry subject more fun. The fighter represents, in broad brush, the type of organisation you have. Of course, different parts of the organisation will most likely show all three, sometimes two at a time. There may be more nuanced separations but, for the sake of simplicity, I am keeping it to three:
  • Street fighter – do then think, individual, shortcuts, lack of strategy, superhero pretence
  • Samurai – honour, discipline, loyalty, control, strategy
  • Ninja – agility, adaptability, precise tools for the right job, think outside the square, focus, human
This last diagram starts to describe how IT can leverage this model to change its operating model over time. While this has a flavour of the Operations / Service Delivery areas of IT, it does not have to apply only here. It can apply on a smaller scale (but probably not at the team level) or larger scale – IT wide.

Conclusion

I hope this article has proven informative, gives you some ideas, and promotes some conversations and discussion around your operating model and your capability to enact it. As always, we are here to help and guide you on the journey. Nothing will be perfect, and anyone who tells you differently, or that everything will be ‘finished’ when they leave, does not understand the journey. Feel free to contact us for a discussion.
Categories
Blog

Managing IT costs and providing financial visibility

The proliferation of IT systems, and of options that enable areas outside of IT to make their own purchasing decisions, has made it increasingly difficult for IT to provide visibility into costs and to manage or control them.

Historically there has been underutilisation in physical compute resources. Virtualisation promised to alleviate this concern, which led to a sprawl of virtual machines. Then cloud computing arrived, which allows anyone with a credit card to create their own IT resources.

This has created or increased challenges such as:

  • How to successfully monitor costs within heterogeneous cloud environments that may be managed by different groups
  • Using a unified approach for cost management between business, finance and IT
  • Managing IT resource sprawl and providing accurate cost information to consumers (e.g. show-back or chargeback)
  • Managing cloud costs and budgets across multiple cloud providers

    All major cloud providers provide cost management tools for their platform, which can help to manage costs and cloud budgets better and provide some recommendations as to where organisations can save money. However, in heterogeneous environments, they will only provide a partial view of the total environment.

    Several companies offer solutions that allow for a common view regardless of platform, addressing the typical challenges and supporting:

  • Optimise cloud resource consumption and rates aligned to financial principles and visible to all relevant groups – business, financial and IT
  • Guidance for cloud financial data based on business needs
  • Visibility into cloud account hierarchy
  • Ability to optimise resources through improved visibility across technology – and with some solutions expanding this to a full view of all costs
  • Support visibility into workflows and processes allowing automation

    While these solutions – e.g. Apptio, CloudCheckr, Cloudability – come at additional cost, they can provide a very detailed view of costs and resources, allowing you to identify unused or under-utilised resources and poorly allocated budgets, and in some cases giving a full end-to-end view covering purchase/run costs, support costs, project costs, etc.
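The core of any such common view is normalising each provider's billing records into one shape before comparing or totalling them. A minimal Python sketch of the idea follows; the field names are modelled on typical Azure cost export and AWS Cost and Usage Report columns, but treat them as assumptions and verify them against your own exports.

```python
def normalise(records):
    """Map provider-specific cost records onto one common shape so spend
    can be compared across clouds. Field names are illustrative."""
    field_maps = {
        "azure": {"cost": "costInBillingCurrency", "service": "meterCategory"},
        "aws": {"cost": "lineItem/UnblendedCost", "service": "product/ProductName"},
    }
    unified = []
    for provider, record in records:
        fields = field_maps[provider]
        unified.append({
            "provider": provider,
            "service": record[fields["service"]],
            "cost": float(record[fields["cost"]]),
        })
    return unified

def spend_by_provider(unified):
    """Total the normalised records per provider."""
    totals = {}
    for item in unified:
        totals[item["provider"]] = totals.get(item["provider"], 0.0) + item["cost"]
    return totals

records = [
    ("azure", {"costInBillingCurrency": "12.5", "meterCategory": "Virtual Machines"}),
    ("aws", {"lineItem/UnblendedCost": "7.5", "product/ProductName": "Amazon EC2"}),
]
print(spend_by_provider(normalise(records)))
# → {'azure': 12.5, 'aws': 7.5}
```

The commercial tools above do essentially this at scale, with tagging, account hierarchies and allocation rules layered on top.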

    Diaxion can advise on Cloud Finance Visibility and Optimisation to improve your visibility into IT spend and beyond, and to develop a strategy or roadmap for how to optimise, and how to measure and track results.

    Categories
    Blog

    Working remotely: what has – and had to – change?

    2020 so far has definitely not been a good year. From an IT perspective, it forced a lot of companies to quickly re-assess their way of working and to come up with options to enable staff to work remotely. While this may be working overall by now, which areas may still need to be considered or looked at?

    Diaxion as a boutique consulting company has been lucky in this regard:
    1. We have been fully cloud-based for some time
    2. We are used to working from various locations, be that from home, remotely or a client location
    3. We are used to collaboration while not necessarily being in the same office

    When enabling remote working, areas should not be looked at in isolation but from a comprehensive viewpoint. The approach to a remote workforce will obviously vary between companies, but the following areas will need to be assessed, decided on and managed:

    1. Company-provided equipment vs. BYOD (bring your own device) for both mobile and compute.
    The advantage of having a company-controlled and managed device needs to be weighed against the possible cost impact; people may prefer to select their own device vs. being supplied with a potentially inferior model

    2. Access to data, information, files and folders and their backup and restore options
    Should all data be centrally managed – on premise or cloud? Or should most data be held locally for each user? The answer may depend on the role of the user and the type of business. How often should data be backed up and what are the restore options? The latter should also be considered in light of malicious attacks like ransomware or malware.

    3. Bandwidth and infrastructure supporting remote working
    Is there sufficient network capacity and application capacity to handle all or the majority of staff working remotely? Do staff have sufficient network performance to work effectively?

    4. Video conferencing
    Diaxion recommends consolidating onto a single – or at most two – platforms for video conferencing. With people unable to come to the office, this will help keep teams connected and also allow for interaction that is not necessarily work-related

    5. Collaboration tools
    These include instant messaging, tools that allow working simultaneously (or nearly so) on documents, and similar areas

    6. Security for everything, which may include
    a. Multi-factor authentication
    b. Enforcement of company profiles and security standards
    c. Limiting access (e.g. based on a user’s profile)
    d. Encryption
    e. VPN
    f. Virus and malware protection

    7. Updated policies to provide a framework for remote work and to guide people

    One item that should not be forgotten or neglected, though, is the mental impact on people. Working from an office provides significantly more human interaction, be that as part of a coffee break, lunches or just the opportunity to exchange ideas or simply chat.

    A random sample indicates to us that people overall like to work from home and a majority expects to continue to do so at least a few days per week. Feedback has, however, also shown that only a minority would like to continue to work from home 5 days per week – and it must be ensured that this is handled in a way to keep up team morale, team performance and overall keep people happy.

    Diaxion can assist with the implementation of, or advise on, the seven areas mentioned above.

    Categories
    Blog

    Device management and Security in a COVID world

    There was little time at the start of the COVID-19 pandemic for businesses to prepare for what was going to happen and for a working-from-home scenario. When staff were forced to work from home, many businesses did not have the required infrastructure in place to accommodate device management and security at such scale. It is amazing, when you look back, how much effort was put in by businesses and their IT partners to make this happen in such a short period.

    For most, the biggest issue to be solved was allowing access to files and applications remotely. In the majority of cases, the solution was either to implement a VPN connection to allow access to applications and file servers in the data centre, to implement cloud-based Software as a Service (SaaS) solutions like Office 365 SharePoint and OneDrive, or a mixture of both on-premises and cloud-based solutions. While using VPN connectivity may seem like a good idea at first, allowing hundreds or even thousands of staff access at the same time is a huge challenge on its own. Some businesses were lucky enough to have already started implementing their cloud strategies, but many others have had to fast-track their migration to cloud-based storage or SaaS applications to give staff a working experience similar to being in the office.

    Device management

    Whilst implementing this infrastructure within a very short timeframe has been impressive, some of the required infrastructure changes had to be implemented later as there was simply not enough time. Projects that usually would take months to implement have been rushed to accommodate the new way of remote working resulting in some project components being prioritised over others.

    A good example: moving away from VPN solutions and allowing access to data through cloud-based solutions also meant a change in the way devices are managed. Cloud-based solutions allow staff to work on files and in applications in an “anywhere, anytime, any device” manner, where connecting through a virtual private network (VPN) is no longer required. Devices and their security must be managed differently, as the traditional ways no longer apply, and delaying implementation of these changes for too long can become an operational nightmare.

    Traditional support systems are configured with the assumption that the device is in the office at least once every few weeks. Connecting to the office through a VPN or being physically in the office for a few hours was enough to push security updates, user profile changes and anti-virus updates, and to report back on device health and security status. Now that staff have been working remotely over the last few months and VPN connections are used in fewer cases, their devices have become stale in these systems, as they were unable to report back the same way as if they were in the office. Pushing anti-virus, feature or security updates required the device to connect to the internal systems, and even the date and time on the devices were synchronised with internal systems, causing all sorts of calendar nightmares when there was a slight difference between them.

    All these problems are easy to solve when planned for, but this has not always been the case for everyone with so little time to prepare.

    Security

    Traditionally, the network within the office premises and in the data centres is considered a secure environment with a secure connection to files and applications. A VPN connection or a remote desktop solution has been the default for accessing files and applications when users are not physically in the office. With cloud, this authentication methodology required a few changes.

    User authentication in cloud-based solutions is generally performed over a secure connection directly over the internet. Most SaaS solutions allow for identity synchronisation with on-premises Active Directory to replicate credentials and when combined with Multi Factor Authentication (MFA), this makes for a very secure authentication solution for allowing user access to data and applications in the cloud.

    Security of data on devices requires a different approach. As staff can now log in from anywhere and from any device, data can be saved in many places, including on the local storage of personal devices. It is important for businesses to ensure sensitive company data is protected and properly managed. Device management solutions like VMware Workspace ONE or Microsoft Intune can be used to ensure security requirements like anti-virus, disk encryption and password enforcement are in place and consistently monitored while staff are working remotely. These solutions can also be used to manage the devices remotely, providing another layer of business security. This includes performing a remote wipe in case a device is lost or stolen, to ensure sensitive data is not compromised.

    The ability to block or allow access to company data with device compliance policies is another great feature. Compliance policies can be configured to allow access to company data only when devices meet all security requirements, ensuring maximum security between your company data and your staff working remotely.
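Conceptually, a compliance policy is an all-or-nothing check across the device's reported security posture. The Python sketch below illustrates the idea; the policy fields and device attributes are hypothetical and are not the actual schema of Intune or Workspace ONE.

```python
def evaluate_compliance(device, policy):
    """Return (compliant, failures): access is granted only when the
    device satisfies every requirement in the policy."""
    failures = []
    if policy.get("require_antivirus") and not device.get("antivirus_enabled"):
        failures.append("antivirus disabled")
    if policy.get("require_encryption") and not device.get("disk_encrypted"):
        failures.append("disk not encrypted")
    # Version tuples compare element by element, e.g. (10, 0) < (10, 1).
    if device.get("os_version", (0, 0)) < policy.get("min_os_version", (0, 0)):
        failures.append("operating system too old")
    return (not failures, failures)

policy = {"require_antivirus": True, "require_encryption": True,
          "min_os_version": (10, 0)}
laptop = {"antivirus_enabled": True, "disk_encrypted": False,
          "os_version": (10, 0)}
print(evaluate_compliance(laptop, policy))
# → (False, ['disk not encrypted'])
```

A real compliance engine reports the failure reasons back to the user and blocks access until they are remediated, which is exactly the behaviour described above.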

    Implementing a device management tool and enabling additional security policies to further tighten security can cause serious disruptions if not executed correctly. Planning is vital for a seamless migration, but as it also requires a different way of working for the end user, staff education and expectation management are equally important for a successful implementation.

    Diaxion has experienced and certified consultants that can help with your device security and management journey, so if this is something your organisation has questions about, please reach out to our dedicated team for a confidential chat.

    Categories
    Blog

    DevSecOps, IT Operations, InfraOps, DevOps – is there a difference?

    The last few years have seen a proliferation of terms, but what is the actual difference between them, where do they overlap – or is it all just buzzwords? So what does a DevOps, DevSecOps, InfraOps or IT Operations engineer actually do? And what happens if the role becomes that of an “Agile InfraOps engineer”?

    On the basis of job descriptions, the answer seems to be: read the requirements first, as there does not seem to be a lot of agreement between roles, even though they may carry the same title. In reality there is probably a significant overlap between all of these areas, and every company will do things differently depending on:

  • The overall approach: are you using an Agile approach and, if so, in what areas is it implemented?
    1. Is it IT-/company-pervasive or does it apply only to select areas?
    2. Is the focus more on technology, culture or mindset, or does it cover all of them?
    3. Which items does it include, i.e. from the above abbreviations, are Security, Business and Applications part of the agile approach?
  • Size of the company: a larger company will usually split IT delivery into a larger number of teams with their own specialisation or area(s) of responsibility.
  • Use of partners and their inclusion (or not) in the various areas of Operations and Development, i.e. how do you best integrate a Managed Service Provider or Consulting Partner?

    Regardless of the term used, the aim should be to:
    1. Deliver more often
    2. Reduce impact and risk
    3. Reduce delivery effort
    4. Increase security

    What ultimately matters is the outcome, not the terminology. While the terms below will create an initial view, the implementation will vary in each environment.

  • DevOps – combining Development and Operations with all staff having development skills and being equally responsible for Operations, resulting in developers quickly identifying and resolving issues as they are familiar with the applications.
  • InfraOps – Infrastructure Operations (cloud or on-premise; virtualised, containerised, serverless or physical) or Infrastructure Optimisation with a focus on automation and simplification.
  • DevSecOps – DevOps with Security integrated into the team, ensuring that Security is not an afterthought or becomes a blocker.

    All of these approaches depend on staff who are willing and interested to adapt and learn continuously, a good framework for automation, orchestration and instrumentation, and company support to develop new ways of working and new products.

    Categories
    Blog

    As the Financial Year 2020 comes to a close for many of us, we have all had to rip up our business strategies, budgets and forecasts and move from 12-month to 3- and 6-month planning cycles, something none of us would have predicted. Diaxion saw a significant portion of our work hibernate in March as clients moved to business continuity planning, work-from-home enablement or hibernating their own business units. We experienced a change in engagement and client intention about every two days during March and April. Whilst Diaxion had assisted some of our clients in the prior six months to redo their mobility and collaboration capabilities and refresh their Business Continuity Planning (BCP) and Disaster Recovery (DR), few were ready for the scale of the lockdown. We had to assist one client to increase their remote workforce capability 10-fold over a period of four days!

    So, what is next, most people ask me: will we return to the office? It is an interesting question and one that currently has many answers in our perception, but no one has a solid answer yet. We have seen mental strain from missing the social aspect of the office, and we have seen interesting and ingenious ways for colleagues to support each other during the lockdown. At Diaxion we had a staff member present on the company video conference dressed as a dinosaur! People have tried video conferencing all day, only to work out that it is draining. Many of our clients have noted in discussion that productivity has decreased with everyone remote. What will be the new norm post-lockdown? It will be interesting to find out.

    Has cloud been a saviour for many during lockdown? Yes, especially from the end user computing and mobility perspectives. However, has it been without pain, and will it stay? It is a thought-provoking question. We have had stories and discussions with clients around Financial Operations (FinOps), cloud financial governance and how they can control or reduce their cloud cost. Companies that rushed to the cloud to enable lockdown have found that costs have blown out significantly and, worse, security has been compromised, as they did not have time to properly architect and apply continuous assurance practices to the cloud. Further, companies have found that the cloud's promise to cut cost when revenue reduces is only partially true, and only for some workloads. For many companies, lockdown saw not a reduction in revenue but a stoppage! Yet they were unable to turn off all, or even a significant percentage, of their cloud spend, as they needed to maintain their back end and public presence, albeit in a reduced manner. Couple this with forecasted severely reduced IT and business budgets next financial year: will Opex remain the favourite of business or will Capex reign? The answer, of course, is yet unclear, but it is assured to be an answer of best fit per business unit and business service. What is known and certain is that continuous optimisation and assurance of spend and security will be required.

    What else will change post lockdown? Sovereignty and provenance of the supply chain and provider chain for services will have ever-increasing importance. This has been a theme across many of our engagements over the past year, spanning Federal, State, vendor and health clients, and it was brought to the forefront for many of our key clients immediately prior to and during the COVID situation.

    So, what has Diaxion been up to this past Financial Year?

  • 2 data centre strategies, each including consideration for sovereign and public cloud
  • Re-platform detail design for national critical infrastructure
  • 3 x Disaster Recovery enhancement programs
  • 1 x BCP enablement program
  • 1 x ways of working strategy
  • 2 x Multi-cloud enablements and assurance
  • Many Azure cloud establishments and migrations
  • AWS cloud establishment and migration
  • Google Cloud establishment
  • Cloud Operating models
  • DevOps enablement engagements
  • Puppet Services Partner of the Year!
  • O365 migrations of over 20,000 seats
  • Operational resilience, operating model and architectural audits (on-premises and cloud)
  • And many more

    What will the next 12 months bring? Let me know if you have a view; these are “interesting times”, that is for sure!

    Wishing all our clients the best in these times. We are happy to offer our support, whether a free chat or more substantial assistance, so reach out!


    Considering a Cloud Server Migration?

    For any organisation, the transition to or adoption of Cloud is a hot topic that raises many questions and concerns, and those concerns can easily slow proceedings down. In the earlier days of cloud platform availability there was a much larger push to migrate servers to the cloud, which left consumers scratching their heads and questioning the point and benefit of adopting cloud for this type of service at all. Migrating and running servers hosted in the cloud can still surprise you with a much bigger bill than you were dealing with when your servers remained on-premises. That is, unless you adequately plan your migration and hosting strategy to utilise the most efficient tools and services that Cloud has to offer.

    Fortunately, there are now better tools and methods available to analyse and migrate your servers, plus a much larger range of hosting options for running your virtual machines when they get there. When planned well from the beginning, with a re-aligned mindset that moves away from building on-premises in the cloud, migrating your servers into a cloud platform can actually start to look much more appealing from a management and cost perspective.

    This is not to say that migrating servers to cloud will always be a better solution than running them on-premises. You might have reached a point where a hardware uplift is required and cloud is being considered as an alternative; in any case like this, an assessment needs to be carried out to validate the best option moving forward. This assessment should consider Platform as a Service options, but it must also consider Infrastructure as a Service with modern options that won't be presented to you when you plug your estimated server running costs into a cloud calculator you found through a Google search.

    Consider the following for your cloud server migration project:

    Analysis tool:

    The most costly mistake you can make when migrating servers to cloud is to lift and shift without reassessment. Cloud is not an extension of your on-premises data centre, and if you treat it that way you can expect some bill shock. If you want to minimise costs when running servers in cloud, you will need to develop an analysis strategy that starts with the on-premises server and continues long after the server has been migrated and cut over.

    When building virtual machines on-premises, the usual practice has been to allocate some processing power and then add to it when better performance is required. I have never known a client to reduce processing power when a server's workload has decreased. However, when you start paying for a virtual machine by the hour, you want good analysis tools in place to tell you that, after a year of running costs, your server isn't using what you have allocated and paid for. The term which encompasses this ongoing analysis is virtual machine 'rightsizing', and it is one you will become very familiar with if you want to save money on virtual machine cloud spend.
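    As a minimal sketch of what that ongoing analysis can look like, the snippet below flags a virtual machine as a downsize candidate when its 95th-percentile CPU utilisation stays under a threshold. The samples, the 40% threshold and the nearest-rank percentile method are illustrative assumptions, not any cloud provider's recommendation.

```python
# Minimal rightsizing sketch: given CPU-utilisation samples (percent) for a
# VM, flag it as a downsize candidate when its 95th-percentile utilisation
# stays under a threshold. Samples and threshold are illustrative only.

def rightsizing_hint(cpu_samples, p95_threshold=40.0):
    """Return the 95th-percentile CPU figure and whether it falls under
    the downsize threshold (nearest-rank percentile method)."""
    ordered = sorted(cpu_samples)
    idx = max(0, int(round(0.95 * len(ordered))) - 1)  # nearest-rank index
    p95 = ordered[idx]
    return {"p95_cpu": p95, "downsize_candidate": p95 < p95_threshold}

# A server that rarely exceeds 25% CPU over its sample window:
samples = [5, 8, 12, 10, 22, 25, 9, 7, 14, 11]
print(rightsizing_hint(samples))  # → {'p95_cpu': 25, 'downsize_candidate': True}
```

    A real rightsizing tool would of course feed on months of metrics from your monitoring platform rather than a handful of samples, and would weigh memory, disk and network alongside CPU.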

    Network to cloud:

    When we manage our own data centres we feel in control of our infrastructure and networking, but as soon as you start talking cloud that feeling can turn into uncertainty for varying reasons. You are considering sending large quantities of your secure corporate data over a link owned and managed by someone other than your team, which understandably is going to raise roadblocks from the relevant internal parties. Those roadblocks can be eased, and in turn removed, with enough knowledge applied to creating a suitable networking strategy.

    We are not talking about a data centre migration here, where physical storage can be carried from one location to another, plugged in and the data transferred (though this is a possibility with some providers). We are talking about sending potentially gigabytes, terabytes or petabytes over a network link that needs adequate bandwidth to transfer your data securely. Chances are your existing internet bandwidth wasn't built with migrating these amounts of data in mind, nor does it pass your security tests for the class of data travelling over it. You now need to assess whether a VPN or a direct cloud connection (e.g. Azure ExpressRoute, Google Dedicated Interconnect) will suffice, and this assessment against your specific requirements can be critical, as it might dictate which cloud provider is going to be the winner to host your virtual machines.
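    To make the bandwidth question concrete, a back-of-the-envelope calculation like the one below can show whether an online transfer is even feasible over your link. The 70% efficiency factor and the figures used are illustrative assumptions; real sustained throughput depends on your link, protocol overhead and contention.

```python
# Back-of-the-envelope estimate of how long an initial bulk copy takes over
# a given link. Real links rarely sustain their rated speed, so an
# efficiency factor is applied; all figures here are illustrative only.

def transfer_hours(data_tb, link_mbps, efficiency=0.7):
    """Hours needed to move data_tb terabytes over a link_mbps link at the
    given sustained efficiency."""
    data_bits = data_tb * 1e12 * 8                # terabytes -> bits
    effective_bps = link_mbps * 1e6 * efficiency  # sustained bits/second
    return data_bits / effective_bps / 3600       # seconds -> hours

# 50 TB over a 1 Gbps link at 70% sustained throughput:
print(f"{transfer_hours(50, 1000):.1f} hours")  # → 158.7 hours
```

    Nearly a week of saturated line for 50 TB is exactly the kind of number that pushes the conversation towards a dedicated interconnect or an offline seeding service.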

    Migration Tool:

    There is a vast range of cloud migration tools out there to assist your organisation with migrating servers to cloud. While it is great to have options, without some helpful guidance it can make the decision-making process much more convoluted. That is, assuming your strategy proves that an online migration is technically and financially better for your requirements than an offline one; if it doesn't, you can skip the migration tool altogether in favour of your cloud provider's native offline migration service.

    Each cloud provider offers a native migration tool to get from your on-premises location to their cloud storage. In the short term, you will need to know whether and how these tools will work with your on-premises infrastructure: VMware and Hyper-V, and Windows and Linux version compatibility. For a longer term strategy, though, you can get off on the right foot by putting in place a multi-cloud tool that offers more flexibility and no vendor lock-in. Additionally, when planning your migration strategy, the tool you use can double as your cloud optimisation tool. Getting this strategic step right can be key to making sure you don't overspend during the migration process or on virtual machine cloud hosting costs, in the short and long term.

    Moving your on-premises servers into a cloud platform is much more of a transformation than a migration, and for any kind of transformation, if you don't have a solid strategy in place from the beginning you can expect a bumpy and costly ride. Don't forget that this strategy begins well before you move a server into cloud: have you defined your cloud operating model? Is your cloud governance framework planned and in place? How are your servers going to be protected once migrated? Establishing these models early will help you develop a manageable and financially viable cloud server migration project.

    It doesn’t matter where you are in your cloud journey: defining strategies in the areas that need them will vastly improve your situation. Diaxion has experienced and certified consultants who can talk you through your migration strategy for a better outcome, with a focus on where you want to be positioned in the long term.


    Microsoft Software Asset Management Audit

    Any kind of activity within the IT world with the word “audit” in the name sounds about as much fun as pulling teeth. Such is the case with software audits which are generally performed by large software companies such as Microsoft, Oracle and SAP.

    Microsoft tends to lead the conversation when it comes to software audits, given its vast range of products and numerous licensing models, which may overwhelm even the most seasoned IT professionals. While an audit should be taken very seriously, it isn't something to be scared of if you're well prepared.

    Diaxion would like to highlight the various kinds of Microsoft audit that can be carried out and cover a few of the best practices you can use when preparing for an audit or true-up exercise. We'll work under the assumption that everyone intends to honestly purchase the software they are using. Where uncertainty exists, that is the ideal time to start asking questions, with a view to quickly moving towards compliance.

    Types of Microsoft Software Audits

    Microsoft commonly performs two types of audit: Software Asset Management (SAM) and Legal Contracts and Compliance (LCC).

    SAM: This is most likely the first type of audit you’ll receive. A SAM audit is Microsoft’s method of saying, “Let’s take a look to make sure you’re in compliance. If not, we’ll work with you to help bring your software licensing into compliance.” SAM is often known as a “self-audit” because you’ll be asked to complete forms detailing the Microsoft software components you’re using and then provide a comparison to what you have already purchased, commonly through a licensing provider.

    This is often considered Microsoft extending an olive branch. A number of companies have been offered deals or new licensing agreements to help bring them into compliance. Those that have gone through a SAM say Microsoft will usually be helpful as long as you are making an honest attempt to become compliant.

    Microsoft will commonly pay for a SAM audit, which is usually performed by a partner. Although participation in a SAM is voluntary, understand that if you decline, you can expect to be presented with the next type of audit.

    LCC: Microsoft will issue an LCC when the customer refuses a SAM. These are usually not voluntary and could mean that someone has accused your company of intentional software piracy. When you have been issued an LCC audit, it may be best to consult a licensing specialist. One of the points we make clear straight away is that this is a serious matter: the penalties allowed by law are in the order of $150,000 per named infringement.

    Best Practices

    Don’t Leave It – You don’t want the process to snowball on you. If you think your company may be out of compliance it’s best to get it taken care of as soon as possible. Microsoft will be a lot more understanding when they know you’re serious about becoming compliant.

    Don’t Assume Legitimacy – Unfortunately, you will find dishonest resellers out there taking advantage of businesses by selling them fake software. What’s worse is that many companies do not realise they are using fake software until an audit uncovers the reality. Your best course of action is to work with a trusted certified reseller.

    Keep All Receipts – You will be asked to prove you purchased that laptop running a copy of Windows 8 or even Office 2013. If it’s running Microsoft software you will need to prove that you legally purchased it, and that includes just about all OEM and Retail licenses.

    Keep a Current Inventory of All Software (not just the primary Microsoft suites) – This should seem like a no-brainer, but through default, organic growth or change you might have walked into a situation where it's not really clear what software is being used within the organisation or business you work with. In this case, one of the first things you should do is perform a baseline inventory of all installed software. This will enable you to spot gaps in compliance. Microsoft provides a free Assessment and Planning Toolkit for this very purpose, and a number of vendors provide full asset management product suites.
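    As a sketch of what that gap-spotting step can look like once you have a baseline, the snippet below compares installed-software counts against purchased entitlements. The product names and counts are made-up examples, not real licensing data, and a real inventory tool would key on exact SKUs and licence types rather than product names.

```python
# Licence-gap sketch: compare installed-software counts from a baseline
# inventory against purchased entitlements and report any shortfall.
# Product names and counts below are made-up examples.
from collections import Counter

def licence_gaps(installed, entitlements):
    """Return {product: unlicensed_count} for products where the installed
    count exceeds the purchased count."""
    counts = Counter(installed)
    return {product: count - entitlements.get(product, 0)
            for product, count in counts.items()
            if count > entitlements.get(product, 0)}

installed = ["Office", "Office", "Office", "SQL Server", "Visio"]
entitlements = {"Office": 2, "SQL Server": 1}
print(licence_gaps(installed, entitlements))  # → {'Office': 1, 'Visio': 1}
```

    A report like this, produced from your own baseline before an audit notice arrives, is exactly the comparison a SAM self-audit asks you to make.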

    Work with your Supplier – There's a good chance you won't be 100% compliant for every item of software in your company. That's perfectly normal, and Microsoft expects this. Microsoft also expects you to work reasonably quickly to become compliant, which will involve working with your licensing vendor to determine what is required and what it will cost to become compliant. Management is going to want that number, so it's best to understand the costs as soon as possible.

    Microsoft customers with an Enterprise Agreement (EA) or some other Microsoft Volume Licensing agreement are considered to be compliant, as long as the organisation has met the annual true-up requirements. Most audits are issued to customers under Open or Select type licencing contracts, which are used mainly by small and mid-sized companies.

    According to a survey by IDC, almost 75% of software vendors believe their customers do not manage software license entitlements correctly.

    If you’re running Microsoft software, there’s a good chance you’ll be asked to take part in a SAM audit at some point within the next year or so.

    You could save yourself a lot of time and hassle by working towards compliance instead of waiting for the notice to arrive!


    Quest Migration Manager for Active Directory

    In partnership with Quest, Diaxion is able to complete the migration of an Active Directory environment. Such a migration is a delicate procedure that is often carried out incorrectly. The task of consolidating multiple Active Directory forests/domains into one centralised and structured forest can be accomplished using this toolset.

    Several recent engagements with our clients have seen Diaxion gain experience with the installation and use of the Quest Migration Manager toolset. These engagements have spanned a number of industry segments (such as banking and insurance, healthcare and utilities) and included migrations from multiple Active Directories into a single structured forest/domain.

    The approach Diaxion takes to these migrations is part of our proven methodology and includes carrying out an initial analysis of the current Active Directory infrastructure. We then look into the users, groups, workstations and Organisational Unit structure for each source domain. All findings from this analysis are then provided in written reports to our client. Lastly, we work with our clients on the analysis of the data extracted from the environment, building a road map to perform the migration to an agreed design and architecture.

    Diaxion provides specific prerequisite guidance, along with the installation and configuration of the Migration Manager toolset. Pilot migrations can be used to discover shortfalls; any deficiencies found can then be used to refine the project and keep end-user disruption to an absolute minimum during the migration phase.

    Diaxion has completed many engagements involving Active Directory. So, if this is something that your organisation has questions about, reach out to our dedicated team for a confidential chat.