
Azure Migrate – Part 2: Replication

 

This is Part 2 of the Diaxion blog series focusing on using the Azure Migrate service to Discover, Assess, Replicate and Migrate workloads from an on-premises location to Azure. Part 1 focused on Discovery and Assessment and can be found here – Azure Migrate – Discovery and Assessment

Now that we have completed the Discovery and Assessment as per Part 1 of this blog, we are ready to replicate the selected servers from our on-premises environment to Azure. Previously, most replications were accomplished over a secured, encrypted Internet connection; however, this ran the risk of transmitting private, potentially business-sensitive data over an Internet link. There are now options to replicate servers over an ExpressRoute connection using either Microsoft Peering or Private Peering with private endpoints, although the private peering method is new and still quite limited. Which network methodology is best for you is an individual business decision that must be made prior to replication.

Regardless of the network methodology you choose, once the network is in place the replication of data itself is much the same. The on-premises Azure Migrate appliance coordinates communications and manages the data replication for servers replicating to Azure.

The replication process copies the server storage and the hosting configuration file from on-premises to a configured storage account in your Azure tenancy. Once the initial replication is completed, delta synchronisations occur frequently to keep changed blocks synchronised from on-premises to Azure. The replication of a server is a multi-step process allowing you to configure each replication manually. The steps take into consideration the different ways Azure Migrate can support your environment. These include the following (summarised in the sketch after this list):

  • The source settings of your environment and whether you are replicating virtual servers on VMware vSphere or Hyper-V, or physical servers.
  • The ability to include metadata from the Discovery and Assessment phase including migration groups that you may have configured to group servers.
  • The target settings specific to the replication you are looking to complete. These include the target subscription, resource group, storage account, virtual network and any high-availability options that may apply to your servers, whether that’s an Availability Zone (not available in all regions) or an Availability Set. You can also choose to apply the Azure Hybrid Benefit in this step if your servers are already covered by a valid Windows Server licence.
  • The target compute settings for the server that will end up running in Azure. You can let the Assessment make these decisions for you with regards to the Azure VM size, OS type and OS disk, or you can select these manually if you wish to override the assessment details. These sizes and options can be changed at any time prior to replication starting; once the replication is underway, the options cannot be changed. A VM can always be resized post-migration, however.
  • The disks that are available for replication. The normal practice is to migrate all disks attached to your on-premises server, but depending on your configuration you have the option to take selected disks only. The disk replicas in Azure will be managed disks; you can choose either Standard HDD/SSD or Premium managed disks.
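
To make the configuration surface concrete, here is a minimal sketch of those choices as a single data structure. This is purely illustrative: the field names are ours, not an Azure Migrate API, and the values are paraphrased from the options above.

```python
from dataclasses import dataclass, field

# Illustrative only: Azure Migrate exposes these choices in the portal,
# not through this exact structure.
@dataclass
class ReplicationSettings:
    source_type: str                 # "VMware", "Hyper-V" or "Physical"
    migration_group: str | None      # optional group from the assessment phase
    target_subscription: str
    target_resource_group: str
    target_storage_account: str
    target_virtual_network: str
    availability_option: str         # "None", "AvailabilitySet" or "AvailabilityZone"
    azure_hybrid_benefit: bool       # only if the server has a valid Windows Server licence
    vm_size: str | None = None       # None = accept the assessment's recommendation
    os_disk: str | None = None
    disks_to_replicate: list[str] = field(default_factory=list)  # empty = all disks
    disk_type: str = "StandardHDD"   # or "StandardSSD" / "Premium"
```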

After the above options have been taken into consideration, the replication begins. The replication of a virtual machine is ‘live’, continuing until the VM is migrated to Azure. The replication is a storage-based data transfer that keeps the on-premises VM and the Azure disks synchronised, minimising the amount of time required for the migration. This delta replication is handled through the Azure Migrate appliance deployed on-premises. Alerts and details of the replication are raised in the Azure portal under Azure Migrate.

The status of the replicating servers can be viewed through the Azure Portal, including the percentage of data replicated while replication is ongoing and whether the state of the replication is healthy or critical. The replication of any server can be stopped if required via the Portal, which also shows current and past events related to the replication of a server.

The actual duration of the replication is obviously dependent on your network and source environment. If you are transferring the data over an Internet link, you must be aware of the risk of the data flooding the link and thus impacting the business. The source environment, whether Hyper-V or VMware, can also contribute to the performance of the replication, as the transfer is only as fast as the hosts and storage can manage. The source environment is generally also the cause of replication failures; there are a few ‘gotchas’ that can trigger errors, and we will talk through some of these causes and solutions in Part 4 of this blog series.

Some technical comments about the replication process are as follows:

  • The Azure Migrate appliance is responsible for the compression and encryption of data prior to uploading to Azure. The end storage in Azure is also encrypted using “encryption at rest” protocols. HTTPS and TLS 1.2 are used for the transfer of data.
  • Replication cycles are dependent on how long the previous delta cycle took. The formula is the previous cycle time divided by 2 or one hour, whichever is higher (see the sketch after this list).
  • A delta cycle is started immediately after the initial replication is finished. Future delta cycles then follow the above formula for timing.
  • A folder is created per replicating server in the Azure Storage account that has been configured for replication. These folders contain the disks and the VM configuration file, and can be explored using Azure Storage Explorer.
  • Azure Migrate will automatically create selected Azure services on the first replication attempt. These services include a Service Bus, a Gateway Storage account, a Log Storage account and a Key Vault for managing the connection strings for the Service Bus and the access keys for the storage accounts.
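
The delta cycle timing rule above is simple enough to express in a few lines. A minimal sketch, assuming the formula as stated:

```python
from datetime import timedelta

def next_delta_cycle_interval(previous_cycle: timedelta) -> timedelta:
    """Half the previous delta cycle's duration, with a floor of one hour."""
    return max(previous_cycle / 2, timedelta(hours=1))

# A delta cycle that took 6 hours schedules the next one in 3 hours;
# a 90-minute cycle falls back to the 1-hour floor.
print(next_delta_cycle_interval(timedelta(hours=6)))     # 3:00:00
print(next_delta_cycle_interval(timedelta(minutes=90)))  # 1:00:00
```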

And that’s it for Part 2 of this Azure Migrate blog series. The next blog will look at the migration of data using the Azure Migrate service. Hopefully this has been helpful and if you have any questions about the migration of workloads to Azure, please contact the team at Diaxion and we’ll be happy to assist with any questions you may have. Stay tuned for Part 3, coming shortly!

 

Part 1 – Discovery and Assessment

Part 2 – Replication

Part 3 – Migration

Part 4 – Tips, Tricks and Troubleshooting


How to assess your Operating Model and Maturity

Time for another in my operating model series. We have talked quite a bit about what an operating model is, why you need one, how to build your operating model and how to change it. What we have not discussed is how to assess your operating model as it stands, and what level of maturity your organisation may have in respect of the operating model.

Your operating model turns your strategy (the why) into operating (how things are actually done). It effectively provides “the what”: what needs to be done across the key aspects of your business – process, structure, systems, culture, governance, etc. The assessment takes a critical but non-judgemental view of the operating model and the capabilities it provides, to see how well the operating model supports the strategy.

Normally an operating model assessment takes 4 to 6 weeks depending on the complexity and scope. Remember this is just an assessment, not the design of next steps or the production of the next-generation operating model.

Remember that an operating model can live at multiple levels, so one needs to look carefully at the appropriate scope of the above points; e.g. it is no good working on the operating model for a finance department if the information you provide relates to the whole business, such as a health care provider (though some of that wider context is required). This comes back to a point in one of my previous articles about how strategy must flow up and down the organisation.

Many organisations will find that the top-level strategy is not well communicated down the line and/or the intent is changed along the way. This usually impacts the operating model negatively, as either the organisation is set up incorrectly or the value chain outcomes are incorrect. Secondly, if the operating model has not been critically examined for some time, you will find complexity has been introduced as groups have tried to forge their own path in pursuit of growth or capability. This leads to unwitting complexity across process and organisation, and often duplication in technology and organisation.

When

Not every change in strategy requires operating model change; the typical yearly strategy update usually does not require changes to the operating model. Typically, the following types of significant strategy change mean you need to undertake an assessment of your operating model:

  • Centralisation or decentralisation change
  • Significant change in your value proposition to customers
  • Acquisitions
  • Business strategy pivot – product to customer or similar
  • Taking on significant new capability, like cloud
  • Significant outsourcing or, perhaps more likely, insourcing

So, how do you actually do the assessment? Remember the inputs and outputs are relevant to the scope of the assessment.

Understand where you are

The understanding should be divided into a few stages: pre-work, interviews/workshops, analysis and initial outcomes.

Pre-work involves pulling together the information required to start the assessment.

  • A good grasp of the strategy, which should be reasonably well documented
  • Value chains understood and documented to some extent, plus the key processes that support the value chain
  • Organisation structure
  • Functions / business architecture – usually this is not well documented but ‘understood’, though understanding and definition can vary significantly
  • The health of the technology that supports the operating model

Much of the information gathered can be expanded on or checked at a set of interviews or workshops with key people and groups. These workshops should use open-ended rather than closed questions, give people time to speak and make sure they understand they have been heard. This also gives you a view of how well the overall strategy of the organisation has trickled down. Try to have people of the same group and level at each workshop, so that people are not intimidated by hierarchy. Prepared questions and pointers must focus on operating model topics rather than a free-for-all, otherwise general grievances come out. This also means facilitation of the workshops must focus on the outcomes needed.

Initial analysis should focus on the common key issues that people believe are holding them back from delivering the required outcomes, and the health of the required capabilities across people, process and technology. Understand how their view of strategy and outcomes differs from and/or is shaped by the overall strategy. Are the capabilities aligned with strategy? Are they effective? Does this part of the organisation have its resources focused on the right things and in the right way? This gives you a start on the analysis stage.

It is important to communicate your findings to the sponsor as they become clearer. This will ensure that you have support, and direction on the appropriate communication to the rest of the organisation.

Identify what needs to change

Identifying what needs to change will be both simple and complex; it is rarely one or the other. To do this we need to:

  • Understand the maturity of the organisation and where it wants to get to from a maturity perspective, 
  • Identify capabilities, process, organisation, or technology that is not supporting the strategic direction and,
  • Check that the basis of your operating model is still aligned to the strategy.

The maturity of the organisation is a key point of view to take, as the current maturity effectively determines where you can get to and the ability of the organisation to adapt to change. I am not going to go deeply into maturity models – that could be a subject for another article. Suffice to say, each maturity model will be different – the specific view will be, but the structure will be the same or very similar. Below is one I adapted from Gartner’s IT Infrastructure maturity model. https://www.gartner.com/en

Key points to remember:

  • You cannot jump maturity levels
  • Most people when you talk to them believe they are at a higher maturity level than they actually are
  • All aspects of a maturity level must be in place before an organisation can ‘graduate’ from one level to another (see the sketch below)
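
That last gating rule can be expressed in a few lines of code. A minimal sketch, where the level names and aspects are invented placeholders rather than any particular published model:

```python
# Illustrative only: level names and aspect lists are placeholders.
MATURITY_LEVELS = [
    ("Basic",        {"documented processes", "asset inventory"}),
    ("Standardised", {"change management", "consistent tooling"}),
    ("Rationalised", {"service catalogue", "automation"}),
    ("Dynamic",      {"continuous improvement", "business-aligned metrics"}),
]

def current_maturity(achieved: set[str]) -> str:
    """The organisation sits at the highest level whose aspects are ALL in
    place, and every level below it must be complete too (no jumping)."""
    level = "Ad hoc"  # below the first defined level
    for name, aspects in MATURITY_LEVELS:
        if aspects <= achieved:  # every aspect of this level is present
            level = name
        else:
            break                # cannot graduate past an incomplete level
    return level

print(current_maturity({"documented processes", "asset inventory",
                        "change management"}))  # "Basic": Standardised incomplete
```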

The analysis of the workshops and documentation will give you a particularly good view of where the organisation sits on the maturity scale. The trick will be convincing stakeholders and interested parties that your assessment is valid. Once you can agree on the approximate level of maturity, you can then tailor your assessment and recommendations to the ability of the organisation to take on the required changes.

The capabilities, processes, technology and organisation should come from existing documentation or become apparent from workshops and other discussions. You are not expected to, nor should you intend to, go into nth levels of detail (that is a full business architecture piece of work), but to understand where the issues (from the workshop analysis) sit in relation to those core aspects and be able to map each issue to the appropriate core aspect. You need to be able to relate the issues and their impacts to the core aspects in a way that shows clear business detriment – increased cost, reduced throughput, inefficient organisation, poor technology alignment, etc.

The third part of this section is to review and identify how the basis of your current operating model – principles, value chain, etc. – matches (or not) the current desired state. We are not trying to state the next steps here; that is another task that is not part of the assessment. We just need to identify and call out mismatches, why there is a mismatch and the impact.

Many of the disconnects between what is currently happening and what the strategy implies (don’t forget that an operating model is driven off strategy) will become apparent from the workshops. Picking the trends and themes and then relating them to how the operating model is working, or not, takes more skill – not all issues will be obvious. Keep lines of communication open and be willing to check in, discuss and amend based on feedback, to ensure what you are delivering is relevant and understandable to the sponsor(s) and their teams.

Get the right people to shape the future

Performing the assessment – understanding the existing model and what could be done – requires the right people. This applies both to those executing the assessment and those making decisions on it. Having people with some out-of-the-box thinking sponsoring and working within the assessment can help break the corporate groupthink. This is also why many organisations use outside help to do assessments.

Having people who understand the business and what needs to change, combined with outside thinking, often provides some of the best results. The sponsors are key in supporting and challenging the work and helping drive to the most pragmatic but also transforming output.

Final thoughts

The assessment can be as deep or shallow as you need or have time to afford. There is probably a minimum of 3–4 weeks to do a basic assessment of part of a business, like IT Operations. The assessment gives you the data to help decide what to do next.

A few tips:

  • If there is not a clear strategy (even if not well articulated) or intent to change then don’t do this work
  • Sponsorship and the right sponsors are key
  • The organisation needs to be able to accept and act on change – don’t forget about change fatigue
  • Communicate back early and often
  • Gain commitment that the outcome will be acted on before starting

As always, Diaxion can help you with this work. We have completed this across a number of our clients: banks, superannuation, government and utilities. Contact us to find out more.


Diaxion + ARQ: Smarter Together

Today we’re thrilled to announce that Diaxion is being acquired by the ARQ Group – one of Australia’s most awarded technology consultancies, delivering the trifecta of digital, data and cloud-based solutions.

The acquisition unites two of the country’s leading technology companies to create Australia’s only full-service technology, data, AI, cloud and digital consultancy providing advisory, delivery and operate/managed services. 

Diaxion customers now have access to the full spectrum of technological capabilities required to modernise and optimise their organisation – all onshore, creating local jobs in one of the fastest growing industries: digital and technology.

Diaxion becomes part of ARQ Group, which was acquired by Australian Private Equity firm Quadrant in partnership with the Management Team led by Tristan Sternson in February 2020. The combined business will have more than 450 employees. 

Diaxion are an Australian technology success story that began twenty-one years ago. Today, Diaxion are a market leading technology advisory consultancy with an impressive portfolio of clients: state and federal government, and ASX-200 companies, including the big four banks.  

ARQ Group CEO Tristan Sternson said Diaxion’s market leading advisory and digital transformation expertise naturally complements ARQ Group’s specialist capabilities in data analytics, artificial intelligence, and process automation. 

Diaxion Founder Tony Wilkinson will head Advisory at ARQ Group and says it’s an alliance of two great Australian technology leaders.  

“We are pleased to unite with ARQ Group, who have been pushing the envelope of innovation with some of Australia’s biggest brands for the past 25 years. By joining forces, we can scale up our offering and continue to be industry trailblazers, showing the world that Australian-operated businesses can deliver even better results than large multinational IT consultancies.

“I’m incredibly proud of what Diaxion has achieved over the last 20 years, and this acquisition is the natural next step for the business to offer our clients a full service of advisory to implementation all under one roof.  

“I speak for all of my colleagues when I say we’re genuinely excited about the new opportunities available within the expanded ARQ Group, and look forward to 2021 being our biggest ever,” Wilkinson said.  

Diaxion’s current services include advisory, strategy and design, build, implementation, migration, technical, operation and organisational audits. Diaxion’s portfolio of clients include state and federal government departments, and numerous ASX-200 companies, including the big four banks. Last year Diaxion was named Puppet’s Service Delivery Partner at the company’s Partner of the Year awards. 

Simon Pither, Partner of Quadrant Private Equity and Chairman of ARQ Group, celebrated the acquisition and said that 2020 was an incredible year of growth, despite the impacts of COVID-19.    

“When we first invested in ARQ Group last year, we did so because we saw the huge digital transformation tasks ahead for many Australian companies and governments and knew ARQ Group was ideally positioned to help bridge that technical chasm. Due to COVID-19, that digital transformation has moved faster than we initially anticipated, but I’m proud of how the ARQ Group team has responded and continued delivering incredible results for its customers.

“The combination of ARQ Group and Diaxion creates Australia’s only home-grown full-service digital, data, cloud, and tech advisory company,” Pither said.  


Ways of Working – collaboration and requirements

Following on from a recent article looking at working remotely, this one will take a closer look at some of the 7 previously mentioned areas of collaboration and requirements.

These areas were:
1. Equipment (mobile and compute)
2. Access to and protection of data
3. Bandwidth and capacity
4. Video conferencing
5. Collaboration tools
6. Security
7. Framework and policies

Below is a quick overview of some of the pros and cons, advantages and risks, and a very high-level view of the associated costs.

1. BYOD vs. company-provided – Depending on company policy, BYOD may be a less expensive solution, as companies may not cover the entire cost of equipment purchase. However, this is offset by a likely increase in supportability and compatibility issues, as BYOD requires support for a wider selection of devices. BYOD also creates its own set of security challenges, as a balance needs to be achieved between securing the devices and still giving owners full access to their own equipment.

The “company-provided” option will also be influenced by the device type (laptop vs. other solution), its capability to support working remotely and the “quality” of the device, i.e. are staff able to work effectively with the provided equipment?

2. Access to and protection of data has been discussed previously, but involves the areas of
a. Backup – how easy (and costly) is it to back up and restore data?
b. Security – how to protect data from unauthorised access, e.g. covering areas like access management, and securing devices with the ability to remotely wipe content
c. Location and availability of data at all times – ensuring that data is available when and where needed, regardless of whether it is cloud-based, data centre-based or on any other device like IoT.

3. Bandwidth and capacity – this ties in with point 1: where a VDI solution is provided as part of remote access, it must be ensured that the infrastructure is capable of supporting the users at all times. There are a number of options for VDI, i.e. all major cloud providers offer solutions; in addition, there are the established solutions from Citrix, VMware and others.
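
Sizing that infrastructure ultimately comes down to simple arithmetic. A back-of-envelope sketch, where the per-session bandwidth and peak factor are placeholder assumptions to be replaced with measured values for your platform:

```python
# Rough, illustrative capacity check for remote/VDI connectivity.
def required_bandwidth_mbps(concurrent_users: int,
                            mbps_per_session: float = 0.5,
                            peak_factor: float = 1.5) -> float:
    """Estimate the aggregate bandwidth needed for concurrent remote sessions."""
    return concurrent_users * mbps_per_session * peak_factor

print(required_bandwidth_mbps(400))  # 300.0 Mbps for 400 concurrent users
```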

Likewise, staff should be supported with the provision of, or access to, the required connectivity when not in the office – be that through an appropriate mobile data plan or sufficient speed and capacity outside the office. Associated costs should be managed and reviewed regularly, especially as technology progresses.

4. Video conferencing – Zoom, Microsoft Teams, Cisco Webex, Citrix, and many more provide tools for virtual meetings. While they provide similar fundamental capabilities, they differ in setup, management, etc. To minimise confusion and reduce support issues, companies should consolidate as much as possible on a single platform and educate/train people in its use and configuration options.

Where things can get even more complicated is in the setup of physical meeting rooms, as the offered solutions (e.g. Polycom, Crestron, Logitech) will differ significantly in their setup and use. Too often meetings start late because staff spend the initial period of a scheduled meeting trying to figure out how to get the video setup in a meeting room to work.

5. Collaboration – this deserves a separate article of its own, as “collaboration” will depend to a large degree on a company’s requirements:

  • Is collaboration required for knowledge management? Confluence and SharePoint come to mind, but additional security requirements may require a different set of solutions
  • What level of collaboration is required when working on documents? I.e. is it desirable for several people to be able to edit the same document at the same time? Diaxion has found that enabling SharePoint to provide this functionality has improved the level of collaboration, facilitated the editing of documents “on the spot” (like during meetings) and increased user satisfaction by no longer having people wait at a “locked for editing” message.
  • Where collaboration is primarily for application development or project management, other tools will be required to efficiently track progress and support collaboration between people.

Security (item 6) and the overarching framework (item 7) will not be discussed in much detail. Security should be implemented after a thorough review of requirements and will include regulatory items as well as the sensitivity of any data stored and access to it. As mentioned in the previous article, it will cover at least the following areas:
a. Multi-factor authentication
b. Enforcing of company profiles and security standards
c. Limiting access (e.g. based on a user’s profile)
d. Encryption
e. VPN
f. Virus and malware protection

Creating the framework and the associated policies will depend on the existing and required level of maturity, in addition to the requirements from items 1 to 6. The more flexibility users require, the better the documentation needs to be in setting out what is acceptable and what is not.

A final point that should not be ignored is to educate and properly train people in the use of the provided capabilities and functionality, as well as in the overall framework, to ensure that people understand their responsibilities and are able to use the technology in effective and efficient ways.


Azure Migrate – Part 1

Diaxion’s history and heritage were born in the data centre. We have been involved in many data centre migrations, both large and small; however, we are now seeing more and more data centre migrations to public cloud. For many organisations, the adoption of and migration to public cloud offers the opportunity to transform, yet there are still perfectly acceptable situations where a direct server migration to an IaaS platform is the answer. For example, the migration of Windows Server 2008 R2 workloads to Azure offered customers extended support for the 2008 operating system, allowing further time to transform to newer operating systems.

Where a server migration to IaaS is required, the major public cloud providers offer great tools to assist with the migration. Diaxion recently worked with a customer where Azure Migrate was used to migrate many servers from an on-premises location to their new Azure tenancy. Azure Migrate uses Azure Site Recovery (ASR) technology for many of its features, which many customers are already using and familiar with. The ASR technology offers an easy path for the replication and migration of workloads, but what Azure Migrate adds is a business-friendly, analytical view of the expected Azure footprint for the servers; it is the recommended tool of choice for migrating workloads to Azure.

This blog series will walk through the process of using Azure Migrate to Discover, Assess, Replicate and Migrate workloads from an on-premises location to Azure with some handy tips and tricks we’ve experienced.

Azure Migrate is compatible with on-premises VMware and Hyper-V virtualised environments, and even physical servers. For VMware and Hyper-V environments, after an Azure Migrate project is created in the Azure Portal, a lightweight virtual appliance is available for deployment into the on-premises environment. This appliance can be deployed with no impact to the existing environment, but should be deployed under ITIL change controls. The Azure Migrate project is created within a valid Azure subscription, with the deployed appliance registered to Azure Migrate using unique keys specific to the project. The appliance can be configured to auto-discover the on-premises environment with no outage or impact. The hosts/clusters hosting the virtual workloads must be listed in the appliance with valid credentials so it can locate the servers within the environment. Once the initial Discovery is completed, the discovered servers will be listed in the Azure Migrate project with a set of specific information.

With the on-premises environment now set as Discovered in Azure Migrate, the features of Azure Migrate such as Migration Groups and Server Assessments come into play – and this is what sets Azure Migrate apart from ASR.

Migration Groups are a construct to group servers logically. These Migration Groups can be created manually, allowing you to group servers that will be migrated together. These are typically servers that share a common workload or service, such as a multi-tiered application, or servers that share a similar business function. If the on-premises information is not detailed enough to build comprehensive groupings, an Azure Migrate feature called “Dependency Visualisation” can be used. Dependency Visualisation has its own requirements to deploy and can be used in an agent-based or agentless mode depending on the on-premises environment. The agents are installed on each VM that requires Dependency Mapping and are specific to Windows and Linux clients. Dependency Mapping can also use data from System Center Operations Manager 2012 R2 or later; if that is already running in the environment you are working with, the MMA (Microsoft Monitoring Agent) does not need to be installed. The agents for Dependency Mapping can all be installed with no impact to the operating system.

The migration group that is created must be linked to a specific discovery source (e.g. the Azure Migrate appliance), each group should have a unique name for the project, and each group will contain the servers that you intend to migrate together.

Server Assessments are another feature of Azure Migrate. A Server Assessment uses discovered data and migration groups to provide analytical data to the customer, helping inform the choices available for the migration of servers. A Server Assessment can use two types of sizing criteria: “Performance-based” or “As on-premises”. Performance-based sizing uses collected performance data for CPU and memory utilisation and disk data such as IOPS and throughput. As on-premises sizing matches the on-premises VM size and aligns it to the closest match among Azure VM sizes.
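
To illustrate what performance-based sizing does conceptually, here is a simplified sketch that picks the cheapest VM size covering observed utilisation plus headroom. This is not Azure Migrate’s actual algorithm, and the prices are placeholders, not current Azure figures:

```python
# (name, vCPUs, memory GiB, indicative $/month) -- illustrative values only
VM_SIZES = [
    ("Standard_B2s",    2,  4,   35),
    ("Standard_D2s_v3", 2,  8,   85),
    ("Standard_D4s_v3", 4, 16,  170),
    ("Standard_D8s_v3", 8, 32,  340),
]

def right_size(observed_vcpus: float, observed_mem_gib: float,
               headroom: float = 1.3) -> str:
    """Return the cheapest size with headroom over observed peak usage."""
    need_cpu = observed_vcpus * headroom
    need_mem = observed_mem_gib * headroom
    candidates = [s for s in VM_SIZES if s[1] >= need_cpu and s[2] >= need_mem]
    if not candidates:
        raise ValueError("no size fits; assess manually")
    return min(candidates, key=lambda s: s[3])[0]

print(right_size(1.4, 5.0))  # Standard_D2s_v3
```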

When creating a Server Assessment, there are several properties that can be populated depending on your own requirements; these properties influence the outcome of the assessment. The properties include the following (sketched after this list):

  • The target location you want to migrate the virtual machines into
  • Storage information such as the type of disks (automatic, premium SSD or standard HDD)
  • Azure Reserved Instance usage
  • Sizing criteria to right-size the VM
  • VM series that can be utilised
  • Cost factors such as hybrid benefit, EA licensing, Azure offers applicable to your subscription, the currency and any discounts that may apply
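
As a concrete illustration, a typical set of assessment properties might look like the following. The key names and values here are our own shorthand for the portal options, not an actual Azure Migrate API:

```python
# Hypothetical property names, paraphrasing the portal choices above.
assessment_properties = {
    "target_location": "australiaeast",
    "disk_type": "Automatic",                # or "Premium SSD" / "Standard HDD"
    "reserved_instances": "3 years",         # Azure Reserved Instance usage
    "sizing_criteria": "Performance-based",  # or "As on-premises"
    "vm_series": ["Dsv3", "Esv3", "Bs"],
    "azure_hybrid_benefit": True,
    "offer": "Enterprise Agreement",
    "currency": "AUD",
    "discount_pct": 5,
}
```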

Depending on your individual use case, many of these properties will be the same for all server assessments across your set of machines to be migrated.

After the assessment has been created, the output from the assessment describes:

  • The Azure readiness of each assessed server, i.e. whether the VMs are suitable for migration to Azure
  • A monthly cost estimation based on the compute (e.g. VM series and size) and storage (disk sizing and performance level)

The Azure readiness describes each VM as ready for migration, ready for migration with conditions, not ready for migration or unknown (if there are issues with data collection). These readiness states are explained in detail, with remediation steps where applicable, most of which are easily achieved.

The cost estimation is a very handy set of information that can assist with budgeting and forecasting future Azure spend. The compute and storage cost estimations are aggregated for all VMs in the assessment.

The Azure assessment can be exported in CSV format to be kept as a point-in-time record and distributed to the teams involved in the Azure migration process, e.g. operational and finance teams.
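
Once exported, the CSV is easy to post-process. A minimal sketch that tallies readiness states and the estimated monthly spend; the column names are assumptions for illustration, so check the headers in your own export and adjust:

```python
import csv
from collections import defaultdict

def summarise_assessment(path: str) -> None:
    totals = defaultdict(float)
    readiness = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            readiness[row["Azure readiness"]] += 1  # assumed column names
            totals["compute"] += float(row["Compute monthly cost estimate USD"] or 0)
            totals["storage"] += float(row["Storage monthly cost estimate USD"] or 0)
    print(dict(readiness))
    print(f"Estimated monthly spend: ${totals['compute'] + totals['storage']:,.2f}")

summarise_assessment("assessment_export.csv")
```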

And that’s it for Part 1 of this Azure Migrate blog series. The next blog will look at the replication of data using the Azure Migrate service. Hopefully this has been helpful and if you have any questions about the migration of workloads to Azure, please contact the team at Diaxion and we’ll be happy to assist with any questions you may have.

Part 2 – Replication
Part 3 – Migration
Part 4 – Tips, Tricks and Troubleshooting


Changing your Operating Model

In previous operating model articles, I have talked about what an operating model is and about how to implement an operating model. This time around I want to talk about ways of changing your operating model. Most of us already have an operating model whether we like it or not, and changing it can be complex and time consuming – as can deciding which operating model to move to, as there are so many you could choose from. Your existing operating model may not be explicitly defined, but it exists all the same. The maturity of this operating model depends on your group’s maturity, the organisational maturity as a whole and the organisation’s awareness of operating models, business architecture and turning strategy into actionable plans and programs.

Like most organisations, you probably have a highly mixed level of maturity across the organisation. Some areas will be advanced, or at least fairly well advanced; others will be at a basic level. There is no getting around this aspect, and creating a uniform level across the organisation is not desirable or even possible. As you gain more maturity it becomes easier to understand that you need multiple operating models across the organisation – this is not a one-size-fits-all approach. Some parts of the organisation will retain a more traditional view and have an operating model appropriate to them. Other parts will be more experimental and require an operating model suitable for that style of work. These ends of the spectrum are not the same and should not be treated as such. As we go through this article, I’ll point out what could change and potential ways of changing.

What operating model do you have now?

The starting point is to understand what operating model you currently have – or, more likely, what operating models. Much of this depends on your current level of maturity. It may sound trite, but the operating model is a journey rather than a destination, as your operating model will change over time depending on the organisation’s needs and strategy. I would expect that most organisations have a “plan, build, run” model; this is probably the most common model in use today, especially in IT. Other models that could be used are a siloed approach, or a very horizontal view based on technology towers. More advanced options include well-defined models such as IT4IT, the Service Operating Model Skills (SOMS) framework and the set of MIT Sloan models:
  • Diversification (low standardisation, low integration)
  • Coordination (low standardisation, high integration)
  • Replication (high standardisation, low integration)
  • Unification (high standardisation, high integration)
There are also industry-specific models such as eTOM (telecommunications), IAA (insurance), BIAN (banking) and IFW (banking, but used elsewhere as well). Please contact us if you want information on these models. My personal view is that some of these models are ageing and are not quite as suitable going forward given the impact of digital disruption. Make sure that you don’t confuse the business model (how the business delivers value) with the operating model (how the business runs itself).
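
The MIT Sloan set is really a two-by-two across the standardisation and integration dimensions, which makes it trivial to encode. A toy sketch:

```python
# The four MIT Sloan operating models as a function of two dimensions.
def sloan_model(standardisation: str, integration: str) -> str:
    """Both arguments are 'low' or 'high'."""
    return {
        ("low",  "low"):  "Diversification",
        ("low",  "high"): "Coordination",
        ("high", "low"):  "Replication",
        ("high", "high"): "Unification",
    }[(standardisation, integration)]

print(sloan_model("high", "low"))  # Replication
```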

Why do you need to change your operating model?

Most organisations know where they want to go, even if they cannot fully express it. Communicating strategy to the entire organisation (top to bottom) and getting all staff to understand it is a key failing in Australian businesses, in my opinion. Large organisations typically have a strong strategy, architecture and business planning function and have the resources and capabilities to define the strategy and then execute on it. Smaller organisations – those at the bottom of enterprise level and smaller – often struggle to turn their strategy into execution. If you want your current strategy to have an enterprise-wide impact then you need to do a few things. These include, but are not limited to:
  • changing your operating model,
  • having the investment capability to implement and,
  • having the execution capability.
Since we’re only talking about the operating model here, I’ll ignore the other two parts. Many organisations run multiple, often independent initiatives within business units to effect their strategy. The key issue is that these initiatives are often independent and not coordinated. Changing your operating model brings these independent efforts together – usually (nothing is perfect!). But you may still be wondering why the operating model changes. The key reason is business change. This happens all the time, as we all know, but the operating model changes when there is a significant business change. External industry disruption – such as digitisation or new entrants into the market – often requires significant realignment of the business to either respond with competing capability or protect current market share and customer base. At the same time the organisation is going to need to maintain its existing capability, and this is where a dichotomy often arises between the old way of doing things and the new. From an IT perspective, we can respond in several ways, some specific and some more generic that require deeper dives to make the appropriate changes for your organisation.

Different models required

As I stated before, you’re going to need different operating models to support the older through to the newer. Some parts of the organisation require stability, high governance and high structure; other parts require low governance, high change and high flexibility. There will be other areas that require a mixture of these aspects. Gartner is fond of its two-speed model; I’m not a big proponent of this, as I believe two models are not nuanced enough. I agree that it is simple to communicate this view, but it often does not allow for enough flexibility within the organisation. The diagram below shows a view of multiple operating model mode attributes. It is important to understand that this represents a continuum and not discrete steps.

A different view on this is based on what type of organisation and level of maturity you may have. I will discuss assessment around maturity and operating models in a separate discussion. I am using the metaphor of types of Japanese fighter, mainly because I like it and feedback so far has been good – it makes what can be a dry subject at times more fun. The fighter represents, in broad brush, the type of organisation you have. Of course, different parts of the organisation will most likely show all three, and sometimes two at a time. There may be more nuanced separations but, for the sake of simplicity, I am keeping it to three:

  • Street fighter – do then think, individual, shortcuts, lack of strategy, superhero pretence
  • Samurai – honour, discipline, loyalty, control, strategy
  • Ninja – agility, adaptability, precise tools for the right job, think outside the square, focus, human

This last diagram starts to describe how IT can leverage this model to change its operating model over time. While this has a flavour of the Operations / Service Delivery areas of IT, it does not have to apply only there. It can apply on a smaller scale (but probably not at the team level) or a larger scale – IT-wide.

Conclusion

I hope this article has proven informative, gives you some ideas and promotes some conversations and discussion around your operating model and your capability to enact it. As always, we are here to help and guide you on the journey. Nothing will be perfect, and anyone who tells you differently, or that everything will be ‘finished’ when they leave, does not understand the journey. Feel free to contact us for a discussion.

Managing IT costs and providing financial visibility

The proliferation of IT systems, and the options that enable areas outside of IT to make their own purchasing decisions, have made it increasingly difficult to provide visibility into IT costs and for IT to manage or control them.

Historically there has been underutilisation of physical compute resources. Virtualisation promised to alleviate this concern, but instead led to a sprawl of virtual machines. Then cloud computing arrived, which allows anyone with a credit card to create their own IT resources.

This has created or increased challenges such as:

  • How to successfully monitor costs within heterogeneous cloud environments that may be managed by different groups
  • Using a unified approach for cost management between business, finance and IT
  • Managing IT resource sprawl and providing accurate cost information to consumers (e.g. show-back or chargeback)
  • Managing cloud costs and budgets across multiple cloud providers

All major cloud providers offer cost management tools for their platform, which can help to manage costs and cloud budgets better and provide some recommendations as to where organisations can save money. However, in heterogeneous environments, these will only provide a partial view of the total environment.

Several companies offer solutions that allow for a common view regardless of the platform, addressing the typical challenges and supporting the following:

  • Optimisation of cloud resource consumption and rates, aligned to financial principles and visible to all relevant groups – business, finance and IT
  • Guidance for cloud financial data based on business needs
  • Visibility into the cloud account hierarchy
  • The ability to optimise resources through improved visibility across technology – with some solutions expanding this to a full view of all costs
  • Visibility into workflows and processes, allowing automation

While these solutions – e.g. Apptio, CloudCheckr, Cloudability – come at additional cost, they can provide a very detailed view of costs and resources: allowing you to identify unused or under-utilised resources, identify poorly allocated budgets and, in some cases, gain a full end-to-end view covering purchase/run costs, support costs, project costs, etc.
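
A minimal sketch of the kind of analysis these tools automate, flagging under-utilised resources as rightsizing or shutdown candidates. The utilisation data and thresholds are illustrative assumptions:

```python
resources = [
    {"name": "vm-web-01",  "avg_cpu_pct": 4,  "monthly_cost": 310.0},
    {"name": "vm-db-01",   "avg_cpu_pct": 62, "monthly_cost": 740.0},
    {"name": "vm-test-07", "avg_cpu_pct": 1,  "monthly_cost": 150.0},
]

IDLE_THRESHOLD_PCT = 5  # illustrative threshold

for r in resources:
    if r["avg_cpu_pct"] < IDLE_THRESHOLD_PCT:
        print(f"{r['name']}: {r['avg_cpu_pct']}% avg CPU, "
              f"${r['monthly_cost']:.2f}/month -- review for rightsizing")
```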

Diaxion can advise on Diaxion Cloud Finance Visibility and Optimisation to improve your visibility into IT spend and beyond, and to develop a strategy or roadmap for how to optimise, measure and track results.


Working remotely: what has – and had to – change?

2020 so far has definitely not been a good year. From an IT perspective, it forced a lot of companies to quickly re-assess their way of working and come up with options to enable staff to work remotely. While this may be working overall by now, what areas may still need to be considered or looked at?

Diaxion as a boutique consulting company has been lucky in this regard:
1. We have been fully cloud-based for some time
2. We are used to working from various locations, be that from home, remotely or a client location
3. We are used to collaborating while not necessarily being in the same office

When enabling remote working, areas should not be looked at in isolation, but from a comprehensive viewpoint. The approach to a remote workforce will obviously vary between companies, but the following areas will need to be assessed, decided on and managed:

1. Company-provided equipment vs. BYOD (bring your own device) for both mobile and compute.
The advantage of having a company-controlled and managed device needs to be weighed against the possible cost impact; people may prefer to select their own device vs. being supplied with a potentially inferior model

2. Access to data, information, files and folders, and their backup and restore options
Should all data be centrally managed – on-premises or cloud? Or should most data be held locally for each user? The answer may depend on the role of the user and the type of business. How often should data be backed up and what are the restore options? The latter should also be considered in light of malicious attacks like ransomware or malware.

3. Bandwidth and infrastructure supporting remote working
Is there sufficient network capacity and application capacity to handle all or the majority of staff working remotely? Do staff have sufficient network performance to work effectively?

4. Video conferencing
Diaxion recommends consolidating onto a single – or at most two – platforms to enable video conferencing. With people unable to come to the office, this will help keep teams connected and also allow for interaction that is not necessarily work-related

5. Collaboration tools
These include instant messaging, tools allowing people to work simultaneously (or nearly so) on documents, and similar areas

6. Security for everything, which may include
a. Multi-factor authentication
b. Enforcing of company profiles and security standards
c. Limiting access (e.g. based on a user’s profile)
d. Encryption
e. VPN
f. Virus and malware protection

7. Updated policies to provide a framework for remote work and to guide people

One item that should not be forgotten or neglected, though, is the mental impact on people. Working from an office provides significantly more human interaction, be that as part of a coffee break, lunches or just the opportunity to exchange ideas or simply chat.

A random sample indicates to us that people overall like to work from home, and a majority expect to continue to do so at least a few days per week. Feedback has, however, also shown that only a minority would like to continue to work from home 5 days per week – and it must be ensured that this is handled in a way that keeps up team morale, team performance and overall keeps people happy.

Diaxion can assist with the implementation of, or advise on, the 7 areas mentioned above.


Device management and Security in a COVID world

There was little time at the start of the COVID-19 pandemic for businesses to prepare for what was going to happen, including a working-from-home scenario. When staff were forced to work from home, many businesses did not have the required infrastructure in place to accommodate device management and security at such scale. Looking back, it is amazing how much effort was put in by all businesses and their IT partners to make this happen in such a short period.

For most, the biggest issue to be solved was allowing access to files and applications remotely. In the majority of cases, the solution was either to implement a VPN connection to allow access to applications and file servers in the data centre, to implement cloud-based Software as a Service (SaaS) solutions like Office 365 SharePoint and OneDrive, or a mixture of both on-premises and cloud-based solutions. While using VPN connectivity may seem like a good idea at first, allowing hundreds or even thousands of staff access at the same time is a huge challenge on its own. Some businesses were lucky enough to have already started implementing their cloud strategies, but many others have had to fast-track their migration to cloud-based storage or SaaS applications to provide staff a working experience similar to being in the office.

Device management

Whilst implementing this infrastructure within a very short timeframe has been impressive, some of the required infrastructure changes had to be implemented later, as there was simply not enough time. Projects that would usually take months have been rushed to accommodate the new way of remote working, resulting in some project components being prioritised over others.

A good example is that moving away from VPN solutions and allowing access to data through cloud-based solutions also meant a change in the way devices are managed. Cloud-based solutions allow staff to work on files and in applications in an “anywhere, anytime, any device” manner where connecting through a virtual private network (VPN) is no longer required. Devices and their security must be managed differently, as the traditional ways no longer apply, and delaying the implementation of these changes for too long can become an operational nightmare.

Traditional support systems are configured with the assumption that the device is in the office at least once every few weeks. Connecting to the office through a VPN or being physically in the office for a few hours was enough to push security updates, user profile changes and anti-virus updates, and to report back on device health and security status. Now that staff have been working remotely over the last few months and VPN connections are used in fewer cases, their devices have become stale in the system, unable to report back the same way as if they were in the office. Pushing anti-virus, feature or security updates required the device to connect to the internal systems, and even the date and time on the devices were synchronised with internal systems, causing all sorts of calendar nightmares when there was a slight difference between them.

All these problems are easy to solve when planned for, but this has not always been the case for everyone with so little time to prepare.

Security

Traditionally, the network within the office premises and in the data centres is considered a secure environment with a secure connection to files and applications. A VPN connection or a remote desktop solution to access files and applications has been the default solution when users are not physically in the office. With cloud, this authentication methodology required a few changes.

User authentication in cloud-based solutions is generally performed over a secure connection directly over the internet. Most SaaS solutions allow for identity synchronisation with on-premises Active Directory to replicate credentials, and when combined with Multi-Factor Authentication (MFA), this makes for a very secure authentication solution for allowing user access to data and applications in the cloud.

Security of data on devices requires a different approach. As staff can now log in from anywhere and from any device, data can be saved in many places, including on personal devices’ local storage. It is important for businesses to ensure sensitive company data is protected and properly managed. Device management solutions like VMware Workspace ONE or Microsoft Intune can be used to ensure security requirements like anti-virus, disk encryption and password enforcement are in place and consistently monitored while staff are working remotely. These solutions can also be used to remotely manage the devices, providing another layer of business security. This includes performing a remote wipe in the case a device is lost or stolen, to ensure sensitive data is not compromised.

The ability to block or allow access to company data with device compliance policies is another great feature. Compliance policies can be configured to allow access to company data only when devices meet all security requirements, ensuring maximum security between your company data and your staff working remotely.
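
Conceptually, a compliance gate is an all-or-nothing check. A minimal sketch, not the actual Intune or Workspace ONE policy engine, with requirement names invented for illustration:

```python
# Access is allowed only when every requirement in the policy is met.
POLICY = {
    "antivirus_current": True,
    "disk_encrypted": True,
    "password_enforced": True,
}

def is_compliant(device: dict) -> bool:
    return all(device.get(req, False) == expected
               for req, expected in POLICY.items())

device = {"antivirus_current": True, "disk_encrypted": False,
          "password_enforced": True}
print(is_compliant(device))  # False -> block access to company data
```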

Implementing a device management tool and enabling additional security policies to further tighten security can cause some serious disruptions if not executed correctly. Planning is vital for a seamless migration, but as it also requires a different way of working for the end user, staff education and expectation management are equally important for a successful implementation.

Diaxion has experienced and certified consultants who can help with your device security and management journey, so if this is something your organisation has questions about, please reach out to our dedicated team for a confidential chat.


DevSecOps, IT Operations, InfraOps, DevOps – is there a difference?

The last few years have seen a proliferation of terms, but what is the actual difference between them, where do they overlap – or is it all just buzzwords? So what does a DevOps, DevSecOps, InfraOps or IT Operations engineer actually do? And what happens if the role becomes that of an “Agile InfraOps engineer”?

On the basis of job descriptions, the answer seems to be: read the requirements first, as there does not seem to be a lot of agreement between roles, even though they may carry the same title. In reality there is probably a significant overlap between all of these areas, and every company will do things differently depending on:

  • The overall approach: are you using an Agile approach and if so, in what areas is it implemented?
    1. Is it IT-/company-pervasive or does it apply only to select areas?
    2. Is the focus more on technology, culture or mindset, or does it cover all of them?
    3. Which items does it include, i.e. from the above abbreviations, are Security, Business and Applications part of the agile approach?
  • Size of the company: a larger company will usually split IT delivery into a larger number of teams with their own specialisation or area(s) of responsibility.
  • Use of partners and their inclusion (or not) in the various areas of Operations and Development, i.e. how do you best integrate a Managed Service Provider or Consulting Partner?

Regardless of the term used, the aim should be to:
1. Deliver more often
2. Reduce impact and risk
3. Reduce delivery effort
4. Increase security

What ultimately matters is the outcome and not the terminology. While the terms below will create an initial view, the implementation will vary in each environment.

  • DevOps – combining Development and Operations with all staff having development skills and being equally responsible for Operations, resulting in developers quickly identifying and resolving issues as they are familiar with the applications.
  • InfraOps – Infrastructure Operations (cloud or on-premise; virtualised, containerised, serverless or physical) or Infrastructure Optimisation with a focus on automation and simplification.
  • DevSecOps – DevOps with Security integrated into the team, ensuring that Security is not an afterthought or becomes a blocker.

All of these approaches depend on staff who are willing and interested in adapting and learning continuously, a good framework for automation, orchestration and instrumentation, and company support to develop new ways of working and new products.