Is Hybrid Cloud the post-COVID Solution?

With the immediate future unpredictable and demands changing rapidly, the Hybrid Cloud offers:

1. Business Continuity

  • Hybrid Cloud allows business to continue even in the face of disruptions or unexpected events.
  • Unless your business was already operating on a sophisticated network, some disruption would have been unavoidable.

2. Scaling

  • Hybrid Cloud allows you to scale your business up or down cost-effectively and seamlessly, relative to your immediate situation, voluntary or not.
  • It also reduces capital expenditure and local resource requirements.

3. Migration

  • Hybrid Cloud deployment requires detailed monitoring of cost analytics, quota control, service library customisation, performance and governance capabilities.
  • This is especially true when migrating data to and from public and/or private clouds.
  • It also applies when moving into richer service libraries and expanding APIs, apps and infrastructure capabilities.

4. Security

  • Security is something all Hybrid Cloud users need to account for, and the industry is quickly working towards aligning security strategy with business strategy.
  • This involves transparency, with control and visibility across cloud deployments and their downstream evolution.

As always, Diaxion can help you with this work. We have completed it for a number of our clients, including banks, superannuation funds, government agencies and utilities.


Contact us to find out more.


Class Based Firewall Policy

 

In the early 1990s, firewalls were introduced to protect organisations by dropping packets based on a predefined security policy consisting of destination IP address and protocol or port, otherwise referred to as a packet filtering system.

Over the years firewalls have evolved to provide an array of security controls, starting with stateful inspection and application layer filtering, then evolving to offer threat prevention features such as antivirus (AV), SSL inspection and web filtering. They can also provide secure 2FA VPN solutions for SMEs with advanced logging and reporting functionality.

This is all great, however, at the heart of every firewall remains a manually defined security policy that ultimately determines what traffic is allowed to flow through.

 

The problem with traditional firewall policies

Neither the attitude towards firewall security policies nor their implementation has changed since their inception. They remain static in nature, requiring human intervention to create their structure, maintain rules and enforce changes.

Within most organisations, firewall policies are seen as shared infrastructure, on the basis that inter-departmental application flows traverse them. A single organisation can operate tens, if not hundreds, of firewalls, each with a unique security policy. Ownership of the structure and content tends to be a grey area, with security and governance departments offering guidance on structure only.

Rules within a given policy tend to be fine-grained (one-to-one), interspersed with the occasional coarse-grained (one-to-many or many-to-many) rule. As such, policies are destined to grow to unmanageable sizes, in some instances thousands of rules.

Needless to say, the larger a policy becomes the harder it is to manage. Large policies pose a risk to operational performance, tending to increase CPU and RAM usage as the number of firewall lookups increases. Similarly, security compliance and governance suffer, as it becomes harder to identify and manage duplicate and overlapping rules.

Depending on the make and model of firewall, there are tools available that can help identify duplicate rules. Overlapping rules tend to be more subjective and are therefore harder to remove. Hit counters are the main mechanism for tidying up redundant rules; however, to reduce the risk of impacting services, it can take up to 12 months before these rules can be deleted, if they are deleted at all. Understandably, both application owners and change management teams take a pessimistic attitude to removing obsolete rules. The perceived view is that the risk is not worth the reward, in the short term at least. Another view is that it is simply kicking the can down the road.

Being an infrastructure service, security policies are not directly revenue generating. As a result, the appetite, funding and resources needed to maintain and periodically tidy up rules almost never eventuate. Quite simply, until it becomes a realised operational or security risk, most firewall security policies are left to their own devices. Except in SDDC and SDN environments with micro-segmentation and automation, most organisations and projects neglect to delete the associated firewall rules when applications are decommissioned. This is the main reason redundant rules exist.

It doesn’t have to be this way…!!

Hopefully you can now appreciate the inevitable onset of issues that can arise from neglected firewall policies. If only there was a way to create a firewall policy that organisations could simply ‘set and forget’. A firewall policy that meets security and compliance obligations but doesn’t require the creation of new rules every time a new application is deployed.

Let me introduce ‘Class Based Firewall Policy’ (otherwise known as CBFP). In some respects, an adaptation of zone-based firewalls, CBFP uses the concept of manually defined classes (rather than zones) to differentiate between objects. It differs somewhat from zone-based firewalls as classes are not associated with logical or physical interfaces. Instead, they are a logical construct that can be used to form a policy structure with pre-defined rules to meet differing security postures.

In its simplest form, CBFP is a set of pre-configured firewall rules permitting flows between security groups and subnets, differentiated by a classification hierarchy. Every time an application is deployed, the designer categorises each of its servers based on their security rating. Once the application is deployed, unless there are some bespoke or non-standard flows between application components, no firewall rule changes are required to reach UAT and eventually go live.

Figure 1 provides a graphical representation of what a typical CBFP hierarchy might look like.

Figure 1: Class Based Firewall Policy (example)

It’s important to understand that many aspects of a CBFP policy are bespoke and unique to each organisation. For example, the ‘Uncontrolled’ class or zone for company A may house web servers and require HTTPS to be permitted from the ‘External’ zone. Company B may house web servers in the ‘Controlled’ zone and not require subdivision of controlled workloads. Similarly, not all organisations will need an isolated zone that is completely protected from direct external access.

When looking at the real-world implementation of a CBFP policy, one can use firewall objects based on IPv4 addresses to classify servers or appliances into a class or zone by virtue of their security group membership. Perhaps a better implementation, albeit very much dependent on each organisation's layer 3 network architecture, is to reserve Class C (/24) IPv4 subnets for each CBFP classification or zone. This aligns with the CBFP goal of 'set and forget'.
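
As a simple illustration of the subnet-based approach, the sketch below (in Python, with purely hypothetical class names and /24 ranges, not a standard) maps a server to its CBFP class from the subnet it sits in:

import ipaddress

# Hypothetical /24 reservations per CBFP class; real ranges depend on each
# organisation's layer 3 architecture.
CLASS_SUBNETS = {
    "UNCONTROLLED": ipaddress.ip_network("10.10.10.0/24"),
    "CONTROLLED": ipaddress.ip_network("10.10.20.0/24"),
    "SECURED": ipaddress.ip_network("10.10.30.0/24"),
    "MANAGEMENT": ipaddress.ip_network("10.10.40.0/24"),
}

def classify_server(ip: str) -> str:
    """Return the CBFP class of a server based on its reserved subnet."""
    address = ipaddress.ip_address(ip)
    for cbfp_class, subnet in CLASS_SUBNETS.items():
        if address in subnet:
            return cbfp_class
    return "UNCLASSIFIED"  # falls outside every reserved range

print(classify_server("10.10.20.15"))  # -> CONTROLLED

Deployed servers then inherit their firewall treatment purely from the subnet they are placed in, which is what makes the 'set and forget' goal achievable.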

Similarly, for SDDC or SDN deployments using micro-segmentation, logical tags can be created and assigned to individual workloads or virtual machines based upon their chosen CBFP classification or zone.

From an operational perspective nothing really changes when using a CBFP policy. The only noticeable difference will be the reduced number of firewall changes needed. Application design teams, however, must now take responsibility for security compliance by correctly classifying applications as part of any new deployment.

Typically, HLD and LLD documents contain a list of TCP/UDP ports that need to be opened for a new application to function. Instead of, or in addition to this, designers will need to assess their risk appetite, or acceptable level of risk, when classifying applications, then stipulate the security rating and class (i.e., ‘Controlled’ or ‘Secured’ etc.) for each individual server being deployed. This will ultimately improve time-to-market for projects: designers typically understand applications better than implementation engineers, so removing the need for engineers to decipher and translate a design into firewall rules both speeds delivery and removes a degree of risk.

It’s important to highlight that one CBFP policy can be deployed to multiple firewalls within an organisation, both standardising and simplifying the security landscape.

Using the example in Figure 1, below is a high-level layout of how a firewall policy could be structured when adopting CBFP (a short sketch of this structure in code follows the list):

1. Stealth Rules

  • Deny known threats (i.e., RFC 1918 prefixes inbound on perimeter firewalls)

2. Infrastructure Rules

  • Permit known infrastructure flows

3. Application Exemption Rules

  • Permit application exemption flows that are not defined within global CBFP rules

4. USER Class Rules

  • Permit USER to/from USER
  • Permit USER to/from MANAGEMENT rules
  • Permit USER to/from EXTERNAL rules
  • Permit USER to/from UNCONTROLLED rules
  • Deny USER to ANY
  • Deny ANY to USER

5. UNCONTROLLED Class Rules

  • Permit UNCONTROLLED to/from UNCONTROLLED
  • Permit UNCONTROLLED to/from MANAGEMENT rules
  • Permit UNCONTROLLED to/from EXTERNAL rules
  • Permit UNCONTROLLED to/from USER rules
  • Permit UNCONTROLLED to/from CONTROLLED rules
  • Deny UNCONTROLLED to/from ANY

6. etc…
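
As a minimal sketch of how this layout could be represented (the classes, flows and ordering below are illustrative assumptions, not a recommended baseline), a CBFP policy is essentially an ordered, first-match rule list:

# Illustrative CBFP policy as an ordered, first-match rule list.
# "ANY" is a wildcard; a real policy would also carry ports/services and
# both directions of each to/from pair.
POLICY = [
    # 1. Stealth rules
    {"action": "deny", "src": "RFC1918", "dst": "EXTERNAL"},
    # 2. Infrastructure rules
    {"action": "permit", "src": "MANAGEMENT", "dst": "ANY"},
    # 3. Application exemption rules would sit here
    # 4. USER class rules
    {"action": "permit", "src": "USER", "dst": "MANAGEMENT"},
    {"action": "permit", "src": "USER", "dst": "EXTERNAL"},
    {"action": "deny", "src": "USER", "dst": "ANY"},
    # 5. UNCONTROLLED class rules, and so on for the remaining classes...
]

def evaluate(src_class: str, dst_class: str) -> str:
    """Return the action of the first rule matching the class pair."""
    for rule in POLICY:
        if rule["src"] in (src_class, "ANY") and rule["dst"] in (dst_class, "ANY"):
            return rule["action"]
    return "deny"  # implicit deny at the end of the policy

print(evaluate("USER", "MANAGEMENT"))  # -> permit
print(evaluate("USER", "SECURED"))     # -> deny (caught by the USER-to-ANY rule)

Because the class rules are defined once, deploying a new application only requires classifying its servers; no new rules need to be added.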

Lastly, some things to be aware of…!!

  • Review your governance and compliance controls. It’s important to regulate and verify that workloads and applications are placed into the correct security classification.
  • To avoid some of the pitfalls of a traditional firewall policy, namely the creation of redundant firewall rules, ensure you have an adequate IPAM register that aligns with the list of security objects defined within each firewall policy. That way, when not using an SDN or SDDC environment, the corresponding firewall object can be deleted every time a server or workload is decommissioned.

The Red Hat and Microsoft Azure Partnership

Microsoft and Red Hat have long competed in the operating system space. However, a few years ago both realised that customers were running Windows and Red Hat, Java and .NET side by side, and many of those customers were asking the two companies to work together.



More than 95% of Fortune 500 companies trust Azure to run their services, and almost all of those in the banking, airline and healthcare sectors rely on RHEL.



This led to a partnership between Microsoft and Red Hat. They have been working closely on developing, as well as supporting, joint solutions ever since.

Initially, Microsoft and Red Hat started working together on a number of smaller solutions such as Red Hat Enterprise Linux (RHEL) for Hyper-V and RHEL for Azure.



Over time they have added support for OpenShift, JBoss, Ansible and SQL Server on RHEL, as well as the RHEL for SAP solution in Azure.



Having two of the industry's most widely deployed operating systems working together provides a powerful solution and real benefits to customers who need to co-mingle RHEL and Windows applications and workloads on Azure.


Customers also have two enterprise companies standing behind the solution with dedicated, on-site technical support. Customers call one number for support without worrying whether it is a Red Hat or a Microsoft issue.


Let’s consider OpenShift on Azure. It’s cost-effective, easy and fast to deploy, and provides a fully managed service that allows customers to focus on their applications first.


One real-world example worth mentioning is Lufthansa. Lufthansa runs the entire OpenShift on Red Hat Enterprise Linux stack on Azure and, as a result, has seen a 50% cost reduction.



One more benefit is minimal deployment time. Deutsche Bank, which moved to a combined Microsoft Azure and Red Hat OpenShift service, was able to get its proof-of-concept-to-production time down to three weeks.



Running Red Hat OpenShift on Azure is a first-party Azure service, jointly engineered, managed and supported by Red Hat and Microsoft. It inherits Azure’s compliance and allows customers to use their existing Azure billing.



When Microsoft and Red Hat work together on solutions for enterprise customers, customers get better integrations, better support, and jointly they are able to solve problems and provide an end-to-end solution. 



Azure Migrate – Part 4 Tips, Tricks and Troubleshooting

This is Part 4 of the Diaxion blog series focusing on the Azure Migrate service to Discover, Assess, Replicate and Migrate workloads from an on-premises location to Azure.

Part 1 was focused on Discovery and Assessment and can be found here – Azure Migrate – Discovery and Assessment

Part 2 was focused on the Replication component of Azure Migrate and can be found here – Azure Migrate – Replication

Part 3 was focused on the Migration component of Azure Migrate and can be found here – Azure Migrate – Migration

By now we have successfully discovered, replicated and migrated our on-premises servers to Azure. This has been a work in progress and has involved many different components including the configuration of our on-premises hosting environment and our Azure tenancy to support Azure Migrate.
As you work through the Azure Migrate process, there are a couple of things to look out for. We will try to highlight these to give you a head start if you run into any issues along the way. Microsoft Support is generally available, and the Azure Portal offers an easy way to log a support ticket with pre-filled options as you step through the Migrate process if you need help from Microsoft.

Replication Error #1

Replication may fail during an ongoing replication, or may fail to start at all, if there is a low disk space condition on a Hyper-V host’s C: drive. You may see an error message in the CBEngine logs (replication logs) such as “Failed to export VM config for VM *** with hr:8079002c”. If you receive this error message, check the C: drive of the Hyper-V host your VM runs on for sufficient disk space. There is no specific limit, as it is dependent on the VM being migrated. If you cannot free up sufficient disk space, you can edit the registry to change the default export path from C:\Windows\TEMP\ with the following command:

reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Replication" /v ConfigExportLocation /t REG_SZ /d <TargetFolderPath>

Replace <TargetFolderPath> with the location that should be used for exporting the VM; it must be an existing location. After this, the MARS agent should use the target folder to export the VM configuration for any fresh enable-protection scenarios.
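
If you simply want to flag the low-disk-space condition before it bites, a quick check along these lines (the 20 GB threshold is an arbitrary assumption, not a Microsoft figure) can be run on the Hyper-V host:

import shutil

MIN_FREE_GB = 20  # arbitrary example threshold; size it to the VMs being migrated

usage = shutil.disk_usage("C:\\")
free_gb = usage.free / (1024 ** 3)

if free_gb < MIN_FREE_GB:
    print(f"Warning: only {free_gb:.1f} GB free on C:, the VM config export may fail")
else:
    print(f"{free_gb:.1f} GB free on C:, should be sufficient for the export")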

 
Replication Error #2

You may experience issues replicating data from Hyper-V hosts if the MARS (Microsoft Azure Recovery Services) agent version doesn’t match the ASR (Azure Site Recovery) Provider version. These two critical pieces of software need to be aligned to ensure the data can be replicated into your Azure tenancy successfully.
 
You may find errors in the MARS Agent logs such as InitializeAgent In Progress: device is not registered.
 
Microsoft is constantly updating Azure Site Recovery with rollups where the version of the software might be changed. The best way to check the current version of the software is on the What’s New page –

https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-whats-new

 

Ensuring your agents are supported is crucial to the success of your migration project. Microsoft recommends keeping your versions within N-4 of the current release; anything older than that will limit your support options.

Migration Error #1

A server may not be migrated from the on-premises Hyper-V host environment if the disk size cannot be obtained. An error message to that effect will appear within the Azure Migrate service in the Azure Portal.

This may be an issue with the Hyper-V host itself; an easy resolution is to migrate the VM to another Hyper-V host in the cluster, if at all possible.

Migration Error #2

You may use Windows Firewall to protect your on-premises servers, with a specific set of rules enabling RDP, generally tied to specific network profiles. Depending on your network and supporting configuration in Azure, the migrated server may unexpectedly switch network profile post-migration, causing RDP to be blocked. You will generally not receive an error in this case, just a black screen when trying to RDP to the server. A simple preventative fix is to change the Windows Firewall advanced settings prior to the migration so that RDP is enabled across all network profiles.

And that is it for this Azure Migrate blog series. Hopefully, this has been helpful and if you have any questions about the migration of workloads to Azure, please contact the team at Diaxion and we’ll be happy to assist with any questions you may have.

Part 1 – Discovery and Assessment

Part 2 – Replication

Part 3 – Migration

Part 4 – Tips, Tricks and Troubleshooting


Azure Migrate – Part 3 Migration

This is Part 3 of the Diaxion blog series focusing on the Azure Migrate service to Discover, Assess, Replicate and Migrate workloads from an on-premises location to Azure.

Part 1 was focused on Discovery and Assessment and can be found here – Azure Migrate – Discovery and Assessment

Part 2 was focused on the Replication component of Azure Migrate and can be found here – Azure Migrate – Replication

As we now have servers replicating from our on-premises environment to our Azure tenancy, we are ready to migrate the selected servers. The migration is a disruptive, multi-step process that includes the graceful shutdown of the chosen server, the migration itself (which replicates the final changed blocks), the creation of the virtual machine in Azure and finally the graceful start-up of the Azure VM. The graceful shutdown of the on-premises server is at an operating system level only, so be careful to shut down any application-specific processes prior to the migration.

Prior to any migrations, however, it is highly recommended that you conduct a test migration first. This test migration can be done in a non-disruptive fashion with no impact to the on-premises server that is in scope for the test. You should, however, be cautious of your network settings as the test migration does produce a live, running VM in Azure. If you have the destination network connected to your on-premises environment, you may experience issues with conflicting network addresses for the same server. The test migration is used to ensure the VM can be migrated and to prove the stability of the operating system post-migration; it is not necessarily intended to prove application stability. After the test migration has been completed successfully, it can be cleaned up through the Azure Migrate settings, which will delete the Azure VM. During the test migration, replication of the on-premises server to Azure continues without impact.

The migration process is relatively simple after the test migration. The actual migration step within the Azure Portal is effectively a “Yes -> OK” click-through. Once the migration is underway, the Azure Portal provides step-by-step updates of the Azure Migration process with a duration per step.

Experience suggests the following timeframes can be expected:

  • Prerequisites check for planned failover – 10-15 seconds
  • Shut down the virtual machine – 2 minutes
  • Preparing for failover – 10-15 seconds
  • Start failover – 2 minutes
  • Start the replica virtual machine – 1 minute

The variation sits within the “Preparing for failover” and “Start failover” steps. These two steps depend on the server being migrated and take into consideration how busy or ‘chatty’ the server may be. A database server, for example, may be doing a lot more work than a static management server, so you may experience different timings there. In total, the steps above equate to roughly five to six minutes of downtime for a typical server.

Post-migration, you will find there are several tasks that should be considered. You should, at a minimum, install the Azure Agent, the Azure Log Analytics agent, the Azure Dependency agent and the Azure Network Watcher extension. There are dependencies that need to be considered, such as the .NET Framework version installed locally on the server; this may need updating to support the Azure Agent. Again, experience has found that these agents are best installed after the standard post-migration testing has been completed. That testing could include items such as:

  • Can you connect to the server via RDP?
  • Can you resolve the DNS entry for the migrated server?
  • Are the applications / services running properly on the migrated server?
  • Application-specific testing relevant to the migrated server.

The final steps for the migration are Azure Migrate and on-premises clean-up activities. Although the Azure Migrate process shuts down the running on-premises instance, no other clean-up activities are built into the process; these should be undertaken manually after the migration has been signed off. They include stopping the replication from on-premises to Azure (there is nothing left to replicate post-migration), removing the server from on-premises backups and eventually deleting the on-premises server.

There is one other consideration, assuming not everything has been successful during the migration: what about failback? What are your options in this scenario? This is actually something that Azure Migrate doesn’t handle as cleanly as some other tools. Azure Migrate is a one-way journey for servers. You can perform the migration, then re-enable Azure Site Recovery to protect the VM from Azure back to on-premises and perform another migration, but that is clunky and can be lengthy. The simplest failback scenario is that the on-premises server still exists and is just in a shut-down state. You can remove the connection or the entire VM from Azure and power on the on-premises server, perform your checks, and you are back to the beginning. You will have to reset the replication of the on-premises server to Azure to prepare for another migration at a later date.

And that is it for Part 3 of this Azure Migrate blog series. The Migration step is the riskiest and requires the most planning from an ITIL change perspective due to the downtime but, technically, from an Azure Migrate perspective it is the simplest. The next blog will provide a few tips, tricks and gotchas that we have experienced using the Azure Migrate service. Hopefully, this has been helpful and if you have any questions about the migration of workloads to Azure, please contact the team at Diaxion and we’ll be happy to assist. Stay tuned for Part 4, coming shortly!

Part 1 – Discovery and Assessment

Part 2 – Replication

Part 3 – Migration

Part 4 – Tips, Tricks and Troubleshooting


Application Control – What is it and why do you need it?

You may have heard of application whitelisting or application blacklisting, where specific applications were allowed or disallowed depending on their nature. The progression of malware and the skills of attackers have made simple whitelisting and blacklisting a thing of the past. End user computing is normally the first port of call for attackers, as it is the easiest and most vulnerable part of most enterprises. Elevated user privileges and user susceptibility are a dangerous combination; the concept of least user privilege is where all companies should start when securing end user computing.
Application control is a small part of the wider Privileged Access Management (PAM) suite of controls but is a very important consideration. This article will take us a bit deeper into what application control is and why it should be on the radar for all companies.

The Australian Cyber Security Centre (ACSC) leads the Australian Government’s efforts to improve cyber security. The ACSC has published the “Essential Eight”, a set of strategies to mitigate cyber security incidents. These strategies include preventing malware delivery and execution, which covers application control specifically: preventing the execution of unapproved or malicious programs including executables, scripts, driver libraries and installers. The strategies and the maturity of their implementation are the key concepts of the Essential Eight.

So now we know what application control is at a high level, let us get into a bit more detail, starting with a real-world example. Most modern operating systems have hundreds, if not thousands, of executable files that can be modified; some we easily recognise, such as notepad.exe, and others we do not. For example, notepad.exe normally belongs in the C:\Windows folder, however on the computer that John from Accounting uses, a copy is stored on his desktop. On Friday afternoon, John tried to run notepad.exe, but the application control policy only allows notepad.exe to launch from the C:\Windows folder, and the executable signature on notepad.exe must match what the policy defines as an allowed application. Neither condition was met, and luckily for John and his IT security team, the application control solution prevented the launch of an executable that was loaded with malware and set to wreak havoc on the network.

As we mentioned before, most application control solutions offer a wider and deeper set of services that start with the principle of least user privilege. Application control solutions are generally policy driven, using both the user and the computer identities to drive control from a central management platform. These policies can dictate which applications are allowed to launch, which applications are blocked, and which applications prompt the user to respond. As in the example above, John from Accounting could be given just-in-time (JIT) privileges to launch an application that may not be controlled by policy. The policy can also dictate that all actions, both user and software, are written back to the central platform for auditing purposes.

The applications that are allowed to launch, those that have been ‘whitelisted’, have to be managed from that central platform. The policies we mentioned above can enforce cryptographic hash rules, publisher certificate rules, path rules to ensure the executable(s) live in a specific folder, and so on. It is no longer enough to simply allow a file name, package name or another easily changed application attribute, as these are easily abused by attackers.
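
As a simplified illustration of how path and hash rules work together (the folder, hash value and policy structure below are hypothetical, not taken from any particular product), a check might look like this:

import hashlib
from pathlib import Path

# Hypothetical allow-list entry: notepad.exe may only launch from C:\Windows
# and only when its SHA-256 hash matches the approved build.
ALLOWED = {
    "notepad.exe": {
        "allowed_dir": Path(r"C:\Windows"),
        "sha256": "0123abcd...",  # placeholder for the approved hash
    }
}

def is_launch_allowed(exe_path: str) -> bool:
    """Apply the path rule, then the cryptographic hash rule."""
    path = Path(exe_path)
    rule = ALLOWED.get(path.name.lower())
    if rule is None:
        return False  # not whitelisted at all
    if path.parent != rule["allowed_dir"]:
        return False  # fails the path rule
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == rule["sha256"]  # passes only if the hash rule also matches

# John's desktop copy fails the path rule, so the launch is blocked.
print(is_launch_allowed(r"C:\Users\John\Desktop\notepad.exe"))  # -> False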

We now have a better handle on what application control is and how we can use it to control application executables for our end user computing platforms and the server environments. However, making sure the application control solution is fit for purpose can only be done by matching the technology to the business requirements. Not all application control solutions are made equally so time and effort must go into the decision-making process well before the technology is implemented.

Some of the business requirements that we would normally start with are:

  • Does the technology align to the operating systems currently supported in your environment?
  • Do you need a solution that is based on-premises or is a cloud / SaaS option preferable?
  • Do you understand the applications currently running that you wish to manage?
  • What are the reporting requirements, both real-time and historical?
  • What level of integration with other systems do you need such as SIEM or packaging platforms?

These business requirements should hopefully give you an indication of the preferred application control solution suitable for your enterprise and will allow you to go forward with implementing an application control solution. You may find as you work through the business requirements that, whilst application control is a good starting point, you need the full set of features offered by a Privileged Access Management system. This may include integration with DevOps and build pipelines, the ability to manage scripting environments and even integration with capabilities such as serverless architectures.

A very important final note, however: application control by itself does not replace antivirus and other security software already in place on systems. Application control should be considered a complementary product, and using multiple security solutions is the best way to achieve an effective defence-in-depth approach to securing endpoints.

If you are considering application control, hopefully this article has helped you along the way. If you are looking to improve your security posture and would like assistance with application control, please contact the team at Diaxion and we’ll be happy to assist with any questions you may have.


Azure Migrate – Part 2 Replication

This is Part 2 of the Diaxion blog series focusing on the Azure Migrate service to Discover, Assess, Replicate and Migrate workloads from an on-premises location to Azure. Part 1 was focused on Discovery and Assessment and can be found here – Azure Migrate – Discovery and Assessment

Now that we have completed the Discovery and Assessment as per Part 1 of this blog, we are ready to replicate the selected servers from our on-premises environment to Azure. Previously, most replications were accomplished over a secured, encrypted Internet connection; however, this did run the risk of transmitting private, potentially business-sensitive data over an Internet link. There are now options to replicate servers over an ExpressRoute connection, either using Microsoft peering or private peering with private endpoints; however, as the private peering method is quite new, it is still limited. The decision on which network methodology is best for you is an individual business decision that must be made prior to replication.

Regardless of the network methodology you choose, once the network is in place the replication of data itself is quite similar. The on-premises Azure Migrate appliance co-ordinates communications and manages the data replication for servers replicating to Azure.

The replication process copies the server storage and the hosting configuration file from on-premises to a configured storage account in your Azure tenancy. Once the initial replication is completed, delta synchronisations occur frequently to keep changed blocks synchronised from on-premises to Azure. The replication of a server is a multi-step process allowing you to configure each replication manually. The steps take into consideration the different ways Azure Migrate can support your environment. These include:

  • The source settings of your environment and whether you are replicating virtual servers with VMware vSphere or Hyper-V or physical servers.
  • The ability to include metadata from the Discovery and Assessment phase including migration groups that you may have configured to group servers.
  • The target settings specific to the replication you are looking to complete. These include the target subscription, resource group, storage account, virtual network and high-availability options that may apply to your servers, whether that’s an Availability Zone (not available in all regions) or an Availability Set. You can also choose to apply the Azure Hybrid Benefit in this step if your servers are already covered by a valid Windows Server licence.
  • The target compute settings for the server that will end up running in Azure. You can let the Assessment make these decisions for you with regards to the Azure VM size, OS type and OS disk, or select them manually if you wish to override the assessment details. These sizes and options can be changed at any time prior to replication starting; once the replication is underway they cannot be changed. A VM can always be resized post-migration, however.
  • The disks that are available for replication. The normal practice is to migrate all disks attached to your on-premises server, but depending on your configuration you have the option to take selected disks only. The disk replicas in Azure will be managed disks; you can choose either Standard HDD/SSD or Premium managed disks.

After the above options have been taken into consideration, the replication begins. The replication of the virtual machine is a ‘live’ event where the replication is ongoing until the VM is migrated to Azure. The replication is a storage-based data transfer keeping the on-premises VM and the Azure disks synchronised to minimise the amount of time required for the migration. This delta replication is handled through the Azure Migrate appliance that is deployed on-premises. Alerts and details of the replication are raised in the Azure portal under Azure Migrate.

The status of the replicating servers can be viewed through the Azure Portal: whether the replication is ongoing (with a percentage of data replicated) and whether its state is healthy or critical. Replication for any server can be stopped via the Portal if required, and current and past events related to a server’s replication can also be viewed there.

The actual duration of the replication is obviously dependent on your network and source environment. If you are transferring the data over an Internet link, you must be aware of the risk of the data flooding the link and impacting the business. The source environment, whether Hyper-V or VMware, can also contribute to the performance of the replication, as the transfer is only as fast as the hosts and storage can manage. The source environment is generally also the cause of replication failures; there are a few ‘gotchas’ that can trigger errors, and we will talk through some of these causes and solutions in Part 4 of this blog.

Some technical comments about the replication process are as follows:

  • The Azure Migrate appliance is responsible for the compression and encryption of data prior to uploading to Azure. The end storage in Azure is also encrypted using “encryption at rest” protocols. HTTPS and TLS 1.2 are used for the transfer of data.
  • Replication cycles depend on how long the previous delta cycle took. The formula is the previous cycle time divided by 2, or one hour, whichever is higher (see the short sketch after this list).
  • A delta cycle is started immediately after the initial replication is finished. Future delta cycles then follow the above formula for timing.
  • A folder is created per replicating server in the Azure Storage account configured for replication. These folders contain the disks and the VM configuration file and can be explored using Azure Storage Explorer.
  • Azure Migrate will automatically create selected Azure services on the first replication attempt. These services include a Service Bus, a Gateway Storage account, a Log Storage account and a Key Vault for managing the connection strings for the Service Bus and the access keys for the storage accounts.
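
To make the delta cycle timing rule above concrete, here is a tiny sketch of that formula (purely illustrative; the scheduling itself is internal to Azure Migrate):

from datetime import timedelta

def next_delta_cycle(previous_cycle: timedelta) -> timedelta:
    """Half the previous cycle time, but never more frequent than hourly."""
    return max(previous_cycle / 2, timedelta(hours=1))

print(next_delta_cycle(timedelta(hours=4)))     # -> 2:00:00
print(next_delta_cycle(timedelta(minutes=30)))  # -> 1:00:00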

And that’s it for Part 2 of this Azure Migrate blog series. The next blog will look at the migration of data using the Azure Migrate service. Hopefully this has been helpful and if you have any questions about the migration of workloads to Azure, please contact the team at Diaxion and we’ll be happy to assist with any questions you may have. Stay tuned for Part 3, coming shortly!

Part 1 – Discovery and Assessment

Part 2 – Replication

Part 3 – Migration

Part 4 – Tips, Tricks and Troubleshooting


How to assess your Operating Model and Maturity

Time for another instalment in my operating model series. We have talked quite a bit about what an operating model is, why you need one, how to build your operating model and how to change it. What we have not discussed is how to assess your operating model as it stands, and what level of maturity your organisation has in respect of the operating model.

Your operating model turns your strategy (the why) into operation (how things are actually done). It effectively provides “the what”: what needs to be done across the key aspects of your business – process, structure, systems, culture, governance, etc. The assessment takes a critical but non-judgemental view of the operating model and the capabilities it provides, to see how well the operating model supports the strategy.

Normally an operating model assessment takes 4 to 6 weeks, depending on the complexity and scope. Remember, this is just an assessment, not the design of next steps or the production of the next-generation operating model.

Remember that an operating model can live at multiple levels, so one needs to look carefully at the appropriate scope of the above points. For example, it is no good working on the operating model for a finance department if the information you provide relates to the whole business, which is a healthcare provider (though some of that wider context is required). This comes back to a point in one of my previous articles about how strategy must flow up and down the organisation.

Many organisations will find that the top-level strategy is not well communicated down the line and/or the intent is changed. This usually impacts the operating model negatively, as either the organisation is set up incorrectly or the value chain outcomes are wrong. Secondly, if the operating model has not been critically examined for some time, you will find complexity has crept in as groups have forged their own paths in pursuit of growth or capability. This often leads to unwitting complexity across processes and organisation, and to duplication in technology and structure.

When

Not every change in strategy requires a change to the operating model; the typical yearly strategy update usually does not. Typically, the following types of significant strategy change mean you need to assess your operating model:

  • Centralisation or decentralisation change
  • Significant change in your value proposition to customers
  • Acquisitions
  • Business strategy pivot – product to customer or similar
  • Taking on a significant new capability, such as cloud
  • Significant outsourcing or, perhaps more likely, insourcing

 So, how do you actually do the assessment? Remember the inputs and outputs are relevant to the scope of the assessment.

Understand where you are

The understanding should be divided into a few stages: pre-work, interviews/workshops, analysis and initial outcomes.

Pre-work involves pulling together the information required to start the assessment.

  • A good grasp of the strategy, which should be reasonably well documented,
  • Value chains understood and documented to some extent, along with the key processes that support them,
  • Organisation structure,
  • Functions / business architecture – usually this is not well documented but ‘understood’, though understanding and definitions can vary significantly,
  • The health of the technology that supports the operating model.

Much of the information gathered can be expanded on or checked in a set of interviews or workshops with key people and groups. These workshops should use open-ended rather than closed questions, and give people time to speak and to know they have been heard. This also gives you a view of how well the overall strategy of the organisation has trickled down. Try to have people of the same group and level at each workshop, so that participants are not intimidated by hierarchy. Prepared questions and pointers must keep the focus on operating model topics rather than becoming a free-for-all, otherwise general grievances come out. This also means facilitation of the workshops must focus on the outcomes needed.

Initial analysis should focus on the common key issues that people believe are holding them back from delivering the required outcomes, and on the health of the required capabilities across people, process and technology. Understand how their view of strategy and outcomes differs from, and/or is shaped by, the overall strategy. Are the capabilities aligned with strategy? Are they effective? Does this part of the organisation have its resources focused on the right things and in the right way? This gives you a start on the analysis stage.

It is important to communicate your findings to the sponsor as they become clearer. This will ensure that you have their support and direction on the appropriate communication to the rest of the organisation.

Identify what needs to change

Identifying what needs to change will be both simple and complex; it is rarely one or the other. To do this we need to:

  • Understand the maturity of the organisation and where it wants to get to from a maturity perspective, 
  • Identify capabilities, processes, organisation or technology that are not supporting the strategic direction, and
  • Check that the basis of your operating model is still aligned to the strategy.

The maturity of the organisation is a key point of view to take, as the current maturity effectively determines where you can get to and how well the organisation can adapt to change. I am not going to go deeply into maturity models – that could be the subject of another article. Suffice to say each maturity model will differ in its specific view, but the structure will be the same or very similar. Below is one I adapted from Gartner’s IT Infrastructure maturity model: https://www.gartner.com/en