
Diaxion now offers Office 365 and SharePoint services

Using Office 365 and SharePoint?

    • Want to make it work better for your business?
    • Migrating to a new data centre, cloud or servers?
    • Do you need to back up your data in Office 365?
    • Need training?
    • Upgrading?

Diaxion now has the services to help you do this.

We recently completed a migration of a SharePoint environment and its supporting systems from infrastructure in Sydney to a data centre in Melbourne.

After an initial discovery phase to understand the infrastructure and the relationships between the various technologies in the environment, Diaxion worked alongside the client’s migration team to ensure a successful migration of its infrastructure.

The engagement included the following phases:
Discovery:

    • Identify business needs – determine business goals and objectives through a series of onsite workshops.
    • Create a solution roadmap – define the best path for reaching the business goals.

Migration:

    • The Planning phase – finalise all documentation related to the migration so that a clear direction is understood.
    • The Testing phase – migrate the required assets to the destination data centre and confirm functionality is operational.
    • The Migration phase – schedule the transfer of production infrastructure assets to the destination data centre and bring the various services back online.
    • The Support phase – ensure the assets are managed and kept running in an optimal state until they are decommissioned at end of life.

Is your CMDB up to date?

This is obviously a very leading question, as several more questions will flow from it:

  • How is it refreshed / kept up to date? Is it a manual process or done automatically? If done automatically, are multiple tools required or only one?
  • Assuming it is up to date, what use do you get out of it, i.e. is it integrated into Service Management processes such as Incident Management and Change Management?
  • Does it integrate into your DevOps approach (where applicable)? And, most fundamentally:
  • How do you define “up to date”?

In our experience the Configuration Management Database (CMDB) is one of the most neglected components of any IT department. It takes significant effort to define, create and maintain, and its benefits can be quite difficult to quantify. So why is the state of configuration management within IT still so immature?

Consider some – not perfect – analogies:

  • Would any airline company fly planes until there are no longer spare parts available?
  • Would a food retailer store items in their warehouse without knowing best-before dates?
  • Would a manufacturing company only order parts once they have run out of them?

In a similar way, is it really economical for an IT department to run old servers (which generally have less capacity, consume more power and generate more heat) just because all the IT department uses is an asset register, which records the purchase date but provides no additional details linking devices into the data centre?

How is it possible that companies are still using obsolete software, e.g. Windows Server 2003, which has been out of support for more than a year? Or, on a related note, still building new Windows Server 2008 environments, which are past mainstream support and will cease to be supported in 2020?

Or why is it common for support staff to first confirm the details of the environment for any new incident instead of having this information readily available within the CMDB?
It usually comes down to the fact that the CMDB contains no information on, or linkage to, recent upgrades (e.g. memory added at some point), disk and storage details, or installed tools and software. All too often the CMDB is also updated only infrequently.

The CMDB has been an integral part of IT Service Management (e.g. ITIL) for a long time, as it can provide rapid information about the existing environment and can highlight connections and dependencies that are otherwise easy to miss. In this way it assists in resolving issues quickly (incident and problem management) and, even more importantly, can help avoid issues in the first place when used as part of change management, where it makes it possible to gauge the impact of a change on the whole environment.
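
To make the dependency point concrete, here is a minimal sketch (our illustration, not any particular CMDB product) of how the impact of a change can be derived from CI relationships held in a CMDB:

    # Minimal sketch of change impact analysis over CMDB relationships.
    # CI names and the dependency map are illustrative only.
    from collections import deque

    # "X depends on Y": if Y changes, X is potentially impacted.
    depends_on = {
        "payroll-app":   ["app-server-01", "oracle-db-01"],
        "oracle-db-01":  ["san-array-01"],
        "app-server-01": ["san-array-01", "core-switch-01"],
    }

    def impacted_by(changed_ci):
        """Return every CI that directly or transitively depends on changed_ci."""
        # Invert the edges so we can walk outwards from the changed CI.
        dependents = {}
        for ci, deps in depends_on.items():
            for dep in deps:
                dependents.setdefault(dep, []).append(ci)
        impacted, queue = set(), deque([changed_ci])
        while queue:
            for ci in dependents.get(queue.popleft(), []):
                if ci not in impacted:
                    impacted.add(ci)
                    queue.append(ci)
        return impacted

    print(sorted(impacted_by("san-array-01")))
    # ['app-server-01', 'oracle-db-01', 'payroll-app']

A planned change to the storage array immediately surfaces the database, the application server and ultimately the payroll application as impacted, which is exactly the question change management needs answered.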

Contrary to what one might expect, the value of the CMDB does not decrease with new approaches to the development and management of IT infrastructure; rather, the opposite is true. A DevOps approach relies heavily on the use of (near-)identical environments for development, and one way to assist with this is a well-defined and well-maintained CMDB.

The challenge is – and has been for some time – to maintain the information within the CMDB in an efficient manner. This starts with:

  • A clear definition of what information and level of detail should be held within the CMDB (a hypothetical sketch of both levels follows this list):
    1. Is it necessary to include minor detail for components, e.g. ‘Provider A’ 5m LAN cable, ‘Provider B’ 5m LAN cable, ‘Company C’ part number of a 2 TB internal 7200rpm SATA disk? Or
    2. Is it sufficient to have a high-level summary in the style of: x86 server with 256 GB RAM, 2 CPUs with 24 cores?
  • And a decision on how this information is best maintained.
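
As a hypothetical illustration of those two extremes, the same server might be captured at either level of detail like this (field names are ours, not from any specific CMDB product):

    # The same host as a CI record at two levels of detail (illustrative only).
    ci_high_level = {
        "name": "prod-app-01",
        "type": "x86 server",
        "ram_gb": 256,
        "cpus": 2,
        "cores_per_cpu": 24,
    }

    ci_fine_grained = {
        **ci_high_level,
        "disks": [
            {"vendor": "Company C", "part_no": "C-2TB-7200-SATA",
             "size_tb": 2, "rpm": 7200, "interface": "SATA"},
        ],
        "cabling": [
            {"vendor": "Provider A", "type": "LAN", "length_m": 5},
            {"vendor": "Provider B", "type": "LAN", "length_m": 5},
        ],
    }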

The answer will depend on the complexity of the environment, the maturity of IT Service Management, the aim and type of IT department and – last, but not least – the available tools.

Many components within the IT organisation depend on a combination of people, tools and processes; however, keeping the CMDB up to date is easiest to achieve with a tool that automatically discovers changes and is tightly integrated with a service management platform.
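
A minimal sketch of what that automated discovery step boils down to, assuming a simple fact-gathering script that diffs live data against the stored CI record (a real tool would collect far more detail and raise the drift with the service management platform):

    # Gather live facts from a host and diff them against the stored CI record.
    import os
    import platform
    import socket

    def discover():
        return {
            "hostname": socket.gethostname(),
            "os": f"{platform.system()} {platform.release()}",
            "cpus": os.cpu_count(),
        }

    def cmdb_drift(stored_ci, live_facts):
        """Return fields where reality has drifted: {field: (stored, live)}."""
        return {k: (stored_ci.get(k), v)
                for k, v in live_facts.items()
                if stored_ci.get(k) != v}

    stored = {"hostname": "prod-app-01", "os": "Windows 2012", "cpus": 48}
    print(cmdb_drift(stored, discover()))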

One of the challenges is to include the cost of the required tools (and of the additional people and processes that will still be needed to manage the CMDB and Service Management) in any TCO calculation for new hardware, software or services.

It has been said that IT has to be a part of the business. Coming back to the first example of airlines, several major carriers in the U.S. suffered serious outages this year. While probably not all of them could have been prevented with a better CMDB and Service Management, there is some indication that the extended outages were due to poor visibility of the exact impact of planned changes. This also highlights the close relationship between the business and IT.

In light of this, can you really afford not to keep your CMDB up to date?

Diaxion recommends an approach of:

  • Discovery of the current state and the desired outcome;
  • Analysis of the current CMDB state and requirements;
  • Recommendations;
  • Remediation and/or implementation.

Agile Development and steps you can take to get started

Getting started with Agile can look daunting at times and it is difficult to start the journey if you are not sure which steps to take first. This article will give a quick overview of Agile, its core concepts and some practical advice to get you started.

What Is Agile Development?
Agile Development is a blanket term that describes a way to manage development teams and projects. It encompasses many different development methodologies that share the values contained in the Agile Manifesto and the principles behind it: individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan. Note the emphasis on responding to change: it covers not only feedback from users about the application, but also feedback from the team and how to constantly learn and improve.

What are the advantages gained by Agile Software Development?
The end goal is to deliver business value, and there are a few ways this is achieved:
1. Constant feedback from users results in software users actually need and use
2. Testing and review are integrated throughout the lifecycle, reducing bugs and resulting in a stable application
3. Shorter development and test cycles, with a release after each sprint, result in shorter time to market with usable features

Steps to help you get started
There is no set recipe for getting started with Agile Development; it depends on many variables, such as the number of teams, management buy-in, developer skill set, customer relationships and company culture. There are, however, some guidelines you can adapt to fit your scenario:
• It is important to get a vertical slice of the stakeholders involved. This starts with the development team and expands to the business and end users
• Get management buy-in; it won’t be possible to implement Agile properly without it. Management’s support will show commitment from the company and help clear obstacles such as timely access to a product owner
• Get the teams involved trained in Agile principles so they know what is expected of them
• Get the development team trained in the skills required for Agile development, such as unit testing, designing for testability, refactoring, automation, version control and build servers.

The next step is choosing an application that will be your proof of concept (POC). You want to show quick wins and build up the team’s confidence in Agile Development. It should be visible enough to add value but small enough to implement relatively quickly. You want the team to focus on the method, not be bogged down by code.

The key here is incremental, iterative changes to the application driven by user feedback and automation to optimise the development cycle. You are making small changes and each change goes through requirements, development and testing phases.

Start by soliciting user feedback; this will help you in a few ways:
1. The team may not be aware which features users need the most. You can prioritise your backlog items and work on the most requested items first
2. During new product development you will be able to see early on whether it is feasible, and either allocate more resources or stop the project before it drains them
3. For existing applications, feedback will help you identify problem areas that are having a negative impact on sales or usage, for example feedback from app stores.

Take the feedback from users, apply your expertise and analyse the requirements to turn them into the features users actually need (Goal 1). Implement source/version control; this is a tooling requirement for the steps that follow:
• Automated unit testing. As part of the development cycle, developers must create unit tests for any new code they write (a minimal sketch follows this list). In larger teams with testers, they can also write integration, load and other types of tests. (Goal 2)
• Automated builds. With multiple developers making changes, and possibly multiple components making up the system, you have to bring all the pieces together and build the final product. This is called Continuous Integration, and it helps reduce bugs and breaking changes between all the modules at build time. (Goal 2)
• Automated integration testing. Once you have built all the components, you have to test the interaction between them. Integration tests run scenarios that exercise all the components together and catch breaking changes at run time. These tests run as part of your build process and, together with the automated unit tests, are called Continuous Testing. (Goal 2)
• Automated deployments. Software only adds value when users can use it. By automating deployment you reduce deployment errors, and you can deploy more often and with confidence. (Goal 3)
• Automated delivery. Automatically delivers software into production environments. (Goal 3)
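
As a minimal sketch of the automated unit testing step, assuming pytest as the test runner (the function and its tests are illustrative only):

    # Illustrative production code and its unit tests, run automatically by
    # the build server on every commit (e.g. "pytest" on the command line).
    import pytest

    def discount_price(price, percent):
        """Apply a percentage discount, rounded to cents."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_discount_applies_percentage():
        assert discount_price(200.0, 25) == 150.0

    def test_discount_rejects_bad_percentage():
        with pytest.raises(ValueError):
            discount_price(200.0, 110)

When the build server runs tests like these on every commit and fails the build on any breakage, the unit tests become part of the Continuous Testing described above.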

The outcome of this process is software in a state where it can be released if need be: the team is confident that it has been tested properly, the deployment process works and the software meets user requirements. The biggest gains come when you reach automated deployments and your users can provide feedback and see progress shortly after changes are developed. People get fixated on automated delivery, but many organisations don’t need it and it is not always feasible. Depending on your environment, continuous delivery can entail huge effort, so you will have to weigh up the benefits.

Conclusion
There is no one size fits all when it comes to Agile Development. You’ll have to try out different processes to see what works for your team or organisation. It is important to remember that being Agile is a cultural shift, not a tool implementation, and it takes time and practice. Focus on small changes and learn from feedback. Collaboration between all players forms an integral part of the methodology, and it is something you can start applying from the beginning.


DevOps: not just another buzzword

What are the benefits? Who to work with? Where to implement?

We can answer your questions and provide information to kick start your journey.

Join the Diaxion and Puppet teams in an interactive two-hour breakfast discussion about the benefits achieved through implementing DevOps in your business.

Learn what it can do for your organisation, how it can be applied, and hear about our customers’ experiences.

The session is a small roundtable collaboration with a select group of your peers, the Puppet specialists and the Diaxion subject matter expert. Some of the topics covered are:

What are the real business benefits?

  • What gains can be made by implementing DevOps?
  • Who is best to work through the implementation with?
  • Where do you start?
  • What support is available?
  • How do I find experienced people?
Sydney: 11th October

7.30am registration for 8am start

Venue: Four Points by Sheraton, 161 Sussex St

Melbourne: 12th October

7.30am registration for 8am start

Venue: The Westin, 205 Collins St

Secure your place now.

Hear from Puppet about their 2016 DevOps Report.


Cloud connectivity

Cloud connectivity. Not a very exciting topic, right? Maybe so, but with more and more companies moving their IT workloads to public cloud platforms, and IT departments constantly being asked to provide secure, high-speed, reliable connections to these platforms, getting your cloud connectivity option right is becoming more important than ever.

It used to be that your only option was to use an Internet connection and hope there was enough bandwidth available to provide an adequate level of performance. With the IT industry rapidly transforming, IT departments are now being provided with choices and, more importantly, the ability to guarantee performance.

The market has shifted and now provides two distinct cloud connectivity options. You can use a telecommunications provider to extend an existing corporate WAN/MPLS environment to your cloud platform of choice, or you can use a third-party cloud exchange provider to supply on-demand, elastic connectivity to multiple cloud platforms. The decision comes down to whatever works best for your particular scenario.

Here are a couple of questions which can help determine the cloud connectivity option that works best for you, and why:

Q: Which public cloud am I connecting to?
Not all cloud connectivity providers are created equal. Whether it be through your telco or a cloud exchange provider, not all public cloud platforms are accessible, for one reason or another. Microsoft Azure and AWS are generally easy to connect to, but if you’re looking to utilise IBM Softlayer, Google Cloud Platform, VMware vCloud Air or even something a bit more workload-specific such as SAP HANA Cloud Platform, you’ll have to dig a bit deeper to see who has options available.

Q: I only want to trial it for a short time to see if it actually works.
Your business case for a private cloud connection may only be a trial or a proof of concept to see how well it works for dev/test workloads. You need to decide whether to engage your telco to set up a new site on your WAN/MPLS, or whether the cloud exchange provider option is more palatable, as the service can be procured on a month-by-month contract with no lock-in.

Q: How easy is it to get connected to my public cloud platform?
The cloud connectivity options are markedly different when it comes to getting started. The telecommunications provider option can be straightforward: if your existing provider has a product available, e.g. you use Telstra for your WAN/MPLS, then you can utilise Telstra Cloud Gateway and configure your connection through that product portal. If you choose a cloud exchange provider such as Megaport or Equinix, then you need to ensure you meet their requirements, such as being in a Megaport-enabled data centre or an Equinix facility, to provide the physical connection to their cloud exchange.

Q: Do I really need a dedicated, private cloud connection?
No. Yes. Well, maybe. It depends.

At the end of the day, your need for a dedicated, private cloud connection truly depends on which challenge you are trying to overcome and what you are trying to achieve.

With cloud connectivity options changing so dynamically, now and in the foreseeable future, and more public clouds coming online and allowing connectivity, your ability to make the right decision will affect the overall success of your cloud strategy. Diaxion are experts in IT strategy and optimisation, so if cloud connectivity is on your radar, we can assist you in your decision making.


Information Security – A forgotten realm

An all too often forgotten or underestimated facet of IT is an organisation’s Information Security Policy, and with the increasing popularity of Cloud resources, both public and hybrid, it becomes especially important.

An Information Security Policy is the cornerstone of an Information Security Program and should reflect an organisation’s objectives for security and the agreed-upon business strategy for securing the organisation’s information. A security policy identifies the rules and procedures that all persons accessing an organisation’s computer resources must comply with in order to ensure the confidentiality, integrity, sovereignty and availability of data and resources. Additionally, it documents the organisation’s security posture, describes and assigns functions and responsibilities, grants authority to security professionals, and identifies incident response processes and procedures.

When developing an IT Security Policy you should keep the ‘defence in depth’ model in mind. This means an organisation should not rely on one principal means or layer of protection. Instead, a security program should be developed that provides multiple layers of defence, ensuring maximum protection of an organisation’s data and resources and minimising the potential for compromise.

In order to be useful, an IT Security Policy must be formally agreed upon by executive management. This means that, in order to compose an information security policy document, an organisation has to have well-defined objectives for security and an agreed-upon management strategy for securing information. If there is any debate over the content of the policy, the disagreement may continue throughout subsequent attempts to enforce it, with the consequence that the Information Security Program itself will be dysfunctional.

So what determines a good Information Security Policy?

In general a good IT Security Policy does the following:

  • Communicates clear and concise information and is realistic;
  • Includes defined scope and applicability;
  • Makes enforceability possible;
  • Identifies the areas of responsibility for users, administrators, and management;
  • Provides sufficient guidance for development of specific procedures;
  • Balances protection with productivity;
  • Secures assets against theft, fraud, malicious or accidental damage, breach of privacy or confidentiality;
  • Protects an organisation from damage or liability arising from the misuse of its IT resources;
  • Identifies how incidents will be handled; and
  • Is endorsed at the senior management level.
So what are the components of an IT Security Policy?

A security policy should be flexible and adaptable to technology changes; it should be a living document, routinely updated as new technology and procedures are required to support the organisation. Of course these components will vary by organisation based on size, services offered, technology and available revenue.

Some of the typical elements included in a security policy are:

  • Security Definition
  • Enforcement
  • Acceptable Usage
    o Email
    o Internet
    o Mobile / Portable and Hand Held Devices
  • Logical Security
    o Identity and Access Management
    o IPS/IDS
    o End Point Security and Antivirus
  • Data Security
    o Remote Access
    o Backup and Recovery
    o Auditing
  • Physical Security
  • Security Incident Management
  • Business Continuity
Security policies are crucial to ensuring the protection of organisational IT assets and information. Should your organisation need assistance with developing an Information Security Policy for your existing environment or a planned Cloud migration project, Diaxion can help you on this journey.


Puppet Camp Sydney 2016 Diaxion Review

“Puppet is IT automation software that helps system administrators manage infrastructure throughout its lifecycle, from provisioning and configuration to patch management and compliance. Using Puppet, you can easily automate repetitive tasks, quickly deploy critical applications, and proactively manage change, scaling from 10s of servers to 1000s, on-premise or in the cloud.”

In May, Diaxion had the privilege of being a sponsor at the Sydney Puppet Camp. The day began with an informative session by Nigel Kersten, Puppet’s CIO & VP of Operations, who gave an introduction to what Puppet is and what it looks like in 2016.

Some clients shared their journeys and experiences as well. Scott Coulton from Healthdirect Australia shared his Puppet and Docker Swarm journey. Steve Tyson from William Hill told the story of his experiences and the challenges William Hill faced in migrating from a traditional infrastructure model to a Puppet-based infrastructure-as-code model.

The afternoon kicked off with Nicholas Maesepp giving a great walk-through of Atlassian’s DevOps model and how Puppet can be implemented in, and benefit, the development lifecycle.

The remainder of the afternoon allowed for demonstrations by Jimmy Duong, NASDAQ; Steve Curtis, ANZ; and Tim Cinel, Atlassian. This was a great chance for the audience to see and learn how an actual Puppet migration takes place and how efficiently infrastructure can be managed using Puppet. The new features of the Puppet Enterprise Console were also introduced, such as the option to add classes on top of configuration files, drag and link configuration files with host devices, and many more.

The day finished off with drinks and networking. It was a great chance to hear from prominent Puppet community members, share experiences and discuss the potential of Puppet implementations.

Can’t wait until the Melbourne camp in November. See you there!


Replication with Zerto

If you are running a virtualised infrastructure (who isn’t?) and your Disaster Recovery software is either not meeting targets or just not meeting expectations, then you need to know what Zerto is and how it can improve your solution. If you haven’t heard of Zerto yet, it is the first hypervisor-based replication solution to offer enterprise-class data protection and workload mobility from on-premise VMware and Hyper-V environments to AWS, extending hypervisor-based replication to the public cloud. It supports continuous replication of workloads irrespective of hypervisor, storage or cloud.

So why Zerto over other Disaster Recovery and replication software solutions? With a host/guest-based replication setup, software needs to be installed on every server, it won’t be storage agnostic, and your RTO and RPO options are limited. With snapshot-based replication, your production environment’s resources are probably being drained, again your RTOs/RPOs are high, and automatic failback options are lacking. Array-based replication, like many others, is complex and expensive to install, deploy and manage, requires the same storage at all sites, and renders DR to the cloud nearly impossible.

Unlike legacy solutions, Zerto includes complete virtual awareness and full integration and coexistence with virtual and/or cloud environments, coupled with ‘one-click’ automation of failover, failback and testing. The company exclusively builds disaster recovery solutions that offer all of the enterprise-class features necessary to protect and replicate mission-critical applications, such as scalability, granularity, extremely low RTOs/RPOs and simplified management. Take a closer look at some of Zerto’s benefits:

Deployment
You can have the Zerto infrastructure ready to go in under 30 minutes. It requires a simple application install on a couple of Windows VMs, plus replication appliances on your hosts, which can be done with a few clicks and, most importantly, without any reboots of hosts and with no downtime to install guest agents on servers. You can then create your protection groups and watch your VMs replicate in real time to your DR site.

Non-disruptive DR Testing
Zerto offers the ability to test your DR plan at any time with a single click, while replication still runs and the environment remains protected. This means that production doesn’t need to be taken offline and applications remain available. Since Zerto keeps all VMs in sync across hosts and is WAN-friendly, business-critical applications can be recovered extremely quickly, to points in time down to the second. Testing of the groups is a few clicks away, and VMs can have their IPs changed in software, so layer 2 networks don’t have to be stretched across WAN links.

Simple failback and reverse protection
Testing failover might not be a big issue for some software vendors, but what happens when you need to fail back? All your data is at the DR site, and what is left at production is already out of sync. Zerto Virtual Replication enables data and applications to fail back with a few simple clicks, so there is no data loss and no downtime for your DR tests.

Hardware Agnostic Solution
With array-based replication, a customer needs nearly identical hardware in production and DR; this cost is not an option for most companies. With Zerto, replication can happen from an array-based production cluster to disparate hosts with local storage, to different arrays, or to the cloud. Since the replication happens above the array, in the hypervisor, DR becomes easier and older hardware can be reused rather than thrown out. With all the options for targets, DR becomes a commodity rather than an expensive, unused data centre.

Storage Savings
Migrations with zero data loss, continuous data protection delivering RPOs of seconds with no snapshots, recovery to previous points in time every few seconds up to 5 days in the past – how does this affect storage, typically one of the largest cost items in your data centre?

With Zerto, all of the replication is managed in the hypervisor, removing any reliance on the underlying storage, so there is no need to configure snapshots or replica reserves at the storage array level. Journal-based protection for point-in-time recovery uses a thin virtual disk on a per-VM basis, stored at the recovery site, which significantly reduces the storage used by replication: typically 7-10% of additional space. This enables an immediate disk space saving of at least 20% over any storage-based replication solution, a significant cost saving that allows for future storage growth within existing storage capacity.

For example, protecting 30 TB of VM data would consume 33 TB at the recovery site and no production storage at all, saving up to 18 TB in total storage requirements in comparison to SAN-based replication (a rough reconstruction of this arithmetic follows).
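
The 7-10% journal figure comes from the vendor material above; the SAN-side overhead below is our assumption, chosen to make the quoted numbers meet:

    # Reproducing the 30 TB example. The 10% journal overhead comes from the
    # text; the 70% SAN-side overhead (full replica plus snapshot/replica
    # reserves) is an assumption chosen to match the quoted 18 TB saving.
    protected_tb = 30
    zerto_recovery_tb = protected_tb * (1 + 0.10)   # 33.0 TB at the DR site
    san_recovery_tb = protected_tb * (1 + 0.70)     # 51.0 TB assumed for SAN
    print(san_recovery_tb - zerto_recovery_tb)      # 18.0 TB saved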

Execute a DR plan from anywhere
With Zerto’s web-based interface, the IT team is not trapped in the data centre. Failovers can be executed from a tablet, in advance of an impending event, even from your home, so you can be sure that your staff are safe.

Conclusion
Zerto has become the standard for protection, disaster recovery and migration of applications in cloud and virtualised data centres, so it is no surprise that the company has enjoyed a more than 300 percent sales increase in the Asia Pacific and Japan (APJ) markets over the past year, driven primarily by its success in Australia and New Zealand. Zerto also boasts numerous award wins, including Best of Show at VMworld 2011, Best of VMworld Europe 2014, and 2011, 2012 and 2013 Product of the Year awards for its innovative hypervisor-based replication approach.
Diaxion is an Australian services partner for Zerto, so contact us to discuss how Zerto can be instrumental in keeping your business relevant as we progress through the cloud-based technical revolution.


Automation and Process Mapping

We are often asked by clients about orchestration and automation; it is a critical component of our transformation target state operating model, cloud governance and cloud migration projects. It delivers both cost and efficiency gains and security/governance mitigation, and should not be understated or under-invested in. The return is not only immediate, in provisioning, but flows through the entire environment life cycle and software development life cycle, through to release and support. Automation, if done right, creates predictability and consistency, and combined with configuration drift management should reduce support effort once a critical mass of coverage is achieved.

So where do you start with automation? You can approach it from multiple points at the same time, or you can align it to the demand pipeline of your projects. It depends on the key business benefit you are trying to achieve. If it is to reduce tickets to the help desk, visualise and quantify the volume of tickets and look for things that can be automated to triage or perform the resolution; if the root cause is inconsistency in implementation, process-map what it is that you want to automate.

Quite often, though, the reason for looking at automation and orchestration comes back to “reducing the mean time to production”, be it via enabling DevOps continuous deployment, testing and integration, or just getting standard builds to the project team quicker; then getting the environments to the testing team quicker; and finally getting the production release done quicker.

How do you best go about this?
We map the process holistically: start with the request to IT from the project team and map the steps by request, responsible team, completion checks, average time to complete, SLA to complete, and types and rates of failure and escalation. Map it in a spreadsheet first, through to the point where the project team is actually able to start deploying or coding the application (a sketch of such a map captured as data follows). With this data, perform a lean or business value chain optimisation process across it: look at the errors, escalations and hand-offs, and see if there is a different way to address the problem, e.g. expand the SOE into a purpose-built SOE such as a .NET or Java development SOE. Consider the customer of the request to be the project team, not the next team in the process (e.g. the backup, database or security team), and streamline the process to the customer’s needs.
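
A minimal sketch of such a process map captured as data, with the kinds of “mean time to” metrics described above (steps, teams and numbers are invented for illustration):

    # A process map as data: step, responsible team, average days, failure rate.
    steps = [
        ("Raise request",       "Project team",   1.0, 0.02),
        ("Provision VM",        "Infrastructure", 3.0, 0.10),
        ("Configure backup",    "Backup team",    2.0, 0.05),
        ("Security sign-off",   "Security",       4.0, 0.15),
        ("Handover to project", "Service desk",   1.0, 0.02),
    ]

    mean_time_to_production = sum(days for _, _, days, _ in steps)
    # Expected rework time highlights the best candidate for optimisation.
    worst = max(steps, key=lambda s: s[2] * s[3])

    print(f"Mean time to production: {mean_time_to_production} days")
    print(f"Optimise first: {worst[0]} ({worst[1]})")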

Next, optimise the process, looking at elements that can be automated and then orchestrated. Consider the acceptance criterion for the outcome to be not only something that the project team can consume on the first attempt, but something that is also verified (automatically) as operationally accepted by the company’s respective risk, security and operational control gates. And don’t forget to break the end-to-end process down into metrics that can be measured and reported on: a set of “mean time to” measures. There is little use automating if you can’t then promote the benefits and enter a cycle of continuous improvement!

Finally, you get to cutting the code: automate to the process, and do it in reusable blocks which you can then promote to others to reuse where and when applicable. But what about orchestration, we continually get asked: is that not where we will get the most benefit? The answer ultimately is yes, but are you ready for orchestration? Before you can orchestrate, you need the foundation of automation in place. It is a journey.


June Newsletter

Welcome to our June, end of financial year newsletter.

Well, hasn’t this past year been a ride! What have we been doing? A very diverse range of projects and services across our clients, from cloud providers, banks, insurance and federal departments through to retail and hospitality. One thing is certain: many of our clients have similar pain points, desires and business outcomes, but these span the entire spectrum of the IT life cycle and capabilities. We were completing cloud migrations whilst doing several of the following:

  • Risk mitigation strategies for end of life / end of support software and hardware
  • Small and large scale data centre relocations
  • Storage tenders
  • Sourcing strategies
  • Automating all types of technologies and activities

Such a diverse set of engagements, yet the common theme was the realisation of tangible business benefits in year, so that IT could show benefits to the business and be an enabler rather than be perceived as a cost and a business hurdle. At the heart of this was some of our new consulting around Target State Operating Model definition, where we look at the current pain points in the IT organisation and their impact on the business, or, more pragmatically, on the business’s ability to get new services to market in a rapid and cost-effective fashion.

This is generally a 2-year transformation roadmap identifying, with a broad range of stakeholders, how IT wants to be perceived, what services it delivers to the business and how the business wants to operate and consume IT services in 2 years’ time. From this we look at the people, process and technology elements of the transformation: identify the necessary building blocks, and define the target state, business benefits, business case and transformation roadmap. The roadmap is broken into bi-monthly or quarterly iterative releases aligned to the business demand or project pipeline, thereby presenting iterative and cumulative business benefits realised in year. All of these engagements have included the move to a hybrid cloud operating model.

Want to learn more? Contact us, or look out for one of our upcoming newsletters where we define it in more detail.

What else have we done this financial year?

  • Became the first regional Puppet Service Delivery Partner – we work with you to effectively use Puppet Enterprise; Puppet PS also leverages us!
  • Launched the Puppet Enterprise / Cisco Nexus jumpstart service in conjunction with Puppet and Cisco – manage your Nexus switches via Puppet
  • Software defined networking projects for Cisco ACI
  • Microsoft Azure migrations and automation
  • We are now a Microsoft Consulting sub-contracting partner for Azure, automation and data centre modernisation
  • Extensive PowerShell and PowerShell DSC automation – automate the build of an entire Azure Pack, hardware configuration and software from one command; operations on Azure including provision, de-provision, security etc.; upgrade an entire POD of servers, storage, network, compute and hypervisor from one command, moving hundreds and thousands of VMs around so that no client disruption is encountered
  • AWS business service architecture design, strategy and implementation
  • Windows 2003 remediation
  • HPC in the cloud, anyone?
  • Helping clients consume, manage and operate services to construct their own services for their business users
  • Data centre and cloud migrations
  • CliQr design, deployment and integration for DevOps

A diverse set of projects and needs. So what are we focusing on next?

  • Network services virtualisation and automation
  • Automation and orchestration in general – Infrastructure as Code
  • Microsoft Azure consumption and automation for the enterprise
  • Expanding our software defined network capability and automation
  • Solution design, solution and enterprise architecture around the data centre, hybrid cloud and cloud in general
  • Helping IT enable but govern the business in the age of consumption “as a service”

Thanks for your support this year and for reading our newsletter. I am always surprised and happy when you refer to articles you have read in the newsletter when we meet.