Categories
Blog

Puppet Service Delivery Partner of the Year

Diaxion Named Service Delivery Partner by Puppet in 2019 Channel Partner of the Year awards

Diaxion recognised for delivering success with Puppet

Sydney – Feb. 19, 2020 — Diaxion, an IT strategy and optimisation consultancy, today announced it has been named the recipient of Puppet’s Channel Partner of the Year award for Service Delivery Partner. Diaxion is recognised for contributions made to drive enterprise success with Puppet’s product portfolio.

“We are excited to be recognised by Puppet as Service Delivery Partner of the Year. The continued expansion of our business comes from putting our customers at the centre of everything we do. Our partnership with Puppet is absolutely key to delivering exceptional experiences for our customers,” said Tony Wilkinson, Managing Director of Diaxion.

“Puppet’s leading software automation solutions and Diaxion’s unparalleled expertise in Cloud and DevOps services is a winning combination. Customers are looking for innovative solutions that address their business challenges and deliver sustained value. Our partnership with Puppet makes this possible and helps organisations unlock new opportunities to delight their customers, drive operational efficiencies and grow their businesses.”

“Puppet makes the challenges of infrastructure invisible to the world’s leading enterprises, helping them deliver better software, faster,” said Yvonne Wassenaar, CEO of Puppet. “Diaxion aligns with this same vision giving customers a leg up in meeting the demands of an increasingly multi-cloud world. We applaud their commitment to deliver customer value and thank them for their partnership.”

The success of the Puppet and Diaxion partnership comes from shared values and a pragmatic approach to client engagements. Puppet’s leading DevOps automation technology combined with Diaxion’s expertise in DC modernisation and hybrid cloud delivers innovative solutions for organisations across a range of industries. Diaxion focuses on business outcomes first before technology solutions, enabling right-fit IT strategy for clients and delivering tangible business results. With over 18 years’ experience in transformation projects, Diaxion’s proven methodology ensures robust and de-risked client engagements. The combined strengths and capabilities of Puppet and Diaxion bring a unique and compelling solution to market.

The annual Puppet Channel Partner of the Year awards honor Puppet’s channel ecosystem for delivering customer excellence and innovative solutions. This year’s award winners also demonstrated exemplary performance in the implementation of Puppet technology. The program recognised nine partners and one engineer globally in five categories.

About Diaxion
In a market grappling with demanding and complex transformation, Diaxion’s specialised expertise in DC modernisation and hybrid cloud helps leaders achieve tangible business outcomes, with reduced risk. Leveraging our unique insight, Diaxion leads with the business challenge to unlock IT requirements that deliver the right outcome.

We do this because we believe business goals should drive IT, not be limited by it.

With superior skills and capabilities, and over 18 years’ experience designing and implementing transformation projects, Diaxion is the choice of Australian firms. Our partner-agnostic service focus and nimble, agile culture, combined with our pragmatic and can-do attitude, ensure a robust and de-risked engagement process.


(Managed) Services Transition

Transitioning services can be a painful and lengthy experience. This can be alleviated by ensuring that deliverables are clearly defined and that all parties work together.

A previous article outlined some of the required items within a Statement of Work (SoW). For a transition of services – be that between managed service providers (MSP) or from in-house to a MSP or the reverse; or any other transition – it is essential that the scope and deliverables are clearly defined and that most assumptions, requirements, inclusions and exclusions are agreed between all parties.

Depending on the number of parties involved, more than one SoW may be required, ultimately adding to the costs incurred by a transition. Often the transition-out costs are similar to or even higher than the transition-in costs (as an outgoing vendor has no opportunity to recoup costs over the term of a longer agreement).

A successful transition will depend primarily on accurate information. Additional components to help achieve a successful transition are:

  • Good project management
  • Engaged resources from all parties
  • Seamless migration and/or integration of systems
  • Accurate information (this point cannot be stressed often enough!)

Good project management

The complexity of a transition can vary, but the transition will benefit from good project management with clear objectives and a project manager who engages with people to follow up on any outstanding actions. This can be done with a “waterfall” or “agile” methodology. The best choice will depend on many variables, including each party’s culture and approach, the type of transition and the future requirements; e.g. it will be easier to transition an established environment than a half-completed software implementation.

    Engaged resources
The transition will be easier if all parties are fully engaged and willing to work towards a common goal. This may mean back-filling some positions during transition, so that key personnel can concentrate on the transition and are not distracted by their normal day job. It can also mean keeping the transitioning-out party involved or engaged with other / future projects to avoid a “scorched earth” tactic. Diaxion’s experience shows that the Australian market is too small for vendors to engage in this kind of obstructive behaviour. It is common, however, that the transitioning-out party is incapable of providing the required support, as key personnel may no longer be available. There may even be challenges with the vendor transitioning in, as this is the point when the delivery organisation has to step up to what a sales organisation has promised to deliver.

    Systems
    System incompatibility, significant changes in supporting systems and the need to use different systems is another challenge facing transitions. This can range from the need to use or integrate different service management tools to different development platforms and similar scenarios. It can be mitigated with good planning and project management, in addition to accurate information: where the interdependencies and existing relationships between services, systems, etc. are well known, it is easier to transition these to new environments.
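The point about interdependencies can be made concrete: where a dependency map between services is known, a safe transition order can be derived automatically. This is a minimal sketch with hypothetical service names, not a real inventory:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency map: each service lists the services it depends on.
dependencies = {
    "web-frontend": {"auth-service", "api-gateway"},
    "api-gateway": {"database"},
    "auth-service": {"database"},
    "database": set(),
}

# static_order() yields dependencies before dependants, giving a safe
# sequence in which to transition the services to the new environment.
transition_order = list(TopologicalSorter(dependencies).static_order())
```

In practice the map would come from a CMDB or discovery tooling; the ordering logic stays the same.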


    Accurate information

    Accurate information facilitates a successful transition. The more is known – and is accurate – about the current environment, its relationships, dependencies and overall status, the easier it will be to move this to the new target state.

    The following two examples are provided for illustration:
1. Managed Services
    Transitioning the managed service for a network environment will be easier if the following items are known (somewhat in order of importance):

  • Hardware components (vendor and model)
  • Location
  • Device configuration including passwords
    2. Software Development
    Transitioning between vendors during software development or major integration work is an even bigger challenge without good information, e.g.:

  • Status of implementation
  • Backlog and known issues
  • Dependencies and integration
    During transition, a third independent party can help mediate between the involved parties and provide both governance and quality assurance. This can be in the form of project or transition management, or as an independent resource working with the product/system/service owner(s).


    Creation of a Statement of Work (as part of the RFx process)

    After an RFx pack has been created and the responses evaluated, there will be a negotiation phase, which will result in the creation of a Statement of Work (SoW) defining the details for delivery.

    There are various approaches to the creation of a Statement of Work; some critical items that should be defined within the SoW will be discussed here.
    1. Commercial and financial terms
    The SoW should include all relevant commercial and financial terms and items. This will include:

  • Costs:
    1. Fixed costs vs. time and materials, covering items like CPI increases, rate cards, etc.
    2. Inclusions and exclusions (e.g. travel, work hours/overtime, etc.)
    3. Payment terms (monthly vs. milestones; upfront vs. milestone; etc.)
  • Service Levels and other possible penalties
  • Renewal and cancellation options
  • Transition In / Transition Out terms
  • Warranties and other specific items that are not addressed in the Master Service Agreement (MSA)
    2. Deliverables
    This will be the main part of the SoW, describing what the vendor will actually deliver. The content of this section will depend on the actual scope. Examples are:

  • Software-related services
  • Hardware-related services
  • Consultancy
  • Software development
  • Cloud-related services; from infrastructure to software/solutions
  • Data centre-related services
  • Managed Services
    The approach to each of these may vary significantly and may depend on the existing relationship and prior experience with a vendor. In general it is safer, but more time-consuming, to have too much detail in the SoW rather than too little.

    Some items that should be included are:
    Scope:
    This may be very detailed or may just paint the “big picture”. The latter is permissible, if the vendor is well-known and has a proven track record, i.e. has proved to be trustworthy. The “big picture” approach otherwise carries the risk of client/vendor disagreement and additional cost (“This wasn’t defined in the SoW”).

    Assumptions, requirements, inclusions and exclusions set clear boundaries for the engagement and will include the integration points – be that with applications, other vendors or existing environments.

    Similar to the MSA, there are certain advantages to the vendor providing the SoW. In the case of the MSA there should be less need to negotiate, as the vendor obviously will be happy with the MSA’s content. The risk is that the terms will be more favourable to the vendor than to the client, and a vendor MSA is unlikely to be as comprehensive.

    A typical vendor should have some experience in delivering the proposed solution, i.e. this should not be the first delivery ever (one hopes), and should therefore be well aware of the deliverables and requirements for a successful implementation. Creating the initial draft of the SoW should therefore be less effort for the vendor.

    Regardless of whether reviewing or writing the SoW, one should ensure that the SoW closely aligns with the RFx requirements and response, and is consistent across all related documents. It can be useful to have a third party review the documentation (RFx pack, MSA, SoW, related procedures and policies, governance details, etc.) to ensure consistency and to minimise the risk of undesirable gaps.


    Ways of Working

    There is enormous pressure on companies to facilitate new “ways of working” for their employees, enabling them to work seamlessly – and securely – from virtually anywhere, at any time.

    Employees’ demands have increased significantly over the last few years. With the proliferation of mobile devices that are generally very easy for anyone to use, manage and maintain, employees nowadays expect a “mobile-like” experience with the following features:

  • Always on
  • Secure
  • It “just works”
  • Easy to manage
    The challenge for any IT department is how best to achieve this with limited budget and resources. What are some of the available options to enable a truly mobile workforce for the people who benefit from it, while providing an effective and efficient work environment for those that do not need – or want – to be mobile?

    Automation plays a significant role in enabling this vision; another important component is the selection of software and tools to support a flexible way of working.

    There are a number of options available – both in the cloud and in-house – allowing the business to build a software “App Store”. Similar to the app store on a mobile device, this enables end users to install and de-install software that is authorised and licensed.
    Tools also support the management of user licenses; these generally pay for themselves by monitoring usage and allowing the automatic de-install of unused software, thereby freeing up a user license. The automatic deinstallation may have to be communicated to users; otherwise the help desk may end up with questions like “Where did my Visio go?”
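The license-reclaim idea above can be as simple as flagging licenses whose software has not been launched within a defined idle window. The sketch below uses made-up usage records and is not any particular vendor’s product:

```python
from datetime import datetime, timedelta

# Hypothetical usage records: (user, application) -> last launch time.
last_used = {
    ("alice", "Visio"): datetime(2020, 1, 2),
    ("alice", "Excel"): datetime(2020, 2, 14),
    ("bob", "Visio"): datetime(2019, 8, 30),
}

def reclaim_candidates(records, now, idle_days=90):
    """Flag licenses whose software has been idle beyond the window,
    so the software can be de-installed and the license freed up."""
    cutoff = now - timedelta(days=idle_days)
    return sorted((user, app) for (user, app), last in records.items()
                  if last < cutoff)
```

A real tool would feed this from software-metering data and notify the user before de-installing, avoiding the “Where did my Visio go?” help desk call.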

    There is no excuse for unpatched end user systems. As annoying as the installation of updates on Windows 10 can be, these can be scheduled to generally occur outside busy periods to affect people as little as possible.

    As regards software, the analysis may be more difficult and will come down to the various usage – and user – patterns. SaaS and internet-based software is convenient as long as there is internet connectivity; access may be limited or impossible in remote areas or out of the country, though.
    Local software avoids this issue, but will not be possible in all cases; local data also may cause concerns around backup, data retention and regulatory requirements. Again, there is choice in the marketplace to achieve both in most cases – e.g. accessibility to data in the cloud with a local copy.

    Security can be improved with two-factor authentication and by enforcing defined security rules; this should include the mandatory use of software protecting against malware, viruses, etc. on all local systems, and may include the use of VPNs and identity and access management software.
    The implementation will usually be in an incremental way. A gap analysis between the current and target state will help to define a suitable – and affordable – roadmap.

    However, while some staff will embrace a more flexible way to work – be that from home, during transit (e.g. public transport), or while travelling for work (e.g. airport and hotels) – and will benefit from the IT team enabling the above options, there is likely to be a significant group of people that will want and need to continue working as before.

    This could be due to the type of work, e.g. financial systems requiring a large monitor setup, or even preference. Care should be taken not to “force” everybody into a new way of working. Companies have embraced e.g. hot-desking with varying success, and not all changes turn out to improve employee efficiency.

    A new way of working should therefore go beyond the use of new and/or modern tooling and should build on employee feedback: interviews or surveys will help to identify current gaps and pain points, i.e. what actually affects people and what should be changed and improved.


    Evaluation of RFP responses

    A previous article looked at the initial creation of an RFx document and some of the challenges both in creating the RFP pack as well as some pitfalls for companies responding to one. Our recent experience confirms that it is beneficial to consider some main points in the creation. This article will focus on the next steps within the RFP process, mainly the evaluation of responses.

    To recap quickly: in the RFx creation process, ensure that the critical requirements are explained clearly, so that respondents can be certain about what needs to be delivered. The art in creating the RFx document lies in striking the right balance between describing the requirements and still allowing a flexible approach. The latter becomes especially important with an agile or DevOps approach. Generally, companies will be given several weeks to respond to an RFP (or RFQ). Depending on the amount and complexity of work required, 2 to 6 weeks is a typical timeframe for briefings and questions until the submission date.

    Diaxion recommends deciding on the evaluation process prior to the RFP submission date. This offers several benefits:
    1. Responders can be advised of the evaluation and scoring criteria
    2. Internal resources (evaluators) can be informed to allow them to set aside time for evaluation
    3. A probity plan can be created for good governance ensuring transparency and traceability

    Some decisions that have to be made prior and during the evaluation process are:
    How will the engagement with the selected company work, i.e. what are the contractual terms?

    1. Master Service Agreement – which one to use?
    Depending on the complexity of the client’s – or vendor’s – MSA, the negotiation process for a new vendor / new MSA can take anywhere from a few days to several months, including reviews by the legal team. If done well, it will allow the ongoing engagement to be structured and future engagements with the vendor to be fast-tracked.

    2. Contractual terms and conditions within e.g. a Statement of Work (SoW) including:

  • Pricing (fixed vs. variable, milestones, use of a rate card)
  • Deliverables and assumptions
    Some of these items will already be at least outlined in the RFx documents; however, vendors are often reluctant to commit fully in RFx responses – be it because additional items are discovered during the negotiation phase, because the scope is adjusted, or because the vendor’s delivery organisation finds out at this point what the vendor’s sales team has responded with.

    Probity plan

    The probity plan describes the overall RFx process and may include items like

  • How were the RFP participants selected (open / closed tender; selection criteria)
  • Submission process and details, covering items like:
    1. planned timeline for the RFx evaluation process
    2. planned activities, including number and level of workshops or similar
  • Evaluation criteria and associated weighting of criteria
  • Evaluation team and members’ responsibilities
  • Commercial and financial criteria

    Evaluation Process

    The evaluation process at a high level – regardless of whether a probity plan is in place – will follow the activities mentioned above.

    One of the main challenges in Diaxion’s experience is that the evaluation process imposes significant additional workload on the evaluation team members. This can be reduced somewhat by splitting up the review into meaningful sections, e.g. technical, financial, governance or implementation, project management and commercial. That way only a small number of staff may have to review the complete response.

    Weighting the criteria is another challenge, as there will naturally be tension between the various areas, e.g. between technical and financial: where does one draw the line at which a technically superior solution is simply not worth the quoted cost?

    As long as the evaluation process is consistent for all staff evaluating the responses, there are no firm rules, i.e. most approaches are acceptable – whether there is a different multiplier for mandatory/desirable/optional criteria (3, 2, 1 or 5, 3, 1 or …) or evaluation results are sorted separately. Again, this is where it is good to have a probity plan that outlines these decisions and can be referred to.

    Likewise, each item can be scored on a continuous scale (0 to 5, 1 to 10, etc.) or can have select distinct scores (for example: 0 = not provided/insufficient; 2.5 = bare minimum/severely lacking; 5 = acceptable; 7.5 = above average; 10 = outstanding/well above expectations).
    Diaxion recommends a minimum of three reviewers for each section of each response to detect any outliers. These outliers, i.e. where one person scores “0” and another “10”, should then be discussed as part of the evaluation process. Where the difference is not significant, no such discussion is required; this will speed up reaching a common ranking of responses.
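The mechanics described above – category multipliers, three reviewers per criterion and outlier detection – can be sketched in a few lines. The weights and sample scores below are illustrative only, not a recommended scheme:

```python
from statistics import mean

# Illustrative category multipliers (mandatory/desirable/optional).
WEIGHTS = {"mandatory": 3, "desirable": 2, "optional": 1}

def evaluate(scores_by_criterion, outlier_gap=5):
    """Return (weighted average score, criteria needing discussion).

    scores_by_criterion maps a criterion name to (category, reviewer scores).
    A criterion is flagged when reviewers disagree by `outlier_gap` or more,
    e.g. one reviewer scoring "0" and another "10".
    """
    total = weight_sum = 0.0
    flagged = []
    for name, (category, scores) in scores_by_criterion.items():
        if max(scores) - min(scores) >= outlier_gap:
            flagged.append(name)
        weight = WEIGHTS[category]
        total += weight * mean(scores)
        weight_sum += weight
    return total / weight_sum, flagged

# Hypothetical sample: three reviewers per criterion, scores on a 0-10 scale.
sample = {
    "technical fit": ("mandatory", [7.5, 7.5, 10]),
    "price": ("desirable", [5, 0, 10]),
}
```

Here “price” would be flagged for discussion among the reviewers, while the final weighted score feeds into the overall ranking of responses.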

    Depending on the number and spread of responses, a second round of evaluation between the top 2 or 3 responses may include workshops, proof of concept, demo sessions or further financial refinement.

    It is quite common for the top two vendors to provide a “best and final offer” (BAFO), which gives responders an opportunity to reduce the price, provide additional services or similar – with the aim of showing their interest in, and appetite for, actually winning the business. There can be times when this BAFO phase results in rather unexpected outcomes – one vendor recently reduced the scope to come back with a financially viable offer, but ultimately increased the total amount over the lifetime of the contract.

    Outcomes like this may be somewhat disappointing when the preferred and best solution is just not financially viable; however, it allows a joint decision to be reached quickly.

    Once the successful vendor has been notified, a common next phase is the creation of a Statement of Work, which then is followed by the steps outlined in the SoW – be that technical implementation, transition of services or some other project initiation.


    Migrate Your Users’ Home Drives to OneDrive for Business

    Here is one of those tasks that just about every organisation running Microsoft products and adopting or migrating to cloud will have on their “to-do” list. It is also the task that the business will not have got around to starting, because of every other item stacked on top of it on that list: migrating all user home drives in the organisation to cloud storage, which in the Microsoft world means transferring everything to their OneDrive for Business service.

    The reason this is on everyone’s task list is that it is yet another Microsoft cloud service where the offer is far too sensible not to consider. In most cases, migrating your user home drives to OneDrive falls into the “no-brainer” category of executive decisions, as anyone with an Office 365 license is already paying for the storage and service.

    The following table explains the storage allocation your licensed Office 365 users are already eligible for:

    These figures make for a fairly easy sell to any company wanting to save some storage, service and management dollars. After all, that is the “migrate to cloud” pitch we have all been expected to take on board from the beginning, yet it has had us scratching our heads as to whether there is enough business value in it. In the case of a home drive move to OneDrive, the writing on the wall is a little clearer than for migrating most other services to cloud. Migrating existing user home drive data will:

  • Free up and decommission your own expensive storage which you pay for, to cloud storage which you are already paying for
  • Allow user home drive access from anywhere with a network connection
  • Remove SAN management and costs for user home drives
    In most cases, the outcome of the migration will be that simple.
    The part where it gets a little more complex is the “how do we get there” and the “what do we do with that data once it gets there”: that is, what tools and processes can be used for a seamless migration, and how do we secure and back up the user data once it has been moved?

    You are going to need to put a plan in place that is tailored to the needs of your organisation which will vary vastly depending on the business size and complexity. You are also going to need to make sure that the end state will meet your organisation’s requirements. Consider these areas that are at the forefront of your migration for potential headaches:
    Tools
    What tools are available to the organisation, and do they provide user self-service migration capability?
    Governance

    How will sharing be controlled internally and externally? How will users be trained to manage home drive data governance?
    Backup and Recovery
    How will OneDrive meet your retention policies? How are file restores handled?
    Policies

    Are the existing on-premises type policies able to be applied to cloud? Which policies are going to be affected by the migration?

    These are just some of the basics in evaluating your move, which can cause some serious disruptions or breaches if not planned out correctly.

    Planning this kind of migration is by far the most difficult component of the move. In order to reduce as much stress as possible on your organisation and achieve as seamless as possible a migration you will need help in gathering relevant organisational information, determining project timelines, and mitigating any surprises that may appear. If this project is on your “to-do” list and is getting buried further down in the pile, invite Diaxion in for a chat and we will gladly begin helping to plan out the OneDrive migration process with you.


    Security – a permanent concern

    I would propose that security has always been a concern with IT systems. It may very well have been easier to control before everything started to connect with virtually everything else; however, the last few years have seen a proliferation of threats including some new issues – be that ransomware, cryptocurrency malware, concerns around IoT and even exploits on Apple devices.

    Diaxion does not believe this will change much over the next decade. Experts seem to agree that there is a current and future lack of cybersecurity professionals, which will require new approaches to security. One example of a different approach has seen the integration of security within development as part of DevOps over recent years, where security is no longer reliant on a security team, but becomes – or should become – everyone’s business.
    Another approach is to pool resources. For example, vendors can provide the capability to send threat information “back to base” for further analysis, which can then benefit all users of their solution. Another vendor has now opened their threat information database to competitors, as they believe that security threats need to be opposed jointly, not by a disparate field of competing vendors.

    With the continuing emergence of IoT devices and an expected proliferation of AI, there will be a number of new threats and attack vectors. While not strictly cybersecurity, “deepfakes” are a new development that takes phishing to the next level. One does not want to investigate state-sponsored activity too much, as this can quickly move into the realm of a bad science fiction novel with its use of AI. Different countries employ different approaches: from a collection of random volunteers to a rather systematic approach (not dissimilar to a university course, including exams).

    Security will continue to be an ongoing battle, where the “other side” will make use of all resources available to them.

    Some of the challenges are:
    1. Constantly evolving and changing threat landscape, with an ever-increasing sophistication and number of threats
    2. Increasing complexity of environments, e.g. hybrid environments, BYOD, IoT, remote working, etc.
    3. Complexity in managing security: number of tools, policies, vendors; with
    4. Limited resources to manage and monitor

    To combat this successfully, companies at a minimum need to:
    1. Patch consistently and regularly. There is no excuse to have unpatched systems, as this can be automated with good planning.
    2. Use multi-factor authentication, as passwords will not be sufficient and biometrics have their own set of weaknesses.
    3. Use antivirus tools (gateway and end-user devices) and a layered response as proposed by security vendors.
    4. Automate security detection and response as much as possible, as a high number of false positives is unmanageable and risks an actual breach going unnoticed.
    5. Accept that people will do the wrong thing, like clicking on the obviously fake link – no amount of education can and will entirely prevent this.
    6. Have a robust framework of security policies.
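The multi-factor authentication point above is commonly implemented with time-based one-time passwords (TOTP, RFC 6238), the mechanism behind most authenticator apps. A minimal sketch of the algorithm, for illustration only:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    counter = for_time // step                      # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC’s ASCII test secret and test time of 59 seconds, this yields the published six-digit value “287082”. In production, a vetted library and a server-side verification window would be used rather than a hand-rolled implementation.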

    Finally, companies will need to have plans in place for how to respond to a security breach.


    Google Cloud Platform – Diaxion view

    Google Cloud, also known as Google Cloud Platform (GCP), refers to a large suite of public cloud computing services provided by Google. This resource pool is the backend infrastructure used by Google Search, YouTube videos and other proprietary products. The compute power, robustness, flexibility, and scalability are exceptional, and users can consume the resources “on-demand”.

    In recent years, Google has been committed to the development of enterprise cloud computing. More than 90% of global searches are driven by Google’s infrastructure. The company expects some $32 billion in search advertising revenue from the United States market, nearly $30 billion more than its nearest competitor. More importantly, in terms of total resources it is likely the world’s largest cloud computing company, and it provides a large number of the open source technologies that are the foundation of cloud computing.

    As a cloud service provider, Google Cloud offers dozens of IaaS, PaaS, and SaaS services. So what makes GCP stand out from all the others? Managing Data.

    Two notable customers, online music giant Spotify and Twitter, are utilising GCP because Google provides first-class support for managing online data. When Spotify announced it would move its backend infrastructure from AWS to Google, the company cited Google’s data stack as the main reason, pointing to tools such as Dataproc for batch processing and BigQuery (Google’s analytics data warehouse).

    Recently, Twitter decided to move its cold storage and Hadoop clusters into Google Cloud; Chief Technology Officer Parag Agrawal said in a blog post that the move “provides a range of long-term expansion and operational advantages”. Twitter’s archive represents an enormous volume of data to be managed, so it is reasonable to assume that if Google Cloud can handle Twitter’s needs, it should be able to meet the needs of any corporate customer.


    Serverless Services: uses and advantages

    The three big public cloud providers all offer serverless services (AWS Lambda, Google Cloud Functions and Azure Functions). The name is as misleading as the term “cloud” itself: just as cloud services run from within a data centre, serverless services still run on compute infrastructure.
    What then are the advantages of serverless computing (and after virtualisation, containers and now serverless will there be any smaller unit)?

    The main advantages of serverless computing are

  • Low cost (per execution)
  • Quick release / deployment time
  • No server-related tasks
  • Automatic scaling
    The aim of serverless computing is to run a “function” only when required: instead of consuming compute resources 24x7 (as in a local data centre), functions are called only when required, perform their designated task and are terminated. The term “event-driven architecture” is frequently used in the context of serverless computing.
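The event-driven model above can be illustrated with a minimal AWS Lambda-style handler. The event shape and field names here are hypothetical, for illustration only:

```python
import json

def handler(event, context=None):
    """Invoked once per event; stateless, so the output depends only on
    the input. The function runs, returns its result, and the platform
    tears the execution environment down (or reuses it for the next event)."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

The platform (an API gateway, a queue, an IoT hub) delivers the event and handles all server-related tasks, scaling and billing per execution.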

    Care needs to be taken when using functions to ensure these points are taken into consideration:

  • Architected securely, i.e. dependencies are known and defined and vulnerabilities are managed. Likewise access to the platform must be secure, e.g. including two-factor authentication.
  • Costs are controlled: as functions are billed "pay per use", they are best suited to infrequent execution. Per-execution costs are small but add up quickly, so a function called millions of times per day may be cheaper to run within a server environment. This is no different from the VM sprawl experienced with server environments, i.e. there is a risk of "function sprawl".
  • Stateful vs. stateless: typically, functions are stateless, i.e. outputs are generated purely from the inputs. There is a fine line between functions and "microservices", which can be stateful.
  • Portability: how easily can functions be moved from one cloud provider to another? There is a risk of lock-in with functions.
  • Latency requirements: a function that has not run recently may incur a "cold start" delay on its first invocation.
    To complicate matters further, there are open-source technologies that allow you to build your own serverless frameworks; these can run in a public or private cloud environment.

    Common use cases are in the areas of:

  • Mobile backend
  • Input from IoT sensors
  • Change data capture / database updates
  • Scheduling of batch jobs
  • Chatbots
  • CI pipelines
  • REST APIs and web apps
    As with all technologies, there is no "one size fits all". Servers, virtual machines, containers and functions each have their own advantages and disadvantages. These need to be evaluated and assessed; once implemented, they need to be controlled and managed.

    More information on Myths of Serverless Computing

    Categories
    Blog

    Public clouds – Solution for security and management concerns

    Given the advantages of cloud computing in total cost of ownership (TCO), deployment, maintenance and expansion, more and more enterprise organisations are moving to public clouds. Compared with local deployments, the main concerns organisations still have about public clouds are security and management. Correspondingly, start-ups that address these concerns have emerged. CloudCheckr, which provides enterprises with rich monitoring capabilities and deployment-optimisation practices, is one of them.

    CloudCheckr is a startup that provides a cloud service management platform, founded in 2011 and headquartered in Rochester, New York. CloudCheckr simplifies cloud infrastructure for public cloud users, provides transparency and visibility into deployments, and leverages best practices to help users save money.

    The solution consists of three modules:

  • The first is resource control: this module provides detailed information about a cloud deployment, such as historical snapshots, current usage lists, trend reports and change monitoring, allowing users to accurately understand the current status and scope of their cloud usage.
  • The second is cost optimisation: through this module, users can analyse their public cloud spend, examine resource utilisation and receive scenario-based cost-optimisation recommendations.
  • The third is the best-practice module: it identifies misconfigured exceptions in the user's deployment that may affect security, cost and availability, so users can respond or plan accordingly.
    CloudCheckr also offers a Total Compliance module that is free for all Security customers.

    The Total Compliance module does three things:

  • First, it automatically and continuously monitors your infrastructure for compliance with 35 different standards, such as HIPAA, PCI DSS, CIS, NIST, SOC2 and more.
  • Second, if a problem is found, the software can fix the issue for you with “Self-Healing” automation.
  • Third, CloudCheckr Total Compliance provides a detailed log, with historical details and remediation notes for third-party auditors, that prioritises issues and gives detailed instructions to help users take remedial action.
    CloudCheckr is currently available in free and professional versions supporting AWS and Azure. With the platform helping users save money and improve the security and availability of their cloud services, more than 40% of AWS Premier Consulting Partners now use its management services, and 150 AWS and Azure resellers use CloudCheckr. Companies including Nasdaq, Siemens and Recycling are direct users, and the annual cloud spend they manage exceeds $1 billion.

    Diaxion has partnered with CloudCheckr to assist our clients with cloud compliance, cost optimisation and management. Contact Diaxion for more information on how CloudCheckr fits into the Diaxion ecosystem.