Categories
Blog

Citrix BYOD Starter Kit

Citrix want to empower people, simplify management and secure data, apps and devices with BYOD, so they have put together a BYOD Starter Kit.

You can download the Citrix BYOD Starter Kit to get best practices for developing a BYOD strategy that provides the flexibility people want and the control IT needs.

Get best practices and key strategies for implementing BYOD, including:
• Forrester report on five steps to successful BYOC
• Best practices for planning and implementing BYOD
• A guide to delivering information securely to tablets and smartphones
• How to embrace BYOD with enterprise mobility management
• Examples of how other organizations have implemented BYOD

Categories
Blog

XtremIO/ScaleIO

If there’s one thing that is certain in IT, it’s change. A quick review of the industry’s history shows consistent improvements in capability, capacity and power requirements for a given workload. Storage is no exception: flash storage, introduced in the late 1980s, heralded exactly such an improvement, and it has become increasingly important to the data centre over the past five to ten years, culminating in the release of flash storage arrays from a large swathe of vendors. To understand the potential significance of flash storage, it is first necessary to look at the role hard disks have played in data centre storage over the past thirty years.

A standard hard disk stores data on one or more spinning platters of magnetic media. Reading or writing data involves moving the magnetic head to the right radius (a seek delay) and waiting for the required data to move underneath the head (a rotational delay). How quickly data can be accessed is therefore a function of how scattered the data is across the disk – how much seek and rotational delay is needed to pull it off the platter. High throughput is typically achieved by grouping disks together in a RAID set (RAID 1+0 for high-performance databases; RAID 5 or 6 when performance is less important), and frequently by using only the faster, outer parts of the drives to minimise the seek delay. All of this increases cost.
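
To put rough numbers on those two delays, here is a back-of-the-envelope sketch; the seek time and spindle speed are assumed, typical figures for a 15k RPM enterprise drive rather than measurements of any specific product. It shows why a single spindle tops out at roughly 180 random IOPS, and hence why disks are grouped for throughput:

```python
# Back-of-the-envelope access time for a single hard disk. The seek time
# and spindle speed are assumed, typical figures for a 15k RPM enterprise
# drive, not measurements from any specific product.

avg_seek_ms = 3.5
rpm = 15_000
avg_rotational_ms = (60_000 / rpm) / 2  # half a revolution on average = 2 ms

avg_access_ms = avg_seek_ms + avg_rotational_ms   # ~5.5 ms per random I/O
max_random_iops = 1_000 / avg_access_ms           # ~180 IOPS per spindle

print(f"Average access time: {avg_access_ms:.1f} ms")
print(f"Random IOPS ceiling: {max_random_iops:.0f}")
```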

Flash storage, on the other hand, requires no head movement, significantly reducing the latency of retrieving data. IOPS (Input/Output Operations Per Second) can exceed those of hard disks by over three orders of magnitude (depending on the flash product), making a great many applications – virtualisation and databases are the big ones – far more responsive. But there is a cost: flash chips can only accept a limited number of writes, so their lifespan is limited (although as the technology has improved, write endurance – and the techniques to minimise the wear caused by writing data out – has also improved to a degree). The per-gigabyte cost of flash is also significantly higher than that of hard drives, though this is mitigated to an extent by eliminating the need to leave storage unused for the sake of performance.

To date, most flash storage devices have emulated hard disks, allowing them to be drop-in replacements (via SATA for home systems, or PCIe for higher-performance configurations). Such configurations do relatively little to compensate for the weaknesses of flash – the write limitations and the high cost per gigabyte. This is where flash-specific designs, such as XtremIO, come into play.

XtremIO’s design is geared towards high throughput while minimising unnecessary writes to the flash backend. It achieves this by deduplicating data on the fly: the system’s RAM holds a cache of hashes (using the SHA-1 algorithm) representing the data already stored, allowing writes of redundant data to be intercepted before they hit the backend. This, in turn, reduces the wear on the underlying flash. It comes at a cost, however, because the XtremIO device also keeps in memory the metadata chains that make up the fully constructed data seen by the client. It is therefore critical that an XtremIO device be powered by a reliable source (generally meaning an uninterruptible power supply, with a connection to inform the system if the power goes down), so that it can write its internal state to non-volatile storage in the event of a power disruption.
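
As a rough sketch of that inline-deduplication idea (the class and structures here are hypothetical, not XtremIO’s actual API), an in-RAM table of SHA-1 fingerprints lets redundant writes be intercepted before they ever reach the flash backend:

```python
import hashlib

# A toy model of inline deduplication (hypothetical names, not XtremIO's
# API): an in-RAM table of SHA-1 fingerprints lets redundant writes be
# intercepted before they ever reach the flash backend.

class DedupWriteCache:
    def __init__(self):
        self.fingerprints = {}  # SHA-1 digest -> backend block address
        self.backend = []       # stand-in for the flash backend

    def write_block(self, data: bytes) -> int:
        digest = hashlib.sha1(data).hexdigest()
        if digest in self.fingerprints:
            # Duplicate content: no backend write, no additional flash wear.
            return self.fingerprints[digest]
        self.backend.append(data)  # new content: written exactly once
        self.fingerprints[digest] = len(self.backend) - 1
        return self.fingerprints[digest]

cache = DedupWriteCache()
first = cache.write_block(b"same bytes")
second = cache.write_block(b"same bytes")  # intercepted: first == second
```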

One consequence of this design is that cloning data – as might be done for point-in-time snapshots of virtual machine state, or for copying database data from production to development or staging systems for testing – is incredibly quick: all the controller has to do is copy the metadata chain and present it to the required host, rather than physically copy the data. The system is also designed for solid scale-out: double the number of XtremIO nodes in the array, and throughput doubles.
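
Continuing the same toy model, a clone then amounts to copying a list of metadata pointers rather than the data itself (again, illustrative names only, not XtremIO’s internals):

```python
# Continuing the toy model: a volume is just a chain of metadata pointers
# into the deduplicated backend, so a clone copies pointers, not data.
# (Illustrative names only, not XtremIO's internals.)

class Volume:
    def __init__(self, block_addresses=None):
        self.block_addresses = list(block_addresses or [])

    def clone(self) -> "Volume":
        # Point-in-time copy: duplicate the metadata chain only; the
        # backend blocks themselves are never touched.
        return Volume(self.block_addresses)

prod = Volume([0, 1, 2, 3])
dev = prod.clone()  # instant, regardless of how much data is referenced
```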

Overall, the release, growth, and general increase in maturity in flash storage for the data centre promises to ease the pain of managing storage performance into the future. Where the market will go is anybody’s guess, but based upon initial indications, the future of flash in the storage hierarchy is well and truly assured.

Categories
Blog

VNX2 Snapshots

EMC have recently launched a refresh of the VNX, called VNX2, and it has been interesting exploring the new features, from memory enhancements to improved snapshots. I was very excited reading about all the new features. What I found more interesting, however, is that most of the updates – everything except the snapshot enhancements – cannot be loaded on current generation boxes, even those with plenty of CPU and memory resources. Nevertheless, apart from many other enhancements such as space efficiency, LUN load balancing, improved auto-tiering and easier management of pools, some good, long-awaited features have been added specifically to snapshots.

Snapshots have always been a very useful and interesting feature of the VNX, as they can be used in many ways: data backups, software development, testing and repurposing. They do, however, come with some limitations under previous FLARE code revisions. The good news is that EMC has added some features those revisions desperately needed, and my favourite is Redirect on First Write, which replaces Copy on First Write and its performance overhead.
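
To see why Redirect on First Write removes that overhead, here is a toy contrast of the two techniques – my own sketch, not VNX internals: Copy on First Write must read the old block and perform two writes on the first update, while Redirect on First Write needs only a single write plus a pointer update.

```python
# A toy contrast of the two snapshot techniques (my own illustration, not
# VNX internals). `storage` maps block addresses to data; assume a
# snapshot already exists for block `addr`.

def copy_on_first_write(storage, snap_area, addr, new_data):
    # COW: preserve the old block before overwriting it in place
    # -> one read plus two writes on the first write to the block.
    snap_area[addr] = storage[addr]
    storage[addr] = new_data

def redirect_on_first_write(storage, pointer_map, addr, new_data):
    # ROW: leave the old block untouched for the snapshot and redirect
    # the new data to fresh space -> a single write and a pointer update.
    new_addr = max(storage) + 1 if storage else 0
    storage[new_addr] = new_data
    pointer_map[addr] = new_addr  # the live LUN now points at the new block

storage = {0: b"original"}
pointer_map = {}
redirect_on_first_write(storage, pointer_map, 0, b"updated")
print(storage, pointer_map)  # old block kept for the snap; one new write
```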

Scalability has also always been an issue in storage environments, and the limitation that annoyed most users of previous versions of SnapView was the number of snapshots per LUN. EMC has raised the limit from 8 snapshots per LUN to 256, and up to 32,000 per system, as of OE/FLARE 7.1.x/05.32.x – all writable, with improved write performance. Multiple copies can be thinly provisioned, with snap space consumed from the virtual pool, giving more freedom in space management and up to 90% space saving. EMC has also included the ability to take snaps of a snap, called branching. The resulting snapshot becomes a copy of the source; it retains the LUN properties and resides within the same pool. This opens up many new use cases – testing, development and point-in-time backups are a few of the most useful in today’s storage environments.
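
Conceptually, branching just turns the snapshot lineage into a tree. A minimal sketch (the class and names are hypothetical, not EMC’s implementation):

```python
# A toy model of snapshot branching ("snaps of a snap"; illustrative
# names, not EMC's implementation): each snapshot records its parent,
# so the lineage forms a tree rooted at the source LUN.

class Snap:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

    def branch(self, name):
        return Snap(name, parent=self)

lun = Snap("prod-lun")
nightly = lun.branch("nightly-backup")
test_copy = nightly.branch("test-copy")  # a writable snap of a snap
```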

EMC has also improved the performance of snapshot deletion, which might not matter much in a small environment, but in larger environments it will save a significant amount of time, as the new code uses available array resources to delete faster when there is no activity in the pool. Other useful new features include instant restore, out-of-order deletion and enhanced interoperability with AppSync, which focuses on Microsoft products such as SharePoint, SQL Server and Exchange, as well as VMware environments. By combining the new snapshot features with AppSync, we can have application-consistent snapshots and restores in no time.

Although there are many other new features in VNX2, the improved memory management and CPU multithreading, together with the enhanced snapshots, position EMC as a leading player in midrange storage.

Categories
Blog

FlexPod Datacentre with Citrix XenDesktop 7.1 and VMware vSphere 5.1

Cisco have recently released a Cisco Validated Design (CVD) for XenDesktop 7.1. If your organisation is looking to implement a virtual desktop solution, FlexPod infrastructure creates a compact, powerful, and reliable solution for XenDesktop 7.1. Find the full report published by Cisco here.

Following the launch of XenDesktop 7 in May last year, the 7.1 release continues the new unified FlexCast Management Architecture for provisioning all Windows apps and desktops, either on hosted shared RDS servers or on VDI-based virtual machines. The new architecture combines simplified, integrated provisioning with personalisation tools. Whether you are creating a system to deliver just apps or complete desktops, Citrix XenDesktop 7.1 leverages common policies and cohesive tools to govern infrastructure resources and access.

In a constant effort of design optimisation, testing and validation on the latest FlexPod infrastructure, the CVD lays out the architecture for a cost-effective virtual desktop solution scaling to 2000 seats. The infrastructure is fully virtualised on the VMware vSphere ESXi 5.1 hypervisor platform and hosted on third-generation Cisco UCS B200 M3 blade servers and a NetApp FAS3200-series storage array. Citrix Provisioning Server 7.1 manages desktop images for a mixed workload of XenDesktop hosted shared desktops (1450) and pooled hosted virtual Windows 7 desktops (550), which is common in many customer scenarios.

Let Diaxion, a Cisco and Citrix partner, demonstrate how Citrix XenDesktop 7.1 and XenApp 7.5 can be deployed cost-effectively under a single management architecture.

This article was first published by Malathi Malla at blogs.citrix.com.

Categories
Blog

EMC’s XtremIO all-flash storage array

Last year EMC announced its new XtremIO all-flash array product line, the result of a startup acquisition. Flash arrays are becoming widely popular and are usually considered the best option for workloads with high IO where performance is critical, such as databases and VDI.

Arrays based on solid-state storage are not new; most vendors offer arrays based on solid-state drives. However, purpose-built products like EMC’s XtremIO all-flash array can provide a real performance edge compared to arrays that combine spindles and flash.

EMC has created their all-flash array with some unique features. XtremIO comes in modules called X-Bricks with 10TB of storage each; 20TB bricks have been announced and are coming soon. A single XtremIO array can have as many as eight X-Bricks, each with 256GB of memory, which holds the metadata in duplicated form and, in the scale-out version, is replicated to another X-Brick. In-line deduplication, distribution of data across the cluster for load balancing, and the linking of X-Bricks to share data efficiently among controllers using direct memory access give EMC an edge over other flash-based arrays.

XtremIO will also help reduce the data centre footprint, as it comes in a 6U package with low energy consumption, and it integrates well with VMware. Some of the key features of XtremIO include:

• Removal of duplicate data on the fly, using EMC’s data placement scheme
• A powerful, enhanced metadata engine able to place data anywhere in the cluster and to avoid the impact of garbage collection (one of the main side effects of flash-based storage systems)
• A data protection algorithm that prevents any impact from SSD failures and allows users to use close to 100% of the capacity with full performance
• Content-based data placement, which intelligently places data to keep the array balanced and at full performance (see the sketch after this list)
• Shared in-memory metadata for fast deployment of virtual machines
• Consistent latency below 1ms
• No single point of failure, and an emphasis on keeping the data protected at all times
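
As a toy illustration of the content-based placement idea (the function and layout are my own, not XtremIO’s actual algorithm), the content fingerprint itself can decide which X-Brick owns a block, spreading data evenly with no central allocator:

```python
import hashlib

# A toy sketch of content-based placement (my own illustration, not
# XtremIO's algorithm): the content fingerprint itself decides which
# X-Brick stores a block, so data spreads evenly across the cluster
# without any central allocator having to balance it.

def owning_brick(data: bytes, brick_count: int) -> int:
    digest = hashlib.sha1(data).digest()
    return int.from_bytes(digest[:4], "big") % brick_count

# With a uniform hash, 10,000 blocks land almost evenly on 4 bricks.
counts = [0] * 4
for i in range(10_000):
    counts[owning_brick(f"block-{i}".encode(), 4)] += 1
print(counts)  # roughly 2,500 per brick
```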

There is significant demand for high-performance storage systems, driven by applications with high IOPS requirements and performance-hungry workloads. EMC is well positioned with XtremIO to deliver this performance and flexibility to customers: the all-flash XtremIO technology delivers consistency, reliability, integration and performance. Other storage vendors such as NetApp and IBM are moving toward a similar approach, but EMC’s early entry has definitely made an impact for now.

Categories
Blog

Dark Data

What is dark data?

Gartner defines dark data “as the information assets organizations collect, process and store during regular business activities, but generally fail to use for other purposes (for example, analytics, business relationships and direct monetizing). Similar to dark matter in physics, dark data often comprises most organizations’ universe of information assets. Thus, organizations often retain dark data for compliance purposes only. Storing and securing data typically incurs more expense (and sometimes greater risk) than value.”

This definition may ring alarm bells for some, less so for others. The fact of the matter is that you need to keep data for all sorts of reasons – business functionality and compliance, to name just two. What does this really mean for your environment’s capacity growth, its associated costs, and the opportunities to better understand your business’s potential?

Whilst it is said that dark data has no value to the business, it remains an area that isn’t tapped thoroughly enough for such statements to be made definitively. Needs and wants differ from business to business, so analysing and trending dark data will yield varying results from one organisation to another.

Once you understand how this impacts your environment, improved solutions can be sourced and deployed to maximise your return on investment.

Techopedia explains further: “Dark data is data that is found in log files and data archives stored within large enterprise class data storage locations. It includes all data objects and types that have yet to be analyzed for any business or competitive intelligence or aid in business decision making. Typically, dark data is complex to analyze and stored in locations where analysis is difficult. The overall process can be costly. It also can include data objects that have not been seized by the enterprise or data that are external to the organization, such as data stored by partners or customers.

IDC, a research firm, stated that up to 90 percent of big data is dark data.”

How does this fit into your strategy? Will your organisation be able to take economic advantage of dark data to drive new opportunities and revenue? What about enhancing efficiencies internally by reducing costs?

References:
Gartner – www.gartner.com/it-glossary/dark-data
Techopedia – www.techopedia.com/definition/29373/dark-data

Categories
Blog

AGILE!!!!

So what is agile?

That seems to depend on your perspective! For CIOs, it generally seems to be about making sure IT can respond to the business (not that they have not been trying to do this before) and about becoming an efficient, service-driven organisation. For the architecture community, it is about trying to do even more with less: a new urgency around transforming business and application silos, modularisation and compatibility across people, process and technology, and a focus on business understanding. For development and testing, it is about new or enhanced ways of delivering, developing and testing software on time while adapting to the changing needs of stakeholders. For infrastructure and operations, it is about changing the organisation from being inherently reactive and risk averse to accepting a bit more risk where appropriate, delivering usable services that have demand, and breaking an infrastructure program into smaller milestone decision points to show progress and return to the business.

What is the point of agile?

To deliver something concrete – preferably working software – at the end of each sprint. Many times we see a transition from traditional Waterfall to Agile that seems to create the worst of both: little delivered until the end while governance goes out the window, or over-governed agile delivery which, again, delivers little. It really is no good if your backlog is growing with business deliverables while sprints deliver mainly IT ones and documentation. Sprint reviews are useful, but spending a whole day at the end of a 4- or 6-week sprint could be indulgent: say 10 people at an average of $150 per hour fully loaded, in a review for 7 hours, is $10,500; if this happens 9 times per year, that is $94,500 per year! What else could the business do with that money on the project?
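
For anyone who wants to run those ceremony numbers against their own team, the same arithmetic parameterised as a trivial function:

```python
# The ceremony-cost arithmetic above, parameterised so you can plug in
# your own team size, rate and cadence (the defaults are the article's
# illustrative assumptions, not benchmarks).

def review_cost(people=10, rate_per_hour=150, hours=7, sprints_per_year=9):
    per_review = people * rate_per_hour * hours
    return per_review, per_review * sprints_per_year

per_review, per_year = review_cost()
print(f"${per_review:,} per review; ${per_year:,} per year")  # $10,500; $94,500
```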

The other problem with agile, when not practised properly, is a lack of strategic direction. Customer needs and user stories do not live in a vacuum; they live in the context of the organisation from which they were derived, and within its business strategy and goals, whether overt or covert. Often Agile is practised with a short-term view of the organisation or, even more often, the project view. This leads to shortcuts and the removal of items that would benefit the organisation going forward but are harder for the project to justify on its own. This may sound like an enterprise architect bemoaning the lack of strategy and direction in organisations but, talking to business colleagues rather than IT, it seems to be a fairly consistent view.

What can IT do about this?

Delivering something concrete

• Deliver things the business can see, and preferably use, at the end of each sprint
• Keep introspection and sprint reviews to the minimum necessary
• Bring the wider stakeholders into the change process – it is very easy for people on the periphery to undermine your efforts if they have not had the communication or explanation
• Don’t start with a large project if your organisation has not done Agile before. Start small, be seen to deliver value, then build; under-promise and over-deliver!

Keeping an eye on the big picture

  1. Are you building ‘debt’ (technical, process, organisational) into the project? Have a review with your friendly Enterprise Architect or Application Architect from outside the project.
  2. Make sure you understand, and can communicate, the end-to-end process from the customer’s perspective and what the project is doing within that process
  3. Be prepared to push back on short cuts or deferment of backlog items
Categories
Blog

Diaxion’s Rules of Engagement

It’s been proven time and time again that to deliver a project successfully there must be a clear, defined engagement process that is followed throughout the lifecycle of the project. The engagement process must be communicated to all stakeholders involved, with roles and responsibilities clearly defined, to enable complete ownership of deliverables.

There is a multitude of ways to approach delivering a project from inception through to final completion; however, following the eight rules below will ensure success each and every time:

  1. Maintaining Consistency – with the Diaxion PMO responsible for overseeing the project from kick-off through to closure, a consistent methodology is applied to all engagements.
  2. Having a Process-Driven Approach – having a process in place that can be referred to for all phases of the engagement ensures uniformity.
  3. Being Clear As to Ownership – by assigning ownership of the various tasks to each of the resources involved in a project, there is accountability at all levels. Diaxion’s people are given the opportunity to “love the deliverable”.
  4. Utilising Performance Tools – having standard metrics whereby performance can be measured means that the success of a project is quantified and action can be taken to better the delivery next time around. These performance tools include all status report templates, risks and issues registers and lessons learnt logs.
  5. Integration of Quality Controls – these are required to ensure that project goals can be met successfully using best practice. Quality controls need to exist at every level of a project, from the initial construction of scope right through to project reporting.
  6. Using a Knowledge Management System – having a central repository for tools and templates required to carry out a project ensures that information can easily be found and leveraged for future engagements. This includes all documents in both the sales and delivery kits, as well as those in the project management tool kit – all which can be found on Diaxion’s SharePoint.
  7. Committing Everyone to Obtaining New Business – gone are the days of the “hard-sell”, where the onus falls solely on a salesperson to generate new business and work on maintaining existing client relationships to encourage repeat business. By continually educating staff at all levels of the various service offerings, Diaxion is committed to giving all their people the tools required to qualify and report back on potential opportunities.
  8. Implementation of Effective and Efficient Project Management – a myriad of Project Management methodologies have been developed over the years to enhance the delivery of projects; however, the most important aspect of choosing one for your organisation is ensuring that it is fit for purpose. Regardless of whether you choose one methodology and strictly adhere to it, or use a hybrid approach, there are four key success elements to consider:
  1. A dedicated project manager and project team should be in place
  2. The project manager should select the project team
  3. All projects should have a clear beginning and ending with performance milestones along the way
  4. People who will be delivering the project should be involved in the process right from the initial scoping phase

Diaxion follows a Project Engagement Process methodology, and adherence to this process helps ensure that Diaxion’s project objectives are met each and every time. These objectives are: to deliver the project within budget and within the scheduled timeframes; to deliver a valuable solution and professional service to the client; and to achieve client satisfaction and re-engagement. Diaxion constantly reviews the Project Engagement Process to ensure continuous improvement.

Categories
Blog

Clouds, Big Data, Dark Data and the next big thing

In a recent blog post we wrote about Gartner’s definition of “dark data”. The definition notes that dark data is generally unused, often kept for compliance reasons, and ultimately costs more to keep than it is worth. A typical example is the sprawl of PST files in corporate environments and on people’s laptops.

I can confirm from my own experience that even in corporations that discouraged (or, I should say, prohibited) the use of PST files for email archival, this did not stop people from creating them – it is simply more convenient to have your old emails accessible at any time than to rely on your company’s email archive system, which is only accessible when you are online and connected to the company VPN.

This behaviour is not that different from the use of cloud providers like Dropbox for file sharing, which is also discouraged by most companies but used by many employees. The more tech-savvy your employees are, the bigger the risk that your company’s IT policies will be ignored, circumvented or adapted.

Coming back to dark data in the form of PST files: these become an issue when people store copies on your network shares, where they are suddenly backed up automatically, take up space in your infrastructure, and leave you with unknown data that may be queried for compliance. Instead of being able to manage the scheduled deletion of old emails, suddenly they are not only back but hidden away in a PST container. In summary, you end up with no value at added cost!

Ultimately, I see this as the main risk of dark data: if it adds no value and is not required for compliance (compliance should be handled as part of an internal policy), it should be deleted to avoid unnecessary costs.

There are two possible approaches to the handling of dark data:

As part of a Big Data project that analyses the existing data – we could also just continue to call this Business Intelligence/Analytics. The advantage of this approach is that it should actually generate additional value; the downside is that it requires significant effort.

Perform a Data Profiling exercise, which will provide useful information about your data, such as its age (how many files have not been touched for years?), its ownership (how much data is owned by people no longer with the company?) and its content (what percentage is PST, zip, audio and video files?). The emphasis of this approach is on reducing the cost of maintaining unrequired – or “dark” – data.
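
As a minimal sketch of what such a profiling exercise might look like against a POSIX file share (the share path and the two-year threshold are assumptions for illustration):

```python
import time
from collections import Counter
from pathlib import Path

# A minimal data-profiling sketch along the lines described above: walk a
# file share and tally size by age, owner and file type. The share path
# and the two-year threshold are assumptions for illustration; st_uid is
# only meaningful on POSIX systems.

root = Path("/mnt/corporate-share")
cutoff = time.time() - 2 * 365 * 24 * 3600  # "untouched for two-plus years"

by_ext, by_owner, stale_bytes = Counter(), Counter(), 0
for path in root.rglob("*"):
    if not path.is_file():
        continue
    st = path.stat()
    by_ext[path.suffix.lower()] += st.st_size
    by_owner[st.st_uid] += st.st_size  # map uid -> user via your directory
    if st.st_mtime < cutoff:
        stale_bytes += st.st_size

print(f"Bytes untouched for 2+ years: {stale_bytes:,}")
print("Top file types by size:", by_ext.most_common(5))
print("Top owners by size:", by_owner.most_common(5))
```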

Ultimately, dark data is just a subset of big data, and I am not convinced we needed a label for it. The last few years have seen a number of new buzzwords in IT, some of which have actually taken off (“cloud”). No doubt there will be a new flavour of the year soon – now if only I could come up with something catchy for all things “software-defined”…

Categories
Blog

Redefine EMC Forum – a review

This year’s EMC Forum was held under the banner of “Redefine”, with four separate streams:

– Redefine Cloud

– Redefine Storage

– Redefine Applications

– Redefine Business

On a personal note, it probably did not help that I had returned from a holiday in France barely 36 hours earlier; however, even fully alert, I think I would have judged this year’s EMC Forum to be one of the weaker ones.

For me the main problem was that EMC did not really have anything new to present. ViPR – EMC’s software-defined storage solution – is now at version 2 and has seen incremental improvements, which is only to be expected. It is still available as a free download and in that respect can be considered to “redefine” something. XtremIO – EMC’s all-flash array – has been very successful thanks to EMC’s market position and its robust design, but it is ultimately still the first generation, with more enhancements to come. Vblock, as a converged infrastructure, has been around for a few years now, but how does it “redefine cloud”?

EMC has also been shopping again, and two of its recent purchases – TwinStrata and DSSD – should make an impact. TwinStrata, a storage gateway company, will most likely be incorporated into the next generation of the Symmetrix VMAX arrays, allowing connectivity to any cloud storage (be it private, public or hybrid).

Not that much is known about DSSD; it sounds like a very fast PCIe-based server extension, well suited to Big Data applications and to running large databases at virtually in-memory speeds. As such it could nicely complement EMC’s flash offerings – from ScaleIO, hybrid arrays and XtremIO to ViPR and its Elastic Cloud Storage.

Reading between the lines, EMC is setting itself up for another shift in its business – this time away from big monolithic arrays. Considering that a significant number of startups now offer easy-to-manage hybrid and all-flash arrays, that cloud services (including cloud storage) are beginning to gain significant traction, and that even its own VMware provides a software-defined storage solution, EMC will have to come up with options to protect its market leadership.

In that context the motto for this year’s EMC Forum makes sense – it is just that the presentations did not live up to the promise. Here’s looking forward to EMC Forum 2015, where I expect a host of new services.