10 Jul

What Do Baking and DR Testing Have in Common?

Growing up in NYC and working at a bakery gave me a deep understanding of the processes that go into baking quality cakes and baked goods. One of the tasks I had at our bakery was making the homemade blondies and brownies each day. So you may ask, what does this have to do with Disaster Recovery testing? Well, quite a lot.

At Storcom, we consistently see that Disaster Recovery testing is one of the areas within IT that many mid-tier organizations push to the back burner. When juggling the demands of today’s IT needs, it’s easy to push this task off the priority list. When it comes to protecting data in general, we all know that we need to do “backups,” but why? Simple: because a time may come when we need some part or all of a data set brought back to a point in time.

When baking a cake, you go through a methodical process of getting the right ingredients together, mixing them, and finally going through the baking process. When a baker puts a sheet pan in the oven, they don’t just set a timer, walk away, and come back 30 minutes later when the recipe says it’s supposed to be done. You keep coming back to the oven while you are doing other tasks to check the cake and make sure it’s baking properly; if one part is cooking faster than another, you might turn the pan or move it to another rack.

Checking your data integrity and disaster recovery process should be no different. Most companies are simply waiting for the “timer” to go off, only to find out that their “cake” is ruined. Part of the “Recover by Storcom” solution is making sure all of our clients’ data is regularly checked for integrity, as well as scheduling full disaster recovery tests on a periodic basis. Whether those tests are yearly or quarterly, we don’t take chances when it comes to our process working properly when it will count the most.
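To make the “check the oven while it bakes” idea concrete, here is a minimal Python sketch of a restore-verification check: restore a sample of protected data into a scratch location and compare checksums against the source. The paths are placeholders and this illustrates the concept only; it is not Storcom’s actual tooling.

import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_dir: str, restore_dir: str) -> list:
    """Return relative paths whose restored copy is missing or does not match."""
    source, restore = Path(source_dir), Path(restore_dir)
    problems = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        restored = restore / src_file.relative_to(source)
        if not restored.is_file() or sha256(src_file) != sha256(restored):
            problems.append(str(src_file.relative_to(source)))
    return problems

if __name__ == "__main__":
    # Placeholder paths: point these at a sample data set and a test restore of it.
    bad = verify_restore("/data/critical", "/mnt/restore-test/critical")
    print("restore verified" if not bad else f"mismatched files: {bad}")

Run on a schedule, small checks like this are the equivalent of opening the oven door: boring, quick, and far cheaper than discovering a ruined “cake” on the day you actually need it.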

Obviously, my analogy to baking a cake is much simpler, but it is realistic. A baker does not want to spend all of that time and those resources on baking a cake only to find out it needs to be thrown in the trash. Companies need to make DR testing a priority so that when they need their data the most, it is available and all the processes necessary for users to access that data are in working order.

Dave Kluger – Principal Technology Architect, Storcom

 

08 Jul

vSphere 6 & VVOLs: What’s the Hype All About?

VVOLs is a provisioning feature for VMware vSphere 6 that changes how virtual machines (VMs) are stored and managed. VMware has been hyping this technology for quite some time, and it’s finally here. So what is its real value?

VVOLs, which is short for Virtual Volumes, enables an administrator to apply a policy to a VM that defines its performance and service-level agreement requirements, such as RAID level, replication, or deduplication. The VM is then automatically placed on the storage array that fits those requirements.
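As a purely illustrative model of that policy matching (a conceptual Python sketch, not the actual vSphere or VASA API), each datastore advertises a set of capabilities, each VM carries a policy of requirements, and placement means finding storage whose capabilities satisfy the policy:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Datastore:
    name: str
    # Capabilities the array advertises, e.g. {"raid": "RAID-10", "replication": True}
    capabilities: dict = field(default_factory=dict)

@dataclass
class VmPolicy:
    name: str
    # Requirements the VM's policy demands, using the same keys as the capabilities
    requirements: dict = field(default_factory=dict)

def place_vm(policy: VmPolicy, datastores: list) -> Optional[Datastore]:
    """Return the first datastore whose capabilities satisfy every requirement."""
    for ds in datastores:
        if all(ds.capabilities.get(k) == v for k, v in policy.requirements.items()):
            return ds
    return None  # no compliant storage available

if __name__ == "__main__":
    inventory = [
        Datastore("bronze-nfs", {"raid": "RAID-5",  "replication": False, "dedup": True}),
        Datastore("gold-san",   {"raid": "RAID-10", "replication": True,  "dedup": True}),
    ]
    tier1_db = VmPolicy("tier-1-db", {"raid": "RAID-10", "replication": True})
    target = place_vm(tier1_db, inventory)
    print(f"placing VM on: {target.name if target else 'no compliant datastore'}")

In vSphere the real matching is driven by storage policies evaluated against capabilities the array surfaces through VASA, but the idea is the same: the administrator manages policies rather than LUN-to-VM spreadsheets.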

VVOLs’ other advantage is the ability to snapshot a single VM instead of taking the traditional snapshot of an entire logical unit number (LUN) that may house several VMs. This feature saves wasted space on the datastore and reduces the amount of administrative overhead.
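A rough back-of-the-envelope comparison (hypothetical sizes, purely to illustrate the point) shows why per-VM snapshots waste less space than snapshotting a whole LUN:

# Hypothetical figures, for illustration only.
VMS_ON_LUN = 20                  # VMs sharing one traditional LUN
CHANGED_GB_PER_VM_PER_DAY = 10   # daily changed data per VM

# A LUN-level snapshot preserves changed blocks for every VM on the LUN,
# even if you only care about one of them; a per-VM (VVOL) snapshot does not.
lun_snapshot_growth = VMS_ON_LUN * CHANGED_GB_PER_VM_PER_DAY
per_vm_snapshot_growth = 1 * CHANGED_GB_PER_VM_PER_DAY

print(f"LUN-level snapshot growth: ~{lun_snapshot_growth} GB/day")
print(f"Per-VM snapshot growth:    ~{per_vm_snapshot_growth} GB/day")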

So here is the most important part: to use VVOLs, the storage hardware has to support the vStorage APIs for Storage Awareness (VASA). VMware introduced VVOLs at VMworld 2011 during a technical preview session but has now officially released it.

A lot of storage vendors are just getting to the point where they can support it. They must be compatible with VASA 2.0, which means writing and testing new code; that can take a while from an R&D standpoint, so don’t expect to see wide adoption of VVOLs right out of the gate.

 

There are also a number of vSphere 6.0.x features that you may be using or want to use that are not interoperable with Virtual Volumes (VVOLs), so these are important to consider if you think you want to use it:

 

  • Storage I/O Control
  • NFS version 4.1
  • IPv6
  • Storage Distributed Resource Scheduler (SDRS)
  • Fault Tolerance (FT)
  • SMP-FT
  • vSphere API for I/O Filtering (VAIO)
  • Array-based replication
  • Raw Device Mapping (RDM)
  • Microsoft Failover Clustering

 

There are a few storage vendors on the market today that have already implemented their own version of VVOLs; these have all been based on NFS so that the file system is exposed. VVOLs has many benefits and we see good things coming from it, but it needs mainstream support from the storage vendors to be widely adopted, and some of the features listed above need to stop being a barrier before we can really see the value.
Dave Kluger – Storcom Principal Technology Architect

30 Jun

How Do I Choose The Right Provider for DRaaS?

A recent post from Brian Posey over at SearchDisasterRecovery, “How do I choose the right cloud services provider for DRaaS?”, points out some of the pros and cons of using the big cloud providers like AWS and Azure over a smaller managed service provider like Storcom to provide DR-as-a-service.

He does a good job of pointing out that it all depends on an organization’s needs, which is absolutely true. However, at Storcom we think that in most cases, for companies in the mid-enterprise space, directed consulting on the entire DR process is far more valuable than the scalability benefits that the large cloud providers bring to the table.

He points out that large cloud providers can offer rock-solid SLAs, unlimited resources, and multi-site data centers, which all make sense when looking at running production workloads. At Storcom, we believe that the benefits of a custom-tailored solution designed to meet your needs, along with the consulting expertise and services to extend your IT workforce, outweigh any of the large cloud provider benefits.

The Recover by Storcom managed DR-as-a-service solution gives organizations the ability to turn consulting services on and off as needed, e.g., helping design a DR/BC runbook with the processes and tasks needed to bring critical systems back online and provide proper connectivity.

A smaller managed services provider also brings other services to the table, such as helping make sure all data is protected on a day-to-day or on-demand basis, something that is not available from the large cloud providers. If you do not have the in-house expertise, you can rely on a managed services provider with a staff of IT professionals who can fill the gaps your own staff may have.

All in all, Brian Posey did a good job of laying out the benefits and drawbacks of the solutions from a purely technical standpoint, but missed a few key points from a services standpoint. Storcom can provide the benefits of managed data protection as a service that much of the industry is still not seeing today.

Full Article from searchdisasterrecovery.com

-Dave Kluger

Principal Technology Architect

05 Feb

vSphere 6 Revealed

 

VMware has been testing its vSphere 6 platform for some time and finally announced it at an event on Monday. The new release of vSphere is much more than just a simple number change, according to VMware CEO Pat Gelsinger. Check out the details here.

