28 Jul

VMUG Indianapolis 2015

Storcom was happy to sponsor VMUG Indianapolis 2015. We got to meet some new people, see some old friends, and get a user's perspective on what is going on in the VMware community. As a VMware certified partner, we believe we are well versed in the technical aspects of the product, but what is also interesting about events like this is talking with end users in a forum where we can understand the real-world challenges they face and learn about the creative ways IT professionals are solving common problems.

The other cool part of VMUG, of course, is seeing the new technology vendors and what they are offering. Most of them seemed well represented, although there are a few we did not see that I think are worth mentioning.

Over the years, Storcom has worked with many emerging technology companies. It's a long list of vendors that have been very successful and moved onward and upward to become part of some of the largest companies in the storage, data protection, and disaster recovery industry.

But every emerging technology company has to start somewhere. For all of them, whether it be Compellent, NetApp, Data Domain, or Nimble Storage, someone purchased system number one.

It takes a certain type of person and company to see the value in these technologies and be willing to put their job on the line for the benefits they believe a specific technology will bring to their organization.

With that said, there are a few technologies worth mentioning that either did not make it to the show or, if they did, are definitely the new kids on the block; anyone who likes to track emerging technology companies should keep an eye on them.

I am not listing these in any order of preference, nor am I endorsing these companies specifically, but I think they are certainly noteworthy for what they are doing and the potential value they may bring to the technology landscape:

https://swiftstack.com/

http://www.datrium.com/

http://rubrik.com/

http://velostrata.com/

 

22 Jul

Millennials Adopting Cloud Based Technology

Ali Mirdamadi has written a great editorial, "How Millennials Will Accelerate the Adoption of Cloud Technology," on his LinkedIn page. At Storcom, a large portion of our sales force fits into the Millennial generation. I couldn't agree more with Ali Mirdamadi's view that Millennials are not "narcissistic, impatient, delusional, and happy-go-lucky."

 

The ones who work with us here at Storcom very much have an entrepreneurial mentality, and with that I can certainly see how Millennials will move toward cloud-based software as a service as well as cloud-based infrastructure. It just makes good business sense. As this progresses, I think the influence of Millennials, as they get older, will push SaaS-based solutions more and more into the mainstream, for all of the reasons Ali Mirdamadi mentioned.

 

One thing I do think is important to point out is that Millennials also hold many positions within larger IT environments where cloud-based initiatives will be much harder to implement, not only from a technical standpoint but also a political one.

In these types of organizations, I am not sure Millennials will necessarily have the same impact they do in the smaller, more nimble companies we typically think of them working for.

 

All in all, I think this editorial has a lot of merit, and it's a brave new world we are facing. At Storcom we are also starting to see more and more new solutions that leverage a hybrid approach of on-prem and public cloud technologies; I can see those being adopted as well, which will fuel some of the metrics that were cited in the editorial.

 

Dave Kluger – Principal Technology Architect @ Storcom Inc. Dave Kluger has 20+ years of IT experience and runs the blog at Storcom.net.
Ali Mirdamadi is a Sr. Business Development Manager for Abacus Data Systems Inc. and currently runs the blog caseforthecloud.com, where he discusses the future of cloud computing and how it will impact businesses.

21 Jul

Commvault IntelliSnap Technology

For those of you who are looking into snapshot-based recovery, we highly recommend Commvault IntelliSnap technology. Gone are the days of manually integrating applications with snapshots through homegrown scripts that have to work around varying levels of automation, support, functionality, and application awareness.

IntelliSnap solves this problem by providing central snapshot management across different storage platforms, automating snapshot operations and integrating them with backup processes.
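
To make the contrast concrete, here is a minimal Python sketch of the kind of hand-rolled, per-application snapshot script that this sort of central management replaces. Every function here is a placeholder for a vendor- or application-specific step, not a real Commvault or array API:

# Hypothetical sketch of the hand-rolled snapshot scripts IntelliSnap-style tools replace.
# Every function below is a placeholder, not a real vendor API.
import logging
from datetime import datetime, timezone

log = logging.getLogger("snap")

def quiesce_database(db_name: str) -> None:
    """Put the application into a consistent state (placeholder)."""
    log.info("Quiescing %s ...", db_name)

def resume_database(db_name: str) -> None:
    """Resume normal I/O (placeholder)."""
    log.info("Resuming %s ...", db_name)

def array_snapshot(volume: str, label: str) -> str:
    """Call the storage array's snapshot API (placeholder) and return a snapshot id."""
    log.info("Snapshotting %s as %s", volume, label)
    return f"{volume}-{label}"

def record_in_catalog(snapshot_id: str) -> None:
    """Register the snapshot with the backup catalog so it can be found later (placeholder)."""
    log.info("Cataloged %s", snapshot_id)

def snapshot_application(db_name: str, volume: str) -> None:
    label = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    quiesce_database(db_name)
    try:
        snap_id = array_snapshot(volume, label)
    finally:
        # Never leave the application quiesced, even if the snapshot fails.
        resume_database(db_name)
    record_in_catalog(snap_id)

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    snapshot_application("erp_prod", "vol_erp_prod")

Multiply that by every application, every array model, and every firmware revision, and the appeal of a single policy-driven tool becomes obvious.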

At Storcom we believe in making business better through the use of technology, and we feel that IntelliSnap will help organizations eliminate downtime while increasing productivity.

Read more about IntelliSnap technology HERE.

Contact Storcom’s sales team today for a consultation on IntelliSnap technology and how it can integrate with your current environment.

17 Jul

Better data management: to archive or not to archive?

Over the years I’ve had many conversations with clients about using archive technologies to offset storage costs. The idea that data is accessed less and less as it ages is what fueled the entire concept of hierarchical storage management (HSM).

As storage solutions progressed, providing HSM within a storage array, or what would become known as tiered storage, became the mainstay of storage management. New data gets written and read the most, so put it on the top tier of storage and let it move down as it ages. Archiving, however, is really a different concept. An archive by definition is “any extensive record or collection of data.” So in many organizations, what we think of as a traditional backup routine, daily/monthly/yearly backups, can serve as an archive if you have a way to catalog that data.
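
As a rough illustration of that aging policy (not any particular vendor's implementation), here is a minimal Python sketch of age-based tier selection; the thresholds are made-up examples:

# Minimal sketch of an age-based tiering policy; thresholds are arbitrary examples,
# not any particular array's defaults.
from datetime import datetime, timedelta
from typing import Optional

TIERS = [
    (timedelta(days=30), "tier1-ssd"),      # hot: recently written/read
    (timedelta(days=180), "tier2-sas"),     # warm: cooling off
    (timedelta.max, "tier3-nearline"),      # cold: rarely touched
]

def pick_tier(last_access: datetime, now: Optional[datetime] = None) -> str:
    """Return the tier a file belongs on, based only on time since last access."""
    age = (now or datetime.now()) - last_access
    for threshold, tier in TIERS:
        if age <= threshold:
            return tier
    return TIERS[-1][1]

# A file untouched for a year lands on the cheapest tier.
print(pick_tier(datetime.now() - timedelta(days=365)))  # tier3-nearline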

In the past decade, as email and file service data has exploded, more and more organizations have been trying to figure out how to better manage long-term retrieval of old data without deleting it from its primary location. Many companies we have worked with have implemented technologies that “stub” the original piece of data: a small stub file is left behind that points to the new location of the data, often stored in a proprietary compressed format, or in other cases proxy technologies are used to redirect entire shares or mounts.
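
Conceptually, a stub is just a small pointer left in place of the original file. The sketch below is an illustrative Python version of that idea; the stub format and field names are invented for the example, not any product's on-disk format:

# Conceptual sketch of file stubbing: the original is replaced by a tiny pointer
# that records where the archived copy now lives.
import json
import os
import shutil

def stub_file(path: str, archive_root: str) -> None:
    """Move a file into the archive and leave a small stub behind at the original path."""
    archived_path = os.path.join(archive_root, os.path.basename(path))
    original_size = os.path.getsize(path)
    shutil.move(path, archived_path)
    stub = {
        "stub_version": 1,
        "archived_to": archived_path,
        "original_size": original_size,
    }
    with open(path, "w") as f:
        json.dump(stub, f)

def read_through_stub(path: str) -> str:
    """Resolve a stub back to the real data location (the 'rehydration' step)."""
    with open(path) as f:
        return json.load(f)["archived_to"]

Rehydration is the reverse trip: every read through a stub has to fetch the archived copy back, which is exactly the cost that shows up later during a cloud migration.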

So here is where it gets complicated: the almighty cloud. How does it affect data management? We all hear about how great the cloud is and how much sense SaaS makes. CIOs everywhere are pushing their IT staff to look at how they can save money with hosted email and file service technologies. What these CIOs have not taken into account is the migration process back out of the existing archiver and what that means on a number of fronts. The end-user experience will be vastly different, rehydrating the data can be a very cumbersome process, and before you know it the TCO on the hosted email solution has skyrocketed.

We are in the midst of a number of projects where organizations are moving internal Microsoft Exchange email to Office 365. They were aware that we needed to deal with the existing archived data, but what they didn’t take into account was the sheer amount of time that would add to the migration process. The speed at which email data can be moved is largely bound by the MAPI protocol (you know we all love MAPI), and even with 5 concurrent VMs pulling data, the current estimate is nearly 400 days to complete the migration.
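
For a sense of where a number like that comes from, here is the back-of-envelope math. The 5 concurrent VMs and the roughly 400-day result are from the project above; the total data size and per-VM throughput below are assumed figures chosen only to show the shape of the calculation:

# Back-of-envelope migration estimate. The 5 concurrent VMs and ~400-day outcome
# come from the post; total_gb and gb_per_vm_per_day are assumed example figures.
def migration_days(total_gb: float, gb_per_vm_per_day: float, concurrent_vms: int) -> float:
    return total_gb / (gb_per_vm_per_day * concurrent_vms)

# e.g. ~40 TB of mail (including rehydrated archive data) at ~20 GB/day per VM over MAPI:
print(round(migration_days(total_gb=40_000, gb_per_vm_per_day=20, concurrent_vms=5)))  # ~400 days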

When Microsoft built its impressive ROI analysis for this company, it failed to build that time into the TCO. The point of all this is that as we see more and more companies trying to leverage the public cloud and all the benefits it provides, it is very important they consider what existing technologies are in use and how those may affect the migration. When the SaaS companies talk about their migration strategies, they are always talking about the best case, not what may happen in the real world. Lastly, it’s important to think about how your existing archiving strategy may impact your overall disaster recovery and data protection strategy when you do move to the cloud. Access to old data may still be a requirement, and moving all of it to the cloud may not always be the most cost-effective option.

Next week I will write more on this topic and discuss which new cloud-based archive technologies may be a good alternative for managing data growth.

Dave Kluger – Principal Technology Architect

10 Jul

What do baking and DR testing have in common?

Growing up in NYC and working at a bakery gave me a deep understanding of the processes that go into baking quality cakes and baked goods.  One of the tasks I had at our bakery was making the homemade blondies and brownies each day.  So you may ask, what does this have to do with Disaster Recovery testing? Well quite a lot.

At Storcom, we consistently see that disaster recovery testing is one of the areas within IT that many mid-tier organizations push to the back burner. When juggling the demands of today’s IT needs, it’s easy to push this task off the priority list. When it comes to protecting data in general, we all know that we need to do “backups,” but why? Simple: because a time may come when we need some part, or all, of a data set brought back to a point in time.

When baking a cake, you go through a methodical process of getting the right ingredients together, mixing them, and finally baking. When a baker puts a sheet pan in the oven, they don’t just set a timer, walk away, and come back 30 minutes later when the recipe says it’s supposed to be done. You consistently come back to the oven while doing other tasks to check the cake and make sure it’s baking properly; if one part is cooking faster than another, you might turn the pan or move it to another rack.

Checking your data integrity and disaster recovery process should be no different. Most companies are simply waiting for the “timer” to go off, only to find out that their “cake” is ruined. Part of the “Recover by Storcom” solution is making sure all of our clients’ data is regularly checked for integrity, as well as scheduling full disaster recovery tests on a periodic basis. Whether those tests are yearly or quarterly, we don’t take chances on our process working properly when it counts the most.
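
As a simple illustration of the “check the oven” idea, here is a minimal Python sketch of one such check: verifying backup copies against stored checksums on a schedule instead of waiting for a restore to discover corruption. The manifest format and paths are hypothetical, not part of any Storcom tooling:

# Minimal sketch: periodically verify backup files against stored checksums.
# The manifest format is hypothetical: {"<path>": "<expected sha256>", ...}
import hashlib
import json

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup_set(manifest_path: str) -> list[str]:
    """Return the backup files whose current checksum no longer matches the manifest."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    return [p for p, expected in manifest.items() if sha256_of(p) != expected]

A check like this catches silent corruption early; a periodic full DR test then proves the rest of the recovery process, not just the bits on disk.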

Obviously, my baking analogy is simplistic, but it is realistic. A baker does not want to spend all that time and those resources on a cake only to find out it needs to be thrown in the trash. Companies need to make DR testing a priority so that when they need their data most, it is available and all the processes necessary for users to access that data are in working order.

Dave Kluger – Principal Technology Architect, Storcom

 

08 Jul

vSphere 6 & VVOLs: What’s all the hype about?

VMware VVOLs is a provisioning feature for vSphere 6 that changes how virtual machines (VMs) are stored and managed. VMware has been hyping this technology for quite some time, and it’s finally here. So what is its real value?

VVOLs, which is short for Virtual Volumes, enables an administrator to apply a policy to a VM that defines its performance and service-level agreement requirements, such as RAID level, replication, or deduplication. The VM is then automatically placed on a storage array that fits those requirements.
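
The placement logic is easier to picture with a small sketch. The Python below is purely conceptual matching of a policy against advertised capabilities; it is not the vSphere SPBM/VASA API, and the array names and capability keys are invented for the example:

# Conceptual sketch of policy-based placement: match a VM's storage policy against
# the capabilities each array advertises and pick a compatible target.
ARRAYS = {
    "array-a": {"raid": "raid10", "replication": True,  "dedup": False},
    "array-b": {"raid": "raid6",  "replication": False, "dedup": True},
}

def compatible_arrays(policy: dict) -> list[str]:
    """Return arrays whose advertised capabilities satisfy every requirement in the policy."""
    return [
        name for name, caps in ARRAYS.items()
        if all(caps.get(key) == value for key, value in policy.items())
    ]

# A VM that requires RAID 10 and replication lands on array-a.
print(compatible_arrays({"raid": "raid10", "replication": True}))  # ['array-a']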

VVOLs’ other advantage is the ability to snapshot a single VM rather than taking the traditional snapshot of an entire logical unit number (LUN) that may house several VMs. This saves space on the datastore and reduces administrative overhead.

So here is the most important part: to use VVOLs, the storage hardware has to support the vStorage APIs for Storage Awareness (VASA). VMware introduced VVOLs at VMworld 2011 during a technical preview session but has now officially released the feature.

A lot of storage vendors are just getting to the point where they can support it; they must be compatible with VASA 2.0. That means writing and testing new code, which can take a while from an R&D standpoint, so don’t expect to see wide adoption of VVOLs right out of the gate.

 

There are also a number of features that you may be using, or want to use, which are not compatible with VVOLs; these are important to consider before you adopt it.

 

VMware vSphere 6.0.x features that are not interoperable with Virtual Volumes (VVOLs) include:

  • Storage I/O Control
  • NFS version 4.1
  • IPv6
  • Storage Distributed Resource Scheduler (SDRS)
  • Fault Tolerance (FT)
  • SMP-FT
  • vSphere API for I/O Filtering (VAIO)
  • Array-based replication
  • Raw Device Mapping (RDM)
  • Microsoft Failover Clustering

 

There are a few storage vendors on the market today that have already implemented their own version of VVOLs; these have all been based on NFS so that the file system is exposed. VVOLs brings many benefits, and we see good things coming from it, but it needs mainstream support from the storage vendors to be widely adopted, and the incompatible features listed above need to stop being a barrier before we really see the value.
Dave Kluger – Storcom Principal Technology Architect

