27 Apr

Why Simplivity outperforms EMC in the Hyperconverged Space

Simplivity was founded in September 2009 and developed its product for 43 months before releasing it in April 2013. Doron Kempel, founder of Simplivity, has always had a vision to create a single platform that fully integrates with the hypervisor management platform (e.g. vCenter) and provides storage, deduplication, WAN optimization, replication, backup & restore, as well as disaster recovery and business continuity (DRBC). Simplivity has not missed the mark, and has delivered every aspect and feature on time as promised.

Kempel is no novice to startups. As founder and CEO of Diligent, Data Domain's main competitor until it was purchased by IBM, he understands data deduplication very well.

According to Morgan Stanley, Simplivity is growing three to four times faster than competing companies, including Data Domain, EqualLogic, Nimble and 3PAR.

Let’s look at how Simplivity compares to EMC’s offerings in that space.

VxRail – VSAN on white-box hardware from EMC. EMC has fumbled around for a number of years trying to figure out how to go to market with a hyper-converged solution. They understand that hyper-convergence is a very disruptive force in the market, which is part of why they allowed themselves to be sold. The monolithic storage business in the mid-tier is dying, partly due to the cloud and partly due to these technologies.

EMC has been trying to cobble a bunch of products together for a while and call the result hyper-converged. This was Vblock. That product is not hyper-converged; it's fancy marketing for three different products sold under one name.

Then they started working on EVO:RAIL under the VCE name. This was their first attempt at using VMware's native VSAN technology and bundling it with white-box hardware. They launched in August 2014, but VSAN was immature, and the many issues with early implementations hurt the product. EMC's marketing was a mess. At the same time, VSAN Ready Nodes became available, so VMware partners like HP and Dell could also start selling what is basically the same product. So who do you buy VSAN from?

VSAN is missing key functionality that competitors Simplivity, Nutanix and Maxta have all built into their products. VMware's product forces you to shell out more cash for these functions. EMC, in turn, does what it always does and tries to bolt on more products to accomplish what companies like Simplivity can do within their core product.

With the release of VSAN 6, EMC decided to try again and rebrand the offering, this time as VSPEX BLUE. That too was short-lived, and they rebranded once more as VxRail, which is built on VSAN 6.2.

All of this means a lot of disruption in this space for EMC and VMware. Add the Dell/EMC acquisition to the mix, and the level of confusion and risk skyrockets. There is enough speculation about the acquisition to put fear, uncertainty and doubt into a prospect's mind, but from our vantage point this puts Simplivity in an excellent position.

Here is why: if Simplivity had the EMC name on the front of it and VxRail was sitting next to it, I am almost 100% confident that anyone would choose Simplivity over VxRail. It is simply a better platform. At the same time, Simplivity is not part of this big mess with EMC and Dell and is somewhat sheltered from whatever the real outcome will be. VMware is not going away, and it is not going to cut off all of its partners when they are such a huge part of its revenue stream.

Then add Nutanix into the mix with the Dell/EMC acquisition. I would say that Nutanix's product is still better and provides more of what hyper-converged should than VxRail does, so what happens to them? Remember that Michael Dell has a large stake in Nutanix as a part owner. Then you also have the VSAN Ready Nodes that Dell sells. What happens to that offering? My point, again, is that Simplivity is in a great place because it is not affected by this huge merger with competing offerings from the same corporation.

Simplivity is planning its IPO, growing very fast, and has solid technology. What will happen to the company long term is not completely known, but we can look at the market and see big companies like Cisco, IBM, Hitachi and NetApp with very weak offerings in this space. The point is that Simplivity is in a good position to continue growing as it has been. Maybe it does become part of one of these larger organizations' offerings someday, and that is not necessarily a bad thing. My last point is this: with a platform this solid, Simplivity is not going out of business and closing its doors.

Here are other benefits of a true hyper-converged solution that Simplivity brings to the table and that the competition struggles with.

Simplivity provides a single interface for all aspects of the management and protection of your VMs and data, all through a vCenter plugin. There is no need for multiple products and add-ons. EMC's solution uses multiple products, added in the form of VMs, to provide functionality; for example, backups run through Avamar. When VM performance gets overrun, EMC will tell you to purchase more hardware, which means more expense.

Simplivity provides global deduplication, which allows very efficient replication and movement of VMs. In a merger or acquisition, this means a single appliance can be deployed at the new site, and all data can easily be imported and then moved to the primary site. Since Simplivity deduplicates globally, very little data actually needs to move over the WAN. EMC/VSAN does not provide this capability.
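To make the idea concrete, here is a toy sketch of the content-addressed deduplication that makes WAN-efficient replication possible. This is only an illustration of the general technique, not Simplivity's actual implementation: it uses fixed-size chunks and an in-memory dictionary standing in for the remote site's dedupe store.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks for simplicity; real systems use variable-size chunking

def chunk_hashes(data: bytes):
    """Split data into chunks and yield (hash, chunk) pairs."""
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        yield hashlib.sha256(chunk).hexdigest(), chunk

def replicate(data: bytes, remote_store: dict) -> int:
    """Ship only chunks the remote site does not already hold.
    Returns the number of bytes actually sent over the WAN."""
    sent = 0
    for digest, chunk in chunk_hashes(data):
        if digest not in remote_store:   # remote already has this chunk? skip it
            remote_store[digest] = chunk
            sent += len(chunk)
    return sent

# A VM image replicated twice: the second pass ships nothing at all.
remote = {}
image = b"A" * 8192 + b"B" * 4096        # 12 KB, but two of its three chunks are identical
first = replicate(image, remote)         # only the unique chunks cross the "WAN"
second = replicate(image, remote)        # everything is already remote: zero bytes sent
```

On the second pass nothing crosses the WAN because the remote site already holds every chunk hash; that is the property a globally deduplicated platform exploits when seeding a new site after a merger.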


David Kluger

Principal Technology Architect

Storcom Inc.

08 Oct

Recover is Named a Top DR/BC Solution with CIO Review Award

We are pleased by CIO Review’s announcement of our inclusion in their 20 Most Promising Disaster Recovery Solution Providers. This award validates what we have believed from our inception in 2000: that every company deserves a disaster recovery and business continuity solution that meets business needs both technically and financially. We strive to continually provide reliable DR/BC strategies for any size or structure of organization with our Recover solution.

CIO Review has recognized our efforts in seeking the most cutting edge strategies and technologies for an IT world that is constantly evolving. “Storcom has been on our radar for some time now for stirring a revolution in the Manufacturing space, and we are happy to showcase them this year due to their continuing excellence in delivering top-notch Disaster Recovery Solutions,” said Jeevan George, Managing Editor, CIO Review. “Storcom’s solutions continued to break new ground within the past year benefiting its customers around the globe, and we’re excited to have them featured on our top companies list.”    

As exciting as it is to be recognized for our contributions to IT solutions, knowing that our customers’ data is safe and obtainable 24/7/365 is the real reward.

Whether your organization is looking to implement its first DR/BC solution or improve upon its current one, Storcom can assist in the planning, implementation and support. Give us a call or email today to find out more information.

-Rob Everette

Sales Operations Manager, Storcom Inc.

08 Oct

CIO Review Names Storcom Among 20 Most Promising Disaster Recovery Solution Providers


The annual list showcases the 20 Most Promising Disaster Recovery Solutions Providers 2015. Storcom makes CIO Review’s Most Promising Disaster Recovery Solution Providers list for ‘Recover by Storcom’, which provides local data protection at customers’ sites.

FREMONT, CA—September 30, 2015 — CIO Review (www.cioreview.com) has chosen Storcom (Storcom.net) for its 20 Most Promising Disaster Recovery Solutions (DRS) Providers 2015. The positioning is based on Storcom’s disaster recovery and business continuity planning as well as structured cloud-based backups.

The annual list of companies is selected by a panel of experts and members of CIO Review’s editorial board to recognize and promote technology entrepreneurship. “Storcom has been on our radar for some time now for stirring a revolution in the Manufacturing space, and we are happy to showcase them this year due to their continuing excellence in delivering top-notch Disaster Recovery Solutions,” said Jeevan George, Managing Editor, CIO Review. “Storcom’s solutions continued to break new ground within the past year benefiting its customers around the globe, and we’re excited to have them featured on our top companies list.”

“Storcom is honored to be recognized by CIO Review’s panel of experts and thought leaders,” said David Kluger, Principal Technology Architect, Storcom.

About Storcom

Storcom is a company focused on the management, movement and protection of data. The company is a storage-centric solutions integrator and consulting company headquartered in Chicago, IL, with a presence in the Midwest and Southeast US.

About CIO Review

CIO Review constantly endeavors to identify “The Best” in a variety of areas important to tech business. Through nominations and consultations with industry leaders, our editors choose the best in different domains. Disaster Recovery Special edition is an annual listing of 20 Most Promising Disaster Recovery Solutions Providers in the U.S.

26 Aug

Converged vs. Hyper-Converged Infrastructure

Like the words “Digital” or “HD”, these two terms are often misused by manufacturers to position a solution at a targeted opportunity. I will not name any specific vendor, but in certain cases I have seen a salesperson call the same solution both “Converged” and “Hyper-Converged.” So what is the real definition?

Generally speaking, there are two approaches companies can take to building a converged infrastructure:

  • The hardware-focused, building-block approach of solutions like VCE (a joint venture of EMC, Cisco, and VMware), known simply as converged infrastructure. NetApp FlexPods also fit into this category, and Dell's FX platform of servers is also being called converged because it brings compute, network and storage together. All of these can have a hypervisor installed on the hardware, but that does not make them a “hyper-converged” solution.
  • The other approach leverages software to encapsulate all of what the converged infrastructure solutions provide, plus other features, in a single appliance or platform that becomes the building block for scaling out the solution. This is what we dub hyper-converged infrastructure. These solutions include, but are not limited to, Simplivity, Nutanix, VMware VSAN, Maxta, Scale Computing, and a number of even newer technology vendors that are getting into this space.

Recently I had the good fortune to visit the corporate headquarters of one of Storcom's technology partners and meet with their CEO and founder. The purpose of this blog post is to discuss, and hopefully educate our readers about, the differences between what is called a converged infrastructure solution and what is termed a hyper-converged solution, and the evolution of the two. Storcom represents a number of vendors that produce both types of solutions, so I am not trying to give a specific opinion on what I think is the best solution, but rather to educate and give some insight.

I think it is first important to lay out the framework for why we are talking about hyper-convergence in the first place. Over the last decade, extreme growth and adoption of virtualization within IT infrastructure have changed the landscape for most organizations in some way. In the x86 open-systems market we have seen a shift towards more and more integration between storage, compute, network and the hypervisor. This is what has fueled companies like Simplivity, Nutanix, Maxta and even VMware to develop their platforms, along with a host of newcomers. At the same time, the larger existing “legacy vendors” have also tried to get into this space by combining existing hardware into what they may call converged or even hyper-converged solutions. What we need to discern is what these “converged” solutions are really providing.

I think it’s also important to understand the role that companies like Google, Facebook and Amazon have had in this process. They all built their platforms on the premise of shrinking the entire stack into a single platform using low-cost commodity hardware that they could scale out massively as they needed to grow, while providing a very high level of redundancy across the data they serve.

After sitting down with Simplivity CEO and founder Doron Kempel and hearing his perspective, it became clear that the only way to really develop a true hyper-converged solution for the mainstream enterprise businesses of today is to build the platform from the ground up.

Solutions like VCE (VMware, Cisco, EMC) Vblocks and NetApp FlexPods were not built from the ground up to accomplish the task of shrinking what traditionally took six or seven products into one platform. Although they may do a good job at it, there is still a level of complexity that, no matter how hard they try, they simply can't hide. More complexity usually means more cost somewhere else, whether in hardware or administrative overhead. Even if they can match the functionality of a technology vendor with purpose-built software and hardware, they still need to certify and integrate all of the components, and in many cases, if a specific feature such as replication is needed, they may still need an additional product to provide it.

This is the same paradigm shift legacy vendors went through when storage vendors performing block-level virtualization came to market, forcing the legacy storage vendors to retrofit their products to provide the same functionality that those newer products were designed from day one to deliver.

The goal of hyper-converged solutions like Simplivity, and of the family of products now available that fit that mold, is to shrink the entire stack of products we in IT are used to managing (storage, compute, network, data protection, WAN optimization, disaster recovery and, most importantly, the hypervisor) into a single product and platform that can be very easily managed. Unlike their big brothers at Google, Facebook, Amazon and the like, ease of use for the typical IT administrator is paramount to their products' success.

As technology advances, we are starting to see more and more companies get into the hyper-converged space, because it undoubtedly solves the fundamental challenge of maintaining and managing resources that used to take multiple products. In my discussions with Doron Kempel of Simplivity, he made it clear that this was ultimately the direction and vision of his company, and that they built it that way from day one. It is important to point out that each of the vendors in this field brings something to the table, features or functionality that may be better than the next vendor's. This drives innovation, which is good for the consumer. This, as well as my discussions with Kempel, leads me to believe the feature sets that hyper-converged solutions like Simplivity provide are going to keep getting better.

Unfortunately for the larger monolithic solution providers, it means bolting together pieces and parts from their own portfolio, or from multiple vendors' portfolios, to even start to accomplish what this next generation of technologies can provide. This is not to say that these vendors' solutions don't work or don't provide value for companies that may be more focused on brand-name recognition. They will always have their place in IT infrastructure solutions.

What I hope to see is continued innovation that lets IT administrators manage their infrastructure more easily, giving them extra time to focus on the projects and initiatives being rolled out. With any innovative solution that breaks the traditional mold of how we design, deploy and manage IT infrastructure comes a new set of challenges. One area that I think will set some of these solutions apart is flexibility to expand. Unlike traditional storage, compute and network solutions, expanding a single IT resource within hyper-converged infrastructure can be difficult: adding storage automatically adds compute and networking, and vice versa. How companies like Simplivity, Nutanix and some of the other new kids on the block address this will help them gain a foothold in the market by providing better ROI than the next vendor.

The next 18 months will be interesting for all of these vendors, whether they are truly hyper-converged or not. I have already seen a lot of really cool and useful new features from a number of the vendors we follow.





05 Aug

The Future of Storage Vendors with the Public Cloud

Al Basseri wrote a very pointed editorial about the big-iron storage vendors and their cloud offerings versus the king of the cloud, AWS. His points are dead on for the three companies he listed and their ability to execute when it comes to cloud offerings, as well as hyper-converged solutions. He also comments on their all-flash solutions for the enterprise and the lack of features throughout.

What I do find interesting is that Al Basseri is the VP of Solution Architects at Tegile Systems. He does a very good job of not making any specific claims about Tegile in his editorial. He alludes to certain feature sets that are important but certainly does not explain how his technology vendor might be positioned any better than the next company to actually provide a solution that can sit alongside what he calls a “credible cloud storage story against the maturity and power of AWS.”

So what does a storage vendor do when you look at the potential benefits the cloud brings for organizations of every size? They need to change the game so that organizations still have a need or desire to make capital expenditures.

For a new company or smaller business, it's pretty obvious by now why it makes sense to move 100% of your operations to the cloud. I don't think it's hard to make that argument when you look at the administrative and capital cost savings. It's in the mid-sized business where things become a lot less clear-cut. Move your mail to the cloud? Sure, that may be easy to justify, but all of your infrastructure?

AWS and Azure are slowly but surely encroaching on what the big-iron storage vendors are not doing very well, and to a certain extent eating away at any technology vendor whose product revenue is tied to hardware designed to be used in a data center (one that is not run by one of the big cloud providers).

Al Basseri certainly does not go into whether Tegile is building features into its product that will allow it to interface with AWS or any other cloud vendor. Some of the technology vendors we work with are starting to provide capabilities to create a “hybrid” cloud. This is a logical extension of a product's capabilities.

I think this is where things are going to get really interesting in the next 18 months. As AWS and Azure get bigger and better at what they do, the hardware companies that make storage and compute solutions are going to have to get more and more creative in how they interface with them.

I will not name any names just yet, but there are some very interesting new software companies that are going to be able to provide foundations on which any organization can potentially build its own hybrid cloud, using existing hardware along with some of the big cloud providers. This would allow mid-sized organizations to really start to make a hybrid model make sense.

The point of all this is that not having a cohesive plan for how to live alongside the cloud giants is a problem that all hardware and software vendors need to grapple with.

Putting a storage product in a data center that has a direct pipe to the AWS cloud is a short-term approach to what will prove to be a much bigger long-term challenge for the current batch of hardware vendors to solve.

Lots to look at and lots to think about, but again, I do agree with a lot of what Al Basseri said in his editorial.

-Dave Kluger, Principal Technology Architect


28 Jul

VMUG Indianapolis 2015

Storcom was happy to sponsor VMUG Indianapolis 2015. We got to meet some new people, see some old friends, and find out what is going on from a user perspective in the VMware community. As a VMware certified partner, we believe we are certainly up on the technical aspects of the product, but what is also interesting about events like this is talking with end users in a forum where we can understand some of the real-world challenges they are having and learn about creative ways IT professionals are solving common problems.

Obviously the other cool part of VMUG is seeing some of the new technology vendors and what they are offering. It seemed like there was a good representation of most of the new technology vendors, although there are a few that we did not see that I think are worth mentioning.

Over the years, Storcom has worked with many emerging technology companies; it's a long list of vendors that have been very successful and moved onward and upward to become part of some of the largest companies in the storage, data protection and disaster recovery industry.

But every emerging technology company has to start somewhere. For all of these companies, whether Compellent, NetApp, Data Domain or Nimble Storage, someone purchased system number one.

So it takes a certain type of person and company who sees the value of these technologies and is willing to put their job on the line for the benefits they believe a specific technology will bring to their organization.

So with that said, there are a few technologies worth mentioning that did not make it to the show, or that did but are definitely the new kids on the block, and anyone who likes to track emerging technology companies should keep an eye on them.

I am not listing these in any order of preference, nor am I endorsing these companies specifically, but I think they are noteworthy for what they are doing and the potential value they may bring to the technology landscape:






22 Jul

Millennials Adopting Cloud Based Technology

Ali Mirdamadi has written a great editorial, “How Millennials Will Accelerate the Adoption of Cloud Technology,” on his LinkedIn page. At Storcom, a large portion of our sales force fits into the millennial generation. I couldn't agree more with Ali Mirdamadi's view that millennials are not “narcissistic, impatient, delusional, and happy-go-lucky.”


The millennials who work with us here at Storcom very much have an entrepreneurial mentality, and with that I can certainly see how they will move in the direction of using cloud-based software as a service as well as cloud-based infrastructure. It just makes good business sense. As this progresses, I think the influence of millennials, as they get older, will push SaaS-based solutions more and more into the mainstream, for all of the reasons Ali Mirdamadi mentioned.


One thing I still think is important to point out is that millennials also hold many positions within larger IT environments where cloud-based initiatives will be much harder to implement, not only from a technical standpoint but also from a political one.

In these types of organizations, I am not sure millennials will necessarily have the same impact they do in the smaller, more nimble companies we typically think of them working for.


All in all, I think this editorial has a lot of merit, and it's a brave new world we are facing. At Storcom we are also starting to see more and more new solutions that leverage a hybrid approach of on-prem and public cloud technologies, which I can also see being adopted and which will fuel some of the metrics that were cited in the editorial.


Dave Kluger is Principal Technology Architect at Storcom Inc. He has 20+ years of IT experience and runs the blog at Storcom.net.
Ali Mirdamadi is a Sr. Business Development Manager for Abacus Data Systems Inc. and currently runs the blog caseforthecloud.com, where he discusses the future of cloud computing and how it will impact businesses.

21 Jul

Commvault IntelliSnap Technology

For those of you looking into snapshot-based recovery, we highly recommend Commvault IntelliSnap technology. Gone are the days of manually integrating applications with snapshots through scripts written to combat varying levels of automation, support, functionality and awareness.

IntelliSnap technology solves this problem by providing central snapshot management across different storage platforms, automating snapshots and integrating them with backup processes.
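The core pattern a central snapshot manager automates, regardless of which array sits underneath, looks roughly like the sketch below. The `ArraySnapshotter` class, function names and catalog shape are hypothetical placeholders for illustration, not IntelliSnap's actual API:

```python
from datetime import datetime, timezone

class ArraySnapshotter:
    """Stand-in for a storage array's snapshot API (hypothetical)."""
    def snapshot(self, volume: str) -> str:
        # Arrays typically return an identifier for the new snapshot.
        return f"{volume}@{datetime.now(timezone.utc):%Y%m%dT%H%M%S}"

def protected_snapshot(app_quiesce, app_resume, array, volume, catalog):
    """The basic pattern a snapshot manager automates:
    quiesce the application, cut the snapshot, resume, then catalog it."""
    app_quiesce()                          # get the app to a consistent state
    try:
        snap_id = array.snapshot(volume)   # near-instant on the array
    finally:
        app_resume()                       # never leave the app quiesced
    catalog.append({"volume": volume, "snapshot": snap_id})
    return snap_id

# Example: snapshot a (hypothetical) Exchange volume with no-op quiesce hooks.
catalog = []
snap = protected_snapshot(lambda: None, lambda: None,
                          ArraySnapshotter(), "vol_exchange01", catalog)
```

The value of centralizing this is that the quiesce/snapshot/resume/catalog sequence is written once and reused across every application and storage platform, instead of living in per-array scripts.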

At Storcom we believe in making business better through the use of technology, and we feel that IntelliSnap will help organizations eliminate downtime while increasing productivity.

Read more about IntelliSnap technology here.

Contact Storcom’s sales team today for a consultation on IntelliSnap technology and how it can integrate with your current environment.

17 Jul

Better data management: to archive or not to archive?

Over the years I’ve had many conversations with clients about using archive technologies to offset storage costs. The observation that data is accessed less as it gets older fueled the entire concept of hierarchical storage management (HSM).

As storage solutions progressed, providing HSM within a storage array, or what came to be known as tiered storage, became a mainstay of storage management. New data gets written and read the most, so put it on the top tier of storage and let it move down as it ages. Archiving, however, is really a different concept. An archive, by definition, is “any extensive record or collection of data.” So in many organizations, what we think of as a traditional backup routine (daily/monthly/yearly backups) can serve as an archive if you have a way to catalog that data.
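The age-based placement rule behind tiered storage can be sketched in a few lines. The tier names and day thresholds below are made-up examples, not any vendor's defaults:

```python
import time

# Hypothetical tiers: (name, minimum age in days before data lands there).
TIERS = [("ssd", 0), ("sas", 30), ("nearline", 180)]

def pick_tier(last_access_epoch, now=None):
    """Age-based placement: the newest data stays on the top tier,
    and colder data is demoted one tier at a time as it ages."""
    now = time.time() if now is None else now
    age_days = (now - last_access_epoch) / 86400
    placed = TIERS[0][0]
    for tier, min_age in TIERS:
        if age_days >= min_age:   # old enough for this tier (or lower)
            placed = tier
    return placed

# A file touched today stays on SSD; one untouched for a year drops to nearline.
now = 1_700_000_000.0
fresh = pick_tier(now, now)                    # "ssd"
stale = pick_tier(now - 365 * 86400, now)      # "nearline"
```

Real HSM engines add more signals (size, owner, file type, pinning), but the core loop is exactly this kind of policy evaluation run periodically over the file population.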

In the past decade, as email and file-service data has exploded, more and more organizations have been trying to figure out how to better manage long-term retrieval of old data without deleting it from its primary location. Many companies we have worked with have implemented technologies that in some cases replace the original piece of data with a stub file, which points to the new location of that data in a proprietary compressed format, or in other cases use proxy technologies to redirect entire shares or mounts.
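A simplified sketch of the stub-file technique, assuming a plain JSON stub and gzip compression in place of a vendor's proprietary format:

```python
import gzip
import json
from pathlib import Path

def archive_with_stub(path: Path, archive_dir: Path) -> Path:
    """Move a file's contents into the archive (compressed) and leave a
    tiny stub behind that records where the real data went."""
    archived = archive_dir / (path.name + ".gz")
    archived.write_bytes(gzip.compress(path.read_bytes()))
    path.write_text(json.dumps({"stub": True, "location": str(archived)}))
    return archived

def read_through_stub(path: Path) -> bytes:
    """Transparent recall: follow the stub if present, else read directly."""
    raw = path.read_bytes()
    try:
        meta = json.loads(raw)
        if isinstance(meta, dict) and meta.get("stub"):
            return gzip.decompress(Path(meta["location"]).read_bytes())
    except (json.JSONDecodeError, UnicodeDecodeError):
        pass  # not a stub: this is a regular, un-archived file
    return raw
```

The trade-off discussed below follows directly from this design: every recall pays the cost of fetching and re-hydrating the archived copy, which is exactly what makes bulk migrations out of an archiver so slow.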

So here is where it gets complicated: the almighty cloud. How does it affect data management? We all hear about how great it is and how much sense using SaaS makes. CIOs everywhere are pushing their IT staff to look at how they can save money with hosted email and file-service technologies. What these CIOs have not taken into account is the migration process back out of the existing archiver and what that means on a number of fronts. The end-user experience will be vastly different, re-hydrating the data can be a very cumbersome process, and before you know it the TCO of the hosted email solution has skyrocketed.

We are in the midst of a number of projects where organizations are moving internal Microsoft Exchange email to Office 365. They were aware that we needed to deal with the existing archived data, but what they didn't take into account was the sheer amount of time it would add to the migration. The speed at which email data can be moved is greatly dependent on the MAPI protocol (you know we all love MAPI); with 5 concurrent VMs pulling data, the current estimate is nearly 400 days to complete the migration.
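A quick back-of-envelope model shows how a number that startling can arise. The figures below (50 TB of archived mail, roughly 0.3 MB/s per throttled MAPI stream) are illustrative assumptions for the sketch, not this client's actual numbers:

```python
def migration_days(total_gb: float, mb_per_sec_per_stream: float, streams: int) -> float:
    """Back-of-envelope migration time, assuming the streams run 24x7 and
    scale perfectly linearly (real MAPI migrations rarely do either)."""
    total_mb = total_gb * 1024
    seconds = total_mb / (mb_per_sec_per_stream * streams)
    return seconds / 86400

# Illustrative: 50 TB of archived mail, 5 streams at ~0.3 MB/s each
# lands right around the 400-day mark.
days = migration_days(50 * 1024, 0.3, 5)
```

The exercise makes the underlying point: when per-stream throughput is protocol-limited, the only levers are more concurrency or less data to move, and an archiver full of stubbed mail inflates exactly the wrong side of that equation.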

When Microsoft built the amazing ROI analysis for this company, they failed to build that into the TCO. The point of all this is that as we see more and more companies trying to leverage the public cloud and all the benefits it provides, it is very important that they consider which existing technologies are in use and how those may affect the migration. When SaaS companies talk about their migration strategies, they are always talking about the best case, not what may happen in the real world. Lastly, it's important to think about how your existing archiving strategy may impact your overall disaster recovery and data protection strategy when you do move to the cloud. Access to old data may still be a requirement, and moving all of it to the cloud may not always be the most cost-effective option.

Next week I will write some more on this topic and discuss what new cloud based archive technologies may be a good alternative to managing data growth.

Dave Kluger – Principal Technology Architect

10 Jul

What do Baking and DR testing have in common?

Growing up in NYC and working at a bakery gave me a deep understanding of the processes that go into baking quality cakes and baked goods.  One of the tasks I had at our bakery was making the homemade blondies and brownies each day.  So you may ask, what does this have to do with Disaster Recovery testing? Well quite a lot.

At Storcom, we consistently see that disaster recovery testing is one of the areas within IT that many mid-tier organizations push to the back burner. When juggling the demands of today's IT needs, it's easy to push this task off the priority list. When it comes to protecting data in general, we all know that we need to do “backups,” but why? Simple: a time may come when we need some part, or all, of a data set brought back to a point in time.

When baking a cake, you go through a methodical process of gathering the right ingredients, mixing them, and finally baking. When a baker puts a sheet pan in the oven, they don't just set a timer, walk away, and come back 30 minutes later when the recipe says it's supposed to be done. You consistently come back to the oven while doing other tasks to check the cake and make sure it's baking properly; if one part is cooking faster than another, you might turn the pan or move it to another rack.

Checking your data integrity and disaster recovery process should be no different. Most companies are simply waiting for the “timer” to go off, only to find out that their “cake” is ruined. Part of the “Recover by Storcom” solution is making sure all of our clients' data is regularly checked for integrity, and scheduling full disaster recovery tests on a periodic basis. Whether they are yearly or quarterly, we don't take chances on our process working properly when it will count the most.
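The "keep checking the oven" step can be as simple as recording checksums of backup artifacts and re-verifying them on a schedule. This is a minimal sketch of that idea, not Storcom's actual tooling:

```python
import hashlib
import json
from pathlib import Path

def record_checksums(backup_dir: Path, manifest: Path) -> None:
    """After each backup run, record a SHA-256 digest for every artifact."""
    sums = {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(backup_dir.iterdir()) if p.is_file()}
    manifest.write_text(json.dumps(sums))

def verify_checksums(backup_dir: Path, manifest: Path) -> list:
    """The periodic check: return artifacts that changed or vanished."""
    expected = json.loads(manifest.read_text())
    bad = []
    for name, digest in expected.items():
        f = backup_dir / name
        if not f.exists() or hashlib.sha256(f.read_bytes()).hexdigest() != digest:
            bad.append(name)
    return bad
```

Run `verify_checksums` on a schedule and an empty list means the "cake" is still good; anything else is the early warning that a restore would fail when it counts the most.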

Obviously, my analogy to baking a cake is simplistic, but it is realistic. A baker does not want to spend all of that time and those resources baking a cake only to find out it needs to be thrown in the trash. Companies need to make DR testing a priority so that when they need their data the most, it is available and all the processes necessary for users to access that data are in working order.

Dave Kluger – Principal Technology Architect, Storcom



Copyright © 2013 Storcom • All Rights Reserved.