In both business and the home, we've witnessed an evolution of technology. The power of personal computing, communications, and media has extended our reach to just about everywhere, and at a breakneck pace. Traditional businesses that found success by efficiently winning and keeping customers now face a new, daunting existential challenge: designing, implementing, and integrating technology to further their sales agendas.
Everyone knows the buzzwords that orbit "digital transformation"; mark your BINGO cards accordingly! Since business and technology first became intertwined, IT departments have striven to remain relevant and ready for what's next. And with the advent of hybrid infrastructure, IT professionals have learned to tether traditional "big iron" to other applications, databases, and file systems running on a myriad of physical and virtual machines across data centers and regions.
The move to the cloud
Over the last couple of years, we've also seen a migration to private and public clouds. Cloud providers like Azure try to accommodate customers with simple compute and storage solutions. But this shift did not happen overnight. In many cases, environments have been pieced together over long periods as budget, resources, and technology became available, which is hardly the ideal way to build them out. This reactive approach presents numerous integration and support challenges, and today we often learn just enough to evolve and get by.
So, really, what have we learned?
We've learned how challenging it is to monitor data and its supporting infrastructure, and how important it is to extract useful information about performance, reliability, and cost of ownership. And that's just for on-premises data centers. With the introduction of Azure and other service providers into our environments, we're realizing the advantages and disadvantages of running compute instances in the cloud, along with their many storage options and the egress charges that come with bringing data back.
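As a rough illustration of how those egress charges add up, the sketch below estimates the monthly cost of pulling data back on-premises. The per-GB rate and free allowance are hypothetical placeholders for the example, not published pricing from Azure or any other provider.

```python
# Hypothetical illustration only: rate and free tier are placeholders,
# not published cloud pricing.

EGRESS_RATE_PER_GB = 0.08   # assumed $/GB once past the free allowance
FREE_TIER_GB = 100          # assumed monthly free egress allowance

def monthly_egress_cost(gb_transferred: float) -> float:
    """Estimate the monthly bill for bringing data back on-premises."""
    billable_gb = max(0.0, gb_transferred - FREE_TIER_GB)
    return billable_gb * EGRESS_RATE_PER_GB

# Example: restoring a 5 TB backup set to the data center each month
print(f"${monthly_egress_cost(5 * 1024):,.2f}")  # -> $401.60
```

Even at modest per-gigabyte rates, routine restores or repatriation of large data sets can become a recurring line item that rarely shows up in the original migration plan.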
Really, we've learned most of this the hard way. We use the native tools within each application and function, some of which require homegrown scripting, and even then the results may not provide visibility outside of that particular function. Alternatively, we deploy the "pray for success" approach. This popular, albeit unwise, strategy includes things like the "do it live and hope it works" disaster recovery policy. In many cases, these native application tools offer no way to build a runbook or an adequate testing framework. So we cobble together a migration policy that frequently either fails or blows well past Service Level Agreements. We've also learned that ignorance of lurking Skunkworks projects does not absolve our teams of responsibility, or shield them from the bad outcomes that follow.
How do we thrive in complexity?
Many organizations now rightfully maintain hybridized environments and take advantage of numerous tools to make infrastructure available. That is positive, but how do we address the resulting complexity beyond just "good enough"? Tackling this IT beast starts with clarity:
With clarity, you can identify what you have: true awareness of resources, storage, and workloads, plus the meaningful details of each, including inefficiencies and what is and is not protected. With clarity, you can target overutilized and underutilized CPUs and assets, high-water-mark and orphaned storage, and even those Skunkworks projects. And perhaps most important, with clarity you can command a complete understanding of your organization's business-critical applications and systems, and develop and test reliable disaster recovery plans.
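As a simple illustration of what that kind of clarity can look like in practice, here is a minimal sketch that flags over- and underutilized hosts and orphaned volumes from an inventory export. The thresholds, field names, and data shape are assumptions made for the example, not a description of how any particular analytics product works.

```python
# Minimal sketch: classify assets from a hypothetical inventory export.
# Thresholds and field names are illustrative assumptions.

OVER_UTILIZED_CPU = 0.85    # sustained CPU above this is a "hot" host
UNDER_UTILIZED_CPU = 0.10   # sustained CPU below this is a reclaim candidate

hosts = [
    {"name": "sql-prod-01", "avg_cpu": 0.92},
    {"name": "app-dev-07",  "avg_cpu": 0.04},
]
volumes = [
    {"name": "lun-0042", "attached_to": None},           # orphaned storage
    {"name": "lun-0117", "attached_to": "sql-prod-01"},
]

overutilized  = [h["name"] for h in hosts if h["avg_cpu"] >= OVER_UTILIZED_CPU]
underutilized = [h["name"] for h in hosts if h["avg_cpu"] <= UNDER_UTILIZED_CPU]
orphaned      = [v["name"] for v in volumes if v["attached_to"] is None]

print("Overutilized hosts:", overutilized)
print("Underutilized hosts:", underutilized)
print("Orphaned volumes:", orphaned)
```

The point is not the script itself but the posture it represents: once the inventory and its utilization details are visible in one place, reclaiming waste and spotting unprotected assets becomes a routine report rather than a forensic exercise.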
Visit Veritas.com/insights/aptare-it-analytics for insight into how we at Veritas Technologies approach seeking clarity on data and infrastructure.