Cloud Archives | TierPoint, LLC. Power Your Digital Breakaway. We are security-focused, cloud-forward, and data center-strong, a champion for untangling the hybrid complexity of modern IT, so you can free up resources to innovate, exceed customer expectations, and drive revenue. Wed, 07 Feb 2024 19:13:17 +0000

Online Gaming in PA: Finding the Right Data Center Provider https://www.tierpoint.com/blog/online-gaming-in-pa-finding-the-right-data-center-provider/ Fri, 27 May 2022 15:52:05 +0000

In Pennsylvania (PA), online casinos and internet gaming (iGaming) are legal. However, the state requires online gaming companies to work with a registered gaming service provider certified by the Pennsylvania Gaming Control Board (PGCB). The state ensures the integrity of online gaming through licensing requirements and strict enforcement. More states are expected to adopt the PA online gaming model.

To achieve compliance and certification in Pennsylvania, businesses with online gaming applications turn to data center providers and their certified partners for guidance and assistance. In addition to the PA gaming board compliance benefits, data center providers help businesses overcome some of the common digital infrastructure hurdles, like physical data center security, cybersecurity, threat mitigation, data retention, and disaster recovery.

We examine two of the biggest benefits of using a data center provider for online gaming in Pennsylvania.

Hosting online gaming with data center colocation services

Some online gaming businesses build and manage on-premises data centers in Pennsylvania. Many others look to third-party data center providers for colocation (colo) services in the state.

What is data center colocation?

Colocation is the practice of sharing third-party space in a data center. Instead of housing your equipment in your organization’s on-premises data center, your IT equipment will be housed in a facility managed by a colocation provider. Typically, data center colocation facilities provide floor space, cooling, power, and physical security. This allows organizations to deploy a data center facility without having to buy or manage it.

Moving off-premises to a colocation data center provider brings many benefits, like:

  • Reducing the burden on IT staff of managing day-to-day data center tasks
  • Increasing IT infrastructure resilience, with an extra layer of protection against disasters
  • Improving latency and performance through edge data center benefits
  • An interconnected data center footprint throughout the United States
  • 24/7/365 physical security at the data center
  • Shifting capital expenses (CapEx) to operational expenses (OpEx)
  • Additional connectivity options and network services
  • Access to additional managed services, like security products, managed cloud services, remote hands support, and disaster recovery solutions

Colocation simplifies management; the facility, data center infrastructure, experienced IT personnel, and managed data center services are ready for you. That’s why members of the online gaming industry use data center providers to set up and maintain equipment, security, and disaster recovery solutions for PA gaming. To learn more, see the most important elements of a modern data center infrastructure.

Data center providers offer a variety of network connectivity options; routing and switching infrastructure, power redundancy, and physical security controls are additional benefits of colocation. They also offer a 100% uptime guarantee for power, cooling, and space, which gives business leaders peace of mind and increases game availability for players.
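
The value of an uptime guarantee is easier to grasp when percentages are converted into downtime. As a rough illustration (the SLA figures below are generic examples, not any provider's actual terms), a few lines of Python show how much annual downtime common availability levels allow:

```python
# Hypothetical helper: convert an availability percentage into the
# downtime per year that the SLA permits.

SECONDS_PER_YEAR = 365 * 24 * 3600

def annual_downtime_seconds(availability_pct: float) -> float:
    """Seconds of downtime per year allowed by a given availability %."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.9, 99.99, 100.0):
    minutes = annual_downtime_seconds(sla) / 60
    print(f"{sla}% uptime -> {minutes:.1f} min/yr of allowable downtime")
```

Even the jump from 99.9% to 99.99% is the difference between roughly nine hours and under an hour of downtime a year, which is why a 100% power, cooling, and space guarantee matters for always-on gaming workloads.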

Read our Strategic Guide to Colocation and Data Centers to learn more about the value of data center colocation.

Overcoming cybersecurity challenges with security services

Cyber attackers frequently target online games with ransomware, DDoS, and other attacks. The need for comprehensive cybersecurity tools and an IT security strategy is paramount. When looking for a data center provider to host online games in PA, businesses should look out for these security features and offerings:

  • Cybersecurity audits
  • Next-generation firewalls
  • Intrusion detection and prevention
  • DDoS mitigation tools
  • Endpoint protection
  • Encryption

Compliance with online gaming regulations in Pennsylvania is a challenge. Requirements include interactive gaming testing and controls (like software authentication and security policy) and gaming platform requirements (like data logging and security, records retention, and disaster recovery). Together with secure colocation and hosted private clouds, TierPoint provides proactive managed security and disaster recovery services for online gaming.

Secure and compliant data center services help online gaming companies ensure regulatory compliance. TierPoint data centers support these compliance standards: SSAE 18 Type II and SOC 2 Type II, HIPAA/HITECH, GLBA, PCI DSS v3.2, NIST SP 800-53 (FISMA), SOC 2 + HITRUST, and EU-US Privacy Shield.

TierPoint provides resources to help our customers achieve and maintain compliance for PA online gaming. This includes expert guidance from approved registered gaming service providers certified by the Pennsylvania Gaming Control Board (PGCB).

Read our Strategic Guide to IT Security to understand the key threats and protections available against some of the biggest cybersecurity threats all industries face.

Your data center guide to online gaming in Pennsylvania


At TierPoint, we work with experts who specialize in preparing businesses for PA gaming board certification and can help your business get to market and maintain compliance. With data centers in Allentown, Bethlehem, Lehigh Valley, Philadelphia, Valley Forge, and the Philadelphia suburbs, we can help you achieve your digital infrastructure goals. In addition to cybersecurity and colocation data center solutions, we offer cloud & hybrid cloud services, disaster recovery solutions, and a host of managed services to help you get set up to operate in Pennsylvania. Contact our sales team to learn more.

Dave Callan has sold mission-critical data center solutions throughout his career, working for several major infrastructure and data center services providers, each specializing in designing, delivering, and supporting high-availability solutions. Today, Dave is responsible for TierPoint’s Atlantic Region business, which includes the Commonwealth of PA and several surrounding states. For more information on TierPoint’s offerings, Dave can be reached at david.callan@tierpoint.com.

The Benefits of Colocation in a Pennsylvania Data Center https://www.tierpoint.com/blog/the-benefits-of-colocation-in-a-pennsylvania-data-center/ Mon, 26 Apr 2021 23:00:00 +0000

It’s easy to assume the growth of hyperscale clouds like AWS, Azure, and GCP has impacted the colocation market, but colocation remains a strong choice for businesses today. In this post, we focus on the benefits of colocation by narrowing in on one of the best geographies for data centers in the U.S.: Pennsylvania. We’ll look at why so many companies choose to house their workloads in a Pennsylvania data center and why colocation is often their data center solution of choice.

Why Pennsylvania for a data center?

When developing a data center location strategy and selecting a data center location, it’s not always about what the location offers. Just as often, it’s what the location doesn’t have that makes it suitable for housing sensitive workloads and meeting high-availability computing needs. Pennsylvania is one such location:

  • It is not a very seismically active part of the country
  • It is surrounded by hills, making tornadoes less likely
  • It is not as susceptible to flooding as other parts of the country

Population density within Pennsylvania is also low compared to many other Northeastern states. Philadelphia is the largest city in Pennsylvania (1.5M people), but the number of people per square mile is less than half that of New York City. Especially when coupled with a high volume of business traffic, bandwidth used by consumers in a densely populated area can put a strain on network availability. For businesses that need reliable connectivity and lots of bandwidth, low population density is a big benefit.

While less populated, Pennsylvania is still very close to major centers of business. Three of the major markets served by Pennsylvania data centers include New York City, Baltimore, and the District of Columbia. The furthest commute from some of the popular Pennsylvania data center locations to these major metros is just over three hours; the shortest is less than an hour.

Businesses that choose to colocate will typically own (or lease) their hardware and often take all or most of the day-to-day responsibility for managing it. It’s very important to these companies that the data center is within a relatively easy commute. Distance is also a critical factor when choosing a disaster recovery site. They want their disaster recovery data center to be close enough to visit if they need to, but far enough away that it remains unaffected by a regional disruption.

The benefits of Pennsylvania for data center colocation

We’ve talked about the benefits of housing your data in a Pennsylvania data center. Let’s now turn to how a colocation facility in Pennsylvania can help meet some of an organization’s other critical requirements.

Security and compliance

Companies that choose colocation services are often trying to balance security and compliance with cost structure. They want to decommission their on-premises data center and convert at least some of their CapEx into OpEx. However, maintaining responsibility for the hardware they own or lease gives them a better sense of control.

Online gaming is another type of business served by Pennsylvania data centers. Dave Callan, TierPoint’s V.P. of Sales for the Atlantic Region, recently wrote about how colocating equipment in Pennsylvania can help these businesses achieve their IT goals. Read his full blog post here: Online Gaming in PA: Finding the Right Data Center Provider

High availability and low latency

Many hospitals and major financial institutions choose to house their equipment in data centers located in Pennsylvania. These organizations have high availability requirements, and they need low latency connections so they can create a better customer experience. All of the benefits of housing workloads in a Pennsylvania data center that we have already mentioned are vitally important to them.

Managed Services

Banks and hospitals also have stringent compliance and security requirements, and they often feel most comfortable taking a hands-on approach to their hardware. Data center colocation in Pennsylvania gives organizations like these the best of both worlds. Data center providers often offer additional managed services, such as remote hands and IT security monitoring, to help fill gaps when staff can’t be there in person.

Business Continuity options

Business continuity workspace is another benefit of data center colocation with a provider. Data center providers often allocate hundreds of seats and meeting room space for clients to use in the event of a disaster.

During COVID-19, many businesses were forced to rethink their work-from-home policies, and they discovered it was possible to send at least a portion of their workforce home. However, a different type of disaster could quickly impact connectivity to the home, and business continuity workspace gives these employees a place to go.

Our five key Pennsylvania colocation data center regions

In our experience, we have found these regions to be the best data center locations for our clients. Here are some of the benefits of each:

Valley Forge

Our Valley Forge data center houses over 600 clients in a 137K square foot facility and offers every product in our portfolio. While many of these customers house their primary workloads in Valley Forge, the data center is also used as a recovery site for many businesses based in Baltimore.

Come see our state-of-the-art data center in Valley Forge, PA

Allentown

Our Allentown-TekPark data center, at 122K square feet, is often the choice for our D.C. clients. Although it’s a little further away than Baltimore, TekPark is set up to handle the level of computing power required by these clients. In fact, we just completed a major power upgrade to this facility to increase computing density, ensure redundant power, and make it hyperscale-ready.

Come see our state-of-the-art data center in Allentown, PA

Lehigh Valley and Bethlehem

Our data centers on Courtney Street and in the Lehigh Valley in Bethlehem are smaller sites, at 25.9K and 27.7K square feet respectively. Because they are a few hours outside the major metro zones, many of our customers choose Bethlehem for disaster recovery, and we’ve configured these data centers to handle their high-compute needs.

Philadelphia

At 25.7K square feet, our Philadelphia data center is also a great disaster recovery site, but its strategic location in the Philadelphia Navy Yard also makes it a popular choice for a primary production site.

All five of our Pennsylvania data centers are connected by a dark fiber network ring that can provide sub-2ms connections. We also have dark fiber connectivity down to 401 North Broad in Philadelphia and up to 60 Hudson and 11 8th Ave in New York City. Our fast connections to these well-known data centers allow us to offer faster recovery times to clients using our Pennsylvania data centers for disaster recovery.

Many of our clients take a hybrid approach to cloud computing (public and private), housing workloads in a mix of on-premises data centers, TierPoint cloud services, colocation, and disaster recovery sites. No matter what combination they choose, all of our Pennsylvania customers benefit from our security-first approach. This includes industry-standard security best practices such as checkpoints, gates, fences, 24x7x365 on-site personnel, badge/photo ID access, biometric access screening, secure cages, and full-building video capture.

Come and see one of our Pennsylvania colocation data centers

If you’d like to learn more about the benefits of colocation, download our Strategic Guide to the Data Center and Colocation. This resource goes deeper into how colocation works, the advantages of colocation, and how colocation can help optimize a hybrid IT infrastructure.

Learn more about our data centers to understand why we’re one of the best colocation providers in Pennsylvania. To help you decide whether one of our Pennsylvania data centers is right for your needs, we’ve posted our data center spec sheets online:

Are you ready to see one of the data centers for yourself? One of our expert advisors would be happy to give you an on-site tour of any of our Pennsylvania data centers.

Schedule a tour to learn more about our Pennsylvania data centers today.

Is Edge Computing the Next Big Digital Infrastructure Trend? https://www.tierpoint.com/blog/is-edge-computing-the-next-big-digital-infrastructure-trend/ Tue, 16 Mar 2021 18:30:05 +0000

The traditional IT infrastructure has evolved from a single, central data center to a connected constellation of services and devices. Those services are all dispersed across multiple cloud providers and platforms. However, this distribution of resources often faces one major obstacle: latency. Fortunately, edge computing is an infrastructure model aimed at boosting performance and reducing latency across widely distributed networks. We examine how edge computing is influencing digital infrastructure in 2021.

An edge computing overview

Edge computing is a model in which information processing (data and compute) is physically located close to the things and people that produce or consume that data.

Depending on the use case, an edge deployment may be anything from equipment in a colocation data center to a computer closet in a branch office or an edge-configured virtual machine at a local cloud provider.
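
A quick back-of-envelope calculation shows why proximity matters. Assuming the common rule of thumb that signals travel through fiber at roughly 200 km per millisecond (about two-thirds the speed of light), the best-case round-trip propagation delay scales directly with distance:

```python
# Rough sketch: best-case round-trip propagation delay over fiber.
# Assumption: ~200 km/ms signal speed in fiber (a common rule of thumb).

SPEED_IN_FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Propagation delay only; ignores queuing, routing, and processing."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

print(round_trip_ms(50))    # nearby edge site: 0.5 ms
print(round_trip_ms(4000))  # distant cloud region: 40.0 ms
```

Real-world latency adds queuing, routing, and processing delays on top of propagation, but the distance term alone already separates a nearby edge site from a distant cloud region.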

Before we discuss how edge computing is used by enterprises, here are the top five examples of industries innovating with edge computing:

  • Manufacturing
  • Transportation, logistics, and autonomous vehicles
  • Healthcare
  • Media and entertainment
  • Retail

This list will likely grow, however. Any organization that holds virtual meetings, has remote workers using virtual desktop software, or runs performance-heavy applications, such as artificial intelligence (AI), machine learning (ML), and business analytics, over a network will benefit from an edge deployment.

Additionally, Internet-of-things (IoT) devices, such as environmental monitors, factory floor robotics, or intelligent traffic controllers, will also need the localized processing capabilities and real-time communication that edge computing provides.
IDC Technology Spotlight Key Trends Driving Enterprises Toward the Future of Digital Infrastructure in 2021

How enterprises use edge computing

Edge computing brings performance boosts and cuts costs for a range of current and future use cases. Some of the most common examples include:

Distributed workforces

Cloud computing was the first step toward a “work anywhere” model. However, remote users encountered slow and unpredictable bandwidth. By leveraging edge resources, businesses can potentially improve application performance and the overall user experience for remote workers.

Tracking equipment and assets

Many industries including manufacturing, construction, and oil and gas maintain expensive equipment in the field or on factory floors. They must keep track of the equipment’s location, condition, and current usage. Maintaining an up-to-date record depends on rapid communication.

Predictive equipment maintenance

Likewise, equipment and machinery need to be kept in working condition. An unexpected failure can cost a company lost productivity and, potentially, the failure to meet key deadlines.

Monitors can send an alert if a part is wearing out faster than expected or when the equipment needs a tune-up. By locating edge computing resources near the equipment, companies can have real-time updates on equipment performance.
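
As a simplified sketch of the idea (the sensor readings, window size, and wear threshold below are made up for illustration), an edge node might flag equipment locally when a rolling average of readings drifts past a threshold:

```python
# Illustrative sketch: flag equipment whose rolling average sensor
# reading exceeds a wear threshold, as an edge node might do locally
# before anything is sent upstream. Values here are hypothetical.

from collections import deque

def make_monitor(threshold: float, window: int = 5):
    readings: deque = deque(maxlen=window)

    def check(value: float) -> bool:
        """Record a reading; return True if the rolling mean exceeds threshold."""
        readings.append(value)
        return sum(readings) / len(readings) > threshold

    return check

check = make_monitor(threshold=7.0, window=3)
for v in (5.0, 6.0, 8.0, 9.0, 10.0):
    if check(v):
        print(f"ALERT: rolling average high after reading {v}")
```

Because the check runs next to the equipment, an alert can fire in near real time rather than after a round trip to a distant data center.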

Monitoring patients

Hospitals are increasingly using monitors and other smart devices to ensure the well-being of patients. Medical equipment and patient monitors are constantly producing alerts and data that must be analyzed for a quick response and, later, stored. An edge server can process patient data quickly and, because the data stays within the hospital network, without risking a breach of HIPAA regulations.

Staying compliant with regulations

Companies that must comply with regional and international consumer data regulations can leverage edge computing to ensure that sensitive consumer data stays within national or state borders. By keeping data in edge servers in the geographic locations of their customers, they can better comply with local data privacy and data sovereignty laws.
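
In practice, this can be as simple as mapping each customer's jurisdiction to an in-region edge site. The sketch below is purely illustrative; the region codes and site names are hypothetical, not real locations:

```python
# Hypothetical example: keep data in-region by resolving each customer's
# jurisdiction to an edge site inside that jurisdiction, and refusing
# rather than routing cross-border. Region codes and names are made up.

EDGE_SITES = {
    "EU": "edge-frankfurt",
    "US": "edge-philadelphia",
    "CA": "edge-toronto",
}

def site_for(customer_region: str) -> str:
    """Return an in-region edge site for the customer, or raise."""
    try:
        return EDGE_SITES[customer_region]
    except KeyError:
        raise ValueError(f"no in-region edge site for {customer_region!r}")

print(site_for("EU"))  # edge-frankfurt
```

The key design choice is failing closed: if no in-region site exists, the request is rejected instead of silently sending data across a border.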

New infrastructure technologies support the edge

Edge computing isn’t a standalone technology. Besides the cloud, two other important technologies that support the growth of edge development are software-defined infrastructure and hyperconverged infrastructure.

Software-defined infrastructure (SDI)

Software-defined infrastructure (SDI) is a composable architecture that allows developers to define IT infrastructure resources (storage, compute, networking, and other resources) using a software abstraction layer. With SDI, a developer can break down resources into individual edge computing resources, located where they are needed most, and reallocate them as workloads and other needs change.

SDI provides greater flexibility than fixed or static hardware-based resources, which must be physically replaced as needs change. Allocating resources using SDI can be done in minutes as compared to the days or weeks required with a traditional hardware procurement cycle.
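
A toy model helps illustrate the contrast. In the hypothetical pool below (not a real SDI API), resources are just data, so provisioning and reallocating them is a software operation rather than a hardware change:

```python
# Conceptual sketch only, not a real SDI interface: infrastructure
# resources defined as data can be allocated and released in software.

from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    cpu_cores: int
    storage_gb: int
    allocations: dict = field(default_factory=dict)

    def allocate(self, name: str, cores: int, gb: int) -> None:
        """Carve out resources for a workload, if capacity allows."""
        if cores > self.cpu_cores or gb > self.storage_gb:
            raise RuntimeError("insufficient capacity")
        self.cpu_cores -= cores
        self.storage_gb -= gb
        self.allocations[name] = (cores, gb)

    def release(self, name: str) -> None:
        """Return a workload's resources to the pool."""
        cores, gb = self.allocations.pop(name)
        self.cpu_cores += cores
        self.storage_gb += gb

pool = ResourcePool(cpu_cores=64, storage_gb=2048)
pool.allocate("edge-app", cores=8, gb=256)  # provision in software, in seconds
pool.release("edge-app")                    # reallocate as workloads change
```

Against a traditional procurement cycle, the same change would mean ordering, racking, and cabling hardware over days or weeks.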

Hyperconverged infrastructure (HCI)

Hyperconverged infrastructure (HCI) is a related software-based architecture that tightly integrates IT infrastructure resources (storage, compute, networking, virtualization, etc.) into a single plug-and-play appliance or software stack. A single HCI instance might serve as an edge “data center” or be clustered with other HCI instances. HCI can take advantage of software-defined infrastructure and can have its configuration automated and managed remotely.

How your digital infrastructure can gain edge computing benefits

Most edge deployments require customization due to the unique needs of different businesses. While there are many edge solutions coming onto the market, they must still be customized to fit different use cases. Customization requires substantial experience in cloud services, SDI, and networking. As IDC notes in its Spotlight, even small modifications to an edge compute stack may significantly change its ability to serve a particular use case.

A collaboration with an experienced cloud provider, hosting company, or professional service provider can supply the expertise to ensure a successful edge deployment. In addition, outside partners can provide physical support such as regional data centers, colocation facilities, and links to major cloud platform providers.

Cloud service providers with edge data center experience, like TierPoint, can help with edge planning and deployment as well as with the overall modernization of your digital infrastructure.


Hyperconverged Infrastructure (HCI) is Changing the Data Center https://www.tierpoint.com/blog/hyperconverged-infrastructure-hci-is-changing-the-data-center/ Tue, 20 Oct 2020 00:25:43 +0000

Data center managers have faced massive changes to traditional data center infrastructure over the past decade, from software-defined networking and edge computing to 5G networks and AI. These innovations change the makeup and requirements of enterprise data centers. They also place greater demands on data center staff, who must research, configure, implement, and maintain these new technologies. Innovation brings fresh opportunities, but it also makes the work of data center managers considerably more complex.

There is one new technology, however, that promises to simplify data center architecture and make it easier to manage. Hyperconverged Infrastructure (HCI) consolidates IT resources such as storage, memory, and processing into uniform building blocks, which are sold as either hardware appliances or virtual HCI stacks.

HCI uses off-the-shelf commodity components: typically x86 boxes equipped with identical networking and storage hardware, a hypervisor, and management software. Each unit or appliance serves as a miniature data center in a box, which can be scaled up by simply attaching more boxes. This modular infrastructure enables organizations to build new, more streamlined data centers or quickly add resources to an existing data center.

The demand for HCI is growing rapidly. The global market for HCI products and services was estimated at $6.1 billion in 2019 and is expected to reach $22.2 billion by 2027. Adoption of HCI is rising the most among organizations that need to reduce their IT costs, speed-up data center deployments, simplify IT management and operations, or create a more scalable and flexible infrastructure capable of easily meeting future IT demands.

HCI evolved from the concept of converged infrastructure (CI), which is a collection of preconfigured hardware and software that an IT manager can purchase and install themselves. It’s somewhat like a make-your-own-tacos kit that comes with packets of sauce, spices, cheese, and taco shells in a single box. With CI, IT managers avoid having to check for hardware compatibilities, order from multiple vendors, and configure everything from scratch.

HCI takes converged infrastructure one step further by assembling and integrating all of the parts into a plug-and-play unit, more like a TV dinner than a DIY kit. If your HCI-built data center needs more capacity, just buy more HCI units. This ease of scalability lets enterprises expand very quickly as their needs change.

Key advantages of HCI within a data center infrastructure

In addition to scalability, HCI appliances offer four other key advantages in data center design and management.

Lower-cost components

Most HCI appliances are based on off-the-shelf x86 servers, which helps enterprises dramatically cut costs and lower procurement complexities. The organization just needs to procure one type of device instead of separate proprietary storage arrays, controllers, and networking components every time they need to add capacity. This single-vendor approach means that businesses can get technical help much faster and implement patches and upgrades more smoothly, without confusion and a multitude of calls to different suppliers.

Software-defined flexibility

In a software-defined data center, infrastructure components are treated as services. These virtual infrastructure services can be managed and configured remotely, allowing IT to provide more, or less, storage capacity, memory, processing power, and other software-defined resources as needed. The software-defined infrastructure of HCI means that data center resources can be updated on demand.

Resource optimization

Hyperconverged infrastructure optimizes the usage of compute, storage, and network resources by treating them as pooled resources within clusters. Should one type of workload or application demand more memory, storage capacity, or processing power, the unused resources on other clustered HCI appliances can provide it. Unified management allows these assets to be discovered, pooled, and provisioned to applications based on need.
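
A simplified model of this pooling (the node names and capacities are invented) shows how spare capacity on one appliance can serve demand that another node cannot meet alone, using basic first-fit placement:

```python
# Toy model of pooled cluster capacity: place a workload on the first
# clustered node with enough free capacity. Node sizes are hypothetical.

def place(workload_gb: int, nodes: dict) -> str:
    """First-fit: assign the workload to the first node that can hold it."""
    for name, free_gb in nodes.items():
        if free_gb >= workload_gb:
            nodes[name] = free_gb - workload_gb
            return name
    raise RuntimeError("cluster exhausted; add another HCI node")

cluster = {"hci-1": 100, "hci-2": 500}
print(place(300, cluster))  # hci-1 is too small, so hci-2 serves it
```

Real HCI schedulers weigh CPU, memory, and data locality as well, but the principle is the same: capacity is discovered and provisioned cluster-wide rather than per box.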

Unified management

All aspects of the HCI appliance – the computing capacity, file storage, memory, and network connectivity – are managed through a single console. HCI clusters, both local and remote, are administered together, greatly simplifying IT management.

IT organizations that choose the HCI model also gain advantages in terms of IT labor costs and productivity. IT teams need not wrestle with system deployment, integration, upkeep of individual hardware components, and other data center management challenges that are common with three-tier data center infrastructure.

In addition, organizations have less need for IT specialists in areas like storage and networking. With HCI, IT generalists can handle most of the work, reducing the cost of expensive consultants.

Also read: How Hyperconverged Infrastructure Works with Private Cloud

HCI empowers cloud development

Flexibility, scalability, and ease of management are all important in cloud infrastructure. Organizations developing their own private clouds can benefit from using HCI as it nearly eliminates the need to vet multiple hardware products, configure them, and deploy individual compute, storage, and network components. The labor savings gives IT staff more time to work on cloud application development and other business-critical IT projects.

Likewise, HCI’s rapid scalability and compact infrastructure is gaining traction among public cloud providers who are incorporating HCI into their own infrastructure and also developing products based on HCI.

Also read: The Benefits of Hyperconverged Infrastructure for Disaster Recovery

Cloud providers are exploring more ways to create niche HCI platforms and services targeted at unique use cases. Providers have HCI solutions tailored for specific uses, such as data center consolidation, private cloud development, edge computing, remote office infrastructure, and virtual desktop environments. AI is another use case that could benefit from a customized HCI platform. While HCI doesn’t solve all infrastructure issues, it does promise to be a useful alternative for organizations seeking to simplify their data center infrastructure or address specific IT use cases quickly and efficiently.

Exploring Hyperconverged Infrastructure for your environment?

TierPoint offers support and consulting services on HCI and other infrastructure questions. We also provide HCI solutions, powered by our Hosted Private Cloud, in our more than 40 data centers across the U.S., along with a full menu of disaster recovery, colocation, cloud, and security solutions. Contact us to see how we can help you assess and find the right solutions for your IT environment.


What Happens When Hyperconvergence Meets the Hybrid Cloud? https://www.tierpoint.com/blog/what-happens-when-hyperconvergence-meets-the-hybrid-cloud/ Wed, 07 Oct 2020 18:53:29 +0000

Applications have diverse requirements, and rarely can a single cloud computing environment meet the needs of every application. Data analytics and AI applications have different IT needs from email or mobile sales apps. Streaming video needs high-performance processing while video editing needs massive amounts of storage and memory. To meet all of these storage, memory, and processing needs, without breaking the budget, organizations have adopted multicloud and hybrid cloud environments. To ensure their cloud environments are running as efficiently as possible, they turn to hyperconvergence.

Hybrid IT environments come in many flavors, from multicloud setups that mix public and private clouds across multiple providers or platforms to hybrid clouds that combine public cloud, private cloud, and non-cloud workloads, including workloads in a colocation facility. The big challenge is not only selecting the right environments for different workloads, but also managing and maintaining all of the moving parts within those environments. Hybrid cloud management has become increasingly complex.

The IT industry has struggled to reduce the complexity of hybrid and multicloud management, through cloud management applications, increased automation, cloud orchestration tools, and software-defined storage and networking. Software-defined infrastructure (SDI) offers greater flexibility in allocating and managing computing resources.

This is where hyperconvergence comes in. Hyperconverged infrastructure (HCI) is an implementation of software-defined infrastructure. HCI consolidates IT resources such as storage, memory, and processing into uniform building blocks, which are sold as either hardware appliances or virtual HCI stacks. These hardware or software nodes contain the CPU, networking, storage, memory, and hypervisor and may be scaled by adding or subtracting HCI nodes. The hypervisor manages virtual machines (VMs) on each node that run the IT resources and applications.

What is converged infrastructure or “converged”?

A converged infrastructure solution is a bit like a data-center-in-a-box, with all the networking, storage, and server hardware pre-selected, pre-configured, tested, and unified on a single hardware appliance. Alternatively, an IT department that already has some of the vendor-specified hardware can buy just a converged infrastructure reference architecture on which to base its own custom solution. This DIY approach may be attractive to IT departments that want to re-use existing hardware investments or that need a highly customized converged infrastructure solution.

What is hyperconverged infrastructure or “hyperconvergence”?

HCI is a software-defined “platform-in-a-box” with networking, computing, and storage services tightly integrated and installed on a commodity x86 server. These HCI servers can then be stacked and managed as clusters. An HCI platform includes software-defined storage, a hypervisor for virtualized computing, an operating system, and virtualized networking, all managed through a single system management console. An organization can also opt to build its own HCI clusters by purchasing a vendor’s HCI software platform and implementing it on any x86 server on the vendor’s list of approved hardware.

The benefits of hyperconvergence for hybrid cloud environments

Scalability

A big hyperconvergence benefit for hybrid cloud is scalability. While individual HCI devices are limited by the storage, CPU, and memory capacity of the hardware, they can be clustered to share storage space and data between the hosted VMs. If more VMs, memory, or storage capacity is required, more nodes are simply added to the cluster. Additionally, an IT organization using software HCI can implement it on a high-performance physical server customized to its needs and expandable as demand for resources grows.
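The scale-out arithmetic is easy to make concrete: cluster capacity is simply the sum of its nodes' resources. A minimal Python sketch, with hypothetical node specs chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HciNode:
    cpu_cores: int
    ram_gb: int
    storage_tb: float

def cluster_capacity(nodes):
    """Aggregate resources across all nodes in an HCI cluster."""
    return {
        "cpu_cores": sum(n.cpu_cores for n in nodes),
        "ram_gb": sum(n.ram_gb for n in nodes),
        "storage_tb": sum(n.storage_tb for n in nodes),
    }

# Start with a three-node cluster of identical commodity nodes...
node = HciNode(cpu_cores=32, ram_gb=256, storage_tb=15.0)
cluster = [node] * 3
print(cluster_capacity(cluster))  # {'cpu_cores': 96, 'ram_gb': 768, 'storage_tb': 45.0}

# ...then scale out by simply adding a fourth identical node.
cluster.append(node)
print(cluster_capacity(cluster))  # {'cpu_cores': 128, 'ram_gb': 1024, 'storage_tb': 60.0}
```

Because every node is identical, capacity planning reduces to counting nodes rather than sizing bespoke servers.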

Mobility

HCI offers greater mobility within a hybrid environment. With the addition of a cloud orchestration application, cloud managers can administer and move workloads across public and private clouds as well as between different cloud vendors. That’s useful in hybrid cloud and multicloud settings when workload requirements change and need to be moved to another cloud or non-cloud environment.

Organizations sometimes find they must repatriate a workload or application from the public cloud to a private cloud or on-premises non-cloud system, either because the public cloud didn’t deliver the promised cost savings for that workload or because of compliance or performance issues. HCI is making that repatriation more appealing.

Cost effective

Public cloud adoption has been driven largely by the potential for lower, more predictable costs and freedom from the need to invest in hardware. That motivation is still valid. However, the emergence of hyperconverged infrastructure is making private clouds considerably easier and cheaper to build and manage, and making it easier to move workloads between environments.

Configurable for enterprise workloads

Because HCI is a comprehensive, software-defined environment, it’s configurable for a broad set of enterprise workloads. Additionally, most HCI products support containers as well as VMs, which increases the mobility of workloads. Containers are smaller and lighter than VMs and are portable between environments. Containers bundle applications with their libraries and other dependencies and can be moved to any environment with the same operating system. A container orchestration platform—most commonly the Kubernetes platform—provides container management and configuration capabilities.
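The portability rule described above (a container carries its own libraries but must match the host operating system) can be modeled as a simple scheduling check. The types and fields below are illustrative, not any real orchestrator's API:

```python
from dataclasses import dataclass, field

@dataclass
class ContainerImage:
    app: str
    os: str                                    # OS family the image targets
    libs: list = field(default_factory=list)   # bundled dependencies travel with it

@dataclass
class Host:
    name: str
    os: str

def can_schedule(image: ContainerImage, host: Host) -> bool:
    """Because the container bundles its libraries, the only hard
    portability requirement modeled here is a matching OS family."""
    return image.os == host.os

img = ContainerImage(app="billing-api", os="linux", libs=["openssl", "libpq"])
print(can_schedule(img, Host("on-prem-hci-node", "linux")))   # True
print(can_schedule(img, Host("windows-vm-host", "windows")))  # False
```

Real orchestrators such as Kubernetes apply the same idea with far richer constraints (CPU architecture, resource requests, affinity rules), but the OS-match check is the core portability boundary.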

Also read: The Benefits of Hyperconverged Infrastructure for Disaster Recovery

Ease of management

HCI vendors are steadily incorporating more sophisticated capabilities into their products, often with hybrid cloud management as a focal point. Features include service catalogs that support portable workloads, the ability to move data between non-cloud, private cloud, and public cloud environments, and management tools for moving applications between environments.

Flexibility

Organizations today need flexibility to respond quickly to the IT needs of the business. The multiple moving parts inherent in hybrid and multi-cloud environments often make it difficult to identify, deploy, and expand IT solutions fast enough to support business goals. The adoption of hyperconverged infrastructure provides a way forward to reducing IT complexity and increasing IT agility.

Want to explore hyperconvergence for your hybrid cloud?

TierPoint offers support and consulting services on HCI and other infrastructure questions. In addition, we provide customers with access to HCI products and services in our 40-plus data centers across the U.S., as well as a full menu of disaster recovery services and solutions. Contact us to see how we can help you assess your IT environment and find the right solutions to manage your hybrid cloud or multicloud platforms.


The Benefits of Hyperconverged Infrastructure for Disaster Recovery
https://www.tierpoint.com/blog/the-benefits-of-hyperconverged-infrastructure-for-disaster-recovery/ (Wed, 23 Sep 2020)

There are many benefits of hyperconverged infrastructure for disaster recovery: in-house disaster recovery (DR) can be expensive and complex to manage. Cloud-based DR is simpler and sometimes less expensive but, depending on your cloud platform of choice, puts you at the mercy of your provider’s cloud performance. An emerging third alternative is hyperconverged infrastructure (HCI), which provides a simplified architecture for creating and managing in-house DR systems and also supports private cloud DR development.

Hyperconverged Infrastructure merges all of the hardware and software components needed in an IT environment—the CPU, storage and networking hardware, hypervisor for running virtual machines, and the software for file serving, security, and networking—into a single integrated unit. HCI essentially operates like a miniature, self-contained data center or, if you add DR software, a disaster recovery solution.

Hyperconverged Infrastructure is growing in popularity due to its efficiency and manageability benefits. HCI vendors take off-the-shelf commodity components, preconfigure them, and add the necessary software stack. Clusters of HCI equipment can be expanded by simply adding more units, making them much easier to buy, install, and operate than a typical three-tier IT environment. In contrast, traditional three-tier IT infrastructure is a diverse collection of hardware and software products from multiple vendors, with old and new technologies, all needing to be configured and integrated.

As hyperconverged infrastructure grows in popularity, HCI vendors are developing HCI equipment for specific use cases that can best benefit from the unique features of HCI. Many vendors are working with disaster recovery software companies to create integrated HCI DR solutions.

These compact HCI DR products are also gaining more sophisticated features. For example, Nutanix’s HCI platform now supports multi-site disaster recovery and near-zero data loss with a recovery point objective (RPO) of 20 seconds. It also includes DR orchestration with runbooks, giving IT managers the ability to prioritize recovery of specific applications.
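As a back-of-the-envelope illustration, an RPO bounds worst-case data loss: with asynchronous replication, everything written since the last replication cycle is at risk if the primary fails. A sketch with example figures (not Nutanix specifics):

```python
def data_at_risk_mb(write_rate_mb_s: float, replication_interval_s: float) -> float:
    """Upper bound on data lost if the primary fails just before the
    next replication cycle: everything written since the last one."""
    return write_rate_mb_s * replication_interval_s

# A workload writing 5 MB/s, replicated every 20 seconds vs. every 15 minutes:
print(data_at_risk_mb(5, 20))        # 100 MB of exposure at a 20-second RPO
print(data_at_risk_mb(5, 15 * 60))   # 4500 MB of exposure at a 15-minute RPO
```

The arithmetic shows why tightening the replication interval from minutes to seconds matters so much more than any other DR tuning knob.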

Four major HCI benefits for disaster recovery

Hyperconverged infrastructure offers several specific benefits for backup and disaster recovery. These benefits include:

Scalability

Gartner defines hyperconverged infrastructure as “a category of scale-out software-integrated infrastructure that applies a modular approach to compute, network and storage on standard hardware, leveraging distributed, horizontal building blocks under unified management.” That means, essentially, that scaling out an HCI environment is like stacking building blocks. To expand memory, storage, and CPU power, just add more HCI blocks. The key advantage of such an infrastructure is simplicity and rapid scalability. A cluster of HCI units can be easily expanded with more devices, and multiple clusters can work together to create a powerful IT environment. And there’s no need to spend hours researching compatible hardware as each new unit is identical to the first.

Flexibility

The combination of commodity hardware and software-defined architecture enables HCI environments to be extremely flexible when needs change. Clusters can be expanded or reduced, networked, or reproportioned to handle different workloads. HCI’s use of virtualization makes for more flexible DR environments as well. With HCI, production environments can be rapidly replicated to virtual machines, which can then be restored to cloud and on-premises systems. A virtual backup environment can be created, updated, and relocated without concern over the underlying hardware. Because software-defined resources are programmable, they can quickly adapt to new demands.

Predictable costs

HCI optimizes efficient use of resources, allowing IT departments to get more value from their investments. Traditional DR environments have many moving parts, usually from different vendors, making them more difficult and costly to manage. Using the HCI architecture model streamlines the components and reduces the amount of labor needed for administration and maintenance. An organization can start with a small HCI cluster and gradually expand as needed. That ensures that IT investments get optimal usage.

High availability

Virtualization also supports high availability with near-zero downtime. One of HCI’s key features is “instant recovery” or “rapid recovery,” which involves restoring directly from backup to a virtual machine. This method is faster than the traditional backup and restore approach of copying data from backup and then restoring the production environment. Workloads and data can be replicated across multiple HCI DR clusters, including remote HCI clusters and cloud-based HCI clusters. Because HCI devices can be remotely implemented and managed, an organization can administer a network of DR clusters in different geographic locations from a single console. In a disaster, this distributed replication can provide immediate failover to a working replica located far from the disaster zone. Employees and customers in other areas need not suffer any downtime from a fire, flood, or other disaster outside of their area.
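Conceptually, the geo-distributed failover just described boils down to selecting a healthy replica outside the affected region. A minimal selection sketch, with invented site names and statuses:

```python
def pick_failover_site(replicas, disaster_region):
    """Return the first healthy replica cluster outside the disaster
    region, or None if no viable failover target exists."""
    for site in replicas:
        if site["healthy"] and site["region"] != disaster_region:
            return site["name"]
    return None

replicas = [
    {"name": "dc-dallas",  "region": "south-central", "healthy": False},  # in the flood zone
    {"name": "dc-chicago", "region": "midwest",       "healthy": True},
    {"name": "dc-seattle", "region": "northwest",     "healthy": True},
]
print(pick_failover_site(replicas, disaster_region="south-central"))  # dc-chicago
```

Production failover logic also weighs data freshness and capacity at each site, but region-aware replica selection is the core of the "far from the disaster zone" guarantee.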

Disaster Recovery and HCI don’t need to be complicated

As disaster recovery becomes a bigger concern for all organizations, HCI offers a more agile, scalable, and easily managed DR solution for both large and small enterprises. TierPoint provides its customers with access to HCI products and services in its 40-plus data centers across the U.S., as well as a full menu of disaster recovery services and solutions. Contact us to see how we can help you manage DR in a hyperconverged private cloud.



How SaaS Companies Improve Customer Experience with Edge Computing
https://www.tierpoint.com/blog/how-saas-companies-improve-customer-experience-with-edge-computing/ (Tue, 11 Aug 2020)

Software as a Service (SaaS) has dramatically transformed the customer experience for many business application users. No longer do they have to plunk down thousands to millions of dollars for a piece of software that quickly becomes obsolete.

With SaaS, they simply subscribe to the number of licenses they need. On-boarding more employees? No problem! They can subscribe to new licenses in just minutes. Scaling back? They can unsubscribe almost as quickly. Perhaps best of all, users of SaaS applications can often keep their applications up to date with the click of a button.

SaaS growth on the rise

These are just some of the reasons many enterprises are looking to implement a SaaS-only model in their organizations. In its 2020 State of the Cloud report, Flexera found that 43% of respondents named ‘moving on-prem software to SaaS’ a top priority for 2020, up from 29% in 2019.

That’s just a look at the growth in SaaS applications designed for the enterprise. It doesn’t really consider the growing number of point solutions designed for consumers or small to mid-sized businesses. A list of the 50 largest publicly held SaaS companies as of January 2020 includes many names that offer products for use outside the enterprise. SaaS applications are big business.

Unfortunately, there’s a dark side to SaaS, too. Along with an explosion in new SaaS companies, the market is also seeing a significant amount of churn. According to Autopsy.com (a site focused on the death of startups, not people), as much as 92% of startups fail within three years.

If SaaS is about an improved user experience, then SaaS success will require the developer to focus on and continue to improve the SaaS experience. Many SaaS developers are doing just that with a concept called Edge Computing.

How the edge supports SaaS companies

SaaS developers deliver their applications through the cloud. These cloud resources (hardware and software) are owned and maintained by the SaaS company or a third party like TierPoint. Data is sent back and forth, usually over the internet, between the user’s console and the SaaS infrastructure.

But what if the company’s SaaS infrastructure is in New York and the customer is in California? That can have an impact on the user’s experience because distance can create lag time between a request from the user’s system and a response from the SaaS infrastructure. For some applications, it may not matter much, but for others, e.g., POS or customer service systems, even a few seconds of lag time can create a negative experience. For more advanced applications, such as robotic surgery or self-driving cars, lag time is out of the question.
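The physics behind that lag is straightforward: light in optical fiber covers roughly 200 km per millisecond (about two-thirds of its speed in a vacuum), so round-trip propagation delay alone sets a floor under latency. A rough estimate, which ignores the routing and queuing overhead that makes real-world latency higher (the distances are approximate):

```python
FIBER_KM_PER_MS = 200.0  # light in glass travels ~2/3 the speed of light in a vacuum

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time from propagation delay alone."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(min_round_trip_ms(4100))  # ~41 ms: New York to California, one request/response
print(min_round_trip_ms(80))    # ~0.8 ms: an edge site roughly 50 miles away
```

No protocol optimization can beat this floor; only moving the infrastructure closer to the user, as edge computing does, can.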

Advancements like 5G will have an impact, but this impact will be short-lived in our data-driven world as applications advance to take advantage of the increased bandwidth. Edge computing is about reducing that lag time by moving the SaaS infrastructure closer to the customer.

Of course, if you’re a SaaS company serving multiple markets, the last thing you want to do is focus your efforts on setting up multiple data center locations in multiple markets across the country. Most SaaS companies would rather focus on creating great applications.

That’s where the right third-party providers can help. For example, TierPoint operates a network of forty data centers across the U.S. A company based in New York City might house their corporate systems in our Hawthorne data center and then house their customers’ applications and data in multiple Midwestern data centers closer to their customer base, e.g., Chicago, Oklahoma City, and Little Rock.

The ‘need for speed’ with today’s applications is driving tremendous growth in edge data centers. MarketWatch predicts that investments in edge computing will grow 27% annually through 2023. Bell Labs also predicts that 60% of all servers will be housed at the edge by 2025. That’s a tremendous move away from the centralized data centers of the past.

Edge computing is about more than just latency

For SaaS companies, working with an edge provider is about more than just reducing latency to improve the customer experience. Edge computing can also reduce overall costs and allow you to pass on those savings to your customers or improve your bottom line. One of the most obvious ways we can reduce costs is by allowing you to free your organization from the overhead of maintaining your own systems.

Also read: The Strategic Guide to Edge Computing

Not having to maintain your data center infrastructure can also help you increase organizational agility. SaaS companies often follow a continuous integration/continuous delivery cycle that allows them to deliver new features and applications faster than ever. We enable this model by focusing on your infrastructure while you focus on your applications.

If your customers use your applications to store or handle sensitive data, security is no doubt one of their concerns. Keeping up with the ever-changing cybersecurity threat landscape can be hard for the SaaS company focusing on delivering great applications. We can manage the security of your infrastructure in our edge data centers, freeing you up from one more worry that takes you away from a focus on your customers.

Last, but certainly not least, organizations that manage their own data centers may be more susceptible to unplanned downtime – and that can affect the customer experience even more than latency. A managed service provider can keep an eye on your edge data center and address any performance issues before they’re noticed by your customers. They can also help you develop a disaster recovery strategy that keeps your customers up and running no matter what mother nature (or human nature) throws your way.

Also read: Key Considerations for Edge Computing Deployments

Ready to improve your customer experience?

If you’d like to learn more about edge computing and how it can help you create an even better customer experience, reach out to us today. One of our advisors would be happy to talk with you about your data center challenges and how we can help solve them.


Five Examples of Industries Innovating with Edge Computing
https://www.tierpoint.com/blog/five-examples-of-industries-innovating-with-edge-computing/ (Thu, 04 Jun 2020)

Organizations process massive amounts of data every day, in applications that often require near-real-time response rates. Innovations in IoT, mobile computing, and AI are driving demand for very low latency connections. However, the traditional network model of a central data center serving distant offices and end users can’t keep up with this need for speed. Enter edge computing, a networking model that moves data and compute power to the edge of the network, close to the devices and people that produce or consume it. In this post, we dive into some industry edge computing examples and how those industries benefit from the technology.

A quick edge computing overview

As Dominic Romeo, TierPoint’s director of product management, explained, “When you’re inside a 50-mile radius, latencies get really, really low. The time it takes for the end user to send a command to the server and for the server to come back with a response is in the neighborhood of single-digit milliseconds versus double- or triple-digit milliseconds of round-trip time.”

What is edge computing?

Edge computing is a model where information processing (data and computing) is physically located close to the things and people that produce or consume it.

Edge computing enables:

  • Near real-time response rates for applications in industrial robotics, patient care, finance, and customer service.
  • Lower costs. Sending data back and forth over a long distance can be expensive, so moving it closer to users offers a cheaper alternative.

The proliferation of smart devices is a major driver of edge adoption. Many smart devices use artificial intelligence to get better at their jobs. An industrial robot, for instance, needs AI to react to changes in the production line. The closer the robot is to its AI brain, the faster it can operate. Autonomous vehicles need split-second timing to assimilate traffic data and react.
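To see why proximity matters for these AI-driven devices, consider a simple control-loop budget: the network round trip must fit inside whatever time the inference itself leaves over. All figures below are invented for illustration:

```python
def network_budget_ms(loop_deadline_ms: float, inference_ms: float) -> float:
    """Time left for the network round trip within one sense-decide-act cycle."""
    return loop_deadline_ms - inference_ms

# A robot that must react within 10 ms, spending 6 ms on AI inference:
budget = network_budget_ms(loop_deadline_ms=10, inference_ms=6)
print(budget)        # 4 ms remain for the round trip to the "AI brain"
print(budget >= 40)  # False: a ~40 ms cross-country RTT cannot fit
print(budget >= 2)   # True: a single-digit-millisecond edge RTT can
```

When the deadline arithmetic fails for a distant data center, the only options are moving compute to the edge or onto the device itself.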


Edge computing examples by industry

An August 2019 report by research firm ‘MarketsandMarkets’ projects the global edge computing market to grow from $2.8 billion in 2019 to $9 billion by 2024. Companies in industries from oil and gas to online gaming are leveraging edge computing to improve their products and services, cut costs, and increase market share. To understand the current and future potential for edge computing, read about the following examples of edge applications in five major industries:

Manufacturing

Edge computing is enabling smart devices such as machine controls, environmental sensors, asset tracking, and assembly line robots to operate with greater speed and efficiency. Smart manufacturing devices rely on a tight feedback loop between input, analysis, and output to provide timely responses. For example, quality control monitors or equipment sensors must make rapid and accurate assessments. When they don’t, products may be rejected or recalled further down the line, or equipment may fail and cause lengthy delays in production.

Storage costs are another factor. Many manufacturers collect huge amounts of data from monitors, sensors, production line equipment, shipment trackers, and so forth. Processing and storing this data centrally is far more expensive than keeping it near the equipment that generates and consumes it.
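That trade-off is simple arithmetic: hauling raw telemetry to a central site incurs per-gigabyte transfer costs that edge processing largely avoids, since only summaries need to travel. A rough sketch with hypothetical volumes and rates:

```python
def monthly_transfer_cost(gb_per_day: int, cents_per_gb: int, days: int = 30) -> float:
    """Dollar cost of hauling telemetry from a plant to a central data center."""
    return gb_per_day * cents_per_gb * days / 100

raw_gb = 500    # GB/day of raw sensor data per plant (example figure)
summary_gb = 5  # GB/day of processed summaries actually worth centralizing

print(monthly_transfer_cost(raw_gb, cents_per_gb=5))      # 750.0 per month: ship everything
print(monthly_transfer_cost(summary_gb, cents_per_gb=5))  # 7.5 per month: filter at the edge
```

Storage follows the same pattern: retaining raw data near the equipment and centralizing only the distilled results cuts both transfer and central storage bills.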

Also read: How Edge Computing Aids Modern Manufacturing

Transportation and Logistics

While autonomous vehicles are a commonly known application of edge computing, there are many other uses. Management consulting firm McKinsey & Co. has identified two dozen ways that edge computing can improve operations in travel, transportation, and logistics, including: condition-based monitoring of transportation equipment, equipment tracking, logistics routing optimization, improved flight navigation, after-sales service of vehicles, and location-based advertising on public transport. Edge computers might be placed in garages, at airports, on board vehicles, on planes, and in video displays on public transport.

Healthcare

Edge computing is spurring a range of innovations in healthcare through smarter, faster equipment. For example, hospitals can optimize equipment maintenance, track drug distribution, monitor patient conditions in real time, and manage nursing efficiency through mobile devices. AI-assisted surgical robots can enable remote surgery and help on-site surgeons improve their success rates. Likewise, medical devices that collect patient data may also provide diagnosis and treatment recommendations.

Media and entertainment

Content delivery networks were some of the first uses of the edge computing idea. Rich content such as videos and games located on content servers close to major consumer markets reduced bandwidth demands and improved performance. Edge computing is a similar concept, with the addition of a compute component for streaming media, online gaming, or video-heavy social media sites. Coupled with 5G networks, edge servers enable mobile users to get smoother streaming video, without the need for buffering. Media companies can leverage edge capacity to collect and analyze data on customers to sell them more services and products.

Customer engagement

Customer service and marketing is increasingly personalized and automated. Companies analyze volumes of data on consumer behaviors so they can provide customized services, both online and in brick-and-mortar retail outlets. Coupled with edge computing, retail stores can track a consumer’s route through the store and use that data to redesign store layouts or create tailored in-store advertisements. Augmented reality apps supported by edge servers can enable customers to “try on” clothes without physically putting them on. Those applications demand a lot of data and processing power, making them ideal use cases for edge computing.

Edge computing will also be adopted by other industries

There is a multitude of uses for edge computing in other industries too, including utilities, energy, semiconductor, government, telecommunications, automotive, education, and more. Within the enterprise, edge computing will enable faster connections to the cloud and improved response times by offloading data analysis and heavy content files to edge servers.

To accommodate the rising demand, cloud services providers, data center facilities, and network companies are expanding their distributed edge computing infrastructures. Many, such as TierPoint, already have regional networks of data centers that can support edge computing.

TierPoint’s 40-plus data centers with edge services provide reduced latency and powerful local computing capacity. Robust local networking and fast last-mile connectivity cut latency for content delivery, while local processing supports IoT and mobile applications.

To learn more about edge computing, download TierPoint’s Strategic Guide to Edge Computing.


Why Businesses Are Choosing Hyperconvergence
https://www.tierpoint.com/blog/how-are-businesses-using-hyperconverged-infrastructure-hci/ (Sun, 10 May 2020)

As hyperconverged infrastructure (HCI) has emerged as a compact and simplified alternative to traditional three-tier IT infrastructure, IT organizations have found many ways to deploy it in the enterprise. HCI is being used to streamline and consolidate data center infrastructure, decrease management overhead, simplify operations, and support new IT projects.

What makes HCI appealing to IT organizations? Unlike the usual three-tier model, which may include disparate hardware products and software platforms, HCI provides a turnkey infrastructure that may be installed and scaled with minimal work. IT organizations are deploying HCI nodes in a variety of use cases, from private clouds to disaster recovery and edge computing.

HCI consolidates essential IT resources – compute power, networking, storage, memory, data services, and cloud management—into an x86 box or software-only stack that runs both virtualized and containerized workloads. Because HCI uses software-defined infrastructure, it’s flexible and capable of supporting different workloads and application requirements. Additionally, multiple HCI boxes can be clustered to pool resources.

HCI grew out of an earlier model called converged infrastructure (CI) that consolidates computing resources in a looser package of pre-validated components—storage, networking, CPU, etc.—that customers assemble themselves. Another model is disaggregated HCI, or dHCI, which separates the storage and compute components so they can be scaled separately. dHCI is useful in environments that require large amounts of storage or, conversely, more CPU and memory.

The most common Hyperconverged Infrastructure uses

Following are some of the common ways in which organizations are deploying hyperconverged infrastructure:

Virtual desktops

Virtual desktop infrastructure (VDI) enables users to access their virtual desktops from other computers or devices. Each desktop, with its operating system, software, and data, runs in its own virtual machine on a server. Virtual desktops require a lot of storage to support multiple desktops and data, as well as hardware that is easily scaled as the user base grows. HCI offers a compact footprint that is easy to scale and manage remotely, making it a popular infrastructure for VDI.

Also read: 5 Reasons Businesses Choose Virtual Desktop Infrastructure (VDI)

Data center consolidation

Consolidation is a major driver for HCI adoption. HCI appliances—whether bought or built—provide a flexible, stackable infrastructure for consolidating multiple types of IT environments. HCI’s software-defined infrastructure gives IT the ability to customize the environment for different uses while providing a basic VM platform to support multiple servers.

Private cloud and hybrid cloud

HCI offers a quick path to private cloud development. Workloads vary greatly in their performance, bandwidth, security, and memory requirements, and some require an on-premises solution. HCI can also work well as part of a hybrid cloud. This combination gives IT the flexibility to migrate workloads across public and private cloud environments without rewriting code or resorting to multiple development tools and management interfaces. HCI provides a comprehensive compute environment that scales easily and is flexible enough to support a broad range of workloads.

Disaster recovery

Backup and disaster recovery (DR) solutions benefit from HCI’s ease of scalability and administration. HCI supports VM replication within or between HCI clusters and, depending on the product, may include failover and other DR features as part of the software. With HCI, IT can use low-cost, standard x86 hardware with support for replication and rapid recovery. Production environments are replicated to the VM on the HCI appliance and can be failed over or back nearly instantaneously. The DR environment can reside on an HCI appliance or software stack at a remote data center or colocation facility, or in the cloud on a private HCI environment or using HCI-as-a-service.

Also read: The Benefits of Hyperconverged Infrastructure for Disaster Recovery

Edge computing

Edge computing enables organizations to conserve network bandwidth and decrease latency by moving content and computing power closer to end users. HCI provides a preconfigured, integrated package of hardware and software that is easy to deploy and scale. IT can manage HCI nodes at the edge remotely, without requiring them to visit each location. Because HCI is a preconfigured package, it does not require an IT specialist to maintain it.

Read our Strategic Guide to Edge Computing

Remote office and branch office (ROBO)

A single HCI node or a larger HCI cluster can provide the full range of IT resources needed by remote or branch offices. Because HCI is easy to install and administer, IT departments can leverage HCI to meet their future computing needs without having to rehost or rewrite applications. IT can manage several HCI deployments from a single control interface, saving the cost of having IT staff at each location or the need for local tech support visits. HCI nodes are particularly attractive for situations that would otherwise require specialized IT personnel. Because they are preconfigured and simple to operate, general IT staff or even non-IT employees can administer them.

Storage

HCI nodes can replace storage area networks (SANs), which contain storage arrays, servers, and network switches – all demanding their own connections and management interface. An HCI cluster with a virtual SAN replaces the traditional SAN configuration with a simplified, virtual solution.

How would your business benefit from Hyperconverged Infrastructure?

Hyperconverged infrastructure is one of the technologies and best practices that are helping IT organizations modernize their data centers. As enterprises tackle digital transformation projects, they need to streamline data center infrastructure, as well as provide flexibility and scalability to support rapid change. HCI and software-defined infrastructure offer a cost-effective model for current and future IT needs.

TierPoint offers support and consulting services on HCI and other infrastructure questions. In addition, we provide customers with access to HCI products and services in our 40-plus data centers across the U.S., as well as a full menu of disaster recovery services and solutions.

Learn more about our Hyperconverged Infrastructure & Hosted Private Cloud solution powered by Nutanix.


Originally published in Dec 2020, this post was updated on May 11, 2021, to add more context around hyperconverged infrastructure (HCI).

What’s Next for Hyperconverged Infrastructure (HCI)?
https://www.tierpoint.com/blog/whats-next-for-hyperconverged-infrastructure-hci/ (Tue, 26 Nov 2019)

Hyperconverged infrastructure basics

Hyperconverged Infrastructure (HCI) is a promising development for digital transformation projects, as well as IT departments that are frustrated by cumbersome, legacy IT infrastructures. Unlike traditional three-tier infrastructure, HCI offers greater flexibility and scalability—both of which are critical capabilities for organizations engaged in digital transformation.

Hyperconverged Infrastructure is a software-defined “platform-in-a-box” with tightly integrated services deployed on inexpensive x86 hardware. An HCI appliance bundles infrastructure and platform services: software-defined storage, a hypervisor for virtualized computing, an operating system, and virtualized networking. These turnkey platforms are well suited for uses such as shared storage clusters, private cloud development, software-defined data centers, and cloud or virtualization development.

In contrast, traditional three-tier IT infrastructures usually have disparate hardware and software, from multiple vendors, and a mix of old and new technologies. As the organization’s needs grow and change, legacy IT infrastructures can’t always keep pace.

“IT departments have to be more agile and these Hyperconverged Infrastructure platforms can meet that requirement more easily than the traditional three-tier infrastructure,” noted Ryle Edwards, Solutions Engineer for SHI.

Edwards, along with Mike Piccininni, Dell EMC’s senior manager for global alliances in North America, and TierPoint’s Dave McKenney, the director of product management, recently hosted a webinar on current and future developments in Hyperconverged Infrastructure.

According to these panelists, HCI solutions are easy to manage through a single, simple interface that reduces the need for specialized IT staff, which is helpful for companies with small IT teams. They also offer fast deployment, straightforward scalability, and simpler capacity planning.

“You can start your transformation without having to guess where you’re going to be in three or five years,” Piccininni said. “It’s ‘pay as you grow.’”

The future of Hyperconverged Infrastructure

HCI’s growing popularity is underscored by IDC’s prediction that the market will see major growth in the coming years.

Two trends are helping to make the technology more appealing: the increasing number of use cases for HCI appliances, and the growing number of software makers who are having their applications validated for HCI platforms.

More ways to deploy Hyperconverged Infrastructure

Hyperconverged infrastructure vendors are adding more services to their stack to expand the use cases and value of their products. Today, HCI appliances are already available to meet a range of needs, including edge computing—such as for moving heavy data processing or video closer to end-users—or for branch office infrastructure, virtual desktop environments, server consolidation, and private clouds.

“More and more services are being rolled into hyperconverged platforms,” said McKenney.

That trend will continue, with vendors adding services such as artificial intelligence for AIOps-style automated IT troubleshooting or for monitoring customers’ digital experiences. One example is the Dell EMC VxRail HCI server, which ships with an Analytical Consulting Engine.

Other HCI products may include services for data protection, cloud services integration, change management, or service desk functionality.

Application validation

At the same time, more software makers are having their applications validated to run on HCI platforms. This makes it easier for customers to move more of their enterprise applications onto HCI platforms and gives them greater flexibility in how they use HCI.

Customers increasingly prefer to rely on their vendors to provide standardization and consistency in HCI products, said Piccininni.

“Change is happening faster, so the more I can put the responsibility for maintaining consistency on my vendors, the better. The more bespoke things are, the harder they are to maintain in year 3 or 5 of an operation,” said Piccininni.

Edwards added, “It allows you to focus on the business factors driving the service, rather than on ensuring interoperability.”

While HCI isn’t for every IT use case, it can provide many organizations with the scalability and agility needed by an increasing range of environments.

“The goal is flattening out silos of technology like networking, like storage, and putting it all into a single platform where you can accomplish more complex workloads and more consistent operations,” explained Piccininni.

More on Hyperconverged Infrastructure trends

In our recent webinar, Mike also discusses the changing expectations of organizations migrating to the cloud, the new Hyperconverged Infrastructure features and tools coming out, and Dell EMC’s roadmap for the next wave of HCI. Watch the full webinar: The New Wave of HCI: Hyperconverged Infrastructure Meets Evolving Expectations.

]]>