Edge Computing Archives | TierPoint, LLC

Multicloud vs Hybrid Cloud: What’s the Difference?
https://www.tierpoint.com/blog/hybrid-vs-multicloud-whats-the-difference/ (Thu, 18 Jul 2024)

As of 2024, 89% of organizations have adopted strategies that include multiple public clouds or a hybrid cloud infrastructure. When discussing multicloud vs hybrid cloud deployments, we often focus on what’s different. However, the differences are less important than the unified goal of forming your IT strategy based on what you want to accomplish as a business.

Whether those goals are best met with one cloud, a hybrid model, or a multicloud model will depend on your unique situation, dependencies, budget, and available resources. We’ll cover the difference between multicloud and hybrid cloud so you can make an informed next step.

Public Cloud vs Private Cloud?

Hybrid environments combine public and private clouds. And in the case of hybrid IT, it can also include non-cloud environments. Generally, the choice between public and private cloud will come down to how much control businesses want over resources compared to the amount of flexibility they need.

Public cloud providers, such as AWS and Azure, rent out resources to companies either in predetermined amounts at a discount or on a pay-as-you-go model where you pay only for what you use. Businesses have the flexibility to scale resources up or down on demand. However, they must navigate and configure the security settings and tools provided by the public cloud provider to ensure optimal security.

Private cloud can run on-premises or offsite with a data center provider. Organizations have significantly more control over configurations and security settings in a private cloud environment. However, scaling resources can be more challenging, and the infrastructure is often more expensive compared to public cloud options. This control and security, combined with the challenges of scalability and cost, make hybrid cloud solutions an attractive option for many businesses.

What is the Difference Between Multicloud and Hybrid Cloud Computing?

In cloud computing, we often hear the terms “multicloud” and “hybrid cloud.” While both terms sound similar, there are a few key differences organizations tend to overlook. Understanding the differences between these two cloud approaches is essential for organizations striving to ensure cloud optimization and meet business needs.

Architecture

A hybrid cloud is the combination of cloud and on-premises infrastructure in a unified framework. It could include public cloud (Microsoft Azure, AWS, etc.) and private cloud infrastructure. Hybrid cloud adoption has increased over the past few years due to its many benefits, which we’ll be covering shortly.

Multicloud computing is the use of multiple public cloud platforms to support business functions. Multicloud deployments can be part of an overall hybrid cloud environment. A hybrid cloud strategy may include multiple clouds, but a multicloud strategy isn’t necessarily hybrid.

Intercloud Workloads

In a multicloud environment, workloads are deployed across different public clouds and often require additional processes and tools for interoperability. Similarly, hybrid cloud environments can include these workloads but also involve movement between cloud and on-premises infrastructures. This flexibility is often necessary for legacy systems with numerous dependencies that cannot be easily migrated to the cloud.

Vendor Lock-in

Vendor lock-in happens when a business feels overly reliant on one cloud provider and finds it difficult to switch to a new provider without significant investment and resources to do so. While both formats may introduce vendor lock-in, this may be more common in hybrid cloud environments where businesses are only using one public cloud provider. In a multicloud configuration, organizations may have more flexibility to move workloads to different public cloud environments.

Pricing

This flexibility in options within a multicloud environment can lead to more competitive pricing for businesses. Public cloud resources can be purchased in discounted packages for predictable workloads, while pay-as-you-go pricing is available for variable workloads.
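The tradeoff between the two purchase models can be made concrete with a quick calculation. Below is a minimal Python sketch using entirely hypothetical hourly rates (real provider pricing varies widely): a steady 24/7 workload favors the discounted package, while a bursty workload favors pay-as-you-go.

```python
# Hypothetical rates for illustration only; real provider pricing varies widely.
RESERVED_RATE = 0.06   # $/hour, paid for every hour whether used or not
ON_DEMAND_RATE = 0.10  # $/hour, paid only for hours actually used

def monthly_cost(hours_used: int, hours_in_month: int = 730) -> dict:
    """Compare reserved vs pay-as-you-go cost for a single instance."""
    return {
        "reserved": round(RESERVED_RATE * hours_in_month, 2),
        "on_demand": round(ON_DEMAND_RATE * hours_used, 2),
    }

# A steady 24/7 workload favors the discounted reserved package...
steady = monthly_cost(hours_used=730)
# ...while a bursty workload that runs 200 hours/month favors pay-as-you-go.
bursty = monthly_cost(hours_used=200)
```

Running both numbers before committing to reserved capacity is a simple guard against paying for idle hours.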

Availability

With hybrid cloud, availability depends on both the public cloud provider and the on-premises infrastructure in use. In contrast, a multicloud environment can offer higher availability since data and workloads are distributed across multiple public clouds, reducing the risk of downtime.

Data Storage

Data storage has some similarities and differences between cloud environments. In hybrid cloud storage, on-premises storage (private cloud) is combined with public cloud resources. This provides greater control for sensitive data stored on the private cloud, but also requires tools to move data between environments that may be harder to set up compared to multicloud environments. Hybrid cloud can be ideal for businesses that have a mix of sensitive and non-sensitive data, and for those that want greater control over their core infrastructure.

With multicloud storage, data is stored across public cloud providers, which offers greater flexibility and scalability. Although multicloud storage can also be complex to manage, it reduces the risk of vendor lock-in by providing businesses the option to choose between different public cloud providers based on their specific needs and cost considerations. Multicloud is well-suited for businesses that want more scalability and flexibility, and don’t have as many data residency regulation concerns.

Security

In comparing multicloud and hybrid cloud environments, security plays a crucial role. Hybrid cloud setups allow organizations to implement tailored security measures across both public and on-premises infrastructures, providing greater control over sensitive data. In contrast, multicloud environments, which rely on multiple public cloud providers, often have less room for customization. While this can present challenges for specific compliance needs, many public cloud providers still meet essential standards such as GDPR and HIPAA. Ultimately, the choice between the two depends on an organization’s specific security requirements and regulatory obligations.

Flexibility

In terms of flexibility, hybrid cloud environments offer organizations the ability to seamlessly integrate on-premises and public cloud resources. This allows businesses to choose where to host specific workloads based on factors like cost, performance, and compliance. On the other hand, multicloud environments provide flexibility through the use of multiple public cloud providers, enabling organizations to select the best services from each provider.

While both approaches enhance adaptability, hybrid clouds excel in integrating legacy systems, whereas multicloud setups offer diverse options and avoid vendor lock-in, allowing businesses to respond more dynamically to changing needs.

How is Hybrid Cloud Similar to Multicloud?

Despite these differences, hybrid cloud and multicloud share many similarities. They can both be solid frameworks to store sensitive data when configured well, but they can come with common challenges, such as cloud complexity.

Infrastructure Security

Both hybrid and multicloud environments operate on a shared responsibility model, where the level of infrastructure security responsibility may vary. Cloud providers are responsible for securing the underlying infrastructure, while customers must secure their applications, data, and access controls within that infrastructure.

Key responsibilities for businesses include identity and access management (IAM), data encryption, and vulnerability management. Users should have access only to the resources necessary for their roles, whether in public or private clouds. Data must be protected both at rest and in transit, so organizations need to implement proper encryption measures. Regularly scanning for vulnerabilities and applying patches is essential to mitigate risks associated with security weaknesses, including zero-day attacks. By actively managing these responsibilities, organizations can enhance their overall security posture in any cloud environment.
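As an illustration of the least-privilege principle described above, here is a minimal Python sketch; the role names and permission strings are invented for the example and do not correspond to any provider's IAM model.

```python
# A minimal sketch of least-privilege access checks. Roles and permissions
# here are hypothetical, not any cloud provider's actual IAM model.
ROLE_PERMISSIONS = {
    "analyst":  {"read:reports"},
    "engineer": {"read:reports", "write:app-config"},
    "admin":    {"read:reports", "write:app-config", "manage:users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant access only if the role explicitly includes the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Users get only the permissions their role requires - and nothing by default:
# an unknown role, or an action outside the role's set, is denied.
engineer_can_deploy = is_allowed("engineer", "write:app-config")
analyst_can_manage = is_allowed("analyst", "manage:users")
```

The same deny-by-default posture applies whether the resources live in a public or private cloud.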

Storing Sensitive Data

Even though public cloud providers offer fewer security customizations for businesses, both hybrid and multicloud environments can be suitable for storing sensitive data. Hybrid cloud gives organizations the power to place their most sensitive information on private infrastructure, whereas multicloud infrastructure allows for redundancy across multiple public cloud providers, mitigating risks from outages and data breaches.

Managing Data

In both multicloud and hybrid cloud, businesses must determine how to manage data across different platforms without compromising accessibility or performance. Hybrid clouds require tools and processes to facilitate data movement between public and private environments. While multicloud setups can simplify data management by leveraging multiple public clouds, they may still necessitate additional configuration to ensure effective data movement between those clouds.

Regulatory Compliance

Different businesses and industries are subject to different regulatory requirements, such as HIPAA, GDPR, CCPA, and PCI-DSS. Most public cloud providers are certified to meet common compliance standards, but if you have very specific needs, you may need to talk with the provider to confirm they can meet your compliance requirements. Hybrid cloud offers more control over regulatory compliance, allowing businesses to store sensitive data on-premises or in an offsite private cloud.

Cloud Complexity

Cloud complexity is an issue for hybrid and multicloud environments, but what is being managed is where the difference resides. Hybrid cloud involves managing public and private cloud infrastructure. Multicloud involves managing different public cloud provider platforms, APIs, and security settings.

Can a Hybrid Cloud be a Multicloud?

A hybrid cloud can incorporate multicloud elements if it includes multiple cloud environments, such as a combination of public and private clouds. However, multicloud specifically refers to the use of multiple public cloud services from different providers, so it is not accurate to consider all multiclouds as hybrid clouds. While a hybrid cloud may include public clouds, it is distinguished by the integration of on-premises or private cloud resources.

Why Do Companies Use Multicloud?

Companies use multicloud to escape vendor lock-in and improve flexibility and performance across cloud environments. This isn’t a great fit for companies that have legacy frameworks they can’t easily move to the cloud. However, for businesses looking to innovate, multicloud can be a great option.

Why Do Companies Use Hybrid Cloud?

Companies tend to use hybrid cloud when they are either not completely ready to move all of their workloads to the cloud, or when moving some workloads would require more effort than it is worth, but they still want to leverage the benefits of the cloud. Hybrid cloud can serve as a happy medium or a long-term solution for digital transformation in a company, allowing for more innovation and flexibility compared to on-premises frameworks.

Find the Right Cloud Strategy For You with Cloud Experts

Choosing between hybrid cloud and multicloud hinges on your unique business needs. Data sensitivity, scalability, compliance requirements, and budgetary limitations will determine the optimal solution. Need guidance in figuring out what configuration will work best for you? TierPoint’s cloud experts can help you choose the right mix of cloud platforms that will help you reach and exceed your digital transformation goals while keeping your financial constraints and regulatory requirements in mind.

Part of adopting the cloud is convincing your leadership that it’s time to modernize your IT infrastructure. The drivers could be network performance, on-premises data center costs, and more. Read our complimentary eBook to learn how to have those conversations.

The Cloud’s Importance in a Hybrid IT Strategy
https://www.tierpoint.com/blog/the-clouds-importance-in-a-hybrid-it-strategy/ (Wed, 03 Nov 2021)

The cloud has evolved greatly over the past few years. Businesses are leveraging a wide range of cloud services to meet customer needs and achieve digital transformation goals. The result is that most organizations combine a variety of cloud and non-cloud systems into hybrid IT environments.

In fact, nearly three-quarters of organizations have implemented or are planning to implement a hybrid IT environment, according to 451 Research. 

How the cloud and the hybrid IT model are evolving is the topic of TierPoint’s webinar, Why Hybrid IT Environments are a Must in 2022 and Beyond, moderated by TierPoint’s senior director of product research, Dave McKenny.

McKenny, along with Tara Kovaleski, a TierPoint solution architect, and Bryan O’Neal, director of product management for cloud solutions at TierPoint, shared their insights into the challenges in hybrid IT and what enterprise customers are looking for in cloud services. 

Priorities for enterprise cloud customers

Flexibility is a top priority of enterprise customers, they noted. 

“Enterprises are looking for solutions that combine some type of multi-tenant, public, SaaS [software-as-a-service], as well as dedicated and private solutions,” said Kovaleski. 

She noted that customers frequently need greater flexibility with their hybrid environments to meet a growing range of IT and business needs. 

“Different applications fit better in different environments. A very static type of application might fit better in a private, dedicated environment, whereas a dynamic, fluid application works well in the cloud,” she said. 

What is hybrid IT?

Traditionally, a hybrid IT environment might include public cloud services, often from multiple cloud providers, along with non-cloud on-premises infrastructure or co-located systems, and a hosted or on-premises private cloud. Different environments serve the different needs of the applications and the organization. 

The public cloud uses a multitenant cloud platform so that cloud resources are pooled and shared across multiple customers. A multitenant structure optimizes the usage of resources and improves cost-effectiveness for customers. The public cloud is also well suited for dynamic or highly distributed applications; however, the shared infrastructure can occasionally cause performance issues.

Alternatively, private clouds provide a single-tenant platform dedicated to one customer. Private clouds give more control to the customer, who is also responsible for the full cost and maintenance of the infrastructure. Customers with higher performance requirements or who must meet stringent data privacy and security laws often prefer a dedicated cloud infrastructure. 

However, private and public are no longer the only types of cloud. The development of new cloud models and technologies provides more options for IT organizations seeking to move applications to the cloud and achieve greater flexibility with a multicloud or hybrid infrastructure. 

Cloud innovation driving hybrid cloud environment flexibility

In the webcast, O’Neal, McKenny, and Kovaleski highlighted four developments in cloud computing that provide greater flexibility in hybrid environments.

Edge computing maximizes performance

For example, edge computing is a new model in cloud computing for moving content and data closer to the end-users and applications that need it. Media companies, for example, move video and other content to edge cloud locations closest to different geographical markets to reduce latency and use less bandwidth. 

IT organizations are leveraging edge computing in a variety of use cases, not just for media and video content. For example, companies can use the edge model to create local networks of smart devices in offices or create ecommerce systems specific to different international markets. 

“The edge is a key part of an organization’s success in expanding globally,” noted O’Neal. 

Public and private clouds merge

Major cloud providers such as Amazon and Google now offer software and services for creating public cloud infrastructure within customers’ own data centers. Solutions such as VMware’s vSphere, Amazon’s AWS Outposts, and Microsoft’s Azure Stack are examples of software aimed at bringing the advantages of the multi-tenant public cloud model to on-premises data centers. Azure Stack is a portfolio of products that extend Azure services and capabilities to other environments, such as a private data center, edge locations, or remote branch offices.

“A statistic from Gartner predicts that, by 2023, more than 10% of large enterprises will be using on-premises public cloud infrastructures within their own private data centers,” said McKenny. “That’s up from less than 1% in 2019.”

Similarly, TierPoint has a multitenant hosted private cloud that gives customers the security and performance of dedicated infrastructure but a multitenant structure to share IT resources (and costs) across a customer’s enterprise. 

Cloud expands to hardware

A growing percentage of IT shops are purchasing new technology through subscriptions, rather than making large investments in new hardware. 

“The subscription model is appealing for hardware as you don’t have to make a large capital purchase at the start of your five-year plan,” said Kovaleski. “Instead, you can focus on a much shorter window, and expand your hardware as needed.”

There are several options. The first cloud model to address hardware was the infrastructure-as-a-service (IaaS) model. IaaS provides access to a provider’s cloud-based infrastructure resources like storage and computing. However, IaaS is a packaged service that limits a customer’s control over the hardware. 

Bare-metal cloud services are another hardware services model, one that caters to customers who want a dedicated infrastructure they alone control. Bare-metal cloud services provide dedicated hardware resources via subscription. As the term implies, bare-metal hardware comes without any installed operating systems or virtualization infrastructure. Bare-metal subscriptions also spare customers the burden of maintaining and replacing old hardware.

The rise of software-defined infrastructure—software-defined networking, storage, etc.—provides even more flexibility in selecting and configuring hardware. A software-defined infrastructure, whether in the cloud or at an on-premises data center, is a virtual infrastructure with compute, storage, networking, and other infrastructure elements. 

More workloads made for the cloud

Several factors are increasing the range of workloads that can run in the cloud. For starters, the pandemic created a huge demand for online work applications that could be quickly implemented and easily scaled. Online collaboration tools, video conferencing, and virtual desktops became ubiquitous. 

Rising ransomware and other cyberattacks, as well as concern over climate disasters such as the fires in California, have driven up demand for cloud-based storage and disaster-recovery-as-a-service (DRaaS). Also, more developers are creating cloud-native applications as well as cloud-enabling legacy applications. There are fewer and fewer IT systems that can’t operate on cloud infrastructure, whether public or private.

“There are many more workloads today that are no-brainer candidates for the cloud, whether they’re software-as-a-service or mission-critical workloads,” said O’Neal. “This is a wake-up call to customers who’ve been entrenched in the on-premises world that there are many benefits to the cloud delivery model.” 

Learn more about hybrid IT

You can listen to the entire webinar here to learn more about what’s trending in today’s hybrid IT world and how the cloud can help you achieve your hybrid IT goals. 


3 Ways Data Center Interconnection Works for Your Business
https://www.tierpoint.com/blog/3-ways-data-center-interconnection-works-for-your-business/ (Tue, 12 Oct 2021)

Is data center interconnection the missing link in your data center strategy? Connecting your data centers with other facilities across the country can help you achieve your digital transformation and business goals and better serve your customers. If you’re looking to adopt a hybrid cloud strategy, improve business continuity or disaster recovery, or move your IT infrastructure from an onsite data center to a third-party provider, you stand to benefit from data center interconnection.

What is data center interconnection?

Data center interconnection is dedicated network connectivity that spans your data centers, permitting your apps, data, and users to connect reliably and quickly. With interconnection services, customers can connect between various facilities across a regional or nationwide network. If you work with a provider that offers data center connectivity, resources can become available closer to your users via private lines, ethernet, and wavelength services (waves). This decreases latency and improves your users’ internet experience. While the focus used to be on connecting first-tier markets, many centers can now directly connect to second- and third-tier markets as well.

At any of these locations that are interconnected, you’re also likely to have support available for all assets under one roof, including a colocation data center, cloud services, and network assets. Plus, when you’re looking for processing and coverage in specific geographies, you can narrow in on providers that have data centers closest to your users.  

How can data center interconnectivity benefit your business?

Edge computing enhances user experience

With edge computing, the close proximity of a data center to a user is what makes for the best user experience (UX): higher speeds and lower latency. While the service started in Tier 1 cities, those with the largest populations, software-defined networking helped extend edge computing to additional Tier 1 cities as well as Tier 2 cities and beyond.
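The physics behind the latency claim is easy to sketch. Light in optical fiber travels at roughly 200,000 km/s (about two-thirds the speed of light in a vacuum), so distance alone sets a floor on round-trip time, before any routing or processing overhead:

```python
# Back-of-the-envelope propagation delay: why proximity lowers latency.
# Light in optical fiber covers roughly 200 km per millisecond, one way.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time from distance alone (no routing overhead)."""
    return 2 * distance_km / FIBER_KM_PER_MS

# A user 50 km from an edge data center vs 4,000 km from a distant region:
nearby = round_trip_ms(50)     # 0.5 ms floor
distant = round_trip_ms(4000)  # 40.0 ms floor
```

Real-world latency is higher once switching and indirect fiber routes are added, but the gap between the two scenarios persists, which is the whole case for placing compute near users.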

Edge computing has been developing for decades. It started with content delivery networks (CDNs) in the 1990s and continued to take shape with peer-to-peer overlay networks (P2P) in the early 2000s, public clouds in the mid-2000s, and fog computing in the 2010s. These earlier technologies contributed to the eventual development of edge technology in different ways, including balancing workloads between machines and improving network distribution.  

The U.S. edge computing market is expected to reach $3.2 billion by 2025. With edge computing, businesses are better able to cater to a remote workforce, process data from IoT devices, and even reduce the compute load and physical weight of devices such as wearables and vehicles, since the heavy data processing happens nearby rather than on the device. The edge is where people and devices connect to the internet; it represents the location of all connected devices worldwide. When your business delivers computing to the edge, you’re serving your customers exactly where they are.

Disaster Recovery / Business Continuity improves resilience

Before choosing a data center provider, you should be asking questions around reliability and resiliency. Downtime can be expensive, and relying on a provider that doesn’t have contingency plans for outages and downtime can be costly for your business. 

Make sure you know the type of network connectivity provided, how resilient the center is, how data backup and disaster recovery are handled, and the types of cloud service or connectivity options available at the site, among other things. You can survive carrier outages unscathed if the center you’re working with is connected to other facilities, has buildings designed to withstand disaster, and has plans to offload data to another center should something unforeseen occur.

Hybrid solutions increase options for your workload deployments

Hybrid solutions include a combination of cloud and on-premises solutions, or multiple clouds off-premises. Expanding how your data is processed in this way can also keep you from feeling limited by cloud-based applications or an on-premises center. With 58% of companies already using or planning to use hybrid infrastructure in the near future, thinking about what hybrid may look like for you is a step in the right direction, and data center interconnection can lend a hand.

Data center interconnection providers may also offer cloud onramps to public cloud providers. Cloud onramps are dedicated cloud connectivity for the major public cloud services. Think of them as data center interconnection for the public cloud providers.

Achieve resilience and high performance with a data center provider

As a data center provider, TierPoint offers all of the interconnectivity businesses need to effectively run their IT environment. We offer a full suite of data centers across the U.S., so wherever your users are, we have them covered.

Are you wondering what building a data center on-premises vs. off-premises would cost? Check out our Data Center Build vs. Buy calculator. 

Is Edge Computing the Next Big Digital Infrastructure Trend?
https://www.tierpoint.com/blog/is-edge-computing-the-next-big-digital-infrastructure-trend/ (Tue, 16 Mar 2021)

The traditional IT infrastructure has evolved from a single, central data center to a connected constellation of services and devices. Those services are all dispersed across multiple cloud providers and platforms. However, this distribution of resources often faces one major obstacle: latency. Fortunately, edge computing is an infrastructure model aimed at boosting performance and reducing latency across widely distributed networks. We examine how edge computing is influencing digital infrastructure in 2021.

An edge computing overview

Edge computing is a model where information processing (data and computing) is physically located close to the things and people that produce or consume them.

Depending on the use case, an edge deployment may be anything from equipment in a colocation data center, a computer closet in a branch office, or an edge-configured virtual machine at a local cloud provider.

Before we discuss how edge computing is used by enterprises, here are the top five examples of industries innovating with edge computing:

  • Manufacturing
  • Transportation, logistics, and autonomous vehicles (and self-driving cars)
  • Healthcare
  • Media and entertainment
  • Retail

This list will likely grow, however. Any organization that holds virtual meetings, has remote workers using virtual desktop software, or runs performance-heavy applications, such as artificial intelligence (AI), machine learning (ML), and business analytics, over a network will benefit from an edge deployment.

Additionally, Internet of Things (IoT) devices, such as environmental monitors, factory floor robotics, or intelligent traffic controllers, will also need the localized processing capabilities and real-time communication that edge computing provides.

IDC Technology Spotlight: Key Trends Driving Enterprises Toward the Future of Digital Infrastructure in 2021

How enterprises use edge computing

Edge computing brings performance boosts and cuts costs for a range of current and future use cases. Some of the most common examples include:

Distributed workforces

Cloud computing was the first step toward a “work anywhere” model. However, remote users encountered slow and unpredictable bandwidth. By leveraging edge resources, businesses can potentially improve application performance and the overall user experience for remote workers.

Tracking equipment and assets

Many industries including manufacturing, construction, and oil and gas maintain expensive equipment in the field or on factory floors. They must keep track of the equipment’s location, condition, and current usage. Maintaining an up-to-date record depends on rapid communication.

Predictive equipment maintenance

Likewise, equipment and machinery need to be kept in working condition. An unexpected failure can cost a company lost productivity and, potentially, the failure to meet key deadlines.

Monitors can send an alert if a part is wearing out faster than expected or when the equipment needs a tune-up. By locating edge computing resources near the equipment, companies can have real-time updates on equipment performance.
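As a rough illustration of the kind of logic an edge node could run next to the equipment, the Python sketch below compares a rolling average of vibration readings against a wear threshold and flags an alert locally, without a round trip to a central data center. The threshold and readings are invented for the example.

```python
from collections import deque

class WearMonitor:
    """Flags an alert when the rolling average of readings exceeds a threshold."""

    def __init__(self, threshold: float, window: int = 5):
        self.threshold = threshold
        self.readings = deque(maxlen=window)  # keep only the last `window` readings

    def add_reading(self, vibration_mm_s: float) -> bool:
        """Record a vibration reading; return True if an alert should fire."""
        self.readings.append(vibration_mm_s)
        avg = sum(self.readings) / len(self.readings)
        return avg > self.threshold

monitor = WearMonitor(threshold=4.0)
normal = [monitor.add_reading(v) for v in (2.1, 2.3, 2.0)]  # healthy readings
worn = monitor.add_reading(10.5)  # spike pushes the rolling average past 4.0
```

Using a rolling average rather than a single reading keeps one noisy sample from firing a false alert, while a sustained rise still triggers quickly.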

Monitoring patients

Hospitals are increasingly using monitors and other smart devices to ensure the well-being of patients. Medical equipment and patient monitors are constantly producing alerts and data that must be analyzed for a quick response and, later, stored. An edge server can process patient data quickly and, because the data stays within the hospital network, without risking a breach of HIPAA regulations.

Staying compliant with regulations

Companies that must comply with regional and international consumer data regulations can leverage edge computing to ensure that sensitive consumer data stays within national or state borders. By keeping data in edge servers in the geographic locations of their customers, they can better comply with local data privacy and data sovereignty laws.
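One way to picture residency-aware placement is a simple routing table from customer jurisdiction to a compliant storage site. The region codes and edge site names below are hypothetical, and the mapping is a sketch of the idea rather than legal guidance:

```python
# Hypothetical jurisdiction -> edge storage site map; names invented for
# illustration. The point is that placement is decided by policy, in code.
RESIDENCY_MAP = {
    "DE": "edge-frankfurt",   # keep German customers' data in Germany
    "FR": "edge-paris",
    "US-CA": "edge-sanjose",  # keep California data in-state
}

def storage_location(customer_region: str) -> str:
    """Pick a storage site inside the customer's jurisdiction, or fail loudly."""
    try:
        return RESIDENCY_MAP[customer_region]
    except KeyError:
        raise ValueError(f"no compliant storage location for {customer_region!r}")

# A German customer's records land on the Frankfurt edge site:
site = storage_location("DE")
```

Failing loudly for unmapped regions is deliberate: silently defaulting to some central region is exactly the behavior that creates residency violations.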

New infrastructure technologies support the edge

Edge computing isn’t a standalone technology. Besides the cloud, two other important technologies that support the growth of edge development are software-defined infrastructure and hyperconverged infrastructure.

Software-defined infrastructure (SDI)

Software-defined infrastructure (SDI) is a composable architecture that allows developers to define IT infrastructure resources (storage, compute, networking, and other resources) using a software abstraction layer. With SDI, a developer can break down resources into individual edge computing resources, located where they are needed most, and reallocate them as workloads and other needs change.

SDI provides greater flexibility than fixed or static hardware-based resources, which must be physically replaced as needs change. Allocating resources using SDI can be done in minutes as compared to the days or weeks required with a traditional hardware procurement cycle.
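The software-defined idea can be sketched as “infrastructure as data”: a desired state declared in software and reconciled automatically, rather than hardware racked by hand. The resource schema and site names below are invented for illustration:

```python
# A toy reconciliation loop in the software-defined style: declare the
# desired allocation as data, then compute the changes needed to get there.
desired_state = {
    "edge-dallas": {"vcpus": 8, "storage_gb": 500},
    "edge-boston": {"vcpus": 16, "storage_gb": 250},
}

def reconcile(current: dict, desired: dict) -> list:
    """Compute the changes needed to move current allocations to desired."""
    actions = []
    for site, spec in desired.items():
        if current.get(site) != spec:
            actions.append(f"apply {spec} at {site}")
    for site in current:
        if site not in desired:
            actions.append(f"release resources at {site}")
    return actions

# Reallocating as workloads change is just a matter of declaring a new
# desired_state and letting software compute the delta:
changes = reconcile({"edge-dallas": {"vcpus": 4, "storage_gb": 500}}, desired_state)
```

Because the allocation lives in data, scaling a site up, adding a site, or releasing one is a minutes-long software change instead of a hardware procurement cycle.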

Hyperconverged infrastructure (HCI)

Hyperconverged infrastructure (HCI) is a related software-based architecture that tightly integrates IT infrastructure resources (storage, compute, networking, virtualization, etc.) into a single plug-and-play appliance or software stack. A single HCI instance might serve as an edge “data center” or be clustered with other HCI instances. HCI can take advantage of software-defined infrastructure and can have its configuration automated and managed remotely.

How your digital infrastructure can gain edge computing benefits

Most edge deployments require customization due to the unique needs of different businesses. While there are many edge solutions coming onto the market, they must still be customized to fit different use cases. Customization requires substantial experience in cloud services, SDI, and networking. As IDC notes in its Spotlight, even small modifications to an edge compute stack may significantly change its ability to serve a particular use case.

A collaboration with an experienced cloud provider, hosting company, or professional service provider can supply the expertise to ensure a successful edge deployment. In addition, outside partners can provide physical support such as regional data centers, colocation facilities, and links to major cloud platform providers.

Cloud service providers with edge data center experience, like TierPoint, can help with edge planning and deployment as well as with the overall modernization of your digital infrastructure.

IDC Technology Spotlight Key Trends Driving Enterprises Toward the Future of Digital Infrastructure in 2021

]]>
How SaaS Companies Improve Customer Experience with Edge Computing https://www.tierpoint.com/blog/how-saas-companies-improve-customer-experience-with-edge-computing/ Tue, 11 Aug 2020 18:29:56 +0000 https://tierpointdev.wpengine.com/blog/how-saas-companies-improve-customer-experience-with-edge-computing/ Software as a Service (SaaS) has dramatically transformed the customer experience for many business application users. No longer do they have to plunk down thousands to millions of dollars for a piece of software that quickly becomes obsolete.

With SaaS, they simply subscribe to the number of licenses they need. Onboarding more employees? No problem! They can subscribe to new licenses in just minutes. Scaling back? They can unsubscribe almost as quickly. Perhaps best of all, users of SaaS applications can often keep their applications up to date with the click of a button.

SaaS growth on the rise

These are just some of the reasons many enterprises are looking to implement a SaaS-only model in their organizations. In its 2020 State of the Cloud report, Flexera found that 43% of respondents named ‘moving on-prem software to SaaS’ a top priority for 2020, up from 29% in 2019.

That’s just a look at the growth in SaaS applications designed for the enterprise. It doesn’t really consider the growing number of point solutions designed for the consumer or small to mid-sized businesses. A list of the 50 largest publicly held SaaS companies as of January 2020 includes a number of names that offer products for use outside the enterprise. SaaS applications are big business.

Unfortunately, there’s a dark side to SaaS, too. Along with an explosion in new SaaS companies, the market is also seeing a significant amount of churn. According to Autopsy.com (a site focused on the death of startups, not people), as much as 92% of startups fail within three years.

If SaaS is about an improved user experience, then SaaS success will require developers to keep focusing on and improving that experience. Many SaaS developers are doing just that with a concept called edge computing.

How the edge supports SaaS companies

SaaS developers deliver their applications through the cloud. These cloud resources (hardware and software) are owned and maintained by the SaaS company or a third party like TierPoint. Data is sent back and forth, usually over the internet, between the user’s console and the SaaS infrastructure.

But what if the company’s SaaS infrastructure is in New York and the customer is in California? That can have an impact on the user’s experience because distance can create lag time between a request from the user’s system and a response from the SaaS infrastructure. For some applications, it may not matter much, but for others, e.g., POS or customer service systems, even a few seconds of lag time can create a negative experience. For more advanced applications, such as robotic surgery or self-driving cars, lag time is out of the question.

Advancements like 5G will have an impact, but this impact will be short-lived in our data-driven world as applications advance to take advantage of the increased bandwidth. Edge computing is about reducing that lag time by moving the SaaS infrastructure closer to the customer.
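The physics behind that lag time is easy to estimate. Light in optical fiber travels at roughly two-thirds its vacuum speed, about 200 km per millisecond, so a rough lower bound on round-trip time (ignoring routing, queuing, and server processing) can be sketched in a few lines of Python:

```python
FIBER_KM_PER_MS = 200  # light covers ~200 km per millisecond in fiber (~2/3 c)

def min_round_trip_ms(distance_km):
    """Best-case propagation delay; real round trips add routing overhead."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(min_round_trip_ms(4000))  # ~NY to CA: roughly 40 ms before any processing
print(min_round_trip_ms(80))    # ~50-mile edge site: well under 1 ms
```

A cross-country round trip starts at roughly 40 ms before the server does any work at all, while a user within about 50 miles of an edge site sees under a millisecond of propagation delay, which is why proximity matters so much for latency-sensitive applications.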

Of course, if you’re a SaaS company serving multiple markets, the last thing you want to do is focus your efforts on setting up multiple data center locations in multiple markets across the country. Most SaaS companies would rather focus on creating great applications.

That’s where the right third-party providers can help. For example, TierPoint operates a network of forty data centers across the U.S. A company based in New York City might house their corporate systems in our Hawthorne data center and then house their customers’ applications and data in multiple Midwestern data centers closer to their customer base, e.g., Chicago, Oklahoma City, and Little Rock.

The ‘need for speed’ with today’s applications is driving tremendous growth in edge data centers. MarketWatch predicts that investments in edge computing will grow 27% annually through 2023. Bell Labs also predicts that 60% of all servers will be housed at the edge by 2025. That’s a tremendous move away from the centralized data centers of the past.

Edge computing is about more than just latency

For SaaS companies, working with an edge provider is about more than just reducing latency to improve the customer experience. Edge computing can also reduce overall costs and allow you to pass on those savings to your customers or improve your bottom line. One of the most obvious ways we can reduce costs is by allowing you to free your organization from the overhead of maintaining your own systems.

Also read: The Strategic Guide to Edge Computing

Not having to maintain your data center infrastructure can also help you increase organizational agility. SaaS companies often follow a continuous integration/continuous delivery cycle that allows them to deliver new features and applications faster than ever. We enable this model by focusing on your infrastructure while you focus on your applications.

If your customers use your applications to store or handle sensitive data, security is no doubt one of their concerns. Keeping up with the ever-changing cybersecurity threat landscape can be hard for the SaaS company focusing on delivering great applications. We can manage the security of your infrastructure in our edge data centers, freeing you up from one more worry that takes you away from a focus on your customers.

Last, but certainly not least, organizations that manage their own data centers may be more susceptible to unplanned downtime – and that can affect the customer experience even more than latency. A managed service provider can keep an eye on your edge data center and address any performance issues before they’re noticed by your customers. They can also help you develop a disaster recovery strategy that keeps your customers up and running no matter what mother nature (or human nature) throws your way.

Also read: Key Considerations for Edge Computing Deployments

Ready to improve your customer experience?

If you’d like to learn more about edge computing and how it can help you create an even better customer experience, reach out to us today. One of our advisors would be happy to talk with you about your data center challenges and how we can help solve them.

Executives Guide to Edge Computing [white paper]

]]>
Five Examples of Industries Innovating with Edge Computing https://www.tierpoint.com/blog/five-examples-of-industries-innovating-with-edge-computing/ Thu, 04 Jun 2020 20:40:21 +0000 https://tierpointdev.wpengine.com/blog/five-examples-of-industries-innovating-with-edge-computing/ Organizations process massive amounts of data every day, in applications that often require near-real-time response rates. Innovations in IoT, mobile computing, and AI are driving demand for very low latency connections. However, the traditional network model of a central data center serving distant offices and end-users can’t keep up with this need for speed. Enter edge computing, a networking model that moves data and compute power close to where it’s needed: to the edge of the network, near the devices and people that use it. In our post, we dive into some industry edge computing examples and how those industries benefit from the technology.

A quick edge computing overview

As Dominic Romeo, TierPoint’s director of product management, explained, “When you’re inside a 50-mile radius, latencies get really, really low. The time it takes for the end user to send a command to the server and for the server to come back with a response is in the neighborhood of single-digit milliseconds versus double- or triple-digit milliseconds of round-trip time.”

What is edge computing?

Edge computing is a model where information processing (data and computing) is physically located close to the things and people that produce or consume it.

Edge computing enables:

  • Near real-time response rates for applications in industrial robotics, patient care, finance, and customer service.
  • Lower costs. Sending data back and forth over a long distance can be expensive, so moving it closer to users offers a cheaper alternative.

The proliferation of smart devices is a major driver of edge adoption. Many smart devices use artificial intelligence to get better at their jobs. An industrial robot, for instance, needs AI to react to changes in the production line. The closer the robot is to its AI brain, the faster it can operate. Autonomous vehicles need split-second timing to assimilate traffic data and react.

Executives Guide to Edge Computing [white paper]

Edge computing examples by industry

An August 2019 report by research firm ‘MarketsandMarkets’ projects the global edge computing market to grow from $2.8 billion in 2019 to $9 billion by 2024. Companies in industries from oil and gas to online gaming are leveraging edge computing to improve their products and services, cut costs, and increase market share. To understand the current and future potential for edge computing, read about the following examples of edge applications in five major industries:

Manufacturing

Edge computing is enabling smart devices such as machine controls, environmental sensors, asset trackers, and assembly line robots to operate with greater speed and efficiency. Smart manufacturing devices rely on a tight feedback loop between input, analysis, and output to provide timely responses. For example, quality control monitors or equipment sensors must make rapid and accurate assessments. When they don’t, products may be rejected or recalled further down the line, or equipment may fail and cause lengthy delays in production.

Storage costs are another factor. Many manufacturers collect huge amounts of data from monitors, sensors, production line equipment, shipment trackers, and so forth. Processing and storing this data centrally is far more expensive than keeping it near the equipment that generates and consumes it.

Also read: How Edge Computing Aids Modern Manufacturing

Transportation and Logistics

While autonomous vehicles are a commonly known application of edge computing, there are many other uses. Management consulting firm McKinsey & Co. has identified two dozen ways that edge computing can improve operations in travel, transportation, and logistics, including condition-based monitoring of transportation equipment, equipment tracking, logistics routing optimization, improved flight navigation, after-sales service of vehicles, and location-based advertising on public transport. Edge computers might be placed in garages, at airports, on board vehicles, on planes, and in video displays on public transport.

Healthcare

Edge computing is spurring a range of innovations in healthcare by enabling smarter, faster equipment. For example, hospitals can optimize equipment maintenance, track drug distribution, monitor patient conditions in real time, and manage nursing efficiency through mobile devices. AI-assisted surgical robots can enable remote surgery and help on-site surgeons improve their success rates. Likewise, medical devices that collect patient data may also provide diagnosis and treatment recommendations.

Media and entertainment

Content delivery networks were some of the first applications of the edge computing idea: rich content such as videos and games hosted on content servers close to major consumer markets reduced bandwidth demands and improved performance. Edge computing is a similar concept, with the addition of a compute component for streaming media, online gaming, or video-heavy social media sites. Coupled with 5G networks, edge servers enable mobile users to get smoother streaming video without the need for buffering. Media companies can leverage edge capacity to collect and analyze data on customers to sell them more services and products.

Customer engagement

Customer service and marketing are increasingly personalized and automated. Companies analyze volumes of data on consumer behavior so they can provide customized services, both online and in brick-and-mortar retail outlets. Coupled with edge computing, retail stores can track a consumer’s route through the store and use that data to redesign store layouts or create tailored in-store advertisements. Augmented reality apps supported by edge servers can let customers “try on” clothes without physically putting them on. Those applications demand a lot of data and processing power, making them ideal use cases for edge computing.

Edge computing will also be adopted by other industries

There is a multitude of uses for edge computing in other industries too, including utilities, energy, semiconductor, government, telecommunications, automotive, education, and more. Within the enterprise, edge computing will enable faster connections to the cloud and improved response times by offloading data analysis and heavy content files to edge servers.

To accommodate the rising demand, cloud services providers, data center facilities, and network companies are expanding their distributed edge computing infrastructures. Many, such as TierPoint, already have regional networks of data centers that can support edge computing.

TierPoint’s 40-plus data centers with edge services deliver reduced latency and powerful local computing capacity: robust local networking and fast last-mile connectivity for content, plus local processing for IoT and mobile applications.

To learn more about edge computing, download TierPoint’s Strategic Guide to Edge Computing.

]]>
4 Technologies That Will Change Healthcare Forever https://www.tierpoint.com/blog/4-technologies-that-will-change-healthcare-forever/ Fri, 04 Oct 2019 15:25:59 +0000 https://tierpointdev.wpengine.com/blog/4-technologies-that-will-change-healthcare-forever/ For decades now, the healthcare industry has been undergoing rapid change. New technologies have made everything from R&D, to diagnostics, to appointment setting much easier, allowing providers to treat more people more effectively than ever. As fast as things have changed in the recent past, change is about to accelerate to light speed as four technologies converge to disrupt the industry: the IoT, 5G, AI, and edge computing. In this post, we take a look at current adoption trends for each of these technologies and how they will work together to impact the future of healthcare.

#1 Growth in healthcare IoT

IoT (Internet of Things) is taking off in a big way across numerous industries, including healthcare. When Forbes and Intel teamed up to study the impact of the IoT on seven different industries, they found that 55% of healthcare executives already had robust IoT initiatives underway.

Healthcare IoT covers many different devices: intelligent, connected equipment; tablets and smartphones used by mobile providers; and wearable technologies, from smartbands (e.g., Fitbit, Apple Watch, and Garmin Activity Monitors) to glucose monitors and insulin pumps.

It’s hard to say how many devices there are in the healthcare IoT. Some estimates approach 20 billion or more. Worldwide shipments of wearable devices reached 153.5 million in the fourth quarter of 2020. I think it’s safe to say that the impact of these devices on the industry will be significant. Here are just a few of the ways the IoT may disrupt the healthcare industry.

  • Personal monitoring devices encourage people to be more “health-conscious” and responsible for their own well-being. Insurance companies are beginning to encourage this with discounts for subscribers willing to use these devices to follow a healthy lifestyle.
  • Managing chronic illnesses will become easier and more cost-effective. Instead of requiring frequent trips to a doctor’s office, doctors will be able to remotely monitor a patient’s condition for changes. Telemedicine is expected to grow at a CAGR of 16.5% from 2017 to 2023.
  • Wearable devices can detect warning signs earlier, allowing people to seek treatment sooner and improving outcomes.
  • Remote patient monitoring (RPM) can save money for an industry struggling to hold down costs. Studies estimate that remote monitoring of chronic conditions could save as much as $200 billion over the next 25 years in the US alone, and that RPM can reduce costs for elder care in rural areas by 25%.

#2 5G networks will enhance overall healthcare experience

Most people reading this probably have at least a basic understanding of 5G’s advantages. We remember what happened when our phones went from using 3G networks to 4G networks. Suddenly we could stream Netflix and NFL football to our smartphones. 5G is set to have a much broader impact on healthcare.

“More than just speed, 5G is also about capacity. Sometimes those get confused with each other, so to clarify, let’s revisit the analogy of a car on the highway. If 5G is that car going 100 miles an hour, then 4G is like that car going 90 miles per hour or maybe even 80 miles per hour. So, there is a difference in speed, but if 4G is like a four-lane highway, 5G is like a 16-lane highway.”

– Dominic Romeo, TierPoint Director of Product Management

Read the interview with Dominic Romeo: How Will 5G Affect Edge Computing?

With the proliferation of healthcare IoT devices, we’re going to need that capacity. It’s no longer just 10 million people all binge-watching Game of Thrones at the same time. Now we have physicians, nurses, and other healthcare providers monitoring patients and offering telemedicine services to as many as a dozen patients a day. 5G’s 16-lane super-highway can reduce the lag times that threaten to impact both the patient experience and quality of care. (Arguably more important to our overall well-being than suffering through a few seconds of GOT buffering.)

#3 Artificial Intelligence (AI) stands out in healthcare

Before getting into the impact of AI on healthcare, we need to differentiate it from a related technology: analytics. Analytics is the use of data to inform decision-making. Given the proliferation of data gathered by wearable devices, we could have included analytics in our list of industry disruptors. However, because analytics is a component of AI, we decided to focus on the more recent, at least from a commercial perspective, innovation.

As computing power and the amount of data available increase exponentially, the line between AI and analytics can look rather blurry. The main difference is that analytics analyzes data according to a defined model but takes no action. For example, a doctor may gather ever-increasing amounts of data about a patient and use a program to analyze patterns in the data and advise her about a possible course of treatment. But she still makes the final decision on how to proceed.

With AI, the doctor is removed from the equation (at least theoretically) in a couple of ways. First, AI uses machine learning to adjust its conclusions, much as the doctor might use the benefit of her experience to interpret the data differently than the program would. Second, full AI can take action.

As someone once said, “We’re all just an experiment of one.” How one person’s vital signs should be interpreted and treated is unique to his or her physiology. For example, a blood pressure of 110/55 could be completely normal in one person but indicate serious dehydration in another. Wearable technology fitted with AI could monitor the individual’s vital signs, analyze them, and learn from them to adjust the model of what “optimal health” looks like for that person. Not since the days of house calls have flesh and bone doctors had the time to get to know their patients at this level. (Maybe not even then.)

Second, when vital signs go awry, AI can take action by advising the individual about the actions they should take to correct matters. If a chronic condition, such as diabetes, is being monitored, the system may even administer treatment.
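The per-person model of “optimal health” described above can be sketched with a toy anomaly detector. This is illustrative Python only, not the logic of any real medical device; the window size, threshold, and readings are invented for the sketch:

```python
import statistics

class VitalsMonitor:
    """Toy sketch: learn one person's baseline and flag large deviations."""

    def __init__(self, window=50, threshold=3.0):
        self.readings = []
        self.window = window      # how much recent history defines "normal"
        self.threshold = threshold  # deviations beyond N std devs are flagged

    def observe(self, value):
        # Keep a rolling window so the personal baseline adapts over time.
        self.readings.append(value)
        self.readings = self.readings[-self.window:]

    def is_anomalous(self, value):
        if len(self.readings) < 10:
            return False  # not enough history to define "normal" yet
        mean = statistics.mean(self.readings)
        stdev = statistics.stdev(self.readings)
        return stdev > 0 and abs(value - mean) > self.threshold * stdev

monitor = VitalsMonitor()
for bpm in [62, 64, 61, 63, 60, 62, 65, 61, 63, 62]:
    monitor.observe(bpm)
print(monitor.is_anomalous(63))   # within this person's learned normal range
print(monitor.is_anomalous(140))  # a large deviation from the baseline
```

The same reading can be normal for one person and anomalous for another, because the baseline is learned from each individual's own history rather than a population-wide reference range.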

#4 Edge computing drives new healthcare IT

Finally, edge computing ties the previous three disruptive technologies together. Edge computing simply means moving the computing power closer to the end-user. The main benefit of edge computing is that it significantly reduces latencies – what most people think of as “wait time.”

Let’s say you suffer from an irregular heartbeat, so you wear a device that monitors your heart rhythms. Contrary to what the commercials would have you believe, heart rhythms can be fairly irregular without indicating a serious condition. If that data had to travel from your device to a central data center for processing, it would severely impact your experience and ability to leverage the data. If it were a serious condition, it might even be too late by the time you got the results. Edge computing puts the ability to analyze the data on the device itself, allowing you to get the results much faster.

That’s a pretty simple example of edge computing in action. As AI becomes more commonplace, edge computing will become even more critical in allowing intelligent systems to analyze data, reinterpret models, and take action all within a matter of seconds, if not milliseconds, potentially saving lives.

Is your healthcare business transforming?

Healthcare industry technology and practices are evolving, but each business is different. We can help you discover the right technology to power your digital transformation. Contact us today to learn more.

Want to learn more? Read our complimentary eBook on Realizing the Potential of Digital Transformation in Healthcare.

Get the eBook: Realizing the Potential of Digital Transformation in Healthcare

]]>
BraveIT Spotlight: Edge Location Troubleshooting & Security https://www.tierpoint.com/blog/braveit-sponsor-spotlight-edge-location-troubleshooting-security/ Wed, 28 Aug 2019 15:22:06 +0000 https://tierpointdev.wpengine.com/blog/braveit-sponsor-spotlight-edge-location-troubleshooting-security/ If you’ve ever been without a lunch and started experiencing hunger pangs as you wrap up the late-morning meeting that is just one in a series of commitments filling your day, and then discovered that your lengthy noon meeting was just cancelled, you know the sense of relief that time freedom (and a Taco Bell run) can provide.

Yet, as an IT manager, this is the type of time crunch you are likely trying to navigate on a regular basis. Supporting the ever-increasing global demand for data has led to the proliferation of edge computing, with networks expanding across geographies and challenging IT teams.

Whether you manage a team within local governments, school districts, or an enterprise, effective time management is a challenge often compounded by limited resources. Finding time for priority projects (or a Chili Cheese Burrito) is essential to business continuity and success.

Fortunately, there are infrastructure solutions that eliminate the time wasted by your IT personnel on troubleshooting edge locations. With the right solution, your team will no longer need to drive from site to site to power down servers and perform other simple troubleshooting.

Serial consoles allow IT teams to securely manage in-band and out-of-band network devices remotely, which not only saves on travel time but also minimizes the need for personnel to intermittently access your IT facilities, a practice that creates a security risk.

“It’s as if you are standing in front of the device with a keyboard and monitor,” said Craig Pennington, vice president of European operations for NTT Europe Online. “It’s immediate and that greatly improves our response time to get onto a server and troubleshoot issues. Instead of having to create a ticket, gather as much information as we can, and then pass it on to an on-site data center technician, we can take care of it centrally in minutes.”

The Vertiv™ Avocent® ACS 800 serial console enables this type of centralized management by connecting to your infrastructure support equipment including rack power distribution units (rPDUs), single-phase uninterruptible power supply (UPS) systems, or any serial-based products that you may connect using the four available USB ports. The console ultimately gives you access and control via a single, consolidated view. With this dashboard you can easily see errors and issues that require attention and even cycle equipment on and off without leaving your office.

Reduce Security Risks Even with Network Growth

Couple the growth of the Internet of Things (IoT) with Gartner’s prediction of nearly 21 billion connected devices by 2020, and it’s no secret that businesses are becoming more digital and adopting new business models that could create critical system vulnerabilities. Every device that connects to your network is a possible access point for well-intentioned or malicious activity.

Adding to the vulnerability is the continued strain on human resources. The IT teams tasked with managing these devices are not growing. In fact, the recent Data Center 2025: Closer to the Edge report shows a potential loss of 16 percent of the data center industry to retirement over the next six years. Again, the right type of serial console can help ease this burden.

The zero-touch provisioning of the Avocent® ACS 800 allows your IT team to update firmware and implement custom configurations easily, incrementally, and completely automatically. This console server lets you deter malicious activity and safeguard your network by maintaining the most up-to-date firmware across your infrastructure.

Maintain Connectivity Even with a Down Network

Network problems aren’t always due to a cyberattack. With typical support systems, a simple misconfiguration can render your network inaccessible. To avoid this type of human error, highly available networks rely on IT management systems that allow device access anytime, from anywhere, even if the connected network has failed.

Vertiv recognized this critical need when designing the Avocent® ACS 800, which allows you to still view and maintain your equipment’s status from anywhere in the world by connecting an external 4G LTE cellular router.

The Avocent® ACS 800 provides remote access to connected systems over the network or a telephone line, as well as wireless access via a 3G/4G LTE modem. In other words, with the right serial console, you have the tools to access your network, wherever and whenever.

Choose Wisely to Take the Trouble out of Troubleshooting

The complexity of today’s data center topologies and the industry trends that are seemingly intensifying the pressure already placed on IT teams can make securing and managing your network edge quite challenging, but fortunately as an IT manager, you have the ability to help yourself through smart infrastructure choices like the Avocent® ACS 800 serial console.

Having in-band and out-of-band network remote management capabilities enables centralized, secure, and timely troubleshooting that can eliminate the operational inefficiencies of your IT personnel, giving them back the time needed to focus on more important business priorities or to pick up a Taco Party Pack for the team.

BraveIT Spotlights are guest blog posts from our 2019 BraveIT sponsors. Vertiv is a global leader in designing, building and servicing critical infrastructure for data centers, communication networks and commercial/industrial industries.

See More at BraveIT 2019

TierPoint’s BraveIT conference is an interactive, thought leadership and networking event designed for the modern IT professional. The 2019 BraveIT conference will take place September 19 in New York City, with a variety of events, activities and speakers. You can see the full agenda, as well as register for BraveIT at BraveIT 2019.

Learn more about the future of edge computing at BraveIT in NYC on 9/19/19. Register Now.

]]>
The Future of IT Security: The Good, the Bad, and the Ugly https://www.tierpoint.com/blog/the-future-of-it-security-the-good-the-bad-and-the-ugly/ Tue, 20 Aug 2019 19:09:30 +0000 https://tierpointdev.wpengine.com/blog/the-future-of-it-security-the-good-the-bad-and-the-ugly/ It comes as no surprise to anyone that the computing landscape is changing rapidly. The number of edge devices connected to the internet is growing exponentially. Industrial automation and AI are driving demand for lower latency, mostly made possible by 5G and edge computing. Internally, employees are increasingly mobile, accessing home office systems from a vast array of devices, from wherever they happen to be.

These changes are good for business, but they also have a dramatic impact on the IT security threat landscape. Paul Mazzucco is TierPoint’s Chief Security Officer and a veteran of the IT security market. We asked him to paint a picture for us of where we are today, where the business of IT security is headed, and how business will adapt to these changes down the line.

The Future of IT Security: The Bad

Interviewer: Paul, we called this interview ‘the good, the bad, and the ugly,’ but let’s start with ‘the bad’. What is ‘the bad’ in the context of the future state of IT security?

Mazzucco: With the rise of 5G, we’re seeing a real push to move workloads as close as possible to the IoT edge to remove the latency and other inefficiencies created by having to push data back to a centralized computing center. Now eventually, that data is going to need somewhere to live and be stored, but IT leaders need to be realistic about this and realize that not everything is going to live in their data center.

The challenge with this from an IT security perspective is that it creates a much larger, much less secure attack surface. Most of these workloads are processed at the application layer, and they bypass the typical network security protocols that you’d find in a centralized data center. Unfortunately, upwards of 70% of edge devices don’t require authentication for third-party APIs, and more than 60% don’t encrypt that API data natively. That adds to the speed and efficiency of the application, but it amplifies the security concerns.

Interviewer: How big is this issue, and how do you see cybercriminals exploiting it?

Mazzucco: When the IoT first started out, the estimates were that there would be roughly 75 billion connected devices by 2025, mostly consumer-related devices such as your home security cameras, connected doorbells, and large appliances connected to the internet. Now, that’s a lot of devices, but the estimates today are somewhere in the 200-300 billion range as the idea of an Industrial Internet of Things has started to take off.

When Mirai hit in 2016, we started to see the potential scope of the security threat created by edge computing. When the attack traffic was analyzed, investigators found that Mirai exploited 61 default username and password combinations on industrial-type devices that still used factory-set credentials. This allowed Mirai to create a botnet that led to what was at the time the largest DDoS attack on record.

Mirai made it abundantly clear that the IoT botnets were not going to just attack home devices with minimal security. Hackers were going to go after industrial devices as well and in a big way. They know that people don’t change the default passwords on their devices or they use the same passwords across devices. These devices make an easy target.
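The default-credential weakness described above is straightforward to audit for. A minimal defensive sketch, using a hypothetical sample blocklist (Mirai’s actual list contained 61 such pairs) and a hypothetical device inventory:

```python
# A tiny illustrative sample of factory-default credential pairs.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
    ("user", "user"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Flag devices still configured with a known factory-default pair."""
    return (username, password) in DEFAULT_CREDENTIALS

# Audit a hypothetical device inventory: (device name, username, password).
devices = [
    ("cam-01", "admin", "admin"),
    ("cam-02", "admin", "x7!fQ2p"),
]
flagged = [name for name, user, pw in devices if uses_default_credentials(user, pw)]
print(flagged)  # devices that still need their credentials rotated
```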

And, of course, the growing IoT is going to be even more attractive as time goes on. Since the introduction of 5G, both public and private sector organizations will look to internet-connected devices to improve efficiencies. As 5G becomes more widely available, this emphasis on connected industrial devices will increase, and cybercriminals will have an even larger attack surface available to infiltrate, including essential infrastructure such as hospitals, buildings, shipping, energy, and more.

The Future of IT Security: The Ugly

Interviewer: Now that we know ‘the bad,’ what’s ‘the ugly’?

Mazzucco: There are botnets on the dark web that make Mirai look tame. Radware, one of our business partners, discovered what they called the Zyklon botnet. It had the ability to launch multiple types of attacks and malware contamination at the same time. It could do HTTP flood attacks, TCP flood attacks, UDP flood attacks, and SYN flood attacks, and deliver malware payloads capable of understanding and evading cloud-based inspection.

So, for example, the HTTP component of the botnet could look at start-up files, understand what sort of malware protections you had, and try to bypass those. The same botnet allowed browser password and FTP password sniffing and could go in and find license keys installed on your infrastructure. It had email password recovery capabilities, and it encrypted its own communications back to its command and control servers.

You know how much it costs? $75 to buy it on the dark web. Tools based on this same basic building-block infrastructure have gotten more and more sophisticated, and they’re now in the hands of pretty much anyone who wants them.

The Future of IT Security: The Good

Interviewer: Please tell us there’s an upside to this story. Is there a good?

Mazzucco: While there’s no doubt in my mind that cybercriminals have the upper hand right now, I’m hopeful that we’re going to eventually figure this out with artificial intelligence and machine learning. But, it will be a real battle. 51% of the internet traffic right now is made up of bots – bad and good. It’ll all come down to how fast good bots can use machine learning to make changes to the infrastructure to thwart the bad bots that are using machine learning to try and bypass the security measures in place.

The good news is that there’s a huge commercial aspect to this. A lot of companies have a vested interest in creating these protection protocols and selling them to the commercial market and the government market in order to try and keep these bad bots at bay.

Eventually, we expect to get to the point where we will have the ability to autonomously sniff this edge and have an advanced understanding of packets moving through this edge infrastructure. 5G will contribute to that. So as machine learning and these pieces get stronger, I’m hopeful we’re going to have edge computing protections that are much more efficient and autonomous, and we won’t have to worry so much about the internal devices.

How Businesses Should Adapt to the Changing Threat Landscape

Interviewer: If you were to give one piece of advice to business leaders to help them protect their systems and data today, what would that be?

Mazzucco: They need to adopt multiple protocols across their security stack right now. This includes the entire fabric of their infrastructure and not just the endpoints themselves. For example, a company might contract for 200 servers, 500 laptops, 200 firewalls, and so on. They create their network and hope that it’s protected. But in reality, probably some 90% of those firewalls never get updated, and the endpoints don’t get patched.

Interviewer: Can you put a finer point on the need for patch management? 

Mazzucco: That’s easy. Once a month, Microsoft releases a roadmap for infrastructure vulnerabilities. Within three to four days of a vulnerability being announced, an exploit is available on the dark web. Cybercriminals take advantage of the fact that the vast majority of companies have poor patch management practices.
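The three-to-four-day exposure window Mazzucco describes is why automated patch-level checks matter. A minimal sketch of comparing an installed build against an advisory’s minimum patched release; the version numbers here are hypothetical:

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of integers."""
    return tuple(int(part) for part in v.split("."))

def is_patched(installed: str, minimum_patched: str) -> bool:
    """True if the installed build is at or above the patched release."""
    return parse_version(installed) >= parse_version(minimum_patched)

# Check a hypothetical inventory against an advisory's fixed version:
print(is_patched("10.0.19041", "10.0.19042"))  # below the fix -- still exposed
print(is_patched("10.0.19044", "10.0.19042"))  # at or above the fix
```

In practice a patch-management tool does this across the whole fleet, but the core comparison is no more complicated than this.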

Of course, larger companies hopefully have more well-established patch management practices, and any company that pays for security monitoring may also be paying for patch management. But again, it’s not just how you protect your laptops and servers. It has to be a much broader focus on your entire infrastructure and the larger attack surface created by 5G and the IoT.

Understand the Threats and Find a Managed Security Provider

With the constant evolution of cyberthreats, IT organizations need to have a good understanding of the threat landscape and a plan to protect their vital data and applications. Some organizations, understandably, need help staying up to date on and ahead of these threats.

As an IT security services provider, we assist our clients with the development, implementation and management of comprehensive IT security strategies. Contact us today to learn more and see how we can help you.

You May Also Be Interested In

3 Ingredients for an Effective IT Security Policy

Strategic Guide to IT Security

Edge Computing and Artificial Intelligence: The Need for Speed

https://www.tierpoint.com/blog/edge-computing-artificial-intelligence-need-for-speed/ | Thu, 25 Jul 2019

AI, or artificial intelligence, is the simulation of human intelligence by machines. Through the almost instant analysis of multiple data points – sometimes millions of them – systems can mimic a human response. In the case of sophisticated AI chatbots, for example, you might not know when you’re talking to a human and when you’re talking to a machine. Edge computing can help take that experience even further.

Consumer Response to AI is Changing

For everyday business use, chatbots are one of the most frequently discussed potential applications of AI, and consumer response to their use is changing rapidly. In a recent study, 62% of respondents said they’d be willing to use an online chatbot to communicate with a business or brand.

When you think about it, this makes sense. Messaging with a support chatbot isn’t all that different from messaging with a human. When we message a support line, we’re not expecting to make an emotional connection with whoever – or whatever – is on the other side. As long as we get the information we’re looking for quickly, fewer and fewer of us feel the need to get it from an actual human.

The rise in the use of personal assistant AI is one indicator of just how willing we are to talk to a computer. More than half (52%) of smartphone owners already use Siri or other voice-assist AI applications.

However, in the study mentioned above, this willingness to interact with a machine came with some notable caveats. A majority (61%) agreed ‘It would be more frustrating if a chatbot couldn’t solve my problem than a human,’ and 79% said they needed to know that a human would step in if they asked to speak to someone.

For AI to be able to help us reach our goals, people need to be willing to interact with the machine to allow it to learn. As the research shows, customers are willing to use AI, but they don’t just want a bot that can mimic human interaction. They want a bot that can perform better than its human counterparts.

Edge computing can help deliver.

Executive’s Guide to Edge Computing [white paper]

Enabling AI Will Require Edge Computing Resources

Data is one necessary component of a functioning AI. The more data points available to the AI, the better. Data is the input the program uses to learn and adapt the way it ‘sees’ the world. Data helps the chatbots in use today reply to a customer’s inquiry in a more human-like fashion, and even mimic human emotions such as empathy. (Though as the research shows, empathy from a machine might still be a bridge too far for many people.)

Another necessary component is speed. An autonomous vehicle needs to process the debris (or pedestrian) in the roadway in the blink of an eye. A surgical robot needs to be able to detect a patient’s vital signs and condition with each step it takes. A room service robot needs to be able to process the hotel guests’ requests and carry them out. The faster our chatbot can provide accurate answers to the customer’s query, the better the user experience.

Combine the need for speed with the capability to gather and process millions of data points in the blink of an eye and what you get is a perfect scenario for edge computing.

What is Edge Computing?

Say what you will about Wikipedia, they offer a pretty good working definition of edge computing:

Edge computing is a distributed computing paradigm which brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth.

What ‘closer’ means, however, depends on the application. For many of our current customers, that might simply mean deploying workloads in a data center closer to where they do business. This alone can significantly reduce latency when it comes to applications such as MRP, Supply Chain Operations, and customer-facing portals.
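The latency benefit of ‘closer’ can be put in rough numbers. A back-of-the-envelope sketch, assuming light in fiber covers about 200 km per millisecond and ignoring routing, queuing, and processing delays (the distances are hypothetical examples):

```python
# Light in fiber travels at roughly 2/3 the vacuum speed of light,
# or about 200 km per millisecond.
C_FIBER_KM_PER_MS = 200.0

def rtt_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay for a given one-way distance."""
    return 2 * distance_km / C_FIBER_KM_PER_MS

# A metro data center ~50 km away vs. a centralized one ~2,000 km away:
print(rtt_ms(50))    # 0.5 ms best case
print(rtt_ms(2000))  # 20.0 ms best case -- before any processing happens
```

Real-world round trips are several times worse than these floors, which is why moving a workload from a distant region to a nearby facility can shave tens of milliseconds off every request.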

Also read: What Edge Computing Really Means

Other types of applications require the data center to be even closer. An industrial robot might require on-board AI capable of analyzing the environment around it as it performs tasks. Amazon’s annual ‘picking challenge’ shows just how hard it can be for robots to learn to sort through thousands of items of all different sizes and either pick them or stock them accurately. To date, the winning entries still can’t perform as accurately or as quickly as their human counterparts, and Amazon’s director of robotics fulfillment has said that full, end-to-end warehouse automation is still at least 10 years away.

In a scenario like warehouse fulfillment, latency needs to be reduced to allow the robotic worker to even come close to human-like speeds. For an application like this, the data center would probably need to be located on the property itself. Much development is being done in prefabricated, modular data centers for these types of applications.

Finally, robotic surgery is another example of an eventual use of AI, where edge computing will be vital. The ‘robotic surgeries’ being performed today are actually human doctors using robotic instruments to help them make more precise movements.

As every doctor will tell you, you never know what will happen during a surgery until you make that first cut. (Or maybe they won’t tell you that because they don’t want you to worry, but it’s still true.) To fully automate this scenario would require a robot capable of responding to a life-threatening event such as an unexpected bleed. That sort of application would most likely require on-board AI, where the edge becomes the device itself.

Find Your Edge

For most businesses, edge computing still means housing workloads in a traditional data center closer to the customer. Read our Strategic Guide to Edge Computing to help as you develop your own edge strategy.

