Webinar Archives | TierPoint, LLC
Power Your Digital Breakaway. We are security-focused, cloud-forward, and data center-strong: a champion for untangling the hybrid complexity of modern IT, so you can free up resources to innovate, exceed customer expectations, and drive revenue.

What is Lift and Shift in a Cloud Migration?
https://www.tierpoint.com/blog/lift-and-shift-cloud-migration/ (Thu, 25 Jan 2024 21:28:55 +0000)

Is your on-premises IT environment weighing you down, keeping you from innovating in the cloud? A lift and shift cloud migration can lighten the load. Lift and shift involves moving your current IT environment to the cloud, often a public cloud, with minimal changes to the environment itself prior to migration. We’ll cover what a lift and shift approach is, how it differs from other cloud migration methods, and what to consider before performing a lift and shift at your organization.

What is Lift and Shift?

With the “lift and shift” method, also known as “rehosting,” data and applications are moved to the cloud without major changes. About 75% of tech leaders are building new features and products in the cloud, according to Pluralsight’s State of the Cloud report, which suggests roughly a quarter of organizations still depend on migration approaches like lift and shift for their cloud projects. Plus, end-user public cloud spending is expected to surpass $1 trillion by 2027, according to a recent Gartner press release, underscoring the growing importance of cloud computing in the years to come.

While a lift and shift approach can be a simple way to move workloads to the cloud, the technique doesn’t work for every scenario. Some data and applications may have dependencies that make them more difficult to migrate. However, the lift and shift method still offers a valuable on-ramp for businesses looking to move more of their existing systems to the cloud.

How is This Method Different from Refactoring and Replatforming?

Rehosting, refactoring, and replatforming offer three different approaches to cloud migration. With replatforming, teams restructure and fine-tune applications so they perform more effectively in the cloud. This can mean targeted adjustments and enhancements, such as adapting codebases to work with cloud-native services and APIs, allowing for greater cloud resilience or scalability.

Refactoring and rearchitecting are the two most comprehensive cloud strategies. With these methods, organizations modify an application’s architecture as they move it, so the application can take full advantage of cloud-native features. More code rewriting is likely to be involved with refactoring and rearchitecting.

The Advantages of a Lift and Shift Cloud Migration

It’s not easy to make significant operational shifts in most businesses, especially with well-established on-premises frameworks. Lift and shift cloud migration can be a speedy, simple, and flexible solution for organizations looking to migrate some workloads to the cloud without the project being too involved.

Speed and Cost Efficiency

Because lift and shift cloud migration is a simpler process, it minimizes downtime and upfront investments for businesses. This enables a faster and more cost-effective cloud migration. Generally, the on-premises application can remain operational during the migration so that there’s little to no interruption to the service as well.

Reduced Complexity

Changing code and infrastructure can create a cascade of small pieces that need to be altered when refactoring or replatforming for the cloud. Conversely, performing a lift and shift cloud migration avoids complex code changes and infrastructure modifications, simplifying the process for IT teams.

Flexibility and Scalability

A lift and shift migration may not let every application work perfectly in the cloud. However, migrating existing applications this way offers greater scalability and flexibility than maintaining on-premises workloads, which can mean more promising future growth. The on-demand scalability of cloud architecture allows resources to be increased or decreased in an instant, and that flexibility translates into greater cost efficiency. Lift and shift can also mean improved performance when businesses run their applications on updated hardware, beyond what they’ve had in their legacy infrastructure.

The Disadvantages of a Lift and Shift Cloud Migration

Shortcuts can deliver change quickly and efficiently, but they’re not without their disadvantages. Lift and shift migrations can come with issues stemming from technical debt, vendor lock-in, and missed optimization opportunities.

Technical Debt

Technical debt can be either planned or inadvertent. When companies deliberately take shortcuts in their software development, for example, that is planned technical debt. When the growth of a business is impeded by technical limitations in legacy applications, that is an inadvertent consequence of technical debt. Organizations that carry legacy applications’ technical limitations into the cloud will face performance issues, and potentially even security vulnerabilities, there as well.

Vendor Lock-In

It’s easy to rush into a new cloud migration project, excited about the potential that new infrastructure brings, without thinking about ramifications down the road. Businesses that jump into working with a specific cloud provider may find themselves dependent on that provider for the operation of their workloads, not considering portability options upfront.

Missing Optimization

Because lift and shift is a more stripped-down cloud migration strategy, this approach may not take advantage of all available cloud-native features and optimization opportunities. These capabilities may only be possible through refactoring or replatforming, for example.

When Should You Implement a Lift and Shift Cloud Migration?  

Lift and shift cloud migration is an appropriate solution for businesses looking to move to the cloud quickly or for those looking for a pilot program to rationalize future efforts in the cloud.

Here are a few situations where lift and shift might be the best strategy:

  • Migration Before Modernization: Perhaps you have a loftier goal to modernize your applications down the road, but you’re looking to save on infrastructure costs in the short term. A lift and shift migration can help you move applications to the cloud as a first step before you spend time rebuilding. Sometimes, continuity is a bigger priority than more resource-intensive modernization projects.

  • Contending With Time Constraints: Businesses with hardware failures, expiring leases, or time-sensitive compliance requirements may need to swiftly transition to the cloud. In these cases, lift and shift may be the only viable option. In disaster recovery, time is also of the essence. Lift and shift can help organizations rapidly replicate their on-premises workloads.

  • Proof of Concept: For more successful cloud migration efforts, businesses need to have buy-in at all levels. This may require a proof of concept project. Lift and shift migrations are lower-risk and can help assess cloud performance and boost faith in larger adoption projects later.

  • VM-Centric Replications: Applications that are already using virtual machines work particularly well with lift and shift migration because there isn’t much needed to move them to cloud-based VMs. For example, AWS has tools for seamless VM image transfers between on-premises environments and AWS.
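For the VM-centric path above, the shape of a VM image import request can be sketched in Python. This is a minimal, hypothetical sketch: the bucket name, object key, and the boto3-style parameter dict mirror AWS EC2’s ImportImage API, but the values are placeholders, not TierPoint tooling.

```python
# Hypothetical sketch of preparing an AWS EC2 ImportImage request for a
# lift-and-shift VM migration. Bucket and key names are placeholders; in
# practice the dict would be passed to boto3's ec2_client.import_image(**request).

def build_import_image_request(bucket, key, description):
    """Assemble ImportImage parameters for a single exported VM disk."""
    return {
        "Description": description,
        "DiskContainers": [
            {
                "Description": description,
                "Format": "vmdk",  # disk format exported from the on-prem hypervisor
                "UserBucket": {"S3Bucket": bucket, "S3Key": key},
            }
        ],
    }

request = build_import_image_request(
    bucket="example-migration-bucket",      # hypothetical S3 bucket
    key="exports/app-server-01.vmdk",       # hypothetical uploaded disk image
    description="Lift-and-shift import of app-server-01",
)
print(request["DiskContainers"][0]["UserBucket"]["S3Key"])  # → exports/app-server-01.vmdk
```

The point of the sketch is how little application-level work is involved: the disk image is exported, uploaded, and imported as-is, which is what makes VM-centric workloads such a natural fit for rehosting.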

Should You Lift and Shift to the Cloud?

Deciding whether to lift and shift to the cloud will depend on your budget, internal and external resources, what your workloads look like, and your short- and long-term cloud goals. There is no one “best option.” Starting with a lift and shift strategy can serve as a great first step or stopgap for businesses looking to prioritize continuity or develop a proof-of-concept for more cloud migration projects. It may also be all you need if you have cloud-friendly applications that you need to migrate.

In any case, businesses can always consider multiple migration strategies and even mix and match if it’s needed. The lift and shift method is just one of the seven R’s of cloud migration strategy – it’s not uncommon to apply a couple of different methods together to achieve the right migration for your workloads.

Evaluate Your Needs with a Lift and Shift Migration Assessment

A migration assessment should detail all of the factors that may impact or impede the ultimate migration, or shape what’s next in your migration journey. Your assessment might include:

  • The tools that may be valuable for automating parts of your migration plan
  • The length of time you need to support the application in each environment
  • The order of operations and priority for migrations, if you plan on moving more than one application
  • Any compliance issues you are hoping to address, or may need to address, post-migration

Create a Lift and Shift Cloud Migration Plan

Each lift and shift cloud migration plan will look different, but every plan will go through the preparation, migration, and post-migration phases.

Beyond the assessment itself, the preparation phase includes setting up a cloud account, running pre-migration tests, and applying the necessary security controls and access management policies.

During the migration phase, businesses will move data to the cloud using pre-determined methods and tools, deploy applications on cloud infrastructure, and configure them in the cloud with manual or automated tasks.

Post-migration, organizations should perform validation and testing, fine-tune and optimize their cloud resources, and set up ongoing monitoring and support to ensure cloud-based applications continue to perform as expected. 
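The three phases can be sketched as an ordered runbook. The task names below are illustrative examples drawn from the phases above, not a prescribed checklist.

```python
# A minimal sketch of a lift-and-shift plan as ordered phases.
# Task names are illustrative examples, not a prescribed checklist.

MIGRATION_PLAN = {
    "preparation": [
        "run migration assessment",
        "set up cloud account",
        "run pre-migration tests",
        "apply security controls and access policies",
    ],
    "migration": [
        "move data with chosen tools and methods",
        "deploy applications on cloud infrastructure",
        "configure applications (manual or automated)",
    ],
    "post-migration": [
        "perform validation and testing",
        "fine-tune and optimize cloud resources",
        "set up ongoing monitoring and support",
    ],
}

def next_task(completed):
    """Return the first unfinished task, walking phases in order."""
    for phase in ("preparation", "migration", "post-migration"):
        for task in MIGRATION_PLAN[phase]:
            if task not in completed:
                return task
    return None  # every phase is complete

done = {"run migration assessment", "set up cloud account"}
print(next_task(done))  # → run pre-migration tests
```

Keeping the plan explicit like this also makes scope easy to police: any task not in the plan is, by definition, scope creep.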

It’s easy for a project to get out of hand, so be sure you’ve also set good parameters in your plan to address scope creep.

Need Help Building Your Cloud Migration Strategy?

Unlock the full potential of your business with our comprehensive cloud migration solutions. Discover how the “lift and shift” method can seamlessly transition your data and applications to the cloud without the need for extensive modifications. By leveraging this efficient approach, you can minimize downtime and reduce costs, all while maintaining the integrity and performance of your critical systems.

Ready to take the next step in your IT modernization journey? Explore our detailed guide on how to modernize your IT infrastructure and drive growth to learn about the latest strategies and technologies that can help you stay ahead in a competitive market.

Learn more about what to expect on your journey to the cloud in our eBook.

TierPoint Announces Channel Partner Awards for 2022 Performance
https://www.tierpoint.com/blog/tierpoint-announces-channel-partner-awards-for-2022-performance/ (Tue, 04 Apr 2023 13:12:39 +0000)

With growing success in the Channel, TierPoint continues to invest in enabling its partners and is pleased to recognize the following top performers for 2022.

Distribution Partner of the Year: AVANT Communications

In 2022, the AVANT ecosystem continued its multi-year run as the largest single generator of TierPoint channel bookings. The keys to AVANT’s success include top-notch executive leadership, world-class engineering talent, and best-in-breed sales enablement programs. 

Through its Special Forces Summit and regional Bootcamp events, AVANT draws hundreds of trusted advisors for training and networking opportunities. The company also provides unique value through its proprietary Pathfinder software, offering dynamic insights and on-target intelligence for both trusted advisors and solution providers.

Services & Solutions Partner of the Year: CDW

CDW was TierPoint’s top-producing services and solutions partner across all regions in 2022. Their dedication to building strong customer relationships helped open the door to exciting new opportunities and positioned us repeatedly to win big together.   

With in-depth knowledge of TierPoint’s product portfolio, CDW’s Integrated Services Engagement (ISE) team worked closely with our engineers to develop customized solutions that delivered the business outcomes sought by our shared clients.

Alliance Partner of the Year: Zerto

As they have for multiple years, the Zerto team in 2022 proved yet again the power of collaboration, supporting the continued adoption and robust growth of TierPoint’s market-leading Disaster Recovery as a Service (DRaaS) solutions.  

Across the board – from cloud architects to sales representatives – the Zerto team has proven remarkably easy to work with. Consistently, they have brought to the table a keen understanding of what it takes to win in today’s marketplace and helped our mutual clients achieve their resiliency goals.

Breakout Partner of the Year: Bridgepointe

Bridgepointe is recognized for driving strategic alignment between our companies, expanding our reach in several markets, and closing a major, six-figure monthly billing account. Their commitment and focus in 2022 were instrumental to building a strong and successful partnership.  

Our companies have collaborated on several marketing initiatives and co-sponsored events that strengthened our brands and increased our visibility, promising great upside in the future.

Pre-Sales Team of the Year: Telarus

This award recognizes Telarus for providing exceptional pre-sales support, leading to more wins and revenue.

Telarus engineering resources exemplify greatness and have proven crucial to the success of our partnership. Consistently, they have brought to the table outstanding communication skills and tailored solutions that meet our shared customers’ needs, stayed up to date on industry trends, and demonstrated an unparalleled commitment to continuous learning and improvement. 

Fireside Chat with Brian Krebs: Recap
https://www.tierpoint.com/blog/brian-krebs-fireside-chat-recap/ (Tue, 29 Nov 2022 17:08:50 +0000)

Hundreds of IT professionals, business leaders, and Chief Information Security Officers joined us for a Fireside Chat with renowned cybersecurity journalist Brian Krebs on Thursday, November 17. During this 45-minute webinar hosted by Andrew Baird, TierPoint’s VP of Marketing, and moderated by Paul Mazzucco, TierPoint’s CISO, Krebs provided insight into a multitude of questions relating to cybersecurity.

Miss this virtual event or need a refresher? Here’s a quick recap of the Q&A.

Pressing headline relating to cybersecurity

Paul and Brian began the webinar with a big topic in tech: the recent executive conviction after the Uber data breach.

The Uber data breach conviction

  • How did the data breach impact the modern CISO and who is ultimately at fault? According to Krebs, this primarily impacted CISOs by making them feel a little more hesitant in this role. Overall, it was a good reminder of the importance of maintaining transparency as a leader as “it’s the cover up” that gets companies and individuals in trouble.

    All in all, it’s hard to pinpoint exactly who was to blame for this breach because there are still so many unknowns. For example, how good was the person in charge of security at the organization when it came to documenting security challenges, what was broken, and the timeline for resolving issues? How effective were they at communicating these known challenges and plans to key stakeholders and leaders? How did the leaders make decisions based on the information they received from the CISO?

    In Krebs’ opinion, “It’s not the job of the CISO to assume all the cyber risk of the organization, but to inform higher-ups of the risks,” as well as make business and risk cases for additional investments in security.
  • Will this conviction make organizations outsource their CISO? Krebs noted that he wouldn’t be surprised if this is one of the outcomes, however, many organizations were already outsourcing at least some of these job functions before the 2022 Uber breach.

Next, Krebs provided his thoughts on a critical topic and growing threat for many organizations: ransomware.

  • How has ransomware changed in the last year? According to his research, Krebs noted that ransomware has mainly changed by “groups shifting to data exfiltration as their main source of revenue,” however, they’re still interested in getting as much access to sensitive data as possible.

    Interestingly, Russia’s war against Ukraine has also caused some shifts in the cybercrime industry. How? Many hackers based in these areas have fled to neighboring countries and sanctions have made it more difficult for them to get paid.
  • What is the future of ransomware? In 2023, Krebs forecasts a rise in destructive attacks, such as data deletion and data corruption/manipulation, rather than simple data encryption. Often, after companies regain access to their data following a ransomware attack, they may question the integrity of their data files.

Responsibilities of the modern CISO

These days, CISOs are adopting more business-related responsibilities and tend to be responsible for education in the cybersecurity space, business growth, and securing stakeholder buy-in.

  • What do you think the future role of a CISO might look like? According to Krebs, “CISOs need to understand that part of their job is being a translator;” and this will be a primary function in the future as discussions around security improvements and cost justifications continue to grow alongside the rise of cybercrime.

    In his opinion, one of the best ways to discuss cybersecurity is to shift the conversation away from security toward resilience and availability, and to convert the risk of downtime into monetary figures: how much would it cost the organization to be unable to access data or use equipment for a week, versus how much it needs to spend on security-related investments?
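That downtime-in-dollars framing can be made concrete with a quick calculation. Every figure below is a made-up example for illustration, not a benchmark.

```python
# Illustrative arithmetic for the resilience framing above: price a week of
# downtime, then compare it to a proposed security spend. All figures are
# invented examples.

hourly_revenue_at_risk = 25_000        # revenue tied to the affected systems
outage_hours = 7 * 24                  # a week of lost access
downtime_cost = hourly_revenue_at_risk * outage_hours

security_investment = 400_000          # proposed annual security spend

print(f"one-week outage: ${downtime_cost:,}")  # $4,200,000
print(f"outage cost vs. proposed spend: {downtime_cost / security_investment:.1f}x")  # 10.5x
```

Expressed this way, the security budget stops being a cost line and becomes a fraction of a quantified risk, which is exactly the translation role Krebs describes.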

After the primary fireside chat, Krebs and Mazzucco also covered a few of the most pressing questions submitted by attendees.

Stay tuned for more webinars in the future

Thanks to Brian Krebs for joining us and providing insight into such important topics! We’re also appreciative to everyone who tuned in. Be sure to keep an eye out on LinkedIn, Twitter, and Facebook for our upcoming events.

6 Ways 2020 Accelerated Cloud Computing in Healthcare
https://www.tierpoint.com/blog/6-ways-2020-accelerated-cloud-computing-in-healthcare/ (Wed, 24 Mar 2021 19:04:12 +0000)

In 2018, Accenture estimated that only 35% of healthcare IT workloads were housed in a public cloud environment. Later that year, a survey of hospital CIOs further showed just how reluctant healthcare leaders were to trust the cloud with sensitive patient data. Only about 18% said that more than half of their current software infrastructure was in the cloud. Although 60% listed moving more workloads to the cloud as a top 10 priority, less than a third had a transition plan in place.

Much has changed since 2018 and 2020 accelerated the adoption of cloud technology in healthcare.


How 2020 changed the way healthcare uses technology

Back in 2018, most people in IT thought the cloud migration would be a slow march for healthcare. After all, healthcare organizations are managing some of the most sensitive data there is. Amounts vary, but medical records can command even higher prices on the dark web than personal financial data.

What we couldn’t foresee was the impact 2020 – and the COVID-19 pandemic – would have on the healthcare industry. To protect the health of vulnerable patients while providing greater services, healthcare providers significantly increased their adoption of several key technologies.

#1 Online prescreening

No doubt, anyone who’s contacted a healthcare provider in recent months has run across an online prescreening questionnaire. Providers need to determine whether the patient’s symptoms indicate a possible COVID-19 infection so they can appropriately protect themselves and non-infected patients while providing proper care.

#2 Increased telehealth visits

Some healthcare providers were already providing telehealth services in certain situations, e.g., helping nervous new parents through their baby’s first fever. But telehealth wasn’t something most providers or patients saw as a replacement for in-person visits.

By June of 2020, however, respondents to a survey of healthcare providers conducted by McKinsey said they were conducting 50 to 175 times as many telehealth visits as they had prior to COVID-19. Apparently with positive results: 57% noted that they now view telehealth more favorably.

#3 Stepped up online patient engagement

In 2018, a survey of nearly 1,800 healthcare organizations conducted by the Medical Group Management Association (MGMA) found that 90% were already offering patient portal access. Unfortunately, according to data from the Government Accountability Office, less than a third of patients were using these portals. The numbers aren’t in from 2020 yet, but with patients increasingly willing to use telehealth services to avoid face-to-face contact, we expect they’ll be making greater use of patient portal services as well.

#4 Follow-up questionnaires

Follow-up visits have long been recognized as a strategy for decreasing revisits and improving outcomes. Now, they’re also a way to collect more data on the symptoms, spread, and long-term effects of COVID-19. This data has also led to increased acceptance of web-based data analytics among healthcare providers.

#5 Remote monitoring

Remote monitoring of patient vital signs was already on the rise, but some providers were skeptical of using the data from consumer devices as part of a comprehensive health care plan. COVID-19 seems to have broken down those barriers.

For example, a hospital in New York has launched a program monitoring orthopedic patients through the Apple Watch. The FDA has approved an mHealth (mobile health) app to treat those who suffer from traumatic nightmares. Even MIT researchers have gotten into the act, with an mHealth app that detects signs of COVID in the device user’s cough patterns.

#6 Better collaboration

The healthcare industry has been steadily pushing the digitization of health records as a way to improve outcomes. For example, if a patient sees one physician for treatment of a foot ulcer, it’d be helpful for the provider to know the patient has also been seeing a primary care physician for a pre-diabetic condition.

However, with so many disparate systems, the dream of one central repository for patient data has proven elusive. The global pandemic has renewed interest in addressing the interoperability issue that has plagued EHR (electronic health records) initiatives.

But what’s the underlying technology behind all six of these key patient solutions? Cloud computing.

Cloud computing is essential to modernizing healthcare


It’ll be some time before we truly understand the effects of COVID-19 on our culture and our businesses, but many are already speculating about what those lasting impacts might be. No doubt, as with telehealth services, many providers and patients will grow accustomed to electronic healthcare services. If these services help improve outcomes and lower costs, the change could happen much faster than we think.

IDC predicted that the quantity of healthcare data would see a compounded annual growth rate (CAGR) of 36% through 2025. It’ll be interesting to see how these figures are adjusted as healthcare providers take advantage of new technologies to provide care during the pandemic. We wouldn’t be surprised to see a dramatic spike once the 2020-21 numbers are in.
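A 36% CAGR compounds quickly. A back-of-the-envelope sketch of what IDC’s predicted rate implies, relative to a year-zero baseline:

```python
# Back-of-the-envelope: what a 36% compound annual growth rate implies
# for data volume relative to a year-zero baseline.

cagr = 0.36
for year in range(6):
    factor = (1 + cagr) ** year
    print(f"year {year}: {factor:.2f}x baseline")
# At this rate, the data estate roughly quadruples within five years
# ((1.36)^5 ≈ 4.65).
```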

There is one technology underlying it all that isn’t so new: cloud computing. Healthcare providers need a secure, accessible place to store all that data. They also need it to be cost-effective, a goal the traditional on-premises data center typically fails to achieve.

It’ll be especially interesting to see the new data on cloud adoption by healthcare organizations over the next year. If our healthcare customers are any indication, we could be on the cusp of a pretty dramatic industry transformation.

Adopt the cloud with a trusted provider

Moving to the cloud doesn’t need to be difficult. Working with a managed services provider can help you move to the cloud effectively while allowing your internal IT resources to focus on enhancing patient digital experiences. A managed services provider can also help you manage rapid data growth, secure patient data, streamline your IT infrastructure, and more. Learn more about Healthcare IT solutions and data recovery with TierPoint.


Cloud Migration Woes: Two Examples of Unforeseen Pitfalls
https://www.tierpoint.com/blog/cloud-migration-woes-two-examples-of-unforeseen-pitfalls/ (Wed, 29 Jul 2020 17:40:01 +0000)

Many people are all too familiar with the term “buyer’s remorse.” It typically happens when a customer buys a big-ticket item that they aren’t sure they can afford. Initially caught up in the excitement of the transaction, they start regretting their decision when reality sets in. Buyer’s remorse isn’t something that just happens to consumers. In fact, you could say that many IT departments are experiencing a type of buyer’s remorse when it comes to their cloud migration projects.

According to the 2019 Enterprise Cloud Index, a survey by Nutanix and Vanson Bourne, nearly three-quarters of respondents reported migrating applications from the public cloud back to a private cloud. When IDC studied the issue, they found the top repatriation drivers to be security (19%), performance (14%), cost (12%), control (12%), and the reduction of shadow IT (11%).

But the results from both the Nutanix and the IDC study can be a bit misleading if taken at face value. For example, IDC found security to be the top repatriation driver, and only 9% of the respondents in the Nutanix survey said the public cloud was “the most secure” operating model. But is the public cloud really less secure?

Like a lot of buyer’s remorse situations, the problem doesn’t lie in the product or service itself so much as in the organization’s decision-making processes or IT maturity. For example, the public cloud need not be less secure than any other type of cloud; it’s only an organization’s inability to deploy and manage workloads securely that makes it so. The same can be said of other repatriation drivers, such as performance and cost.

2 examples of unforeseen cloud migration woes

What such high repatriation numbers suggest is that organizations aren’t doing enough due diligence before migrating workloads to the cloud. The cloud migration team is either over-estimating the cloud platform’s ability to handle an organization’s needs or over-estimating the organization’s ability to handle the cloud platform. Let’s take a look at a couple of recent examples from our own customer files to illustrate the point.

We’re paying how much for AWS?!

One of the most common reasons to move to the cloud is to shift capital expenditures (CapEx) to operating expenditures (OpEx). But that advantage quickly dissolves if the organization doesn’t keep a close eye on its monthly cloud spend.

In a recent webinar, Delivering Successful Outcomes with Cloud, Nutanix’s senior solutions marketing manager, Kong Yang, related the story of a company that was spending tens of millions of dollars every month on AWS. Even in a large organization, you’d think that kind of invoice would raise all sorts of red flags.


In this case, it went largely unnoticed because there wasn’t just one invoice. There were hundreds of smaller invoices, often for a few thousand dollars or less. The overarching costs weren’t readily apparent until all those individual expenditures were rolled up. So long as the individual budget managers didn’t overspend, no one questioned it.

This kind of spending issue combined with a lack of governance can be a particular problem with public cloud infrastructure like AWS. It’s easy to spin up instances in development or for temporary projects and forget to spin them back down when the resources are no longer needed. This organization didn’t have the protocols in place to manage their spend nor the tools in place to give management an overall picture of their spend.
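The visibility gap described above is easy to illustrate: invoices that look harmless one budget at a time only reveal the true monthly spend once rolled up. The team names and amounts below are invented for illustration.

```python
# Sketch of the invoice roll-up problem: individually small invoices,
# aggregated per month, expose the real cloud spend. All data is invented.
from collections import defaultdict

invoices = [
    {"team": "dev", "month": "2020-06", "amount_usd": 3_200},
    {"team": "qa", "month": "2020-06", "amount_usd": 1_850},
    {"team": "data", "month": "2020-06", "amount_usd": 4_100},
    # ...hundreds more in a scenario like the one above
]

def monthly_totals(invoice_list):
    """Roll individual invoices up into one total per month."""
    totals = defaultdict(int)
    for inv in invoice_list:
        totals[inv["month"]] += inv["amount_usd"]
    return dict(totals)

print(monthly_totals(invoices))  # → {'2020-06': 9150}
```

No single line item here would trip a budget manager’s threshold; only the aggregate view, the thing this organization lacked, tells the real story.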

To add insult to injury, the organization had decided not to decommission its on-prem data center. When we analyzed their resource utilization, we found it to be roughly 2%. So, on top of everything they were spending every month on AWS, they were still paying the high overhead of maintaining an on-prem data center and managing all of their existing infrastructure.

Our cloud environments are way too complex

Gartner predicted that in 2020, 75% of organizations would deploy a multicloud or hybrid cloud approach. Migrating applications to one cloud is a difficult task; managing multiple migrations, maintaining multiple clouds, and ensuring interoperability is even harder.

IT departments often find themselves asking:

  • Can I manage another migration with a shrinking budget and staff?
  • Does my staff have the expertise to successfully migrate and manage multiple clouds?
  • What type of cloud environment is best for my workloads: public or private, software as a service, platform as a service, or infrastructure as a service?
  • How do I protect or backup my workloads once I migrate to the cloud?

A concerning trend shows that cloud migrations take longer and cost more than expected when proper planning and expertise are missing. One of the biggest culprits is a lack of cloud expertise and experience. Experts say cloud platforms are evolving rapidly and will continue to do so.

Also read: Is Your Cloud Migration Strategy Helping or Hurting Your Business?

We can help you with cloud migration due diligence

On average, an IT professional may see one or two data center or cloud migrations in their career, so it’s unrealistic to expect people to be experts in something they will experience so infrequently. A managed cloud provider can help with your cloud migration or repatriation efforts and introduce you to new technologies, like hyperconverged infrastructure, to help you avoid some of the biggest pitfalls that come with adopting the cloud. Because we’ve managed hundreds of migration projects, chances are good we will spot pitfalls your internal team can’t see. If you’d like to learn more about our cloud migration services, visit us on the web or reach out to one of our cloud migration advisors.


5 Key Types of DDoS Attacks & How to Mitigate Them
https://www.tierpoint.com/blog/5-key-ddos-attacks-how-to-mitigate-them/ (Thu, 02 Jul 2020 20:15:31 +0000)

Since the first Denial-of-Service (DoS) attack was launched in 1974, Distributed Denial-of-Service (DDoS) attacks have remained among the most persistent and damaging cyber-attacks. This year, we’ve already seen two massive volumetric DDoS attacks that dwarf previous attacks of their type.

In February of 2020, an attacker used vulnerable third-party servers to attempt to flood Amazon Web Services with traffic at a rate of 2.3 Tbps. The previous record was a 1.7 Tbps attack in 2018. Then on June 21, an attack that reached a peak volume of 809 million packets per second (pps) was launched against a large European bank. Previously, the record was held by a 500 million pps attack in January 2019.
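These two records measure different things: bits per second (bps) stresses raw bandwidth, while packets per second (pps) stresses packet-processing capacity in routers and firewalls. A back-of-the-envelope calculation (assuming minimum-size 64-byte packets for the pps flood and large 1,500-byte packets for the bandwidth flood, both plausible but unconfirmed assumptions) shows why a pps-focused attack can be devastating even at a much lower bit rate:

```python
# Compare the two record attacks in common units.
# Assumption: the 809 Mpps attack used small (~64-byte) packets,
# typical for packet-rate floods; the AWS attack used large packets.

PPS_ATTACK = 809e6                      # packets per second (June 2020 attack)
SMALL_PKT_BYTES = 64
bps_equivalent = PPS_ATTACK * SMALL_PKT_BYTES * 8   # bits per second
print(f"809 Mpps of 64-byte packets ~= {bps_equivalent / 1e12:.2f} Tbps")

AWS_BPS = 2.3e12                        # bits per second (Feb 2020 attack)
LARGE_PKT_BYTES = 1500
aws_pps = AWS_BPS / (LARGE_PKT_BYTES * 8)
print(f"2.3 Tbps of 1500-byte packets ~= {aws_pps / 1e6:.0f} Mpps")
```

The pps record works out to roughly 0.41 Tbps, a fraction of the bandwidth record, yet it forces network gear to make about four times as many per-packet forwarding decisions per second.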

Understanding the different types of DDoS attacks is critical to strengthening your defenses. In this post, we'll cover five types of attacks you need to be aware of, not all of which rely on volume.

DDoS attack type #1: Advanced Persistent DoS (APDoS)

APDoS attacks involve massive network-layer DDoS attacks and focused application-layer (HTTP) floods, followed by repeated SQLI and XSS attacks occurring at varying intervals. Typically, perpetrators simultaneously use five to eight attack vectors involving up to tens of millions of requests per second, often accompanied by large SYN floods. These attacks can persist for several weeks.

Stopping APDoS clearly requires an array of technologies, including those that address SMTP attacks (a relatively new vector) and secure SMTP such as TLS over SMTP.

To successfully mitigate these threats, organizations must understand what they are dealing with and take certain precautions. As the next generation of DDoS threats emerge, organizations must become obsessive about removing risks and compulsive about action.

DDoS attack type #2: DNS Water Torture Attack

A DNS NXDOMAIN flood attack, also known as a water torture attack, targets an organization's DNS servers. This type of attack involves a flood of maliciously crafted DNS lookup requests, typically for randomized, nonexistent subdomains. Intermediate resolvers also experience delays and timeouts while waiting for the end target's authoritative name server to respond to the requests. These requests consume network bandwidth and storage resources, and they can also tie up network connections, causing timeouts.

By understanding the threat, an organization can comprehend two of the largest problems in solving this attack vector:

  • The attacker is coming from a known legitimate source and can't realistically be blocked while still maintaining healthy DNS resolution operations over the long term.
  • The attacker source is also issuing legitimate queries at the same time illegitimate requests are being sent.

To counter this resource-draining threat, organizations should monitor their recursive DNS servers, keeping a keen eye out for anomalous behavior such as spikes in the number of unique sub-domains being queried or spikes in the number of timeouts or delayed responses from a given name server.
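As a rough illustration of that monitoring approach, the sketch below counts unique subdomains queried per zone and flags any zone whose count jumps far above its baseline. It is a simplified, hypothetical detector, not a production tool: in practice the baselines and thresholds would be learned from real traffic history.

```python
from collections import defaultdict

class NXDomainSpikeDetector:
    """Flag zones whose unique-subdomain query count spikes above baseline.

    Illustrative sketch only: spike_factor, min_unique, and the default
    baseline of 50 unique subdomains are assumed values, not tuned ones.
    """
    def __init__(self, spike_factor=10, min_unique=100):
        self.spike_factor = spike_factor
        self.min_unique = min_unique
        self.baseline = defaultdict(lambda: 50)   # assumed normal unique count
        self.window = defaultdict(set)            # zone -> unique subdomains seen

    def observe(self, qname):
        # Split "rnd123.example.com" into subdomain + zone (last two labels).
        labels = qname.rstrip(".").split(".")
        if len(labels) < 3:
            return
        zone = ".".join(labels[-2:])
        self.window[zone].add(".".join(labels[:-2]))

    def alert_zones(self):
        return [zone for zone, subs in self.window.items()
                if len(subs) >= self.min_unique
                and len(subs) > self.spike_factor * self.baseline[zone]]

# Simulate a water-torture flood of random subdomains against one zone.
detector = NXDomainSpikeDetector()
for i in range(600):
    detector.observe(f"rnd{i}.victim.com")
detector.observe("www.normal.org")
print(detector.alert_zones())  # ['victim.com']
```

The same unique-count signal generalizes to the other symptoms mentioned above, such as spikes in timeouts or delayed responses from a given name server.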

Any DNS attack mitigation tool must meet unique challenges. Beyond a limited set of vendors, there is no real automated solution to mitigate this threat, as the tool must offer the following:

  • A deep knowledge of DNS traffic behavior
  • The ability to absorb a high rate of DNS packets
  • Mitigation accuracy
  • The ability to deliver the best quality of experience even under attack

Also read: Secure Cloud Computing: Today’s Biggest Roadblocks

DDoS attack type #3: SSL-Based Cyber Attacks

More companies are wisely encrypting both their internal and external network traffic, but this may be leaving them with a false sense of security. Gartner expects as much as 70% of malware attacks in 2020 to leverage encryption. These SSL-based attacks take many forms, including encrypted SYN floods, SSL renegotiation, HTTPS floods and encrypted web application attacks.

In the same way SSL and encryption protect the integrity of legitimate communications, they effectively obfuscate many of the attributes used to determine if traffic is malicious or legitimate. Most cyber-attack solutions struggle mightily to identify potentially malicious traffic and isolate it for further analysis.

The other major advantage that SSL attacks offer to attackers is the ability to put significant computing stress on network and application infrastructures they target.

Even the most advanced mitigation technologies have gaps in their encryption-based protections. Few of these solutions can be deployed out-of-path, which is a necessity for providing protection while limiting the impact on legitimate users. Many solutions that can do some level of decryption tend to rely on rate-limiting requests, thereby resulting in dropped legitimate traffic. Finally, many solutions require the customer to share actual server certificates, which complicates implementation and certificate management, and forces customers to share private keys for protection in the cloud.
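To see why blunt rate-limiting drops legitimate traffic, consider a simple token-bucket limiter in front of an encrypted endpoint. This is a generic sketch, not any particular vendor's implementation, and the tick granularity and request counts are purely illustrative. Because the limiter cannot distinguish encrypted flood requests from real ones, the flood drains the bucket first and real users are discarded alongside the attack:

```python
class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch).

    Time is modeled in integer ticks to keep the arithmetic exact.
    """
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per tick
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0

    def allow(self, now):
        # Refill proportionally to elapsed ticks, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=100)
flood_passed = legit_passed = 0

for tick in range(10):
    # Each tick, 100 encrypted flood requests arrive ahead of the one
    # legitimate request (attackers easily outpace real users).
    for _ in range(100):
        flood_passed += bucket.allow(now=tick)
    legit_passed += bucket.allow(now=tick)

print(flood_passed, legit_passed)  # 109 0
```

Every legitimate request is dropped: the limiter caps total volume, but it cannot decide *which* requests get the tokens, which is exactly the shortcoming described above.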

To provide effective protection, solutions need to deliver full attack vector coverage, high scalability and innovative ways to handle management of encryption technologies in a manner that can be operationalized effectively and efficiently.

DDoS attack type #4: PDoS – Permanent Denial of Service

A permanent denial-of-service (PDoS) attack, also known as phlashing, is an attack that damages a system so badly that it requires replacement or re-installation of hardware. By exploiting security flaws or misconfigurations, PDoS can destroy the firmware and/or basic functions of a system.

One method PDoS uses to accomplish its damage is via remote or physical administration on the management interfaces of the victim's hardware, such as routers, printers, or other networking hardware. In the case of firmware attacks, the attacker may use vulnerabilities to replace a device's basic software with a modified, corrupt, or defective firmware image, a process which when done legitimately is known as flashing. This "bricks" the device, rendering it unusable for its original purpose until it can be repaired or replaced. Other attacks include overloading the battery or power systems.

Permanent denial-of-service (PDoS) attacks have been around for a long time; however, this type of attack only surfaces publicly from time to time.

BrickerBot, which Radware discovered in 2017, is still probably the most well-known example. Over a four-day period, BrickerBot launched thousands of PDoS attempts from various locations leveraging Telnet vulnerabilities to breach a victim’s devices.

Assessing risks & taking action

The following behaviors and trends may increase the risk of a PDoS attack targeting your organization:

  • Running a highly virtualized environment that concentrates many software functions on relatively few hardware devices
  • Organizations highly dependent on IoT
  • Organizations with centralized security gateways
  • Organizations that are considered critical infrastructure

The clear action to take is to conduct an audit of the type of technology you are running at or below the operating system level. Develop a clear understanding of the different firmware versions, binaries, chip-level software (like ASICs and FPGA) and technology that is in use in your environment. Also consider batteries, power systems and fan system vulnerabilities.
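One lightweight way to start that audit is to keep a baseline inventory of device firmware versions and checksums, then diff the live environment against it: a checksum mismatch can indicate an unauthorized reflash, while version drift flags unpatched firmware. The sketch below is purely illustrative; the device names, versions, and digests are hypothetical, and a real collection step would pull this data from your management interfaces.

```python
# Hypothetical firmware baseline: device -> (expected version, digest prefix).
baseline = {
    "edge-router-01": ("15.2.4", "ab12"),
    "core-switch-01": ("9.3.10", "cd34"),
    "ups-battery-01": ("2.1.0",  "ef56"),
}

# What a (hypothetical) collection script reported from the live environment.
observed = {
    "edge-router-01": ("15.2.4", "ab12"),
    "core-switch-01": ("9.3.10", "9999"),   # digest mismatch: possible reflash
    "ups-battery-01": ("1.8.0",  "ef56"),   # version drift: unpatched firmware
}

def audit(baseline, observed):
    """Compare observed firmware against the baseline and report findings."""
    findings = []
    for device, (version, digest) in baseline.items():
        obs_version, obs_digest = observed.get(device, (None, None))
        if obs_digest != digest:
            findings.append((device, "checksum mismatch"))
        elif obs_version != version:
            findings.append((device, "version drift"))
    return findings

print(audit(baseline, observed))
# [('core-switch-01', 'checksum mismatch'), ('ups-battery-01', 'version drift')]
```

Even this trivial diff surfaces the two conditions a PDoS audit cares most about: firmware that isn't what you installed, and firmware old enough to carry known flaws.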

DDoS Attack Type #5: IoT Botnets and the economics of DDoS protection

IoT botnets entered the cybersecurity scene in force in 2016. Today, they are among the fastest-growing and most fluid threats, especially as organizations connect more and more devices to the internet.

The appeal of Internet of Things (IoT) devices

For hackers, IoT devices are attractive targets for several reasons:

  • IoT devices usually fall short when it comes to endpoint protection
  • Unlike PCs and servers, there are no regulations or standards for secure use of IoT devices
  • IoT devices operate 24×7 and can be in use at any moment

Botnets: making use of different attack vectors

The Mirai botnet provides a perfect example of the various attack vectors one IoT botnet can unleash on its victims. We can all thank a user named “Anna-senpai” for publishing the Mirai source code to an easily accessible, public forum. The code spread to numerous locations, including several GitHub repositories, where hackers began inspecting it. Since then, the Mirai botnet has been infecting hundreds of thousands of IoT devices—turning them into a “zombie army” capable of launching powerful volumetric DDoS attacks. Security researchers estimate that there are millions of vulnerable IoT devices actively taking part in these coordinated bot attacks.

The economics of botnets

While much has been discussed around Mirai, IoT, “the rise of the machines” and other catchy buzz-phrases, we believe one of the most disruptive changes is the new economics model of IoT botnets.

Not so long ago, hackers were investing a great deal of money, time and effort to scan the Internet for vulnerable servers, build their zombie army and then safeguard it against other hackers. All the while, hackers would keep continual watch for new infection targets.

Now with IoT botnets, instead of spending months of effort and hundreds of dollars, bot masters can take control of millions of IoT devices with near zero cost.

Also read: Forbes Tech Council: Can 5G Networks Stand Up To 4th-Gen Bots?

Knowledge is power when it comes to DDoS attacks

To stay ahead of the threat landscape, knowledge is power. No doubt, hackers will continue to evolve these five threats, and 2020 will bring about a new array of attack vectors that seek to undermine cyber defenses and take advantage of application and network vulnerabilities. Leveraging both the in-house expertise of your organization's cybersecurity team and the know-how of your DDoS vendor will be key to staying ahead of the threat. Here at TierPoint, we specialize in helping businesses create effective IT security strategies to combat modern cyber threats. Contact us to learn more.

Strategic Guide to IT Security_2020 edition

]]>
A Pragmatic Approach to Cybersecurity https://www.tierpoint.com/blog/a-pragmatic-approach-to-cybersecurity/ Tue, 07 May 2019 19:12:14 +0000 https://tierpointdev.wpengine.com/blog/a-pragmatic-approach-to-cybersecurity/ Cybersecurity is vital to businesses looking to protect their mission-critical systems and customer data. To understand cybersecurity more, we interviewed Darren Carroll, TierPoint's Director of Security / Product Management, to get his take on cybersecurity fundamentals, new technology related to IT security, how to improve your IT security, and more. In this first of a two-part series, we dig into his views on leveraging Artificial Intelligence (AI) and machine learning to enhance your IT security position.

Thoughts on AI and machine learning for cybersecurity

Q: It seems like everywhere you look, someone’s talking about using AI and machine learning to counteract increasingly sophisticated cybersecurity threats. What are your views on the subject?

As a technologist, I love machine learning and AI. I also think this is the direction we need to be headed. There are just too many bad actors out there hijacking our IoT devices and creating armies of bad bots. We’ll never have the people power necessary to protect ourselves.

I think we’re in a bit of an infatuation period, though. Artificial intelligence and machine learning are buzzwords with great SEO value. Marketing departments use them all the time, and I’m not just talking about the vendors in the IT security space. When you look closely at the “intelligence” embedded into their products, it’s often just an advanced analytics application that’s capable of analyzing an ever-increasing number of data points. Sometimes it’s not even that.

I’d recommend focusing less on the future of the truly artificially intelligent response to threats and more on how you can leverage the increasingly sophisticated security products and services that are on the market right now to improve your security position. That’s probably going to mean automating against the known threats and bubbling up information to a highly qualified human who can analyze what the algorithms are telling us and choose an appropriate response.

Also read: The Increased Role of Artificial Intelligence in Data Security

Advice on cybersecurity products

Q: What advice would you give an organization looking at some of these advanced security products?

It depends on the organization, but in a lot of cases, I’d say “get help.” That probably sounds a bit self-serving, so let me explain a bit.

As you probably know, we just released a product called CleanIP. This is a next-generation firewall that includes not only the firewall functionality itself, but also deployment services as well as 24x7 monitoring by our security engineers.

Managed Next-Generation Firewall: CleanIP by TierPoint

The backbone of this product is the Fortinet FortiGate Next-Generation Firewall platform. And yes, a business could choose to buy this product and deploy it themselves. But for most businesses, that’s a waste of time and can leave them vulnerable. Most of your small and mid-tier enterprises will buy a next-generation firewall and only enable the historic firewall features and functions. They won’t turn on all that next-gen firewall goodness because they don’t know what’s there, and they don’t have time or the expertise to figure it out.

Also read: Next-Gen Firewalls Provide Advanced Cybersecurity Protection

How CleanIP works

Q: Tell us more about that. Once CleanIP is set up, what do your security engineers do for the customer then?

There are two sides to that equation. We set up CleanIP taking into account the current threat landscape and your current situation – your technologies, regulatory requirements, business scenarios and what have you. But as they say, there's nothing as constant as change. That's certainly true in IT security.

There’s always the next threat – ransomware, cryptojacking, new variants of malware… The customer’s environment is just as dynamic. Maybe there’s a new technology they want to deploy or they’re increasing their mobile footprint or acquiring another business. It might even be just a challenge controlling the apps your sales team is installing on their mobile devices.

Every time you turn on something new, it introduces new vulnerabilities. But today’s businesses can’t just sit still. IT has become one of the best competitive weapons a business has. You just need to ensure your IT security policies and practices can keep up with your business aspirations.

Want to learn more about cybersecurity trends and fundamentals?

Stay tuned for the next part of our interview with Darren Carroll where we will dive deeper into the greatest threats to cybersecurity.

In the meantime, read our Strategic Guide on IT Security where we cover topics from data security fundamentals to the latest cybersecurity trends.

Strategic Guide to IT Security

]]>