Rate Limiting
What is the goal of rate limiting?
Rate limiting serves as a form of application-level DDoS protection. It helps safeguard your origin or data center from application-level attacks and lets you manage the volume of traffic reaching your infrastructure. This control is essential for keeping your origin from being overwhelmed, ensuring availability, and managing costs, particularly if you run on scalable cloud infrastructure.
Unlike a WAF, which analyzes the content of HTTP requests to identify malicious traffic, rate limiting doesn't examine content. Instead, it focuses on the number of requests in different traffic segments. This approach helps ensure that your service handles real customers rather than DDoS attacks targeting your origin or data center.
With rate limiting, you gain control over both the total traffic reaching your origin and the traffic within specific segments. For example, you can set a dedicated limit for login requests. By defining granular rate limiting rules, you build more refined DDoS protection for your application and ensure that your end users' behavior aligns with your expectations. This keeps your origin or data center from handling more traffic than it's designed for and lets you block attacks while serving legitimate traffic without disruption.
Rate limiting can be configured at different levels of your application, either at the CDN level or the origin level. Today, many companies prefer to implement this control at the CDN level, as close to the end user as possible, to avoid dealing with these attacks at the origin.
Rate limiting in a multi-CDN architecture
However, adopting a multi-CDN strategy presents challenges for rate limiting. Why? Because each CDN in a multi-CDN setup sees only a portion of the traffic and controls only that segment; it has no visibility into what the other CDNs are serving. Applying the same rate limiting rules on every CDN isn't effective, because there's no real control over how much traffic each CDN serves at any given time. As a result, those limits end up either too permissive or too strict.
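To make those failure modes concrete, here is a minimal sketch, assuming a hypothetical global target of 1,000 requests per minute spread across three CDNs that each enforce only a local limit. The numbers and helper functions are purely illustrative:

```python
# Hypothetical numbers: the origin can absorb 1,000 requests per minute in
# total, and traffic is spread across three CDNs that each enforce only a
# local, per-CDN limit.

def reaches_origin(per_cdn_limit, per_cdn_traffic):
    """Requests that get past each CDN's local limit and reach the origin."""
    return sum(min(requests, per_cdn_limit) for requests in per_cdn_traffic)

def blocked(per_cdn_limit, per_cdn_traffic):
    """Requests rejected by the local limits."""
    return sum(max(requests - per_cdn_limit, 0) for requests in per_cdn_traffic)

# Too strict: split the 1,000 req/min evenly (333 per CDN). If the traffic
# split shifts and one CDN carries 720 of 900 legitimate req/min, that CDN
# rejects 387 real requests even though the global total is under the limit.
print(blocked(333, [720, 90, 90]))               # -> 387

# Too loose: give each CDN the full 1,000 req/min. An attack spread evenly
# across the CDNs now pushes 3,000 req/min to the origin, three times the
# volume the origin was sized for.
print(reaches_origin(1000, [1000, 1000, 1000]))  # -> 3000
```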
In a multi-CDN architecture, you have two options for effective rate limiting:
- Add an additional tier with rate limiting capabilities, which negatively impacts performance, adds a point of failure, and isn't cost-efficient.
- Use IO River.
IO River is the only solution that enables global rate limiting for your traffic in a multi-CDN architecture without the need for an additional tier. It implements rate limiting across the CDNs, providing visibility into the traffic passing through each CDN and applying your rules to that traffic in parallel across all CDNs. This approach ensures global rate limiting even when traffic dynamically spans multiple CDNs. IO River's global rate limiting doesn't add an extra tier, doesn't introduce latency, doesn't create a single point of failure, and is cost-efficient.
Rate limiting configuration in IO River
Let's explore how to configure rate limiting in IO River. Within IO River, you can define a list of rate limiting rules, each of which handles different traffic segments.
When setting up a new rule:
First, provide a name for the rule and specify the traffic slice you want to apply rate limiting to.
You can segment traffic based on different parts of the HTTP request, such as the URL path. You can also create complex conditions that combine criteria from multiple parts of the HTTP request to identify specific traffic slices.
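As an illustration, a traffic slice can be thought of as a set of conditions on the request. The minimal Python sketch below uses made-up field names that do not reflect IO River's configuration schema; it shows a login segment matched on URL path, HTTP method, and a header value:

```python
# Illustrative only: the field names below are hypothetical, not IO River's schema.
LOGIN_SEGMENT = {
    "path_prefix": "/api/login",                           # URL path condition
    "method": "POST",                                      # combined with the HTTP method
    "header_equals": ("content-type", "application/json"), # and a header value
}

def in_segment(request: dict, segment: dict) -> bool:
    """Return True when the request falls into the configured traffic slice."""
    if not request["path"].startswith(segment["path_prefix"]):
        return False
    if request["method"] != segment["method"]:
        return False
    name, value = segment["header_equals"]
    return request.get("headers", {}).get(name) == value

# This request would be counted against the login rule.
request = {"path": "/api/login", "method": "POST",
           "headers": {"content-type": "application/json"}}
print(in_segment(request, LOGIN_SEGMENT))  # -> True
```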
Next, define the counters. Specify how many requests are allowed within a set time frame. For instance, you might establish a condition for login requests, stating that your infrastructure supports 1,000 login requests per minute.
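Behind such a rule sits a counter over a time window. The sketch below is a generic fixed-window counter assuming the 1,000-requests-per-minute example above; it illustrates the counting idea only and is not IO River's implementation:

```python
import time

LIMIT = 1000         # requests allowed in the segment per window
WINDOW_SECONDS = 60  # length of the counting window

window_start = 0.0
count = 0

def within_limit(now=None):
    """Count one request and report whether the segment is still under its limit."""
    global window_start, count
    now = time.time() if now is None else now
    if now - window_start >= WINDOW_SECONDS:  # start a fresh window
        window_start, count = now, 0
    count += 1
    return count <= LIMIT
```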
After that, outline what action should be taken when the rate limit is exceeded. You have two options: you can either block incoming requests or log them. For example, if login requests surpass the defined limit, further requests can be blocked, preventing them from reaching your origin. Additionally, you can determine how long this action stays in effect. For instance, once the limit is exceeded, you can block additional requests for 10 seconds. Lastly, save your changes and create the rule.
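Continuing the same sketch, the action side might look like the following, assuming the `within_limit` counter from the previous snippet, a hypothetical block-or-log switch, and the 10-second block duration from the example above; again, this illustrates the idea rather than IO River's actual behavior:

```python
import time

BLOCK_SECONDS = 10   # how long the "block" action stays in force (from the example above)
ACTION = "block"     # or "log"

blocked_until = 0.0

def handle_request(now=None):
    """Return 'forward', 'block', or 'log' for one incoming request."""
    global blocked_until
    now = time.time() if now is None else now
    if now < blocked_until:                  # still inside the block window
        return "block"
    if within_limit(now):                    # counter from the previous sketch
        return "forward"
    if ACTION == "block":
        blocked_until = now + BLOCK_SECONDS  # block subsequent requests for 10 seconds
        return "block"
    return "log"                             # log-only mode: record and let traffic through
```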
Monitoring the impact of rate limiting
To monitor how your traffic is affected by your rate limiting rules, you can use the monitoring screen, which displays all events and statistics related to blocked traffic, reasons for blocking, and the specific rule responsible. IO River provides comprehensive capabilities for understanding how your DDoS protection affects your traffic and for analyzing cases in which legitimate traffic is blocked.