Modern attacks are often automated: bots, scrapers, AI crawlers, and API abuse generate traffic that can spike load or cause downtime. Bad bot traffic has grown steadily, and OWASP classifies scraping (OAT-011) and account abuse as automated threats. If you care about privacy and EU data residency, choose a bot and scraper protection layer that is EU-based and keeps your data in the EU. This article outlines a layered approach: an edge/server bot check, frontend behaviour signals, an invisible challenge, API security, and alignment with the OWASP Web and API Top 10.
Modern attacks you need to address
- Automated bots – Credential stuffing, fake signups, form spam, inventory scraping.
- Scrapers and crawlers – AI crawlers, competitor scraping, content and price theft (OAT-011 Scraping).
- API abuse – Unrestricted resource consumption, abuse of signup/trial/payment flows (OWASP API Security Top 10 API4, API6).
- DDoS-style load – High-volume or distributed traffic that can exhaust resources or cause downtime.
Who is affected: News and publishers (scraping, availability), ecommerce (price and product scraping, checkout abuse), SaaS (trial and API abuse), and communities (fake signups, spam). A single layered strategy can protect websites and APIs across these use cases.
Five layers of protection
1. Edge/server bot check
Before serving the page (or the API response), call a check-request API (Bot Protection) from your server or CDN. Send the client IP, path, and optionally headers; receive a decision: allow, challenge, or block. Apply it immediately: redirect to the challenge page or block when needed; otherwise continue. Use a short timeout and fail open so real users are never locked out if the check service is slow or unreachable. This stops many bots and scrapers before they reach your origin and reduces load.
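A minimal sketch of such a server-side check, assuming a hypothetical check-request endpoint (`CHECK_URL` and the `{"ip": ..., "path": ...}` payload shape are illustrative, not any provider's real API). Note the short timeout and the fail-open `except` branch:

```python
import json
import urllib.request
import urllib.error

# Hypothetical endpoint; substitute your provider's real URL and auth header.
CHECK_URL = "https://api.example.com/v1/check-request"

def check_request(client_ip: str, path: str, timeout: float = 0.3) -> str:
    """Ask the bot-protection API for a decision: 'allow', 'challenge', or 'block'.

    Fails open: on timeout or any error we return 'allow', so an outage of
    the check service never locks out real users.
    """
    payload = json.dumps({"ip": client_ip, "path": path}).encode()
    req = urllib.request.Request(
        CHECK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            decision = json.load(resp).get("decision", "allow")
            # Treat anything unexpected as 'allow' as well.
            return decision if decision in ("allow", "challenge", "block") else "allow"
    except (urllib.error.URLError, TimeoutError, ValueError):
        return "allow"  # fail open
```

The key design choice is that the decision is advisory under failure: a 300 ms timeout bounds the latency cost of the check on every request.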
2. Frontend behaviour (SDK)
Load a lightweight script on your pages that sends behaviour and device signals (Bot Detection) to your security API. This improves classification (human vs bot, good crawler vs scraper) and powers analytics. Use it together with the edge check: the edge check gives a fast decision; the on-page data refines future decisions and dashboards.
3. Invisible challenge
When the request check returns "challenge," send the visitor to a challenge page (e.g. a proof-of-work or text challenge). Use a solution that is invisible or low-friction for real users and only shown to suspicious traffic; for example, taCAPTCHA (frictionless proof-of-work or Code Captcha). That way you filter bots and scrapers without hurting user experience.
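To make the proof-of-work idea concrete, here is a toy hashcash-style sketch (not taCAPTCHA's actual algorithm): the server issues a random nonce, the browser brute-forces a counter until a hash meets a difficulty target, and the server verifies with a single hash. Cheap for one human page view, expensive at bot scale:

```python
import hashlib
import secrets

DIFFICULTY = 2  # leading zero hex digits required; real systems tune this

def issue_challenge() -> str:
    """Server side: hand the visitor a random nonce."""
    return secrets.token_hex(16)

def verify(nonce: str, counter: int) -> bool:
    """Server side: a single hash checks the submitted solution."""
    digest = hashlib.sha256(f"{nonce}:{counter}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

def solve(nonce: str) -> int:
    """Client side: brute-force a counter until the hash meets the target."""
    counter = 0
    while not verify(nonce, counter):
        counter += 1
    return counter
```

The asymmetry is the point: verification is one hash, solving takes on average 16^DIFFICULTY attempts, so the cost lands almost entirely on the automated client.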
4. API security
Secure APIs with authentication, authorization, rate limiting, and input validation. For sensitive or high-value endpoints, call the same (or a dedicated) check-request/abuse API to block or challenge bots and automated abuse. This addresses OWASP API Security (e.g. API4 Unrestricted Resource Consumption, API6 Unrestricted Access to Sensitive Business Flows).
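Rate limiting, one of the controls above, is often implemented as a token bucket per API key or client IP. A minimal sketch (an in-memory single-process version; production deployments typically back this with a shared store such as Redis):

```python
import time

class TokenBucket:
    """Per-client limiter: refills `rate` tokens per second up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)  # start full, allowing an initial burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

On rejection, return HTTP 429; repeated rejections from the same client are a useful signal to feed into the bot check (API4 Unrestricted Resource Consumption is exactly what this bounds).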
5. OWASP alignment
- OWASP Top 10 Web (2021) – Broken Access Control, Injection, Security Misconfiguration, etc. Apply secure coding and configuration.
- OWASP API Security Top 10 – Address authorization, auth, resource consumption, and business-flow abuse as above.
- OWASP Automated Threats – Use request checks and behaviour to mitigate scraping, account abuse, and resource exhaustion.
How the flow works
Flow at a glance: Visitor → your server or CDN → check-request API → allow / challenge / block.
- Visitor hits your server or CDN.
- Server calls the check-request API; gets allow, challenge, or block.
- On allow, the page loads; the SDK sends behaviour data for ongoing security and analytics.
- On challenge or block, the user is redirected to the appropriate page; after passing the challenge, they return to the page they requested.
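The steps above can be sketched as a small dispatch in the request handler. `check_request` stands in for your provider's check-request API and is injected here so the routing logic is testable without a network call (the response strings are illustrative):

```python
def handle(path: str, client_ip: str, check_request) -> str:
    """Route a request based on the bot-protection decision."""
    decision = check_request(client_ip, path)
    if decision == "block":
        return "403 blocked"
    if decision == "challenge":
        # Preserve the original path so the visitor returns after the challenge.
        return f"302 /challenge?return={path}"
    # 'allow' (or anything unexpected, failing open): serve the page.
    # The on-page SDK then reports behaviour signals asynchronously.
    return f"200 serve {path}"
```

Example: with a stub that always challenges, `handle("/pricing", ip, stub)` redirects to the challenge page with `/pricing` as the return target.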
Tools and solutions (application security)
- WAF and DDoS – Cloudflare, AWS WAF, Fastly, Imperva – protect against attacks and volumetric traffic.
- Bot and scraper protection – Trusted Accounts (EU-based, data in the EU; Bot Protection check-request, Bot Detection SDK, optional taCAPTCHA; allow/challenge/block at your server or any CDN), DataDome, Cloudflare Bot Management, HUMAN Security – stop bots and scrapers before they reach your origin.
- WordPress – Trusted Accounts WordPress plugin for bot protection; see Top 5 CAPTCHA Solutions for WordPress in 2026.
Summary
Protect your website and APIs from modern attacks by combining: (1) edge/server bot check, (2) frontend behaviour (SDK), (3) invisible challenge when needed, (4) API security (auth, rate limit, bot check), and (5) OWASP alignment. That addresses automated bots, scrapers, API abuse, and resource exhaustion in one coherent strategy.
Further reading
- Website Security Best Practices for 2026 – HTTPS, WAF, bot protection, API security, monitoring; tools and further reading.
- How to Protect your Website in 2026 – Actionable steps: edge protection, crawler detection, invisible challenges, API hardening.
- Professional API Security Best Practices in 2026 – Auth, rate limiting, bot protection for APIs; how to protect endpoints with a check-request API.
- OWASP API Security: What you should know – OWASP API Security Top 10 and mitigations.
- Bot Protection - Top 7 Tools for 2026 – Compare bot protection tools; server vs client, WordPress, API.
- Cloudflare Alternative 2026 – EU-based bot protection, data in Europe.
- DataDome Alternative 2026 – EU-based, affordable bot protection; self-service.
- How to Defend your Platform against Spammers, Bots and Trolls – Four layers of defence and use cases by industry.
- Stop Bots and Abusive Users: WordPress – Protect your WordPress site with the Trusted Accounts plugin.
Try Trusted Accounts – EU-based bot and scraper protection with Bot Protection (allow/challenge/block decisions in around 50 ms), Bot Detection (frontend SDK), and optional taCAPTCHA (proof-of-work and Code Captcha). Real users stay unaffected; data stays in the EU.


