# Crawling services

External Importer supports multiple crawling services that automatically rotate IP addresses, helping you avoid blocking and rate limiting.\
The currently supported services are:

* [Scrapingdog](https://www.keywordrush.com/go/scrapingdog)
* ScrapeOwl
* [ScraperAPI](https://keywordrush.com/go/scraperapi)
* Crawlbase

{% hint style="warning" %}
These services are paid, but each typically provides **around 1,000 free requests per month**.
{% endhint %}

<figure><img src="https://2204606725-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MJHhS3qgDA1lCM6b1Nw%2Fuploads%2FxHV95h1nehXFMUe5YabP%2Fimage.png?alt=media&#x26;token=f9922192-9a18-4b3d-8e78-669f9754bd6e" alt=""><figcaption><p>You can view the crawler service used for the last request on the product import page, in the right-hand panel.</p></figcaption></figure>

### How to Route Requests Through a Scraping Service

#### 1. Add API Keys

1. Open **External Importer → Settings → Extractor**
2. Enter your API keys for the providers you want to use

You can enable one or more providers at the same time.

<figure><img src="https://2204606725-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MJHhS3qgDA1lCM6b1Nw%2Fuploads%2FfUyq5IOQhAXfRwGptrU6%2Fimage.png?alt=media&#x26;token=1326d017-4830-4252-9bfe-423d0bdd2fa9" alt="" width="563"><figcaption></figcaption></figure>

#### 2. Configure Routing Rules

In the **Routing rules** table:

1. Click **Add rule**
2. Enter a domain or pattern
3. Select the scraping provider
4. (Optional) Add additional parameters for the provider API

<figure><img src="https://2204606725-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-MJHhS3qgDA1lCM6b1Nw%2Fuploads%2FJYdnFHmPoGNXRAfAuaM8%2Fimage.png?alt=media&#x26;token=5411c9d0-963d-45a2-8735-37551d850dbd" alt=""><figcaption></figcaption></figure>

#### Pattern Examples

You can use **simple domain names** or advanced patterns:

* `example.com`
* `*.example.com` (any subdomain)
* `example.com/path/*` (match only specific URL paths)
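The matching behavior described above can be sketched in a few lines of Python using `fnmatch`-style wildcards. This is only an illustration of the documented pattern styles, not the plugin's actual matcher (which is internal), and the exact-host rule for plain domains is an assumption:

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

def matches(pattern: str, url: str) -> bool:
    """Illustrative matcher for the three pattern styles above."""
    parsed = urlparse(url)
    if "/" in pattern:
        # Path pattern, e.g. "example.com/path/*": match host + path together.
        return fnmatch(parsed.netloc + parsed.path, pattern)
    if "*" in pattern:
        # Wildcard host pattern, e.g. "*.example.com": match any subdomain.
        return fnmatch(parsed.netloc, pattern)
    # Plain domain: assume an exact host match.
    return parsed.netloc == pattern
```

For example, `matches("example.com/path/*", "https://example.com/path/item")` returns `True`, while `matches("example.com", "https://other.com/")` returns `False`.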

#### Additional Parameters

Additional parameters are appended to the provider API request.\
Each provider uses its **own parameter names** and accepted values.

**ScraperAPI Parameters**

* `country_code=us` — Geotargeting
* `premium=true` — Premium residential/mobile IPs
* `ultra_premium=true` — Advanced bypass mechanism
* `render=true` — JavaScript rendering

**Scrapingdog Parameters**

* `country=de` — Geotargeting
* `premium=true` — Premium residential proxies
* `dynamic=true` — JavaScript rendering

**Combining Parameters**

Multiple parameters can be joined using `&`:

```
country_code=us&premium=true&render=true
```
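To see how the combined string ends up in the provider request, here is a minimal sketch of building a ScraperAPI-style GET request URL. The plugin does this internally; the base URL and the `api_key`/`url` parameter names follow ScraperAPI's documented API, but treat the exact shape as an assumption:

```python
from urllib.parse import urlencode

def build_scraperapi_url(api_key: str, target_url: str, extra_params: str = "") -> str:
    """Append the configured extra parameters to a ScraperAPI-style request URL."""
    base = "https://api.scraperapi.com/"
    # Required parameters: your key and the (URL-encoded) page to fetch.
    query = urlencode({"api_key": api_key, "url": target_url})
    if extra_params:
        # The routing rule's additional parameters, e.g. "country_code=us&render=true".
        query += "&" + extra_params
    return base + "?" + query
```

With `extra_params="country_code=us&premium=true&render=true"`, those three settings are simply appended to the request's query string.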

### Rule Priority

Routing rules are evaluated **from top to bottom**.\
The **first rule that matches** a URL is selected, and the chosen provider will handle the request.
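The first-match evaluation can be sketched as follows. The rule table and provider names here are hypothetical examples, and the simplified host-only matching stands in for the plugin's real pattern logic:

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

# Hypothetical routing rules, checked in table order (top to bottom).
RULES = [
    ("amazon.*", "scraperapi"),
    ("*.example.com", "scrapingdog"),
    ("example.com", "scrapeowl"),
]

def pick_provider(url: str):
    """Return the provider of the first rule whose pattern matches the URL's host."""
    host = urlparse(url).netloc
    for pattern, provider in RULES:
        if host == pattern or fnmatch(host, pattern):
            return provider  # first match wins; later rules are never checked
    return None  # no rule matched: the request is sent directly
```

Note the ordering effect: `shop.example.com` is caught by the `*.example.com` rule, while the bare `example.com` falls through to the third rule, so more specific rules should sit higher in the table.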
