Commit 96949be
feat: Unify Apify and Scrapy to use single event loop & remove nest-asyncio (#390)
### Description

- Apify (asyncio) and Scrapy (Twisted) now run on a single event loop.
- `nest-asyncio` has been completely removed.
- This change also appears to have improved performance.
- The `ApifyScheduler`, which is synchronous, now executes asyncio coroutines (communication with the request queue) in a separate thread with its own asyncio event loop.
- The logging setup had to be adjusted, and it was moved to a dedicated file in the SDK.
- The try-import functionality for optional dependencies from Crawlee was added to the `scrapy` subpackage.

### Issues

- Closes: #148
- Closes: #176
- Closes: #392
- Relates: apify/actor-templates#303 - this issue will be closed once the corresponding PR in `actor-templates` is merged.

### Tests

- A new integration test for the Scrapy Actor has been added.
- And of course, it was tested manually using the Actor from the guides/templates.

### Next steps

- Update the Scrapy Actor template in `actor-templates`.
- Update the [Actor Scrapy Books Example](https://github.com/apify/actor-scrapy-books-example).
- Add an HTTP cache storage for the key-value store; @honzajavorek will provide his implementation.

### Follow-up issues

- There are still a few issues to be resolved:
- #391
- #395
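The separate-thread approach described for the synchronous `ApifyScheduler` can be sketched in a few lines: run a dedicated asyncio event loop in a background thread and submit coroutines to it from synchronous code. This is a minimal illustration of the pattern only, not the SDK's actual implementation; `EventLoopThread` and `fetch_next_request` are hypothetical names.

```python
from __future__ import annotations

import asyncio
import threading
from collections.abc import Coroutine
from typing import Any


class EventLoopThread:
    """Runs an asyncio event loop in a dedicated background thread so that
    synchronous code (such as a Scrapy scheduler) can execute coroutines
    without touching the main event loop shared by Twisted and asyncio."""

    def __init__(self) -> None:
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._loop.run_forever, daemon=True)
        self._thread.start()

    def run_coro(self, coro: Coroutine[Any, Any, Any]) -> Any:
        # Submit the coroutine to the background loop and block until it finishes.
        return asyncio.run_coroutine_threadsafe(coro, self._loop).result()

    def close(self) -> None:
        self._loop.call_soon_threadsafe(self._loop.stop)
        self._thread.join()


async def fetch_next_request() -> str:
    """Stand-in for an asynchronous request-queue call."""
    await asyncio.sleep(0)
    return 'https://example.com'


loop_thread = EventLoopThread()
url = loop_thread.run_coro(fetch_next_request())  # -> 'https://example.com'
loop_thread.close()
```

Because the coroutine runs on its own loop, the blocking `.result()` call never re-enters an already running event loop, which is what previously required `nest-asyncio`.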
Parent: 89c230a

30 files changed: +591 −471 lines

docs/02_guides/05_scrapy.mdx

Lines changed: 53 additions & 42 deletions
@@ -7,90 +7,101 @@ import CodeBlock from '@theme/CodeBlock';
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-import UnderscoreMainExample from '!!raw-loader!./code/scrapy_src/__main__.py';
-import MainExample from '!!raw-loader!./code/scrapy_src/main.py';
-import ItemsExample from '!!raw-loader!./code/scrapy_src/items.py';
-import SettingsExample from '!!raw-loader!./code/scrapy_src/settings.py';
-import TitleSpiderExample from '!!raw-loader!./code/scrapy_src/spiders/title.py';
+import UnderscoreMainExample from '!!raw-loader!./code/scrapy_project/src/__main__.py';
+import MainExample from '!!raw-loader!./code/scrapy_project/src/main.py';
+import ItemsExample from '!!raw-loader!./code/scrapy_project/src/items.py';
+import SpidersExample from '!!raw-loader!./code/scrapy_project/src/spiders/title.py';
+import SettingsExample from '!!raw-loader!./code/scrapy_project/src/settings.py';
 
-[Scrapy](https://scrapy.org/) is an open-source web scraping framework written in Python. It provides a complete set of tools for web scraping, including the ability to define how to extract data from websites, handle pagination and navigation.
+[Scrapy](https://scrapy.org/) is an open-source web scraping framework for Python. It provides tools for defining scrapers, extracting data from web pages, following links, and handling pagination. With the Apify SDK, Scrapy projects can be converted into Apify [Actors](https://docs.apify.com/platform/actors), integrated with Apify [storages](https://docs.apify.com/platform/storage), and executed on the Apify [platform](https://docs.apify.com/platform).
 
-:::tip
+## Integrating Scrapy with the Apify platform
 
-Our CLI now supports transforming Scrapy projects into Apify Actors with a single command! Check out the [Scrapy migration guide](https://docs.apify.com/cli/docs/integrating-scrapy) for more information.
+The Apify SDK provides an Apify-Scrapy integration. The main challenge is to combine two asynchronous frameworks that use different event loop implementations. Scrapy uses [Twisted](https://twisted.org/) for asynchronous execution, while the Apify SDK is based on [asyncio](https://docs.python.org/3/library/asyncio.html). The key step is to install Twisted's `asyncioreactor`, which runs Twisted's asyncio-compatible event loop. This allows both Twisted and asyncio to run on a single event loop, enabling a Scrapy spider to run as an Apify Actor with minimal modifications.
 
-:::
+<CodeBlock className="language-python" title="__main__.py: The Actor entry point">
+{UnderscoreMainExample}
+</CodeBlock>
 
-Some of the key features of Scrapy for web scraping include:
+In this setup, `apify.scrapy.initialize_logging` configures an Apify log formatter and reconfigures loggers to ensure consistent logging across Scrapy, the Apify SDK, and other libraries. The `apify.scrapy.run_scrapy_actor` function bridges asyncio coroutines with Twisted's reactor, enabling the Actor's main coroutine, which contains the Scrapy spider, to be executed.
 
-- **Request and response handling** - Scrapy provides an easy-to-use interface for making HTTP requests and handling responses,
-allowing you to navigate through web pages and extract data.
-- **Robust Spider framework** - Scrapy has a spider framework that allows you to define how to scrape data from websites,
-including how to follow links, how to handle pagination, and how to parse the data.
-- **Built-in data extraction** - Scrapy includes built-in support for data extraction using XPath and CSS selectors,
-allowing you to easily extract data from HTML and XML documents.
-- **Integration with other tool** - Scrapy can be integrated with other Python tools like BeautifulSoup and Selenium for more advanced scraping tasks.
+Make sure the `SCRAPY_SETTINGS_MODULE` environment variable is set to the path of the Scrapy settings module. This variable is also used by the `Actor` class to detect that the project is a Scrapy project, triggering additional actions.
 
-## Using Scrapy template
+<CodeBlock className="language-python" title="main.py: The Actor main coroutine">
+{MainExample}
+</CodeBlock>
 
-The fastest way to start using Scrapy in Apify Actors is by leveraging the [Scrapy Actor template](https://apify.com/templates/categories/python). This template provides a pre-configured structure and setup necessary to integrate Scrapy into your Actors seamlessly. It includes: setting up the Scrapy settings, `asyncio` reactor, Actor logger, and item pipeline as necessary to make Scrapy spiders run in Actors and save their outputs in Apify datasets.
+Within the Actor's main coroutine, the Actor's input is processed as usual. The function `apify.scrapy.apply_apify_settings` is then used to configure Scrapy settings with Apify-specific components before the spider is executed. The key components and other helper functions are described in the next section.
 
-## Manual setup
+## Key integration components
 
-If you prefer not to use the template, you will need to manually configure several components to integrate Scrapy with the Apify SDK.
+The Apify SDK provides several custom components to support integration with the Apify platform:
 
-### Event loop & reactor
+- [`apify.scrapy.ApifyScheduler`](https://docs.apify.com/sdk/python/reference/class/ApifyScheduler) - Replaces Scrapy's default [scheduler](https://docs.scrapy.org/en/latest/topics/scheduler.html) with one that uses Apify's [request queue](https://docs.apify.com/platform/storage/request-queue) for storing requests. It manages enqueuing, dequeuing, and maintaining the state and priority of requests.
+- [`apify.scrapy.ActorDatasetPushPipeline`](https://docs.apify.com/sdk/python/reference/class/ActorDatasetPushPipeline) - A Scrapy [item pipeline](https://docs.scrapy.org/en/latest/topics/item-pipeline.html) that pushes scraped items to Apify's [dataset](https://docs.apify.com/platform/storage/dataset). When enabled, every item produced by the spider is sent to the dataset.
- [`apify.scrapy.ApifyHttpProxyMiddleware`](https://docs.apify.com/sdk/python/reference/class/ApifyHttpProxyMiddleware) - A Scrapy [middleware](https://docs.scrapy.org/en/latest/topics/downloader-middleware.html) that manages proxy configurations. This middleware replaces Scrapy's default `HttpProxyMiddleware` to facilitate the use of Apify's proxy service.
 
-The Apify SDK is built on Python's asynchronous [`asyncio`](https://docs.python.org/3/library/asyncio.html) library, whereas Scrapy uses [`twisted`](https://twisted.org/) for its asynchronous operations. To make these two frameworks work together, you need to:
+Additional helper functions in the [`apify.scrapy`](https://github.com/apify/apify-sdk-python/tree/master/src/apify/scrapy) subpackage include:
 
-- Set the [`AsyncioSelectorReactor`](https://docs.scrapy.org/en/latest/topics/asyncio.html#installing-the-asyncio-reactor) in Scrapy's project settings: This reactor is `twisted`'s implementation of the `asyncio` event loop, enabling compatibility between the two libraries.
-- Install [`nest_asyncio`](https://pypi.org/project/nest-asyncio/): The `nest_asyncio` package allows the asyncio event loop to run within an already running loop, which is essential for integration with the Apify SDK.
+- `apply_apify_settings` - Applies Apify-specific components to Scrapy settings.
+- `to_apify_request` and `to_scrapy_request` - Convert between Apify and Scrapy request objects.
+- `initialize_logging` - Configures logging for the Actor environment.
+- `run_scrapy_actor` - Bridges asyncio and Twisted event loops.
 
-By making these adjustments, you can ensure collaboration between `twisted`-based Scrapy and the `asyncio`-based Apify SDK.
+## Create a new Apify-Scrapy project
 
-### Other components
+The simplest way to start using Scrapy in Apify Actors is to use the [Scrapy Actor template](https://apify.com/templates/python-scrapy). The template provides a pre-configured project structure and setup that includes all necessary components to run Scrapy spiders as Actors and store their output in Apify datasets. If you prefer manual setup, refer to the example Actor section below for configuration details.
 
-We also prepared other Scrapy components to work with Apify SDK, they are available in the [`apify/scrapy`](https://github.com/apify/apify-sdk-python/tree/master/src/apify/scrapy) sub-package. These components include:
+## Wrapping an existing Scrapy project
 
-- `ApifyScheduler`: A Scrapy scheduler that uses the Apify Request Queue to manage requests.
-- `ApifyHttpProxyMiddleware`: A Scrapy middleware for working with Apify proxies.
-- `ActorDatasetPushPipeline`: A Scrapy item pipeline that pushes scraped items into the Apify dataset.
+The Apify CLI supports converting an existing Scrapy project into an Apify Actor with a single command. The CLI expects the project to follow the standard Scrapy layout (including a `scrapy.cfg` file in the project root). During the wrapping process, the CLI:
 
-The module contains other helper functions, like `apply_apify_settings` for applying these components to Scrapy settings, and `to_apify_request` and `to_scrapy_request` for converting between Apify and Scrapy request objects.
+- Creates the necessary files and directories for an Apify Actor.
+- Installs the Apify SDK and required dependencies.
+- Updates Scrapy settings to include Apify-specific components.
+
+For further details, see the [Scrapy migration guide](https://docs.apify.com/cli/docs/integrating-scrapy).
 
 ## Example Actor
 
-Here is an example of a Scrapy Actor that scrapes the titles of web pages and enqueues all links found on each page. This example is identical to the one provided in the Apify Actor templates.
+The following example demonstrates a Scrapy Actor that scrapes page titles and enqueues links found on each page. This example aligns with the structure provided in the Apify Actor templates.
 
 <Tabs>
 <TabItem value="__main__.py" label="__main__.py">
 <CodeBlock className="language-python">
 {UnderscoreMainExample}
 </CodeBlock>
 </TabItem>
-<TabItem value="main.py" label="main.py" default>
+<TabItem value="main.py" label="main.py">
 <CodeBlock className="language-python">
 {MainExample}
 </CodeBlock>
 </TabItem>
-<TabItem value="items.py" label="items.py" default>
+<TabItem value="settings.py" label="settings.py">
 <CodeBlock className="language-python">
-{ItemsExample}
+{SettingsExample}
 </CodeBlock>
 </TabItem>
-<TabItem value="settings.py" label="settings.py" default>
+<TabItem value="items.py" label="items.py">
 <CodeBlock className="language-python">
-{SettingsExample}
+{ItemsExample}
 </CodeBlock>
 </TabItem>
-<TabItem value="spiders/title.py" label="spiders/title.py" default>
+<TabItem value="spiders/title.py" label="spiders/title.py">
 <CodeBlock className="language-python">
-{TitleSpiderExample}
+{SpidersExample}
 </CodeBlock>
 </TabItem>
 </Tabs>
 
 ## Conclusion
 
-In this guide you learned how to use Scrapy in Apify Actors. You can now start building your own web scraping projects
-using Scrapy, the Apify SDK and host them on the Apify platform. See the [Actor templates](https://apify.com/templates/categories/python) to get started with your own scraping tasks. If you have questions or need assistance, feel free to reach out on our [GitHub](https://github.com/apify/apify-sdk-python) or join our [Discord community](https://discord.com/invite/jyEM2PRvMU). Happy scraping!
+In this guide you learned how to use Scrapy in Apify Actors. You can now start building your own web scraping projects using Scrapy and the Apify SDK, and host them on the Apify platform. See the [Actor templates](https://apify.com/templates/categories/python) to get started with your own scraping tasks. If you have questions or need assistance, feel free to reach out on our [GitHub](https://github.com/apify/apify-sdk-python) or join our [Discord community](https://discord.com/invite/jyEM2PRvMU). Happy scraping!
+
+## Additional resources
+
+- [Apify CLI: Integrating Scrapy projects](https://docs.apify.com/cli/docs/integrating-scrapy)
+- [Apify: Run Scrapy spiders on Apify](https://apify.com/run-scrapy-in-cloud)
+- [Apify templates: Python Actor Scrapy template](https://apify.com/templates/python-scrapy)
+- [Apify store: Scrapy Books Example Actor](https://apify.com/vdusek/scrapy-books-example)
+- [Scrapy: Official documentation](https://docs.scrapy.org/)
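The `apply_apify_settings` helper described in the guide above layers Apify-specific components on top of a project's settings. The following is a minimal sketch of that idea using a plain dict; the real helper works with `scrapy.settings.Settings` objects, and the pipeline priority and the `APIFY_PROXY_SETTINGS` key shown here are illustrative assumptions, not the SDK's actual values.

```python
from __future__ import annotations


def apply_apify_settings(project_settings: dict, proxy_config: dict | None = None) -> dict:
    """Return a copy of the project settings with Apify components plugged in.

    Sketch only: the component paths come from the guide above, but the exact
    override mechanics of the real helper differ.
    """
    settings = dict(project_settings)  # do not mutate the caller's settings
    settings.update({
        # Replace Scrapy's default scheduler with the request-queue-backed one.
        'SCHEDULER': 'apify.scrapy.ApifyScheduler',
        # Push every scraped item to the Actor's default dataset.
        'ITEM_PIPELINES': {'apify.scrapy.ActorDatasetPushPipeline': 1000},
        # Carry the proxy configuration along for the proxy middleware (key name is illustrative).
        'APIFY_PROXY_SETTINGS': proxy_config or {},
    })
    return settings


settings = apply_apify_settings({'ROBOTSTXT_OBEY': True}, proxy_config={'useApifyProxy': True})
```

Keeping the override in one place means the project's own settings module stays untouched, which is also why the real integration applies its components at spider-run time rather than editing `settings.py`.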
File renamed without changes.
docs/02_guides/code/scrapy_project/src/__main__.py (new file)

Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
+from __future__ import annotations
+
+from twisted.internet import asyncioreactor
+
+# Install Twisted's asyncio reactor before importing any other Twisted or Scrapy components.
+asyncioreactor.install()  # type: ignore[no-untyped-call]
+
+import os
+
+from apify.scrapy import initialize_logging, run_scrapy_actor
+
+# Import your main Actor coroutine here.
+from .main import main
+
+# Ensure the location to the Scrapy settings module is defined.
+os.environ['SCRAPY_SETTINGS_MODULE'] = 'src.settings'
+
+
+if __name__ == '__main__':
+    initialize_logging()
+    run_scrapy_actor(main())
docs/02_guides/code/scrapy_project/src/items.py (new file)

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+from __future__ import annotations
+
+from scrapy import Field, Item
+
+
+class TitleItem(Item):
+    """Represents a title item scraped from a web page."""
+
+    url = Field()
+    title = Field()
docs/02_guides/code/scrapy_project/src/main.py (new file)

Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
+from __future__ import annotations
+
+from scrapy.crawler import CrawlerRunner
+from scrapy.utils.defer import deferred_to_future
+
+from apify import Actor
+from apify.scrapy import apply_apify_settings
+
+# Import your Scrapy spider here.
+from .spiders import TitleSpider as Spider
+
+
+async def main() -> None:
+    """Apify Actor main coroutine for executing the Scrapy spider."""
+    async with Actor:
+        # Retrieve and process Actor input.
+        actor_input = await Actor.get_input() or {}
+        start_urls = [url['url'] for url in actor_input.get('startUrls', [])]
+        allowed_domains = actor_input.get('allowedDomains')
+        proxy_config = actor_input.get('proxyConfiguration')
+
+        # Apply Apify settings, which will override the Scrapy project settings.
+        settings = apply_apify_settings(proxy_config=proxy_config)
+
+        # Create CrawlerRunner and execute the Scrapy spider.
+        crawler_runner = CrawlerRunner(settings)
+        crawl_deferred = crawler_runner.crawl(
+            Spider,
+            start_urls=start_urls,
+            allowed_domains=allowed_domains,
+        )
+        await deferred_to_future(crawl_deferred)
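The `await deferred_to_future(...)` call above adapts Twisted's callback-based `Deferred` into something asyncio can await. The underlying callback-to-future pattern it relies on can be sketched with plain asyncio; `callback_api` and `as_awaitable` are hypothetical stand-ins, not Scrapy's implementation.

```python
import asyncio
from collections.abc import Callable


def callback_api(on_done: Callable[[str], None]) -> None:
    """Stand-in for a callback-based API, such as adding a callback to a Deferred."""
    on_done('crawl finished')


async def as_awaitable() -> str:
    loop = asyncio.get_running_loop()
    future: asyncio.Future[str] = loop.create_future()
    # Bridge the callback into a Future that the running event loop can await,
    # analogous to what scrapy.utils.defer.deferred_to_future does for Deferreds.
    callback_api(future.set_result)
    return await future


result = asyncio.run(as_awaitable())  # -> 'crawl finished'
```

Because both Twisted and asyncio share one loop in this setup, the `Deferred`'s callback fires on the same loop that awaits the future, so no thread hand-off is needed here.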
File renamed without changes.
docs/02_guides/code/scrapy_project/src/settings.py (new file)

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
+BOT_NAME = 'titlebot'
+DEPTH_LIMIT = 1
+LOG_LEVEL = 'INFO'
+NEWSPIDER_MODULE = 'src.spiders'
+ROBOTSTXT_OBEY = True
+SPIDER_MODULES = ['src.spiders']
+TELNETCONSOLE_ENABLED = False
+TWISTED_REACTOR = 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'
docs/02_guides/code/scrapy_project/src/spiders/__init__.py (new file)

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+from .title import TitleSpider
File renamed without changes.

docs/02_guides/code/scrapy_src/spiders/title.py renamed to docs/02_guides/code/scrapy_project/src/spiders/title.py

Lines changed: 27 additions & 13 deletions
@@ -1,8 +1,6 @@
-# ruff: noqa: TID252, RUF012
-
 from __future__ import annotations
 
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Any
 from urllib.parse import urljoin
 
 from scrapy import Request, Spider
@@ -16,28 +14,44 @@
 
 
 class TitleSpider(Spider):
-    """Scrapes title pages and enqueues all links found on the page."""
-
-    name = 'title_spider'
+    """A spider that scrapes web pages to extract titles and discover new links.
 
-    # The `start_urls` specified in this class will be merged with the `start_urls` value from your Actor input
-    # when the project is executed using Apify.
-    start_urls = ['https://apify.com/']
+    This spider retrieves the content of the <title> element from each page and queues
+    any valid hyperlinks for further crawling.
+    """
 
-    # Scrape only the pages within the Apify domain.
-    allowed_domains = ['apify.com']
+    name = 'title_spider'
 
     # Limit the number of pages to scrape.
     custom_settings = {'CLOSESPIDER_PAGECOUNT': 10}
 
+    def __init__(
+        self,
+        start_urls: list[str],
+        allowed_domains: list[str],
+        *args: Any,
+        **kwargs: Any,
+    ) -> None:
+        """A default constructor.
+
+        Args:
+            start_urls: URLs to start the scraping from.
+            allowed_domains: Domains that the scraper is allowed to crawl.
+            *args: Additional positional arguments.
+            **kwargs: Additional keyword arguments.
+        """
+        super().__init__(*args, **kwargs)
+        self.start_urls = start_urls
+        self.allowed_domains = allowed_domains
+
     def parse(self, response: Response) -> Generator[TitleItem | Request, None, None]:
         """Parse the web page response.
 
         Args:
             response: The web page response.
 
         Yields:
-            Yields scraped TitleItem and Requests for links.
+            Yields scraped `TitleItem` and new `Request` objects for links.
         """
         self.logger.info('TitleSpider is parsing %s...', response)
 
@@ -46,7 +60,7 @@ def parse(self, response: Response) -> Generator[TitleItem | Request, None, None
         title = response.css('title::text').extract_first()
         yield TitleItem(url=url, title=title)
 
-        # Extract all links from the page, create Requests out of them, and yield them
+        # Extract all links from the page, create `Request` objects out of them, and yield them.
         for link_href in response.css('a::attr("href")'):
             link_url = urljoin(response.url, link_href.get())
             if link_url.startswith(('http://', 'https://')):
