How to run Scrapy in a thread
Published: 2020-05-25 17:00:29 | Category: Python | Source: Internet
The code snippet below was collected from the web and is shared here for reference.

# When you run the Scrapy crawler from a program, the code blocks until the
# Scrapy crawler is finished. This is due to how Twisted (the underlying
# asynchronous network library) works. This prevents using the Scrapy crawler
# from scripts or other code.
#
# To circumvent this issue, you can run the Scrapy crawler in a thread with
# this code.
#
# Keep in mind that this code is mainly for illustrative purposes and far from
# production ready.
#
# Also, the code was only tested with Scrapy 0.8, and will probably need some
# adjustments for newer versions (since the core API isn't stable yet), but
# you get the idea.
"""
Code to run Scrapy crawler in a thread - works on Scrapy 0.8
"""
import threading, Queue
from twisted.internet import reactor
from scrapy.xlib.pydispatch import dispatcher
from scrapy.core.manager import scrapymanager
from scrapy.core.engine import scrapyengine
from scrapy.core import signals
class CrawlerThread(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.running = False

    def run(self):
        self.running = True
        # Configure Scrapy so that it does not take control of the reactor,
        # then run the Twisted reactor in this thread. Signal handlers can
        # only be installed from the main thread, so they are disabled here.
        scrapymanager.configure(control_reactor=False)
        scrapymanager.start()
        reactor.run(installSignalHandlers=False)

    def crawl(self, *args):
        if not self.running:
            raise RuntimeError("CrawlerThread not running")
        self._call_and_block_until_signal(signals.spider_closed,
                                          scrapymanager.crawl, *args)

    def stop(self):
        reactor.callFromThread(scrapyengine.stop)

    def _call_and_block_until_signal(self, signal, f, *a, **kw):
        # Schedule f on the reactor thread and block the calling thread
        # until the given signal fires.
        q = Queue.Queue()
        def unblock():
            q.put(None)
        dispatcher.connect(unblock, signal=signal)
        reactor.callFromThread(f, *a, **kw)
        q.get()
# Usage example below:

import os
os.environ.setdefault('SCRAPY_SETTINGS_MODULE', 'myproject.settings')

from scrapy.xlib.pydispatch import dispatcher
from scrapy.core import signals
from scrapy.conf import settings
from scrapy.crawler import CrawlerThread  # the CrawlerThread class defined above

settings.overrides['LOG_ENABLED'] = False  # avoid log noise

def item_passed(item):
    print "Just scraped item:", item

dispatcher.connect(item_passed, signal=signals.item_passed)

crawler = CrawlerThread()
print "Starting crawler thread..."
crawler.start()
print "Crawling somedomain.com...."
crawler.crawl('somedomain.com')  # blocking call
print "Crawling anotherdomain.com..."
crawler.crawl('anotherdomain.com')  # blocking call
print "Stopping crawler thread..."
crawler.stop()
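The snippet above is tied to Scrapy 0.8 internals (scrapymanager, scrapyengine, scrapy.xlib.pydispatch) that no longer exist in current releases. As a rough, untested sketch of the same pattern on a modern Scrapy (a background reactor thread plus reactor.callFromThread), something along these lines could work; BackgroundCrawler is a made-up name and the spider class you pass to crawl() stands in for your own spider:

import threading
import queue

from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings


class BackgroundCrawler:
    """Run the Twisted reactor in a daemon thread and schedule crawls onto it."""

    def __init__(self):
        self.runner = CrawlerRunner(get_project_settings())
        # Signal handlers can only be installed from the main thread,
        # so they are disabled for the reactor thread.
        self._thread = threading.Thread(
            target=reactor.run,
            kwargs={"installSignalHandlers": False},
            daemon=True,
        )

    def start(self):
        self._thread.start()

    def crawl(self, spider_cls, *args, **kwargs):
        # Schedule the crawl on the reactor thread and block the calling
        # thread until its Deferred fires (with a result or a Failure).
        done = queue.Queue()

        def schedule():
            d = self.runner.crawl(spider_cls, *args, **kwargs)
            d.addBoth(done.put)

        reactor.callFromThread(schedule)
        return done.get()

    def stop(self):
        reactor.callFromThread(reactor.stop)

Alternatively, when it is acceptable for the calling program to block, a plain CrawlerProcess is the simplest option; the third-party crochet library is also commonly used to drive CrawlerRunner from ordinary blocking code.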
