
Revision: 58215
at July 1, 2012 18:09 by hlongmore


Updated Code
# Standard Python library imports
# (none needed)

# 3rd party modules
import pymongo

from scrapy import log
from scrapy.conf import settings
from scrapy.exceptions import DropItem


class MongoDBPipeline(object):
    def __init__(self):
        # Read the MongoDB connection details from the project settings.
        self.server = settings['MONGODB_SERVER']
        self.port = settings['MONGODB_PORT']
        self.db = settings['MONGODB_DB']
        self.col = settings['MONGODB_COLLECTION']
        # Open a single connection when the pipeline is instantiated and
        # keep a handle to the target collection.
        connection = pymongo.Connection(self.server, self.port)
        db = connection[self.db]
        self.collection = db[self.col]

    def process_item(self, item, spider):
        # Collect every empty field into one error message so a dropped
        # item reports all of its missing data at once.
        err_msg = ''
        for field, data in item.items():
            if not data:
                err_msg += 'Missing %s of poem from %s\n' % (field, item['url'])
        if err_msg:
            raise DropItem(err_msg)
        # The item is complete: write it to MongoDB and log the insert.
        self.collection.insert(dict(item))
        log.msg('Item written to MongoDB database %s/%s' % (self.db, self.col),
                level=log.DEBUG, spider=spider)
        return item
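
For reference, the settings the pipeline reads in __init__ would live in the project's settings.py. A minimal sketch; the module path and the database/collection names below are placeholders for illustration, not part of the original snippet:

# settings.py (sketch; the values and module path are hypothetical)
MONGODB_SERVER = 'localhost'
MONGODB_PORT = 27017
MONGODB_DB = 'poetry'
MONGODB_COLLECTION = 'poems'

# Scrapy of this era expects a list of pipeline class paths; newer
# releases use a dict mapping each path to an ordering integer.
ITEM_PIPELINES = ['myproject.pipelines.MongoDBPipeline']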

Revision: 58214
at July 1, 2012 18:06 by hlongmore


Initial Code
# TODO: finish after posting, to get around the captcha not showing up.

Initial URL

Initial Description
In connection with my [poetry spider](http://snipplr.com/view/65893/a-simple-spider-using-scrapy/), this Scrapy pipeline class stores the scraped data in a MongoDB database.
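
Since this was written, PyMongo 3 removed pymongo.Connection and Scrapy dropped scrapy.conf.settings and scrapy.log, so the updated code above no longer runs on current releases. A rough equivalent on today's APIs might look like the following sketch; the from_crawler/open_spider structure is the usual modern pattern, but treat the details as an assumption rather than the author's code:

import logging

import pymongo
from scrapy.exceptions import DropItem

logger = logging.getLogger(__name__)


class MongoDBPipeline(object):
    def __init__(self, server, port, db, col):
        self.server = server
        self.port = port
        self.db = db
        self.col = col

    @classmethod
    def from_crawler(cls, crawler):
        # Settings are reached through the crawler, not a global module.
        s = crawler.settings
        return cls(s['MONGODB_SERVER'], s['MONGODB_PORT'],
                   s['MONGODB_DB'], s['MONGODB_COLLECTION'])

    def open_spider(self, spider):
        # MongoClient replaces the removed pymongo.Connection.
        self.client = pymongo.MongoClient(self.server, self.port)
        self.collection = self.client[self.db][self.col]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # Same validation as the original: drop items with empty fields.
        err_msg = ''
        for field, data in item.items():
            if not data:
                err_msg += 'Missing %s of poem from %s\n' % (field, item['url'])
        if err_msg:
            raise DropItem(err_msg)
        # insert_one replaces the deprecated Collection.insert.
        self.collection.insert_one(dict(item))
        logger.debug('Item written to MongoDB database %s/%s',
                     self.db, self.col)
        return item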

Initial Title
Scrapy pipeline class to store scraped data in MongoDB

Initial Tags

Initial Language
Python