How do I make Scrapy request the same URL multiple times and get the returned data?
1 answer
2017-07-10
Some sites load their data via AJAX requests, or expose an API that returns JSON.
For example, given data like the following:
[
  {
    "url": "http://www.techbrood.com/news/1",
    "author": "iefreer",
    "title": "techbrood Co. test 1"
  },
  {
    "url": "http://www.techbrood.com/news/2",
    "author": "ryan.chen",
    "title": "techbrood Co. test 2"
  }
]
In Scrapy, you only need to rewrite the parse function slightly:

import json

def parse(self, response):
    # response.text is the response body decoded to unicode
    sites = json.loads(response.text)
    for site in sites:
        print(site['url'])

response.text (the replacement for the older body_as_unicode()) is used so that the body is decoded to unicode before it is parsed as JSON.
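The parsing step above can be sanity-checked outside Scrapy with plain json.loads on the sample payload (the data below is the example from this answer, inlined as a string):

```python
import json

# The sample payload from the answer: a JSON array of news items.
payload = '''
[
  {"url": "http://www.techbrood.com/news/1", "author": "iefreer", "title": "techbrood Co. test 1"},
  {"url": "http://www.techbrood.com/news/2", "author": "ryan.chen", "title": "techbrood Co. test 2"}
]
'''

sites = json.loads(payload)          # parse the JSON array into a list of dicts
urls = [site['url'] for site in sites]
print(urls)
```

This is exactly what parse() does with response.text, minus the HTTP round trip.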