Notes on Scrapy

Saving the HTML source of a response (the original had a typo, `a,html`):

    with open("a.html", "w", encoding="utf-8") as f:
        f.write(response.body.decode())

Downloading an image from a page:

    from urllib import request
    request.urlretrieve('URL', 'xxx.jpg')

Stripping whitespace from extracted content:

    def process_content(self, content):
        # remove newlines, non-breaking spaces (\xa0), and other whitespace
        content = [re.sub(r'\s|\xa0', '', i) for i in content]
        # drop strings that are now empty
        content = [i for i in content if len(i) > 0]
        return content
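The same cleanup can be exercised outside a spider; this is a minimal standalone sketch (the input strings are made up for illustration):

```python
import re

def clean_content(content):
    # remove newlines, non-breaking spaces (\xa0), and other whitespace
    content = [re.sub(r'\s|\xa0', '', i) for i in content]
    # drop strings that are now empty
    return [i for i in content if len(i) > 0]

print(clean_content(["  hello \n", "\xa0", "world\t"]))  # → ['hello', 'world']
```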

If a scraped URL is incomplete (relative), join it against the page URL:

    import urllib.parse
    if item["href"] is not None:
        item["href"] = urllib.parse.urljoin(response.url, item["href"])
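To see how `urljoin` resolves both absolute-path and relative hrefs, here is a quick check (the URLs are placeholder examples, not from the original post):

```python
from urllib.parse import urljoin

# the page the href was scraped from
base = "https://example.com/list/page1.html"

# an absolute-path href replaces the whole path
print(urljoin(base, "/detail/42.html"))  # → https://example.com/detail/42.html
# a relative href resolves against the page's directory
print(urljoin(base, "page2.html"))       # → https://example.com/list/page2.html
```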

For logging in, there are several options:

  1. Cookies: copy a logged-in session's cookie string from the browser, convert it to a dict, and attach it to the first request.

         def start_requests(self):
             cookies = "_s_tentry=-; Apache=1769401044437.5996.1588400454551; SINAGLOBAL=1769401044437.5996.1588400454551; ULV=1588400454765:1:1:1:1769401044437.5996.1588400454551:; login_sid_t=8111d554c0a2d6dcf9228ebc58708bdf; cross_origin_proto=SSL; Ugrow-G0=9ec894e3c5cc0435786b4ee8ec8a55cc; YF-V5-G0=7a7738669dbd9095bf06898e71d6256d; UOR=,,www.baidu.com; wb_view_log=1366*7681; SUB=_2A25zutvVDeRhGeRK7lIS8CfJwj6IHXVQzkodrDV8PUNbmtAKLVjNkW9NU2bIGGUcalCxA0p7h9RIP8zDZ5jkAOnZ; SUBP=0033WrSXqPxfM725Ws9jqgMF55529P9D9W5U.g5UpcseWH5HDGKF_78r5JpX5KzhUgL.FozXSK50eh.f1Kz2dJLoI7yWwJyadJMXSBtt; SUHB=0P19BEYcGR28ZI; ALF=1621090026; SSOLoginState=1589554053; wvr=6; wb_view_log_2450309592=1366*7681; YF-Page-G0=e44a6a701dd9c412116754ca0e3c82c3|1589556569|1589556569; webim_unReadCount=%7B%22time%22%3A1589556679580%2C%22dm_pub_total%22%3A0%2C%22chat_group_client%22%3A0%2C%22chat_group_notice%22%3A0%2C%22allcountNum%22%3A0%2C%22msgbox%22%3A0%7D"
             # split "k1=v1; k2=v2" into a dict; split on the first "=" only,
             # since cookie values may themselves contain "="
             cookies = {i.split("=", 1)[0].strip(): i.split("=", 1)[1] for i in cookies.split(";")}
             yield scrapy.Request(self.start_urls[0], callback=self.parse, cookies=cookies)
  2. POST login: extract the hidden form fields from the login page, then submit them together with the credentials.

         authenticity_token = response.xpath('//div[@class = "auth-form px-3"]/form/input[1]/@value').extract_first()
         ga_id = response.xpath('//div[@class = "auth-form px-3"]/form/input[2]/@value').extract_first()
         commit = response.xpath('//input[@name = "commit"]/@value').extract_first()
         post_data = dict(
             login='Azhong-github',
             password='WUzhong961028',
             authenticity_token=authenticity_token,
             ga_id=str(ga_id),
             commit=commit,
         )
         # post_data can then be submitted, e.g. with
         # scrapy.FormRequest(login_url, formdata=post_data, callback=self.after_login)
  3. Automatic login with `scrapy.FormRequest.from_response`:

         def parse(self, response):
             # from_response automatically locates the form in the response
             yield scrapy.FormRequest.from_response(
                 response,
                 formdata={"login": "Azhong-github", "password": "WUzhong961028"},
                 callback=self.after_login,
             )
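The cookie-string-to-dict conversion used in option 1 can be checked in isolation; this sketch uses a short made-up cookie string rather than the full one above:

```python
def cookies_to_dict(cookie_str):
    # split "k1=v1; k2=v2" into {"k1": "v1", "k2": "v2"};
    # split on the first "=" only, since values may contain "="
    return {
        pair.split("=", 1)[0].strip(): pair.split("=", 1)[1]
        for pair in cookie_str.split(";")
    }

print(cookies_to_dict("SSOLoginState=1589554053; SUB=abc=def"))
# → {'SSOLoginState': '1589554053', 'SUB': 'abc=def'}
```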

GET requests: simulating input (e.g. a search box). Learn to spot the pattern in GET URLs; the `?` below separates the path from the query string.

    from urllib import parse

    kw = {"kw": "电子科技大学"}
    # use encoding="gb2312" instead for sites that expect GBK
    kw = parse.urlencode(kw, encoding="utf-8")  # kw=%E7%94%B5%E5%AD%90%E7%A7%91%E6%8A%80%E5%A4%A7%E5%AD%A6
    url = response.url + "?" + kw
    yield scrapy.Request(url, callback=self.parse_detail)
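Outside a spider, the query-string construction above can be verified directly; the base URL here is a hypothetical example, not taken from the original post:

```python
from urllib.parse import urlencode

# percent-encode the query parameters as UTF-8
params = {"kw": "电子科技大学"}
query = urlencode(params, encoding="utf-8")
# the "?" separates the path from the query string
url = "https://example.com/search" + "?" + query

print(query)  # → kw=%E7%94%B5%E5%AD%90%E7%A7%91%E6%8A%80%E5%A4%A7%E5%AD%A6
print(url)
```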

