
Baidu Netdisk Crawler (How to Crawl Baidu Netdisk)



Because I'm building 去转盘网 (quzhuanpan; click here for the category view), I need to crawl netdisk resources. Writing a crawler yourself is no small job, and at first I didn't want to share it, but in the end I decided to put it out for everyone, since progress only comes from exchange. If you're interested, you can also read my other posts or follow me; you'll find that most of 去转盘网's technology is essentially public by now. If it helps you, read it carefully. The crawler code follows, published here in full:

PS: if you don't know Python, go learn some Python first; the code is written in Python (Python 2).

Here is some extra material: Download 1, Download 2

There is also a magnet-link search site, but I don't want to open up its technology just yet; I plan to publish that later as well. If you're curious, take a look: ok搜搜

PS: I'm very glad this post has received so much positive feedback. To give something back, today I've also published the Baidu image crawler code and a write-up on building Python differential (patch) packages. The links are below for anyone interested:

Baidu image crawler

Making Python differential (patch) packages

# coding: utf8
"""
author: haoning
create time: -8-15
"""
import re                  # regular expressions
import urllib2             # component for fetching URLs
import time
from Queue import Queue
import threading, errno, datetime
import json
import requests            # Requests is an Apache2 Licensed HTTP library
import MySQLdb as mdb

DB_HOST = '127.0.0.1'
DB_USER = 'root'
DB_PASS = ''

# regular-expression matching rules
re_start = re.compile(r'start=(\d+)')   # \d is any single digit 0-9; the + means one or more of them, e.g. 21312314
re_uid = re.compile(r'query_uk=(\d+)')  # queried user id (uk)
re_urlid = re.compile(r'&urlid=(\d+)')  # urlids record id

ONEPAGE = 20       # records per page
ONESHAREPAGE = 20  # share links per page

# the album list is not crawled here
# note: only the path part of the three API urls survives in this copy; the Baidu Pan host prefix is missing
URL_SHARE = '/pcloud/feed/getsharelist?auth_type=1&start={start}&limit=20&query_uk={uk}&urlid={id}'  # get the share list
"""
sample response record:
{"feed_type":"share","category":6,"public":"1","shareid":"1541924625","data_id":"2418757107690953697","title":"\u5723\u8bde\u58c1\u7eb8\u5927\u6d3e\u9001","third":0,"clienttype":0,"filecount":1,"uk":1798788396,"username":"SONYcity03","feed_time":1418986714000,"desc":"","avatar_url":"http:\/\/\/sys\/portrait\/item\/1b6bf333.jpg","dir_cnt":1,"filelist":[{"server_filename":"\u5723\u8bde\u58c1\u7eb8\u5927\u6d3e\u9001","category":6,"isdir":1,"size":1024,"fs_id":870907642649299,"path":"%2F%E5%9C%A3%E8%AF%9E%E5%A3%81%E7%BA%B8%E5%A4%A7%E6%B4%BE%E9%80%81","md5":"0","sign":"1221d7d56438970225926ad552423ff6a5d3dd33","time_stamp":1439542024}],"source_uid":"871590683","source_id":"1541924625","shorturl":"1dDndV6T","vCnt":34296,"dCnt":7527,"tCnt":5056,"like_status":0,"like_count":60,"comment_count":19},
public: public share
title:  file name
uk:     user id
"""
URL_FOLLOW = '/pcloud/friend/getfollowlist?query_uk={uk}&limit=20&start={start}&urlid={id}'  # get the follow (subscription) list
"""
sample response record:
{"type":-1,"follow_uname":"\u597d\u55e8\u597d\u55e8\u554a","avatar_url":"http:\/\/\/sys\/portrait\/item\/979b832f.jpg","intro":"\u9700\u8981\u597d\u8d44\u6599\u52a0994798392","user_type":0,"is_vip":0,"follow_count":2,"fans_count":2276,"follow_time":1415614418,"pubshare_count":36,"follow_uk":2603342172,"album_count":0},
follow_uname: followed user's name
fans_count:   number of fans
"""
URL_FANS = '/pcloud/friend/getfanslist?query_uk={uk}&limit=20&start={start}&urlid={id}'  # get the fans list
"""
sample response record:
{"type":-1,"fans_uname":"\u62e8\u52a8\u795e\u7684\u5fc3\u7eea","avatar_url":"http:\/\/\/sys\/portrait\/item\/d5119a2b.jpg","intro":"","user_type":0,"is_vip":0,"follow_count":8,"fans_count":39,"follow_time":1439541512,"pubshare_count":15,"fans_uk":288332613,"album_count":0}
avatar_url: avatar
fans_uname: user name
"""

QNUM = 1000
hc_q = Queue(20)    # request queue
hc_r = Queue(QNUM)  # response queue
success = 0
failed = 0

def req_worker(inx):                   # request worker
    s = requests.Session()             # one session per thread
    while True:
        req_item = hc_q.get()          # take one request item
        req_type = req_item[0]         # request type: share / follow / fans
        url = req_item[1]              # url to fetch
        r = s.get(url)                 # fetch the data
        hc_r.put((r.text, url))        # push the response text and its url onto the response queue
        print "req_worker#", inx, url  # inx: thread index; url: the url just fetched

def response_worker():                 # response handling
    dbconn = mdb.connect(DB_HOST, DB_USER, DB_PASS, 'baiduyun', charset='utf8')
    dbcurr = dbconn.cursor()
    dbcurr.execute('SET NAMES utf8')
    dbcurr.execute('set global wait_timeout=60000')
    # everything above is database setup
    while True:
        """
        regex notes:
        match()    matches only at the beginning of the string
        search()   scans the string for the first position where the RE matches
        findall()  returns all substrings matched by the RE as a list
        finditer() returns all substrings matched by the RE as an iterator
        a Baidu share page link looks like: /share/link?shareid=3685432306&uk=1798788396&from=hotrec
        uk is in fact the user id
        """
        metadata, effective_url = hc_r.get()  # metadata is r.text from above; effective_url is the url it came from
        # print "response_worker:", effective_url
        try:
            tnow = int(time.time())                     # current time
            id = re_urlid.findall(effective_url)[0]     # urlids record id
            start = re_start.findall(effective_url)[0]  # start offset
            if True:
                if 'getfollowlist' in effective_url:        # type=1, the follow (subscription) list
                    follows = json.loads(metadata)          # parse the response text as json
                    uid = re_uid.findall(effective_url)[0]  # query_uk, the queried user id
                    if "total_count" in follows.keys() and follows["total_count"] > 0 and str(start) == "0":  # total number of followed users
                        for i in range((follows["total_count"] - 1) / ONEPAGE):  # queue the remaining pages
                            try:
                                # store a url record: uk is the user id, start is the offset to fetch from, status=0 means not yet processed
                                dbcurr.execute('INSERT INTO urlids(uk, start, limited, type, status) VALUES(%s, %s, %s, 1, 0)' % (uid, str(ONEPAGE * (i + 1)), str(ONEPAGE)))
                            except Exception as ex:
                                print "E1", str(ex)
                                pass
                    if "follow_list" in follows.keys():  # the response actually contains followed users
                        for item in follows["follow_list"]:
                            try:
                                # store the followed user's id, user name and insert time
                                dbcurr.execute('INSERT INTO user(userid, username, files, status, downloaded, lastaccess) VALUES(%s, "%s", 0, 0, 0, %s)' % (item['follow_uk'], item['follow_uname'], str(tnow)))
                            except Exception as ex:
                                print "E13", str(ex)
                                pass
                    else:
                        print "delete1", uid, start
                        dbcurr.execute('delete from urlids where uk=%s and type=1 and start>%s' % (uid, start))
                elif 'getfanslist' in effective_url:  # type=2, the fans list
                    fans = json.loads(metadata)
                    uid = re_uid.findall(effective_url)[0]
                    if "total_count" in fans.keys() and fans["total_count"] > 0 and str(start) == "0":
                        for i in range((fans["total_count"] - 1) / ONEPAGE):
                            try:
                                dbcurr.execute('INSERT INTO urlids(uk, start, limited, type, status) VALUES(%s, %s, %s, 2, 0)' % (uid, str(ONEPAGE * (i + 1)), str(ONEPAGE)))
                            except Exception as ex:
                                print "E2", str(ex)
                                pass
                    if "fans_list" in fans.keys():
                        for item in fans["fans_list"]:
                            try:
                                dbcurr.execute('INSERT INTO user(userid, username, files, status, downloaded, lastaccess) VALUES(%s, "%s", 0, 0, 0, %s)' % (item['fans_uk'], item['fans_uname'], str(tnow)))
                            except Exception as ex:
                                print "E23", str(ex)
                                pass
                    else:
                        print "delete2", uid, start
                        dbcurr.execute('delete from urlids where uk=%s and type=2 and start>%s' % (uid, start))
                else:  # type=0, the share list
                    shares = json.loads(metadata)
                    uid = re_uid.findall(effective_url)[0]
                    if "total_count" in shares.keys() and shares["total_count"] > 0 and str(start) == "0":
                        for i in range((shares["total_count"] - 1) / ONESHAREPAGE):
                            try:
                                dbcurr.execute('INSERT INTO urlids(uk, start, limited, type, status) VALUES(%s, %s, %s, 0, 0)' % (uid, str(ONESHAREPAGE * (i + 1)), str(ONESHAREPAGE)))
                            except Exception as ex:
                                print "E3", str(ex)
                                pass
                    if "records" in shares.keys():
                        for item in shares["records"]:
                            try:
                                # item['title'] happens to be the file name of the share
                                dbcurr.execute('INSERT INTO share(userid, filename, shareid, status) VALUES(%s, "%s", %s, 0)' % (uid, item['title'], item['shareid']))
                            except Exception as ex:
                                # print "E33", str(ex), item
                                pass
                    else:
                        print "delete0", uid, start
                        dbcurr.execute('delete from urlids where uk=%s and type=0 and start>%s' % (uid, str(start)))
                dbcurr.execute('delete from urlids where id=%s' % (id,))  # this url record has been processed
                dbconn.commit()
        except Exception as ex:
            print "E5", str(ex), id
    dbcurr.close()
    dbconn.close()  # close the database

def worker():
    global success, failed
    dbconn = mdb.connect(DB_HOST, DB_USER, DB_PASS, 'baiduyun', charset='utf8')
    dbcurr = dbconn.cursor()
    dbcurr.execute('SET NAMES utf8')
    dbcurr.execute('set global wait_timeout=60000')
    # database setup, as above
    while True:
        # dbcurr.execute('select * from urlids where status=0 order by type limit 1')
        dbcurr.execute('select * from urlids where status=0 and type>0 limit 1')  # type>0: follow/fans lists, not share lists
        d = dbcurr.fetchall()  # take one record at a time
        # print d
        if d:                # there is a pending record
            id = d[0][0]     # urlids record id
            uk = d[0][1]     # user id
            start = d[0][2]
            limit = d[0][3]
            type = d[0][4]   # which kind of list
            dbcurr.execute('update urlids set status=1 where id=%s' % (str(id),))  # mark it as visited
            url = ""
            if type == 0:    # share list
                url = URL_SHARE.format(uk=uk, start=start, id=id).encode('utf-8')
                # query_uk: uk, the user id to query; start: offset; urlid: id, the urlids record id
            elif type == 1:  # follow list
                url = URL_FOLLOW.format(uk=uk, start=start, id=id).encode('utf-8')
            elif type == 2:  # fans list
                url = URL_FANS.format(uk=uk, start=start, id=id).encode('utf-8')
            if url:
                hc_q.put((type, url))  # push onto the request queue; type says which list the data belongs to
            # each of the urls above returns json: share info, follow info or fans info
            # print "processed", url
        else:
            # no pending url records: expand the crawl tree one level,
            # turning the stored followed users / fans into new url records
            dbcurr.execute('select * from user where status=0 limit 1000')
            d = dbcurr.fetchall()
            if d:
                for item in d:
                    try:
                        # uk is the query number, i.e. the user id; start=0 means fetch from the first record
                        dbcurr.execute('insert into urlids(uk, start, limited, type, status) values("%s", 0, %s, 0, 0)' % (item[1], str(ONESHAREPAGE)))
                        # dbcurr.execute('insert into urlids(uk, start, limited, type, status) values("%s", 0, %s, 1, 0)' % (item[1], str(ONEPAGE)))
                        dbcurr.execute('insert into urlids(uk, start, limited, type, status) values("%s", 0, %s, 2, 0)' % (item[1], str(ONEPAGE)))
                        dbcurr.execute('update user set status=1 where userid=%s' % (item[1],))  # mark this user as visited
                        # this queues the share, (follow,) and fans lists for the user
                    except Exception as ex:
                        print "E6", str(ex)
            else:
                time.sleep(1)
        dbconn.commit()
    dbcurr.close()
    dbconn.close()

def now():  # small helper assumed here: the scraped text calls now() but never defines it
    return datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')

def main():
    print 'starting at:', now()
    for item in range(16):
        t = threading.Thread(target=req_worker, args=(item,))
        t.setDaemon(True)
        t.start()        # start the request threads, 16 in total
    s = threading.Thread(target=worker, args=())
    s.setDaemon(True)
    s.start()            # start the worker thread
    response_worker()    # response_worker runs in the main thread
    print 'all Done at:', now()

if __name__ == '__main__':  # entry point, assumed so the script runs when invoked directly
    main()
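The script expects a local MySQL database named baiduyun with three tables, urlids, user and share, but the post never shows the schema. Below is a minimal bootstrap sketch I reconstructed from the INSERT and SELECT statements above: only the column names and their order (the crawler indexes rows positionally, e.g. item[1] must be userid and d[0][0] must be the urlids id) come from the code; the column types and the seed uk are my assumptions. Run something like it once, then start the crawler, and worker() will have a root user to expand from.

# coding: utf8
# Bootstrap sketch (assumed, not part of the original post): create the tables the
# crawler expects and seed one user so the crawl tree has a root.
import MySQLdb as mdb

DB_HOST = '127.0.0.1'
DB_USER = 'root'
DB_PASS = ''
SEED_UK = 1798788396  # hypothetical seed: any public Baidu Pan user id (uk)

conn = mdb.connect(DB_HOST, DB_USER, DB_PASS, charset='utf8')
cur = conn.cursor()
cur.execute('CREATE DATABASE IF NOT EXISTS baiduyun DEFAULT CHARACTER SET utf8')
cur.execute('USE baiduyun')
# column names/order come from the crawler's SQL; the types are guesses
cur.execute('''CREATE TABLE IF NOT EXISTS urlids (
    id      INT PRIMARY KEY AUTO_INCREMENT,  -- read as d[0][0] in worker()
    uk      BIGINT,                          -- user id being queried
    start   INT,                             -- page offset
    limited INT,                             -- page size
    type    INT,                             -- 0 share, 1 follow, 2 fans
    status  INT                              -- 0 pending, 1 visited
)''')
cur.execute('''CREATE TABLE IF NOT EXISTS user (
    id         INT PRIMARY KEY AUTO_INCREMENT,
    userid     BIGINT UNIQUE,                -- read as item[1] in worker()
    username   VARCHAR(255),
    files      INT,
    status     INT,
    downloaded INT,
    lastaccess INT
)''')
cur.execute('''CREATE TABLE IF NOT EXISTS share (
    id       INT PRIMARY KEY AUTO_INCREMENT,
    userid   BIGINT,
    filename VARCHAR(255),
    shareid  BIGINT,
    status   INT
)''')
# seed one user with status=0 so worker() can queue its share and fans lists
cur.execute('INSERT INTO user(userid, username, files, status, downloaded, lastaccess) VALUES(%s, "seed", 0, 0, 0, 0)' % SEED_UK)
conn.commit()
cur.close()
conn.close()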

I've also set up a QQ group; everyone is welcome to join and talk tech. Group number: 512245829. If you prefer Weibo, just follow 转盘娱乐.
