1. Objective
Crawl one province's data from the 2019-nCoV epidemic. Website: www.ncovdata.spbeen.com
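Before writing any spider code, it can help to confirm by hand what the site returns. The sketch below is not part of the original steps; it simply fetches one day of data with the requests library, using the endpoint and query_date parameter that appear in the spider code later in this article:

# Manual check of the API outside Scrapy (sketch only; endpoint taken from the
# spider code in sections 6-7 of this article)
import requests

url = "http://ncovdata.spbeen.com/apis/get_china_provinces/?query_date=2020-01-31"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
payload = resp.json()
# Later sections read the province records from payload["data"]
print(type(payload), list(payload.keys()))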
2. Creating the Project
Requirements
Use the scrapy command to create a new crawler project named ncovdata. Open a terminal and create the project from it:

scrapy startproject ncovdata
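The exact files scrapy startproject generates depend on the Scrapy version, but the layout typically looks like the sketch below; it is shown only for orientation and is not part of the original steps.

ncovdata/
    scrapy.cfg              # project deployment/config file
    ncovdata/               # the project's Python package
        __init__.py
        items.py            # item definitions
        middlewares.py      # downloader / spider middlewares
        pipelines.py        # item pipelines (used in section 8)
        settings.py         # project settings
        spiders/            # spider files created by scrapy genspider
            __init__.py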
3. Creating the Spider File
Requirements
Use the scrapy command to create a new spider file named ncov:

scrapy genspider -t basic ncov www.ncovdata.spbeen.com

4. Starting the Crawler
Requirements
Use the scrapy command to run the spider file inside the project, and create a launch script in the Scrapy project to make later debugging easier.

Run in the terminal:

scrapy crawl ncov

Create a script run.py in the top-level directory:

from scrapy.cmdline import execute

# execute(["scrapy", "crawl", "ncov"])
execute("scrapy crawl ncov".split())

5. Analyzing Scrapy's Output Log
Log structure:
Top: version information and project configuration information.
Middle: the log output itself, for example the downloader component's request records.
Bottom: statistics collected over the whole crawl, including the number of requests and failures.

# Top
# Scrapy version
2025-12-22 15:27:14 [scrapy.utils.log] INFO: Scrapy 2.12.0 started (bot: ncovdata)
# Dependency library versions
2025-12-22 15:27:14 [scrapy.utils.log] INFO: Versions: lxml 5.4.0.0, libxml2 2.11.9, cssselect 1.3.0, parsel 1.10.0, w3lib 2.3.1, Twisted 24.11.0, Python 3.10.3 (tags/v3.10.3:a342a49, Mar 16 2022, 13:07:40) [MSC v.1929 64 bit (AMD64)], pyOpenSSL 25.0.0 (OpenSSL 3.4.1 11 Feb 2025), cryptography 44.0.2, Platform Windows-10-10.0.19045-SP0
2025-12-22 15:27:14 [scrapy.addons] INFO: Enabled addons:
[]
# Asynchronous request library
2025-12-22 15:27:14 [asyncio] DEBUG: Using selector: SelectSelector
# Queue information (the scheduler)
2025-12-22 15:27:14 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2025-12-22 15:27:14 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2025-12-22 15:27:14 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2025-12-22 15:27:14 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
# cd4613b32575bd40 is the telnet password
2025-12-22 15:27:14 [scrapy.extensions.telnet] INFO: Telnet Password: cd4613b32575bd40
# Enabled extension plugins
2025-12-22 15:27:15 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
# Project settings
2025-12-22 15:27:15 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'ncovdata',
 'FEED_EXPORT_ENCODING': 'utf-8',
 'NEWSPIDER_MODULE': 'ncovdata.spiders',
 'SPIDER_MODULES': ['ncovdata.spiders'],
 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
# Downloader middlewares
2025-12-22 15:27:15 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
# Spider middlewares
2025-12-22 15:27:15 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
# Item pipelines
2025-12-22 15:27:15 [scrapy.middleware] INFO: Enabled item pipelines:
[]
# The middle part runs from "Spider opened" to "Closing spider (finished)"
2025-12-22 15:27:15 [scrapy.core.engine] INFO: Spider opened
# This line is printed periodically; the INFO message reports how many pages were crawled per minute
2025-12-22 15:27:15 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
# The telnet service endpoint, used by the telnet extension plugin
2025-12-22 15:27:15 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
# The requested URL
2025-12-22 15:27:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://ncovdata.spbeen.com/> (referer: None)
# "finished" means the crawl has stopped
2025-12-22 15:27:15 [scrapy.core.engine] INFO: Closing spider (finished)
# Bottom
# This dict is the overall information gathered by the stats collector
2025-12-22 15:27:15 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 220,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 57244,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'elapsed_time_seconds': 0.432231,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2025, 12, 22, 7, 27, 15, 641880, tzinfo=datetime.timezone.utc),
 'items_per_minute': None,
 'log_count/DEBUG': 6,
 'log_count/INFO': 10,
 'response_received_count': 1,
 'responses_per_minute': None,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2025, 12, 22, 7, 27, 15, 209649, tzinfo=datetime.timezone.utc)}
# Every log line has the same structure: date and time, the component in square brackets, the log level (e.g. INFO), and the concrete message after the colon
2025-12-22 15:27:15 [scrapy.core.engine] INFO: Spider closed (finished)

By default, neither the downloader middlewares nor the spider middlewares need to be changed; the key part is the pipelines. Once pipelines are involved, the middle part of the crawl log can become very long. If the run fails, the stats collector summary at the bottom will not appear.
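The DEBUG lines (reactor, selector, and request records) can make the output noisy while developing. One way to reduce it, not part of the original steps but a small sketch using Scrapy's LOG_LEVEL setting, is:

# settings.py -- only emit INFO and above; DEBUG lines are suppressed
LOG_LEVEL = "INFO"

# or pass the same setting on the command line / from run.py:
# execute("scrapy crawl ncov -s LOG_LEVEL=INFO".split())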
6. Writing and Running a Test Spider
Task description: request the API and fetch as much data as possible, parse the API response into a Python dict object, and extract the required data from the target JSON content.
Data requirements: get 10 consecutive days of data for one province; a single record looks like {"江西省": {"xx": "xx", "xx": "xx"}}.

Rebuild the URL generation (replacing start_urls with a start_requests method):

import scrapy
import datetime


class NcovSpider(scrapy.Spider):
    name = "ncov"
    allowed_domains = ["ncovdata.spbeen.com"]
    # start_urls = ["http://ncovdata.spbeen.com/"]
    base_url = "http://ncovdata.spbeen.com/apis/get_china_provinces/?query_date={}"

    def start_requests(self):
        start_date_str = "2020-01-26"
        start_date = datetime.datetime.strptime(start_date_str, "%Y-%m-%d")
        for i in range(0, 15):
            current_date = start_date + datetime.timedelta(days=i)
            current_url = self.base_url.format(current_date.strftime("%Y-%m-%d"))
            print(i, current_url)
            yield scrapy.Request(url=current_url, callback=self.parse)

    def parse(self, response):
        pass

7. Getting the Data and Saving It as an Item

import scrapy
import datetime
import json


class NcovSpider(scrapy.Spider):
    name = "ncov"
    allowed_domains = ["ncovdata.spbeen.com"]
    # start_urls = [
    #     "http://ncovdata.spbeen.com/apis/get_china_provinces/?query_date=2020-01-31",
    # ]
    base_url = "http://ncovdata.spbeen.com/apis/get_china_provinces/?query_date={}"

    def start_requests(self):
        start_date_str = "2020-01-26"
        start_date = datetime.datetime.strptime(start_date_str, "%Y-%m-%d")
        for i in range(0, 15):
            current_date = start_date + datetime.timedelta(days=i)
            current_url = self.base_url.format(current_date.strftime("%Y-%m-%d"))
            # return scrapy.Request(url=current_url)
            yield scrapy.Request(url=current_url)

    def parse(self, response):
        province = "江西"
        data_list = response.json().get("data", [])
        for province_dict in data_list:
            # k, v = province_dict.items()
            if province in str(province_dict):
                item = province_dict
                item["source"] = response.text
                break
        else:
            item = {}
            item["source"] = ""
        # print(item)
        return item

8. Storing the Data with the Pipeline File
Task description: configure settings.py to enable the pipeline file (see the settings.py sketch at the end of this section); use the pipeline file to receive the item and print it; store the item data in a local txt file.
Steps:
1. Enable the pipeline in the settings file.
2. Go into pipelines.py, print the item, and check the output.
3. Create a data folder and write the data into it.

from itemadapter import ItemAdapter
import datetime
import json


class NcovdataPipeline:
    current_date_str = datetime.datetime.now().strftime("%Y%m%d%H%M%S")

    def process_item(self, item, spider):
        with open("data/{}.txt".format(self.current_date_str), mode="a", encoding="utf8") as file:
            file.write(json.dumps(item, ensure_ascii=False))
            file.write("\n")
        return item
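For reference, step 1 above (enabling the pipeline in settings.py) is normally done through the ITEM_PIPELINES setting. A minimal sketch, assuming the default module path that scrapy startproject generated for this project:

# settings.py -- activate NcovdataPipeline for this project.
# The number (300) is the pipeline's priority; lower values run earlier
# when several pipelines are enabled.
ITEM_PIPELINES = {
    "ncovdata.pipelines.NcovdataPipeline": 300,
}

Also make sure the data folder exists before running the spider: the pipeline opens data/<timestamp>.txt in append mode, and open() will not create the directory for it.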