The Road to Python: Web Scraping

xiaoxiao · 2025-02-01

My colleague Lao Zhong likes to study stocks in his spare time, and recently he asked me to help scrape data from a stock-quote website. And so my first scraper was born.

Requirement: grab all of the data in the "institution ratings at a glance" table under "latest individual stock picks" on the Tou18 stock-quote site (https://www.tou18.cn/gegu/).

Knowledge point: the page comes back with its encoding reported as ISO-8859-1, so the Chinese text displays as mojibake. The fix is to set resp.encoding explicitly before reading resp.text. (The original post set it to "gbk2312"; that is not a registered codec name, so requests quietly falls back to decoding the bytes as plain UTF-8, which is why it appeared to work. Setting "utf-8" directly, as in the commented line below, is the intended fix.)

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    # Author:NewBo
    import sys

    import requests
    from bs4 import BeautifulSoup

    header = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 "
                            "(KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"}
    url = "https://www.baidu.com"

    try:
        resp = requests.get(url, headers=header)
        # resp.encoding = "utf-8"   # uncomment this line to fix the garbled title
        soup = BeautifulSoup(resp.text, "lxml")
    except requests.exceptions.ConnectionError:
        print("Failed to reach the page!")
        sys.exit(1)

    print(sys.getdefaultencoding())
    print(resp.encoding)
    print(soup.find("title").text)

    '''
    Scraping Baidu without setting resp.encoding, the Chinese title is garbled:
    utf-8
    ISO-8859-1
    ç¾åº¦ä¸ä¸ï¼ä½ å°±ç¥é

    After setting resp.encoding = "utf-8", the problem is gone:
    utf-8
    utf-8
    百度一下,你就知道
    '''
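Before hard-coding a charset, it is worth checking what the server actually declares and what requests detects from the raw bytes. The snippet below is a small diagnostic sketch of my own, not part of the original program; the only assumption is that the target is the tou18.cn listing page mentioned above.

    import requests

    resp = requests.get("https://www.tou18.cn/gegu/",
                        headers={"User-Agent": "Mozilla/5.0"})

    # What the HTTP response header declares. When a text/* response names no
    # charset, requests falls back to ISO-8859-1, which is where the mojibake
    # comes from.
    print(resp.headers.get("Content-Type"))
    print(resp.encoding)

    # What requests' charset detection guesses from the page bytes themselves.
    print(resp.apparent_encoding)

    # Decoding with the detected charset usually clears up the Chinese text.
    resp.encoding = resp.apparent_encoding
    print(resp.text[:200])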

Summary: writing scraper code in Python is very simple, but only for ordinary sites. A more complex scraper can run into all sorts of problems and has to be fleshed out step by step: proxy settings, cookies, SSL verification and so on (a rough sketch of those requests options follows the full listing below). Finally, the complete code:

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    # Author:NewBo
    import requests
    from bs4 import BeautifulSoup
    from openpyxl import Workbook

    # Pretend to be a browser. User-Agent taken from Chrome developer tools > Network > User-Agent
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 "
                             "(KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"}


    def get_url():
        url = "https://www.tou18.cn/gegu/?page="
        url_list = []
        for n in range(1, 19):  # the data spans 18 pages (the original range(1, 18) missed the last one)
            url_list.append(url + str(n))
        return url_list


    def run_data(url):
        resp = requests.get(url, headers=headers)    # fetch the page with an HTTP GET
        resp.encoding = resp.apparent_encoding       # fix the garbled Chinese text (the original
                                                     # hard-coded "gbk2312", not a valid codec name)
        soup = BeautifulSoup(resp.text, "lxml")
        td_list = soup.select("div.listbox > table > tr")
        num_1 = [n.find_all("td")[0].text for n in td_list]
        num_2 = [n.find_all("td")[2].text for n in td_list]
        num_3 = [n.find_all("td")[3].text for n in td_list]
        num_4 = [n.find_all("td")[4].text for n in td_list]
        num_5 = [n.find_all("td")[5].text for n in td_list]
        num_6 = [n.find_all("td")[6].text for n in td_list]
        rece_data = []
        for each in zip(num_1, num_2, num_3, num_4, num_5, num_6):
            rece_data.append(each)
        return rece_data


    if __name__ == "__main__":
        wb = Workbook()
        ws = wb.active
        tree = True                      # the table header still needs to be written once
        for each in get_url():
            list_page = run_data(each)
            for idx, everyone in enumerate(list_page):
                if idx == 0:             # the first row of every page is the header row
                    if tree:
                        tree = False     # keep the header from the first page
                    else:
                        continue         # skip the repeated header on later pages
                ws.append(everyone)
        wb.save("gupiao.xlsx")
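Since the summary only names proxy settings, cookies and SSL verification in passing, here is a minimal sketch of how those options plug into requests. It is illustration only and not part of the scraper above: the proxy address and cookie value are hypothetical placeholders.

    import requests

    proxies = {
        "http": "http://127.0.0.1:8080",    # hypothetical local proxy, replace with a real one
        "https": "http://127.0.0.1:8080",
    }
    cookies = {"sessionid": "placeholder"}   # placeholder cookie, e.g. copied from the browser

    resp = requests.get(
        "https://www.tou18.cn/gegu/?page=1",
        headers={"User-Agent": "Mozilla/5.0"},
        proxies=proxies,     # route the request through a proxy
        cookies=cookies,     # send cookies along with the request
        verify=True,         # verify the SSL certificate (the default); a CA-bundle path or False also works
        timeout=10,          # give up instead of hanging on a slow page
    )
    print(resp.status_code)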

Reference:

    https://www.yukunweb.com/2017/5/python-spider-basic/

     
