Task: scrape the details of Douban's 250 most popular movies
Link: Douban Movie Top 250
Open the page in a browser, then press F12 (or right-click → Inspect) to view the page source and analyze its structure, as shown below:
The analysis shows:
1. The movie title lives in a div with class "hd", which nests an a tag containing a span tag; the final target is <span class="title"></span>;
2. The rating lives in a span with class "rating_num";
3. The movie details live in a div with class "bd".
See the figure below:
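The selector logic above can be sketched against a tiny offline HTML fragment. The fragment below is hypothetical sample data standing in for the live page, just to show how the three class-based lookups behave:

```python
import bs4

# Hypothetical fragment mimicking the structure described above (not the real page)
html = '''
<div class="hd"><a href="#"><span class="title">肖申克的救赎</span></a></div>
<span class="rating_num">9.7</span>
<div class="bd"><p>导演: 弗兰克·德拉邦特
1994 / 美国 / 犯罪 剧情</p></div>
'''

soup = bs4.BeautifulSoup(html, 'html.parser')
title = soup.find("div", class_="hd").a.span.text      # drill down div -> a -> span
rating = soup.find("span", class_="rating_num").text
info = soup.find("div", class_="bd").p.text.strip()
print(title, rating)
```

Note that `class_` (with the trailing underscore) is how Beautiful Soup spells the HTML `class` attribute, since `class` is a Python keyword.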
Below we write Python code to scrape the titles, ratings, and details of the 250 most popular movies and export them to a movies.txt file. The implementation, in brief:
1. Import two third-party libraries: requests to send HTTP requests, and bs4 (Beautiful Soup) to parse the complex page markup into an easily navigable tree, much like a book's table of contents;
2. The headers are essential: many sites have anti-scraping measures, and sending a browser-like User-Agent header helps get around them to some extent;
3. Following the analysis above, extract the movie titles, ratings, and details, storing each in its own list;
4. Find the total number of pages and extract page by page in a for loop;
5. Finally, write the information extracted from all pages to the movies.txt file.
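The pagination scheme in step 4 can be sketched on its own: Douban's Top 250 shows 25 movies per page, and the offset of the first movie on each page is passed via the `start` query parameter:

```python
# Sketch of the page-URL scheme used by the crawler below:
# 250 movies, 25 per page, offset passed via ?start=
base_url = 'https://movie.douban.com/top250'
pages = 10  # 250 movies / 25 per page

page_urls = [base_url + '?start=' + str(i * 25) for i in range(pages)]
print(page_urls[0])   # first page, offset 0
print(page_urls[-1])  # last page, offset 225
```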
import bs4
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'}

def get_response(url):
    response = requests.get(url, headers=headers)
    return response

def find_movies(response):
    soup = bs4.BeautifulSoup(response.text, 'html.parser')

    # Movie titles
    movies = []
    targets = soup.find_all("div", class_="hd")
    for target in targets:
        movies.append(target.a.span.text)

    # Ratings
    ranks = []
    targets = soup.find_all("span", class_="rating_num")
    for target in targets:
        ranks.append('评分:%s' % target.text)

    # Movie details (director/cast line plus year/country/genre line)
    messages = []
    targets = soup.find_all("div", class_="bd")
    for target in targets:
        try:
            messages.append(target.p.text.split('\n')[1].strip()
                            + target.p.text.split('\n')[2].strip())
        except (AttributeError, IndexError):
            # Some "bd" divs (e.g. the page footer) carry no <p> details; skip them
            continue

    result = []
    length = len(movies)
    for i in range(length):
        result.append(movies[i] + ranks[i] + messages[i] + '\n')
    return result

# Find the total number of pages
def find_pages(response):
    soup = bs4.BeautifulSoup(response.text, 'html.parser')
    # The last page-number link sits just before the "next" button
    total_pages = soup.find("span", class_="next").previous_sibling.previous_sibling.text
    return int(total_pages)

# Crawl every page and write the results to movies.txt
def crawl_movies():
    url = 'https://movie.douban.com/top250'
    res = get_response(url)
    total_pages = find_pages(res)

    movies = []
    for i in range(total_pages):
        page_url = url + '?start=' + str(i * 25)
        page_res = get_response(page_url)
        movies.extend(find_movies(page_res))

    with open('movies.txt', 'w', encoding='utf-8') as f:
        for movie in movies:
            f.write(movie)

if __name__ == '__main__':
    crawl_movies()
Friendly reminder: a crawler must respect the site's robots.txt protocol and crawl politely; don't bring the other site down.
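That courtesy check can be automated with Python's standard library. The rules below are hypothetical and parsed offline purely for illustration; against a real site you would instead call `rp.set_url('https://movie.douban.com/robots.txt')` followed by `rp.read()`:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules, parsed offline for illustration only
rp = RobotFileParser()
rp.parse([
    'User-agent: *',
    'Disallow: /trailer/',
    'Crawl-delay: 5',
])

# Check each URL before requesting it
print(rp.can_fetch('*', 'https://movie.douban.com/top250'))     # True
print(rp.can_fetch('*', 'https://movie.douban.com/trailer/1'))  # False
print(rp.crawl_delay('*'))  # seconds to wait between requests
```

Honoring the crawl delay (for example with `time.sleep` between page requests) keeps the load on the target site low.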