I. What is a web crawler?
A crawler is a program that sends requests to a website, fetches the resources, and then parses them to extract useful data.
The steps of a crawler:
1. Send a request
Use an HTTP library to send a request to the target site, i.e. send a Request.
A Request contains the request headers, the request body, and so on.
2. Get the response content
If the server responds normally, you get back a Response.
A Response may contain HTML, JSON, images, video, and so on.
3. Parse the content
HTML data: regular expressions (the re module), or third-party parsing libraries such as BeautifulSoup and pyquery.
JSON data: the json module.
Binary data: write it to a file opened in "wb" (binary write) mode.
4. Store the data
In a database (MySQL, MongoDB, Redis) or in files. (A minimal end-to-end sketch of these four steps follows.)
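As a toy illustration of the four steps, here is a minimal sketch; the URL and file names are placeholders, not part of the Lianjia project below:

import json
import requests

resp = requests.get("https://example.com/api/items")    # 1. send a request
if resp.status_code == 200:                             # 2. got a response
    items = json.loads(resp.text)                       # 3. parse JSON content
    with open("items.json", "w", encoding="utf-8") as f:
        json.dump(items, f, ensure_ascii=False)         # 4. store the data

# Binary content (e.g. an image) is saved in "wb" mode instead:
img = requests.get("https://example.com/pic.png")
with open("pic.png", "wb") as f:
    f.write(img.content)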
II. This time the data comes from Lianjia: I am planning to move and wanted to look at Lianjia's recent rental listings, so I scraped Lianjia directly. The code is as follows:
from bs4 import BeautifulSoup as bs
from requests.exceptions import RequestException
import requests
import re
from DBUtils import DBUtils

def main(response):  # extract fields from the web page and store them in the database
    html = bs(response.text, 'lxml')
    for data in html.find_all(name='div', attrs={"class": "content__list--item--main"}):
        try:
            # print(data)  # uncomment to inspect the raw listing block
            Community_name = data.find(name="a", target="_blank").get_text(strip=True)
            name = str(Community_name).split(" ")[0]
            sizes = str(Community_name).split(" ")[1]
            forward = str(Community_name).split(" ")[2]
            flood = data.find(name="span", class_="hide").get_text(strip=True)
            flood = str(flood).replace(" ", "").replace("/", "")
            sqrt = re.compile(r"\d\d+㎡")  # raw string so \d is not treated as an escape
            area = str(data.find(text=sqrt)).replace(" ", "")
            maintance = str(data.find(name="span", class_="content__list--item--time oneline").get_text(strip=True))
            price = str(data.find(name="span", class_="content__list--item-price").get_text(strip=True))
            print(name, sizes, forward, flood, maintance, price)
            insertsql = "INSERT INTO test_log.`information`(Community_name,size,forward,area,flood,maintance,price) VALUES " \
                        "('" + name + "','" + sizes + "','" + forward + "','" + area + "','" + flood + "','" + maintance + "','" + price + "');"
            insert_sql(insertsql)
        except Exception as e:  # avoid a bare except so unexpected errors stay visible
            print("have an error!!!", e)

def insert_sql(sql):  # write one row into the database
    dbconn = DBUtils("test6")
    dbconn.dbExcute(sql)
def get_one_page(urls):  # fetch one listings page
    try:
        headers = {"Host": "bj.lianjia.com",
                   "Connection": "keep-alive",
                   "Cache-Control": "max-age=0",
                   "Upgrade-Insecure-Requests": "1",
                   "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36",
                   "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
                   "Sec-Fetch-Site": "none",
                   "Sec-Fetch-Mode": "navigate",
                   "Sec-Fetch-User": "?1",
                   "Sec-Fetch-Dest": "document",
                   "Accept-Encoding": "gzip, deflate, br",
                   "Accept-Language": "zh-CN,zh;q=0.9",
"Cookie": "lianjia_uuid=fa1c2e0b-792f-4a41-b48e-78531bf89136; _smt_uid=5cfdde9d.cbae95b; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%2216b3fad98fc1d1-088a8824f73cc4-e353165-2710825-16b3fad98fd354%22%2C%22%24device_id%22%3A%2216b3fad98fc1d1-088a8824f73cc4-e353165-2710825-16b3fad98fd354%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E8%87%AA%E7%84%B6%E6%90%9C%E7%B4%A2%E6%B5%81%E9%87%8F%22%2C%22%24latest_referrer%22%3A%22https%3A%2F%2Fwww.baidu.com%2Flink%22%2C%22%24latest_referrer_host%22%3A%22www.baidu.com%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC%22%7D%7D; _ga=GA1.2.1891741852.1560141471; UM_distinctid=17167f490cb566-06c7739db4a69e-4313f6b-100200-17167f490cca1e; Hm_lvt_9152f8221cb6243a53c83b956842be8a=1588171341; lianjia_token=2.003c978d834648dbbc2d3aa4b226145cd7; select_city=110000; lianjia_ssid=fc20dfa1-6afb-4407-9552-2c4e7aeb73ce; CNZZDATA1253477573=1893541433-1588166864-https%253A%252F%252Fwww.baidu.com%252F%7C1591157903; CNZZDATA1254525948=1166058117-1588166331-https%253A%252F%252Fwww.baidu.com%252F%7C1591154084; CNZZDATA1255633284=1721522838-1588166351-https%253A%252F%252Fwww.baidu.com%252F%7C1591158264; CNZZDATA1255604082=135728258-1588168974-https%253A%252F%252Fwww.baidu.com%252F%7C1591153053; _jzqa=1.2934504416856578000.1560141469.1588171337.1591158227.3; _jzqc=1; _jzqckmp=1; _jzqy=1.1588171337.1591158227.1.jzqsr=baidu.-; _qzjc=1; _gid=GA1.2.1223269239.1591158230; _qzja=1.1313673973.1560141469311.1588171337488.1591158227148.1591158227148.1591158233268.0.0.0.7.3; _qzjto=2.1.0; srcid=eyJ0Ijoie1wiZGF0YVwiOlwiMThmMWQwZTY0MGNiNTliNTI5OTNlNGYxZWY0ZjRmMmM3ODVhMTU3ODNhNjMwODhlZjlhMGM2MTJlMDFiY2JiN2I4OTBkODA0M2Q0YTM0YzIyMWE0YzIwOTBkODczNTQwNzM0NTc1NjBlM2EyYTc3NmYwOWQ3OWQ4OWJjM2UwYzAwY2RjMTk3MTMwNzYwZDRkZTc2ODY0OTY0NTA5YmIxOWIzZWQyMWUzZDE3ZjhmOGJmMGNmOGYyMTMxZTI1MzIxMGI4NzhjNjYwOGUyNjc3ZTgxZjA2YzUzYzE4ZjJmODhmMTA1ZGVhOTMyZTRlOTcxNmNiNzllMWViMThmNjNkZTJiMTcyN2E0YzlkODMwZWIzNmVhZTQ4ZWExY2QwNjZmZWEzNjcxMjBmYWRmYjgxMDY1ZDlkYTFhMDZiOGIwMjI2NTg1ZGU4NTQyODBjODFmYTUyYzI0NDg5MjRlNWI0N1wiLFwia2V5X2lkXCI6XCIxXCIsXCJzaWduXCI6XCI2Yzk3M2U5M1wifSIsInIiOiJodHRwczovL2JqLmxpYW5qaWEuY29tL2RpdGllenVmYW5nL2xpNDY0NjExNzkvcnQyMDA2MDAwMDAwMDFsMSIsIm9zIjoid2ViIiwidiI6IjAuMSJ9"}
        response = requests.get(url=urls, headers=headers)
        main(response)
    except RequestException:
        return None
if __name__ == "__main__":
    for i in range(64):  # walk through the result pages
        if i == 0:
            urls = "https://bj.lianjia.com/ditiezufang/li46461179/rt200600000001l1/"
        else:
            urls = "https://bj.lianjia.com/ditiezufang/li46461179/rt200600000001l1/".replace("rt", "pg" + str(i))
        get_one_page(urls)
Note: this code relies on the DBUtils helper from the earlier article 《Python之mysql實戰》 (Python MySQL in practice); please read the two articles together.
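One caveat: the INSERT above is assembled by string concatenation, so a stray quote in the scraped text breaks the statement, and it is open to SQL injection. A safer sketch, written here against pymysql directly since the DBUtils internals are not shown (connection details are placeholders), passes the values as parameters:

import pymysql

def insert_row(name, sizes, forward, area, flood, maintance, price):
    conn = pymysql.connect(host="127.0.0.1", user="root",
                           password="123456", database="test_log")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO `information`"
                "(Community_name,size,forward,area,flood,maintance,price) "
                "VALUES (%s,%s,%s,%s,%s,%s,%s)",
                (name, sizes, forward, area, flood, maintance, price),  # the driver escapes these
            )
        conn.commit()
    finally:
        conn.close()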
III. Below is a screenshot of the data after it was loaded into the database.
Conclusion: crawling is one of the important ways of acquiring data, and we should master more than one way of getting it. Machine learning is learning from data, so we need to have the data ready for it. Let's keep at it!
P.S.: Generating HTML with MySQL's own command-line options is underwhelming:
mysql -h$host -u$user -p$pass -H --skip-column-names $database -e "source get_query.sql" > /tmp/query.html
1. A warm-up: using awk to generate MySQL test data
#!/bin/sh
echo "line" > file.dat   # dummy input file so awk has something to read
echo "" > insert.sql     # truncate the statement file
awk 'BEGIN{
    system("echo \"create table sysbench (col1 INT, col2 INT, col3 INT);\" | mysql -uroot -p123456 taobao")
    for (i=1; i<=100; i++)
    {
        print " insert into sysbench values (" i "," i*i "," i*i*i ");" >> "insert.sql"
    }
}
END{
    system("mysql -uroot -p123456 -D taobao < insert.sql")
}' file.dat
2. Collecting slow queries and producing an HTML report
1) pt-query-digest does the main work of loading slow-query data into the database.
192.168.0.2 is the machine the slow queries are loaded into.
pt-query-digest --user=dba --password=123456 --review h=192.168.0.2,D=slow_query,t=global_query_review --history h=192.168.0.2,D=slow_query,t=global_query_review_history --no-report --filter=" $event->{Bytes}=length($event->{arg}) and $event->{hostname}=\"`ifconfig eth1|grep "inet addr"|awk '{print $2}'|awk -F':' '{print $2":3306"}'`\" " /data/3306/slow_query.log
2) Two tables are involved, global_query_review and global_query_review_history; copy their CREATE statements from the Percona website, or let the pt tool create them automatically.
Then compute the slow-query ranking, ordered by Query_time_pct_95, and emit it as HTML:
#!/bin/sh
mysql -uroot -p123456 slow_query -N -e "select db_max as DBname,ifnull(sum(ts_cnt),0) as Ts_cnt,sum(cast(Query_time_sum AS SIGNED )) as Query_time_sum,avg(cast(Query_time_pct_95 AS SIGNED )) as Avg_Query_time_pct_95,sum(cast(Lock_time_sum AS SIGNED )) as Lock_time_sum,sum(cast(Rows_sent_sum AS SIGNED )) as Rows_sent_sum,sum(cast(Rows_examined_sum AS SIGNED )) as Rows_examined_sum,sum(cast(Rows_affected_sum AS SIGNED )) as Rows_affected_sum,sum(ifnull(cast(Tmp_table_sum AS SIGNED ),0)) as Tmp_table_sum,sum(ifnull(cast(Filesort_sum AS SIGNED ),0)) as Filesort_sum,sum(ifnull(cast(Full_scan_sum AS SIGNED ),0)) as Full_scan_sum from global_query_review a,global_query_review_history b where a.checksum=b.checksum and db_max not in ('information_schema') and b.Query_time_pct_95 >=1 group by db_max order by Avg_Query_time_pct_95 desc;" >slow.txt
echo "hello everybody:<table border=1><tr><td>Number</td><td>DBname</td><td>Ts_cnt</td><td>Query_time_sum</td><td>Avg_Query_time_pct_95</td><td>Lock_time_sum</td><td>Rows_sent_sum</td><td>Rows_examined_sum</td><td>Rows_affected_sum</td><td>Tmp_table_sum</td><td>Filesort_sum</td><td>Full_scan_sum</td>" >./query.html
awk '{ print FNR " " $0 }' ./slow.txt | awk '{if (NR % 2==1) print "<tr style=\"background:#58ACFA\"><td>" $1 "</td><td align=\"left\">" $2 "</td><td align=\"left\">" $3 "</td><td align=\"left\">" $4 "</td><td align=\"left\">" $5 "</td><td align=\"left\">" $6 "</td><td align=\"left\">" $7 "</td><td align=\"left\">" $8 "</td><td align=\"left\">" $9 "</td><td align=\"left\">" $10 "</td><td align=\"left\">" $11 "</td><td align=\"left\">" $12 "</td></tr>";else print "<tr style=\"background:#F3E2A9\"><td>" $1 "</td><td align=\"left\">" $2 "</td><td align=\"left\">" $3 "</td><td align=\"left\">" $4 "</td><td align=\"left\">" $5 "</td><td align=\"left\">" $6 "</td><td align=\"left\">" $7 "</td><td align=\"left\">" $8 "</td><td align=\"left\">" $9 "</td><td align=\"left\">" $10 "</td><td align=\"left\">" $11 "</td><td align=\"left\">" $12 "</td></tr>";}' >> ./query.html
echo "</table>" >>./query.html
cat query.html:
hello everybody:<table border=1><tr><td>Number</td><td>DBname</td><td>Ts_cnt</td><td>Query_time_sum</td><td>Avg_Query_time_pct_95</td><td>Lock_time_sum</td><td>Rows_sent_sum</td><td>Rows_examined_sum</td><td>Rows_affected_sum</td><td>Tmp_table_sum</td><td>Filesort_sum</td><td>Full_scan_sum</td></tr>
<tr style="background:#58ACFA"><td>1</td><td align="left">dba</td><td align="left">0</td><td align="left">18</td><td align="left">6.0000</td><td align="left">0</td><td align="left">0</td><td align="left">2999532</td><td align="left">2999532</td><td align="left">0</td><td align="left">0</td><td align="left">0</td></tr>
</table>
3) The final step: send the mail. Done.
/usr/bin/python sendmail.py aa@gmail.com "Mysql Slowquery by Query_time_pct_95 >=1" ./query.html
This sends the report as an HTML email.
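sendmail.py itself is not shown in this article; a minimal sketch of what it might look like with the standard library (the sender address and SMTP relay are placeholders) is:

# Hypothetical sendmail.py: argv = recipient, subject, path to the HTML body.
import smtplib
import sys
from email.mime.text import MIMEText

to_addr, subject, html_path = sys.argv[1], sys.argv[2], sys.argv[3]
with open(html_path, encoding="utf-8") as f:
    msg = MIMEText(f.read(), "html", "utf-8")   # send the file as an HTML body
msg["Subject"] = subject
msg["From"] = "dba@example.com"                 # placeholder sender
msg["To"] = to_addr

server = smtplib.SMTP("smtp.example.com", 25)   # placeholder SMTP relay
server.sendmail(msg["From"], [to_addr], msg.as_string())
server.quit()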
Why flash-sale ("seckill") systems are hard: there is only one copy of the stock, and everyone reads and writes that data within a concentrated window. Take Xiaomi's weekly Tuesday flash sale: there may be only 10,000 phones, yet the instantaneous traffic can reach millions or tens of millions of requests. 12306 ticket grabbing is similar, with even heavier instantaneous traffic.
There are two main problems to solve: the huge concurrent read/write pressure on the database, and deducting stock correctly under concurrency so it never oversells.
For the first problem, the natural answer is a cache such as Redis, so the purchase rush never hits the database directly. The crux is the second problem. The conventional code path is:
query the product's stock, check that it is greater than 0, then generate the order; but the check and the decrement are not atomic, so under high concurrency the stock can go negative, as the sketch below shows.
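To make the race concrete, here is a sketch against a pymysql-style connection (the table and column names are hypothetical); the fix is to let the database perform the check and the write in one atomic statement:

# Naive check-then-act: two requests can both read stock == 1, both pass the
# test, and both decrement -- the stock ends at -1 (oversold):
#   cur.execute("SELECT stock FROM goods WHERE id = %s", (gid,))
#   if cur.fetchone()[0] > 0:
#       cur.execute("UPDATE goods SET stock = stock - 1 WHERE id = %s", (gid,))

# Atomic variant: the WHERE clause makes the check and the decrement one statement.
def try_deduct(conn, gid):
    with conn.cursor() as cur:
        affected = cur.execute(
            "UPDATE goods SET stock = stock - 1 WHERE id = %s AND stock > 0",
            (gid,),
        )
    conn.commit()
    return affected == 1  # True: this request secured one unit of stock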
Once traffic reaches the hundreds-of-millions level, a typical site architecture is layered: browser, site, service, and data layers (walked through in 4.1-4.4 below). Two principles govern the design:
1. Intercept requests as far upstream as possible: traditional seckill systems die because every request crashes onto the backend data layer, where read/write lock contention is severe, concurrency is high, responses are slow, and almost all requests time out. The traffic is huge, but the effective traffic of successful orders is tiny. [A train has only 2,000 tickets; if 2 million people try to buy, essentially nobody succeeds, and the request effectiveness rate is near 0.]
2. Make full use of caching: this is a classic read-heavy, write-light scenario [a train has only 2,000 tickets and 2 million buyers; at most 2,000 orders succeed, everyone else is just querying stock, so writes are about 0.1% and reads 99.9% of traffic], which is ideal for caching.
4.1. Request interception at the browser layer
After you click the "Query" button the system is slow and the progress bar crawls, so as a user you instinctively click "Query" again, and again, and again... Does that help? No; it adds load for nothing (if one user clicks 5 times, 80% of the requests come from re-clicks). What to do? The standard client-side trick: grey the button out after the first click, and let JS allow at most one submission every few seconds.
With throttling like this, 80% of the traffic is already intercepted.
4.2. Request interception and page caching at the site layer
Browser-layer interception only stops casual users (though that is 99% of them); a determined programmer ignores it, writes a for loop, and calls your backend HTTP interface directly. What to do? At the site layer, dedupe and rate-limit requests by uid, and serve the same product page from a page cache, so requests arriving within the same few seconds all get one cached page (a sketch of the uid throttle follows).
With this in place, another 99% of the traffic is intercepted at the site layer.
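As an illustration of the uid throttle (the key prefix and 5-second window are my assumptions, not from the original article), a Redis counter with a TTL is enough:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)

def allow_request(uid, window_seconds=5):
    key = "seckill:uid:%s" % uid
    count = r.incr(key)                # INCR is atomic across concurrent requests
    if count == 1:
        r.expire(key, window_seconds)  # first hit in the window starts the TTL
    return count == 1                  # only the first request per window passes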
4.3. Request interception and data caching at the service layer
Site-layer interception only stops ordinary programmers. A serious attacker, say one controlling 100k zombie machines (and assuming ticket purchases need no real-name verification), is not bound by the uid limit. What to do? At the service layer, put write requests into a queue and only let through as many writes as there is stock, while serving reads from a cache.
With this throttling, only a trickle of write requests and a trickle of read-cache-miss requests ever reach the data layer; another 99.9% of the requests are intercepted.
4.4. The data layer strolls along at its leisure
By the time requests reach the data layer there are almost none left, and a single machine can cope. As said before, stock is limited and Xiaomi's production capacity is limited; letting more requests through to the database would serve no purpose.
4.5. Batch loading into MySQL to improve INSERT efficiency
Use a Redis queue (a list): push and pop are atomic, so even if many users arrive at the same instant, they are served one at a time. (MySQL transactions degrade badly under high concurrency.)
First, load the product stock into the queue:
<?php
$store = 1000;                        // product stock
$redis = new Redis();
$result = $redis->connect('127.0.0.1', 6379);
$res = $redis->llen('goods_store');
for ($i = 0; $i < $store; $i++) {
    $redis->lpush('goods_store', 1);  // one list element per unit of stock
}
echo $redis->llen('goods_store');
?>
Each customer's order attempt then pops one unit from the queue:
$redis = new Redis();
$result = $redis->connect('127.0.0.1', 6379);
$count = $redis->lpop('goods_store');  // atomically take one unit of stock
if (!$count) {
    echo 'Purchase failed!';           // the queue is empty, i.e. sold out
    return;
}
Caching can absorb write requests too: move the stock from the database into Redis, perform every stock decrement in Redis, and let a background process sync the successful seckill requests from Redis back into the database (sketched below).
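A sketch of that background process (the queue name, payload format, and table layout are assumptions): it drains successful orders from a Redis list and writes them to MySQL in batches, which is also where the batched INSERT from this section's title pays off:

# Hedged sketch: drain queued orders from Redis and batch-insert them into MySQL.
import json
import time
import pymysql
import redis

r = redis.Redis(host="127.0.0.1", port=6379)
conn = pymysql.connect(host="127.0.0.1", user="root", password="123456", database="shop")

while True:
    batch = []
    for _ in range(100):                  # collect up to 100 orders per round
        raw = r.lpop("order_queue")       # hypothetical queue of successful orders
        if raw is None:
            break
        batch.append(json.loads(raw))     # e.g. {"uid": 1, "goods_id": 2}
    if batch:
        with conn.cursor() as cur:
            cur.executemany(              # one multi-row INSERT instead of N round trips
                "INSERT INTO orders (uid, goods_id) VALUES (%s, %s)",
                [(o["uid"], o["goods_id"]) for o in batch],
            )
        conn.commit()
    else:
        time.sleep(0.1)                   # idle briefly when the queue is empty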
There is not much left to summarize; the text above should be clear enough. For a seckill system, the two architectural ideas bear repeating: intercept requests as far upstream as possible, and make full use of caching in this read-heavy, write-light workload.