Crawler Notes (2): urllib


This is Python's built-in HTTP request library.


1) urllib.request: the request module

urllib.request.urlopen(url,data=None,[timeout,]*,cafile=None,capath=None,cadefault=False,context=None)

# Parameters:
# url: the URL to request
# data: form data for a POST request (bytes); a GET request is sent if omitted
# timeout: give up after this many seconds
# cafile/capath/cadefault/context: CA certificate and SSL settings for HTTPS
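Note that cafile/capath are deprecated in recent Python versions in favor of the context parameter; a minimal sketch of passing an SSL context (built with the standard ssl.create_default_context()) together with a timeout:

import ssl
import urllib.request

# The default context verifies server certificates against the system CA store
context = ssl.create_default_context()
response = urllib.request.urlopen('https://httpbin.org/get',
                                  timeout=5, context=context)
print(response.read().decode('utf-8'))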

Example: a) Sending a GET request:
import urllib.request

response = urllib.request.urlopen('http://www.baidu.com')
print(response.read().decode('utf-8'))
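Besides read(), the object returned by urlopen() is an http.client.HTTPResponse, so the status code and headers can be inspected as well; a quick sketch:

import urllib.request

response = urllib.request.urlopen('http://www.baidu.com')
print(response.status)               # HTTP status code, e.g. 200
print(response.getheaders())         # all response headers as (name, value) tuples
print(response.getheader('Server'))  # a single header value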

b) Sending a POST request:
1. Build the form data as a dict.
2. Convert it to bytes with urllib.parse.urlencode() and pass it to urlopen() through the data parameter.


import urllib.parse
import urllib.request

data = bytes(urllib.parse.urlencode({'word':'hello'}),encoding='utf8')
response = urllib.request.urlopen('http://httpbin.org/post',data=data)
print(response.read())

c) Using the timeout parameter:
import socket
import urllib.request
import urllib.error

try:
    response = urllib.request.urlopen('http://httpbin.org/get',timeout=0.1)
except urllib.error.URLError as e:
    if isinstance(e.reason,socket.timeout):
        print('TIME OUT')

d) Separate the URL from the (form) data and wrap them with urllib.request.Request() into a request object:
from urllib import request,parse

url = 'http://httpbin.org/post'
headers = {
    'User-Agent':'Mozilla/4.0(compatible;MSIE 5.5;Windows NT)',
    'Host':'httpbin.org'
}
dict = {
    'name':'Kim'
}
data = bytes(parse.urlencode(dict),encoding='utf8')
req = request.Request(url=url,data=data,headers=headers,method='POST')
response = request.urlopen(req)
print(response.read().decode('utf-8'))

e) The add_header() method:
from urllib import request,parse

url = 'http://httpbin.org/post'
dict = {
    'name':'Kim'
}
data = bytes(parse.urlencode(dict),encoding='utf8')
req = request.Request(url=url,data=data,method='POST')
req.add_header( 'User-Agent','Mozilla/4.0(compatible;MSIE 5.5;Windows NT)')
response = request.urlopen(req)
print(response.read().decode('utf-8'))

Advanced operations with a Handler: urllib.request.build_opener(handler) builds an opener that sends requests through the given handler.
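If every subsequent urlopen() call should go through the same handler, the opener can also be installed globally; a small sketch using urllib.request.install_opener() (the proxy address here is just a placeholder):

import urllib.request

proxy_handler = urllib.request.ProxyHandler({'http': 'http://127.0.0.1:8000'})  # placeholder proxy
opener = urllib.request.build_opener(proxy_handler)
urllib.request.install_opener(opener)

# From now on a plain urlopen() call uses the installed opener
response = urllib.request.urlopen('http://httpbin.org/get')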


a) Setting a proxy: when visiting a website that restricts access from the same IP, requests must go through a proxy to get around the restriction. Steps:
1. Create a proxy handler with urllib.request.ProxyHandler().
2. Pass the proxy_handler to urllib.request.build_opener() to build an opener.
3. Send the request with the opener's .open() method.

import socket
import urllib.error
import urllib.request

proxy_handler = urllib.request.ProxyHandler({
    'http':'http://180.125.137.126:8000',
    'https':'http://106.112.169.216:808'
})
opener = urllib.request.build_opener(proxy_handler)
try:
    response = opener.open('http://httpbin.org/get')
except urllib.error.URLError as e:
    if isinstance(e.reason,socket.timeout):
        print('TIME OUT')
else:
    print(response.read().decode('utf-8'))

b) Working with cookies: viewing pages that can only be seen after logging in.
Use the http.cookiejar module. Steps:
1. Create a cookie jar with http.cookiejar.CookieJar() (or MozillaCookieJar() to save cookies to a file).
2. Build a handler with urllib.request.HTTPCookieProcessor(cookie).
3. Pass the handler to urllib.request.build_opener() to build an opener.
4. Send the request with opener.open().
import http.cookiejar,urllib.request

cookie = http.cookiejar.CookieJar()
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
for item in cookie:
    print(item.name+"="+item.value)

-------------------------------------------------------------------------------------
Save the cookies to a file (cookie.save):
import http.cookiejar,urllib.request

filename='cookie.txt'
cookie = http.cookiejar.MozillaCookieJar(filename)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
cookie.save(ignore_discard=True,ignore_expires=True)

-------------------------------------------------------------------------------------
Load the saved cookies from the file (cookie.load):
import http.cookiejar,urllib.request

cookie = http.cookiejar.MozillaCookieJar()
cookie.load('cookie.txt',ignore_discard=True,ignore_expires=True)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
print(response.read().decode('utf-8'))


2) urllib.error: catch errors to keep the crawler program robust.


* Catch HTTPError first, then URLError (HTTPError is a subclass of URLError).
from urllib import request,error

try:
    response = request.urlopen('http://cuiqingcai.com/index.htm')
except error.URLError as e:
    print(e.reason)
------------------------------------------------------------------------------
A better pattern, catching the more specific HTTPError first:
from urllib import request,error

try:
    response = request.urlopen('http://cuiqingcai.com/index.htm')
except error.HTTPError as e:
    print(e.reason, e.code, e.headers, sep='\n')
except error.URLError as e:
    print(e.reason)
else:
    print('Request Successfully')

3) urllib.parse: the URL parsing module (splitting a URL into its components)

from urllib.parse import urlparse

result = urlparse('www.baidu.com/index.html;user?id=5#comment',scheme='https')
print(result)

Output: ParseResult(scheme='https', netloc='', path='www.baidu.com/index.html', params='user', query='id=5', fragment='comment')
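Note that the scheme argument only serves as a fallback: if the URL already carries its own scheme, the argument is ignored. For example:

from urllib.parse import urlparse

result = urlparse('http://www.baidu.com/index.html;user?id=5#comment', scheme='https')
print(result)

Output: ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='id=5', fragment='comment')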

urlunparse: the reverse operation, assembling a URL from its six components:
from urllib.parse import urlunparse

data = ['http', 'www.baidu.com', 'index.html', 'user','a=6', 'comment']
print(urlunparse(data))


Output: http://www.baidu.com/index.html;user?a=6#comment

urlencode: serializes a dict into a URL query string:
from urllib.parse import urlencode

params = {
    'name':'germey',
    'age':22
}
base_url = 'http://www.baidu.com?'
url = base_url+urlencode(params)
print(url)


Output: http://www.baidu.com?name=germey&age=22
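urljoin: resolves a (possibly relative) URL against a base URL; components present in the second argument take precedence over the base. A short sketch:

from urllib.parse import urljoin

print(urljoin('http://www.baidu.com', 'FAQ.html'))
print(urljoin('http://www.baidu.com/about.html', 'https://cuiqingcai.com/index.php'))

Output:
http://www.baidu.com/FAQ.html
https://cuiqingcai.com/index.php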
