Ubuntu 14.04: notes on the pitfalls of installing Scrapy

1. Configure pip and easy_install to use a domestic mirror:
cat ~/.pip/pip.conf 
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
cat ~/.pydistutils.cfg 
[easy_install]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
find-links=
	https://pypi.tuna.tsinghua.edu.cn/simple
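
The same mirror can also be used for a single install without touching the global configuration, since pip accepts the index URL directly on the command line (the package name below is only an example):

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple lxml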

2. Install Scrapy
sudo apt-get install python-dev
sudo apt-get install build-essential
sudo apt-get install libxml2-dev
sudo apt-get install libxslt1-dev
sudo apt-get install libffi-dev
sudo pip install functools32
sudo pip install pyasn1
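
For reference, the five apt packages above can also be installed with a single command, equivalent to the separate lines:

sudo apt-get install python-dev build-essential libxml2-dev libxslt1-dev libffi-dev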
After the commands above have run, execute sudo pip install scrapy.
Running the scrapy shell command then reports the following error:
Traceback (most recent call last):
  File "/usr/local/bin/scrapy", line 9, in <module>
    load_entry_point('Scrapy==1.5.1', 'console_scripts', 'scrapy')()
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 150, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 90, in _run_print_help
    func(*a, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 157, in _run_command
    cmd.run(args, opts)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/commands/shell.py", line 65, in run
    crawler = self.crawler_process._create_crawler(spidercls)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 203, in _create_crawler
    return Crawler(spidercls, self.settings)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 55, in __init__
    self.extensions = ExtensionManager.from_crawler(self)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 34, in from_settings
    mwcls = load_object(clspath)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/misc.py", line 44, in load_object
    mod = import_module(module)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/extensions/memusage.py", line 16, in <module>
    from scrapy.mail import MailSender
  File "/usr/local/lib/python2.7/dist-packages/scrapy/mail.py", line 25, in <module>
    from twisted.internet import defer, reactor, ssl
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/ssl.py", line 230, in <module>
    from twisted.internet._sslverify import (
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/_sslverify.py", line 15, in <module>
    from OpenSSL._util import lib as pyOpenSSLlib
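
The bottom of the traceback shows that the failure occurs while Twisted's SSL support is importing pyOpenSSL. That import can be reproduced on its own, which makes it easy to test pyOpenSSL independently of Scrapy:

python -c "from OpenSSL._util import lib as pyOpenSSLlib"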

According to the Scrapy documentation (https://doc.scrapy.org/en/latest/intro/install.html#things-that-are-good-to-know), the following requirements must be met:
The minimal versions which Scrapy is tested against are:

Twisted 14.0
lxml 3.4
pyOpenSSL 0.14

Check the versions of these modules currently installed on the machine:
 pip list | egrep 'Twisted|lxml|pyOpenSSL'
lxml (3.3.3)
pyOpenSSL (0.13)
Twisted (18.9.0)
Twisted-Core (13.2.0)
Twisted-Web (13.2.0)
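
Note that pip list only reports what pip knows about; on a machine that mixes apt-installed and pip-installed packages, the version Python actually imports can differ. A quick sanity check from the interpreter (a sketch, assuming all three modules import at all):

python -c "import twisted, lxml.etree, OpenSSL; print(twisted.version.short(), lxml.etree.__version__, OpenSSL.__version__)"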

First uninstall the old version of pyOpenSSL and install a newer one:
sudo pip uninstall pyOpenSSL
Not uninstalling pyOpenSSL at /usr/lib/python2.7/dist-packages, owned by OS


pip refuses to remove a package owned by the operating system's package manager, so locate the OS-owned copy and delete its egg-info metadata by hand; pip is then willing to install the newer version.

# Check where the pyOpenSSL module is installed:
pip show pyOpenSSL
---
Name: pyOpenSSL
Version: 0.13
Location: /usr/lib/python2.7/dist-packages
Requires: 

sudo rm -rf  /usr/lib/python2.7/dist-packages/pyOpenSSL-0.13.egg-info

sudo pip install pyOpenSSL

This reports the following error:
Downloading/unpacking pyOpenSSL
  Downloading pyOpenSSL-18.0.0-py2.py3-none-any.whl (53kB): 53kB downloaded
Downloading/unpacking cryptography>=2.2.1 (from pyOpenSSL)
  Downloading cryptography-2.4.1.tar.gz (468kB): 468kB downloaded
  Running setup.py (path:/tmp/pip_build_root/cryptography/setup.py) egg_info for package cryptography
    Traceback (most recent call last):
      File "<string>", line 17, in <module>
      File "/tmp/pip_build_root/cryptography/setup.py", line 28, in <module>
        "cryptography requires setuptools 18.5 or newer, please upgrade to a "
    RuntimeError: cryptography requires setuptools 18.5 or newer, please upgrade to a newer version of setuptools
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):

  File "<string>", line 17, in <module>

  File "/tmp/pip_build_root/cryptography/setup.py", line 28, in <module>

    "cryptography requires setuptools 18.5 or newer, please upgrade to a "

RuntimeError: cryptography requires setuptools 18.5 or newer, please upgrade to a newer version of setuptools


sudo pip install --upgrade setuptools
sudo pip install pyOpenSSL
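
Before retrying, it is worth confirming that the setuptools upgrade actually took effect, since cryptography's setup.py insists on 18.5 or newer:

python -c "import setuptools; print(setuptools.__version__)"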

Then, running scrapy shell reports another error:
pkg_resources.DistributionNotFound: The 'ipaddress' distribution was not found and is required by cryptography

sudo pip install ipaddress
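
With ipaddress in place, the whole SSL stack can be checked in one step before returning to Scrapy. The second command prints which copy of pyOpenSSL Python actually picks up, which matters here because the old 0.13 module files may still be sitting under /usr/lib/python2.7/dist-packages (only the egg-info was deleted):

python -c "import ipaddress, cryptography, OpenSSL; print(cryptography.__version__, OpenSSL.__version__)"
python -c "import OpenSSL; print(OpenSSL.__file__)"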


After the steps above, run:
scrapy shell https://www.baidu.com
/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/primitives/constant_time.py:26: CryptographyDeprecationWarning: Support for your Python version is deprecated. The next version of cryptography will remove support. Please upgrade to a 2.7.x release that supports hmac.compare_digest as soon as possible.
  utils.DeprecatedIn23,
/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py:163: CryptographyDeprecationWarning: OpenSSL version 1.0.1 is no longer supported by the OpenSSL project, please upgrade. A future version of cryptography will drop support for it.
  utils.CryptographyDeprecationWarning
2018-11-14 00:02:55 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: scrapybot)
2018-11-14 00:02:55 [scrapy.utils.log] INFO: Versions: lxml 3.3.3.0, libxml2 2.9.1, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 2.7.6 (default, Nov 23 2017, 15:50:55) - [GCC 4.8.4], pyOpenSSL 18.0.0 (OpenSSL 1.0.1f 6 Jan 2014), cryptography 2.4.1, Platform Linux-3.13.0-32-generic-i686-athlon-with-Ubuntu-14.04-trusty
2018-11-14 00:02:55 [scrapy.crawler] INFO: Overridden settings: {'LOGSTATS_INTERVAL': 0, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter'}
2018-11-14 00:02:55 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2018-11-14 00:02:55 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-11-14 00:02:55 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-11-14 00:02:55 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-11-14 00:02:55 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-11-14 00:02:55 [scrapy.core.engine] INFO: Spider opened
2018-11-14 00:02:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.baidu.com> (referer: None)
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    
[s]   item       {}
[s]   request    <GET https://www.baidu.com>
[s]   response   <200 https://www.baidu.com>
[s]   settings   
[s]   spider     
[s] Useful shortcuts:
[s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s]   fetch(req)                  Fetch a scrapy.Request and update local objects 
[s]   shelp()           Shell help (print this help)
[s]   view(response)    View response in a browser
>>> 

OK! Scrapy installed successfully!
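
As a further smoke test beyond the interactive shell, Scrapy's own subcommands can be used: scrapy version -v prints the same component versions seen in the log above, and scrapy fetch downloads a page without opening the shell (the output path is arbitrary):

scrapy version -v
scrapy fetch --nolog https://www.baidu.com > /tmp/baidu.html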
 
 
