Building a TensorFlow 2.3.1 pip Package in a Docker Container
Dockerfile
```dockerfile
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04
RUN apt-get update
# curl and ca-certificates are needed for the bazelisk download below
RUN apt-get install -y --no-install-recommends \
    git curl ca-certificates
RUN curl -OL https://github.com/bazelbuild/bazelisk/releases/download/v1.7.4/bazelisk-linux-amd64 \
 && mv bazelisk-linux-amd64 /usr/local/bin/bazel && chmod +x /usr/local/bin/bazel
WORKDIR /build
RUN git clone https://github.com/tensorflow/tensorflow.git
RUN cd tensorflow && git checkout -b v2.3.1 refs/tags/v2.3.1
COPY requirements.txt /build/tensorflow
RUN apt-get install -y --no-install-recommends python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install setuptools
RUN apt-get install -y --no-install-recommends python3-dev
RUN ln -s /usr/bin/python3.6 /usr/bin/python
RUN pip3 install -r /build/tensorflow/requirements.txt
```
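Since bazelisk is installed as `bazel`, the actual Bazel version is resolved at first use from the repository's `.bazelverion`-style pin (TensorFlow ships a `.bazelversion` file; for v2.3.1 it requests 3.1.0, which matches the `./configure` transcript later in this article). A quick way to confirm which Bazel will be used, assuming the image was built and tagged `tfbuild` as below:

```shell
# bazelisk (installed as /usr/local/bin/bazel) reads tensorflow/.bazelversion
# and downloads that exact Bazel release on first invocation.
docker run --rm tfbuild bash -c 'cd /build/tensorflow && bazel version'
```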
requirements.txt

The contents of `requirements.txt` are taken from `REQUIRED_PACKAGES` in `setup.py`. (These packages must be installed, as described in the official "Build from source" guide; note that the Japanese version of that page is somewhat outdated.) Also note that the `setup.py` linked from the docs is on the master branch, so be sure to switch to the tag you are building before copying the list.

requirements.txt:
```text
absl-py >= 0.7.0
astunparse == 1.6.3
gast == 0.3.3
google_pasta >= 0.1.8
h5py >= 2.10.0, < 2.11.0
keras_preprocessing >= 1.1.1, < 1.2
numpy >= 1.16.0, < 1.19.0
opt_einsum >= 2.3.2
protobuf >= 3.9.2
tensorboard >= 2.3.0, < 3
tensorflow_estimator >= 2.3.0, < 2.4.0
termcolor >= 1.1.0
wrapt >= 1.11.1
wheel >= 0.26
six >= 1.12.0
```
Build an image from this Dockerfile and start a container:

```shell
docker build -t tfbuild .
docker run --gpus 0 --rm -it --shm-size=2g -u root -v "$(pwd):/work" tfbuild bash -l
```
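The `--gpus` flag only works when the NVIDIA Container Toolkit is installed on the host. Before committing to a long build, it can be worth a quick sanity check that containers from this base image actually see the GPU:

```shell
# Sanity check: if this fails, install/configure the NVIDIA Container
# Toolkit on the host before attempting the TensorFlow build.
docker run --gpus 0 --rm nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 nvidia-smi
```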
Once bash is up inside the Docker container, build as follows. Adjust `--local_ram_resources` and `--local_cpu_resources` to your environment: if you set them to your machine's limits, the build frequently exhausts resources and crashes. Accidentally launching another application such as Chrome during the build can also bring it down, so be careful.

```shell
cd tensorflow/
./configure
bazel build --local_ram_resources=1024 --local_cpu_resources=8 --verbose_failures --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /work/pip_package
```
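Once these commands finish, the wheel lands in `/work/pip_package` and can be smoke-tested. The exact wheel filename depends on the Python and platform tags, so the glob below is an assumption:

```shell
# Install the freshly built wheel and verify it imports and sees the GPU.
# The wheel filename varies with the Python/ABI tags, hence the glob.
pip3 install /work/pip_package/tensorflow-2.3.1-*.whl
python3 -c 'import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices("GPU"))'
```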
When `bazel build` completes, `build_pip_package`, which generates the wheel file, becomes available. The wheel is created under `/work`, which is mounted from a host directory, so it is protected against accidental loss when the container is removed.

For `./configure`, answer according to your own environment; on a PC with an NVIDIA GPU, the answers can look like the following:

```shell
/build/tensorflow# ./configure
You have bazel 3.1.0 installed.
Please specify the location of python. [Default is /usr/bin/python3]:
Found possible Python library paths:
/usr/local/lib/python3.6/dist-packages
/usr/lib/python3/dist-packages
Please input the desired Python library path to use. Default is [/usr/local/lib/python3.6/dist-packages]
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: N
No OpenCL SYCL support will be enabled for TensorFlow.
Do you wish to build TensorFlow with ROCm support? [y/N]: N
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.
Do you wish to build TensorFlow with TensorRT support? [y/N]: N
No TensorRT support will be enabled for TensorFlow.
Found CUDA 10.2 in:
/usr/local/cuda-10.2/targets/x86_64-linux/lib
/usr/local/cuda-10.2/targets/x86_64-linux/include
Found cuDNN 7 in:
/usr/lib/x86_64-linux-gnu
/usr/include
Please specify a list of comma-separated CUDA compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. Each capability can be specified as "x.y" or "compute_xy" to include both virtual and binary GPU code, or as "sm_xy" to only include the binary code.
Please note that each additional compute capability significantly increases your build time and binary size, and that TensorFlow only supports compute capabilities >= 3.5 [Default is: 3.5,7.0]:
Do you want to use clang as CUDA compiler? [y/N]: N
nvcc will be used as CUDA compiler.
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
Not configuring the WORKSPACE for Android builds.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl         # Build with MKL support.
--config=monolithic  # Config for mostly static monolithic build.
--config=ngraph      # Build with Intel nGraph support.
--config=numa        # Build with NUMA support.
--config=dynamic_kernels  # (Experimental) Build kernels into separate shared objects.
--config=v2          # Build TensorFlow 2.x instead of 1.x.
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws       # Disable AWS S3 filesystem support.
--config=nogcp       # Disable GCP support.
--config=nohdfs      # Disable HDFS support.
--config=nonccl      # Disable NVIDIA NCCL support.
Configuration finished
```
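If you want the configuration to be reproducible without answering prompts interactively, TensorFlow's `configure` script also picks up its answers from environment variables. A minimal sketch mirroring the interactive session above (the variable names come from TensorFlow's `configure.py`; check that file for the full list, as this is not exhaustive):

```shell
# Non-interactive alternative: configure.py reads these variables
# instead of prompting (values mirror the interactive session above).
export PYTHON_BIN_PATH=/usr/bin/python3
export TF_NEED_CUDA=1
export TF_NEED_ROCM=0
export TF_NEED_TENSORRT=0
export TF_CUDA_COMPUTE_CAPABILITIES=3.5,7.0
export GCC_HOST_COMPILER_PATH=/usr/bin/gcc
./configure
```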
Reference
Original article: https://zenn.dev/nnabeyang/articles/aaacc43c2bbb8e57e452