Building one huge Docker environment that bundles every deep-learning model conversion tool

1. Introduction


It's tedious. Every environment setup in this world is tedious, and broken dependency chains are painful. So I extracted everything related to model conversion, ran `docker build` on GitHub Actions, and built one huge all-in-one DL model conversion environment. Any peripheral tools still missing can be installed individually on top. From inside the container you can access the GUI, the iGPU/dGPU, and USB devices attached to the host, so it can also be used directly as a runtime environment. That said, the image is huge and security is loose; it also contains a few tricks to avoid tripping over GitHub Actions' container build capacity limit.

2. Environment

  • Python 3.6+
  • TensorFlow v2.6.0+
  • PyTorch v1.10.0+
  • TorchVision
  • TorchAudio
  • OpenVINO 2021.4.582+
  • TensorRT 8.2+
  • pycuda 2021.1
  • tensorflowjs
  • coremltools
  • onnx
  • onnxruntime
  • onnx_graphsurgeon
  • onnx-simplifier
  • onnxconverter-common
  • onnx-tensorrt
  • onnx2json
  • json2onnx
  • tf2onnx
  • torch2trt
  • onnx-tf
  • tensorflow-datasets
  • tf_slim
  • edgetpu_compiler
  • tflite2tensorflow
  • openvino2tensorflow
  • gdown
  • pandas
  • matplotlib
  • Intel-Media-SDK
  • Intel iHD GPU (iGPU) support
  • OpenCL
  • Docker
  • CUDA 11.4
  • https://github.com/PINTO0309/openvino2tensorflow
  • https://github.com/PINTO0309/tflite2tensorflow
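
Once inside the container, a quick sanity check against the minimum versions listed above can be scripted. This is a minimal sketch, not part of the published image; the package list below is a small illustrative subset of the environment:

```python
# Hypothetical sanity check against the minimum versions listed above.
from importlib import metadata

# (distribution name, minimum version) pairs, mirroring part of the list above
MINIMUMS = [
    ("tensorflow", "2.6.0"),
    ("torch", "1.10.0"),
    ("onnx", "0"),        # any version accepted
    ("onnxruntime", "0"),
]

def parse(ver: str):
    """Turn a version like '2.6.0' or '1.10.0+cu113' into a comparable tuple."""
    core = ver.split("+")[0]
    return tuple(int(p) for p in core.split(".") if p.isdigit())

def meets_minimum(installed: str, minimum: str) -> bool:
    """True when the installed version is at least the required minimum."""
    return parse(installed) >= parse(minimum)

if __name__ == "__main__":
    for name, minimum in MINIMUMS:
        try:
            ver = metadata.version(name)
        except metadata.PackageNotFoundError:
            print(f"{name}: NOT INSTALLED")
            continue
        status = "OK" if meets_minimum(ver, minimum) else "TOO OLD"
        print(f"{name} {ver} (>= {minimum}): {status}")
```

Note that CUDA-suffixed PyTorch versions such as `1.10.0+cu113` are handled by stripping the local version segment before comparison.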

3. Procedure


3-1. Creating the Dockerfile

    Dockerfile
    FROM nvidia/cuda:11.4.2-cudnn8-devel-ubuntu20.04
    
    ENV DEBIAN_FRONTEND=noninteractive
    ARG OSVER=ubuntu2004
    ARG TENSORFLOWVER=2.6.0
    ARG CPVER=cp38
    ARG OPENVINOVER=2021.4.582
    ARG OPENVINOROOTDIR=/opt/intel/openvino_2021
    ARG TENSORRTVER=cuda11.4-trt8.2.0.6-ea-20210922
    ARG APPVER
    ARG WKDIR=/home/user
    
    # dash -> bash
    RUN echo "dash dash/sh boolean false" | debconf-set-selections \
        && dpkg-reconfigure -p low dash
    COPY bashrc ${WKDIR}/.bashrc
    WORKDIR ${WKDIR}
    
    # Install dependencies (1)
    RUN apt-get update && apt-get install -y \
            automake autoconf libpng-dev nano python3-pip \
            curl zip unzip libtool swig zlib1g-dev pkg-config \
            python3-mock libpython3-dev libpython3-all-dev \
            g++ gcc cmake make pciutils cpio gosu wget \
            libgtk-3-dev libxtst-dev sudo apt-transport-https \
            build-essential gnupg git xz-utils vim \
            libva-drm2 libva-x11-2 vainfo libva-wayland2 libva-glx2 \
            libva-dev libdrm-dev xorg xorg-dev protobuf-compiler \
            openbox libx11-dev libgl1-mesa-glx libgl1-mesa-dev \
            libtbb2 libtbb-dev libopenblas-dev libopenmpi-dev \
        && sed -i 's/# set linenumbers/set linenumbers/g' /etc/nanorc \
        && apt clean \
        && rm -rf /var/lib/apt/lists/*
    
    # python3 -> python
    RUN ln -s /usr/bin/python3 /usr/bin/python
    
    # Install dependencies (2)
    RUN pip3 install --upgrade pip \
        && pip install --upgrade numpy==1.19.5 \
        && pip install --upgrade tensorflowjs \
        && pip install --upgrade coremltools \
        && pip install --upgrade onnx \
        && pip install --upgrade onnxruntime \
        && pip install --upgrade onnx-simplifier \
        && pip install --upgrade onnxconverter-common \
        && pip install --upgrade tf2onnx \
        && pip install --upgrade onnx-tf \
        && pip install --upgrade tensorflow-datasets \
        && pip install --upgrade openvino2tensorflow \
        && pip install --upgrade tflite2tensorflow \
        && pip install --upgrade gdown \
        && pip install --upgrade PyYAML \
        && pip install --upgrade matplotlib \
        && pip install --upgrade tf_slim \
        && pip install --upgrade pandas \
        && pip install --upgrade numexpr \
        && pip install --upgrade onnx2json \
        && pip install --upgrade json2onnx \
        && python3 -m pip install onnx_graphsurgeon \
            --index-url https://pypi.ngc.nvidia.com \
        && pip install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 \
            -f https://download.pytorch.org/whl/cu113/torch_stable.html \
        && pip install pycuda==2021.1 \
        && ldconfig \
        && pip cache purge \
        && apt clean \
        && rm -rf /var/lib/apt/lists/*
    
    # Install sclblonnx non-version check custom .ver
    RUN wget https://github.com/PINTO0309/openvino2tensorflow/releases/download/${APPVER}/sclblonnx-0.1.9_nvc-py3-none-any.whl \
        && pip3 install sclblonnx-0.1.9_nvc-py3-none-any.whl \
        && rm sclblonnx-0.1.9_nvc-py3-none-any.whl \
        && apt clean \
        && rm -rf /var/lib/apt/lists/*
    
    # Install custom tflite_runtime, flatc, edgetpu-compiler
    RUN wget https://github.com/PINTO0309/openvino2tensorflow/releases/download/${APPVER}/tflite_runtime-${TENSORFLOWVER}-${CPVER}-none-linux_x86_64.whl \
        && chmod +x tflite_runtime-${TENSORFLOWVER}-${CPVER}-none-linux_x86_64.whl \
        && pip3 install --force-reinstall tflite_runtime-${TENSORFLOWVER}-${CPVER}-none-linux_x86_64.whl \
        && rm tflite_runtime-${TENSORFLOWVER}-${CPVER}-none-linux_x86_64.whl \
        && wget https://github.com/PINTO0309/openvino2tensorflow/releases/download/${APPVER}/flatc.tar.gz \
        && tar -zxvf flatc.tar.gz \
        && chmod +x flatc \
        && rm flatc.tar.gz \
        && wget https://github.com/PINTO0309/tflite2tensorflow/raw/main/schema/schema.fbs \
        && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
        && echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list \
        && apt-get update \
        && apt-get install -y edgetpu-compiler \
        && pip cache purge \
        && apt clean \
        && rm -rf /var/lib/apt/lists/*
    
    # Install OpenVINO
    RUN wget https://github.com/PINTO0309/openvino2tensorflow/releases/download/${APPVER}/l_openvino_toolkit_p_${OPENVINOVER}.tgz \
        && tar xf l_openvino_toolkit_p_${OPENVINOVER}.tgz \
        && rm l_openvino_toolkit_p_${OPENVINOVER}.tgz \
        && l_openvino_toolkit_p_${OPENVINOVER}/install_openvino_dependencies.sh -y \
        && sed -i 's/decline/accept/g' l_openvino_toolkit_p_${OPENVINOVER}/silent.cfg \
        && l_openvino_toolkit_p_${OPENVINOVER}/install.sh --silent l_openvino_toolkit_p_${OPENVINOVER}/silent.cfg \
        && source ${OPENVINOROOTDIR}/bin/setupvars.sh \
        && ${INTEL_OPENVINO_DIR}/install_dependencies/install_openvino_dependencies.sh \
        && sed -i 's/sudo -E //g' ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh \
        && sed -i 's/tensorflow/#tensorflow/g' ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/requirements.txt \
        && sed -i 's/numpy/#numpy/g' ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/requirements.txt \
        && sed -i 's/onnx/#onnx/g' ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/requirements.txt \
        && ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh \
        && rm -rf l_openvino_toolkit_p_${OPENVINOVER} \
        && echo "source ${OPENVINOROOTDIR}/bin/setupvars.sh" >> .bashrc \
        && echo "${OPENVINOROOTDIR}/deployment_tools/ngraph/lib/" >> /etc/ld.so.conf \
        && echo "${OPENVINOROOTDIR}/deployment_tools/inference_engine/lib/intel64/" >> /etc/ld.so.conf \
        && pip cache purge \
        && apt clean \
        && rm -rf /var/lib/apt/lists/*
    
    # Install TensorRT additional package
    RUN wget https://github.com/PINTO0309/openvino2tensorflow/releases/download/${APPVER}/nv-tensorrt-repo-${OSVER}-${TENSORRTVER}_1-1_amd64.deb \
        && dpkg -i nv-tensorrt-repo-${OSVER}-${TENSORRTVER}_1-1_amd64.deb \
        && apt-key add /var/nv-tensorrt-repo-${OSVER}-${TENSORRTVER}/7fa2af80.pub \
        && apt-get update \
        && apt-get install -y \
            tensorrt uff-converter-tf graphsurgeon-tf \
            python3-libnvinfer-dev onnx-graphsurgeon \
        && rm nv-tensorrt-repo-${OSVER}-${TENSORRTVER}_1-1_amd64.deb \
        && cd /usr/src/tensorrt/samples/trtexec \
        && make \
        && apt clean \
        && rm -rf /var/lib/apt/lists/*
    
    # Install Custom TensorFlow (MediaPipe Custom OP, FlexDelegate, XNNPACK enabled)
    RUN wget https://github.com/PINTO0309/openvino2tensorflow/releases/download/${APPVER}/tensorflow-${TENSORFLOWVER}-${CPVER}-none-linux_x86_64.whl \
        && pip3 install --force-reinstall tensorflow-${TENSORFLOWVER}-${CPVER}-none-linux_x86_64.whl \
        && rm tensorflow-${TENSORFLOWVER}-${CPVER}-none-linux_x86_64.whl \
        && pip cache purge \
        && apt clean \
        && rm -rf /var/lib/apt/lists/*
    
    # Install onnx-tensorrt
    RUN git clone --recursive https://github.com/onnx/onnx-tensorrt \
        && cd onnx-tensorrt \
        && git checkout 1f041ce6d7b30e9bce0aacb2243309edffc8fb3c \
        && mkdir build && cd build \
        && cmake .. -DTENSORRT_ROOT=/usr/src/tensorrt \
        && make -j$(nproc) && make install
    
    # Install torch2trt
    RUN git clone https://github.com/NVIDIA-AI-IOT/torch2trt \
        && cd torch2trt \
        && git checkout 0400b38123d01cc845364870bdf0a0044ea2b3b2 \
        # https://github.com/NVIDIA-AI-IOT/torch2trt/issues/619
        && wget https://github.com/NVIDIA-AI-IOT/torch2trt/commit/8b9fb46ddbe99c2ddf3f1ed148c97435cbeb8fd3.patch \
        && git apply 8b9fb46ddbe99c2ddf3f1ed148c97435cbeb8fd3.patch \
        && python3 setup.py install
    
    # Download the ultra-small sample data set for INT8 calibration
    RUN mkdir sample_npy \
        && wget -O sample_npy/calibration_data_img_sample.npy https://github.com/PINTO0309/openvino2tensorflow/releases/download/${APPVER}/calibration_data_img_sample.npy
    
    # Clear caches
    RUN apt clean \
        && rm -rf /var/lib/apt/lists/*
    
    # Create a user who can sudo in the Docker container
    ENV USERNAME=user
    RUN echo "root:root" | chpasswd \
        && adduser --disabled-password --gecos "" "${USERNAME}" \
        && echo "${USERNAME}:${USERNAME}" | chpasswd \
        && echo "%${USERNAME}    ALL=(ALL)   NOPASSWD:    ALL" >> /etc/sudoers.d/${USERNAME} \
        && chmod 0440 /etc/sudoers.d/${USERNAME}
    USER ${USERNAME}
    RUN sudo chown ${USERNAME}:${USERNAME} ${WKDIR} \
        && sudo chmod 777 ${WKDIR}/.bashrc
    
    # OpenCL settings - https://github.com/intel/compute-runtime/releases
    RUN cd ${OPENVINOROOTDIR}/install_dependencies/ \
        && yes | sudo -E ./install_NEO_OCL_driver.sh \
        && cd ${WKDIR} \
        && wget https://github.com/intel/compute-runtime/releases/download/21.29.20389/intel-gmmlib_21.2.1_amd64.deb \
        && wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.7862/intel-igc-core_1.0.7862_amd64.deb \
        && wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.7862/intel-igc-opencl_1.0.7862_amd64.deb \
        && wget https://github.com/intel/compute-runtime/releases/download/21.29.20389/intel-opencl_21.29.20389_amd64.deb \
        && wget https://github.com/intel/compute-runtime/releases/download/21.29.20389/intel-ocloc_21.29.20389_amd64.deb \
        && wget https://github.com/intel/compute-runtime/releases/download/21.29.20389/intel-level-zero-gpu_1.1.20389_amd64.deb \
        && sudo dpkg -i *.deb \
        && rm *.deb \
        && sudo apt clean \
        && sudo rm -rf /var/lib/apt/lists/*
    
    # Final processing of onnx-tensorrt install
    RUN echo "export PATH=${PATH}:/usr/src/tensorrt/bin:/onnx-tensorrt/build" >> ${HOME}/.bashrc \
        && echo "cd ${HOME}/onnx-tensorrt" >> ${HOME}/.bashrc \
        && echo "sudo python3 setup.py install" >> ${HOME}/.bashrc \
        && echo "cd ${WKDIR}" >> ${HOME}/.bashrc \
        && echo "cd ${HOME}/workdir" >> ${HOME}/.bashrc
    

3-2. Creating the GitHub workflow YAML


    When a release is published on GitHub, the PyPI package and the fully-loaded container are built and pushed automatically by GitHub Actions; register the workflow below to use it. Because the storage allocated to a GitHub Actions runner is a fixed size, the build aborts with `No space left on device` (at `System.IO.FileStream.WriteNative`) when there is nowhere to temporarily hold the built image. The cleanup step that follows "Check space before cleanup" therefore removes the Docker images and unneeded modules preinstalled on the runner to reclaim space; skip it and the build aborts for lack of space.

    docker-image.yml
    name: Docker Image CI
    on:
      release:
        types:
          - published
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - name: Git checkout
            uses: actions/checkout@v2
          - name: downcase REPO
            run: echo "REPO=${GITHUB_REPOSITORY,,}" >> ${GITHUB_ENV}
          - name: Get Tag
            run: echo "TAG=${GITHUB_REF##*/}" >> ${GITHUB_ENV}
    
          - name: Check space before cleanup
            run: df -h
          - name: Clean space as per https://github.com/actions/virtual-environments/issues/709
            run: |
              sudo rm -rf "/usr/local/share/boost"
              sudo rm -rf "$AGENT_TOOLSDIRECTORY"
              docker rmi $(docker image ls -aq)
              df -h
          - name: Setup Python
            uses: actions/setup-python@v2
            with:
              python-version: '3.x'
          - name: Install dependencies
            run: |
              python -m pip install --upgrade pip
              pip install setuptools wheel twine
          - name: Package build and publish
            env:
              TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
              TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
            run: |
              python setup.py sdist bdist_wheel
              twine upload --repository pypi dist/*
          - name: Enable buildx
            uses: docker/setup-buildx-action@v1
          - name: Login
            uses: docker/login-action@v1
            with:
              registry: ghcr.io
              username: ${{ github.actor }}
              password: ${{ secrets.GITHUB_TOKEN }}
          - name: Docker build/push
            uses: docker/build-push-action@v2
            with:
              context: .
              push: true
              tags: ghcr.io/${{ env.REPO }}:latest
              build-args: APPVER=${{ env.TAG }}
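
    The cleanup step above only prints `df -h` and deletes everything wholesale; the decision it works around can be sketched as follows. The image size and safety margin here are illustrative assumptions, not numbers taken from the workflow:

    ```python
    # Sketch of the disk-space concern behind "Check space before cleanup".
    # The ~14 GiB image size and 2 GiB margin are illustrative assumptions.
    import shutil

    def needs_cleanup(free_bytes: int,
                      image_bytes: int = 14 * 1024**3,
                      margin_bytes: int = 2 * 1024**3) -> bool:
        """True when the runner cannot hold the built image plus a margin."""
        return free_bytes < image_bytes + margin_bytes

    if __name__ == "__main__":
        total, used, free = shutil.disk_usage("/")
        print(f"free: {free / 1024**3:.1f} GiB, "
              f"cleanup needed: {needs_cleanup(free)}")
    ```

    In the real workflow there is no such threshold check: the runner's storage is always tight relative to this image, so the preinstalled images and tool caches are removed unconditionally.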
    

4. How to use the built container


    $ docker pull ghcr.io/pinto0309/openvino2tensorflow:latest
    
    # If you don't need to access the GUI of 
    # the HostPC and the USB camera.
    $ docker run -it --rm \
      -v `pwd`:/home/user/workdir \
      ghcr.io/pinto0309/openvino2tensorflow:latest
    
    # If conversion to TF-TRT is not required.
    # And if you need to access the HostPC GUI and USB camera.
    $ xhost +local: && \
      docker run -it --rm \
      -v `pwd`:/home/user/workdir \
      -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
      --device /dev/video0:/dev/video0:mwr \
      --net=host \
      -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
      -e DISPLAY=$DISPLAY \
      --privileged \
      ghcr.io/pinto0309/openvino2tensorflow:latest
    
    # If you need to convert to TF-TRT.
    # And if you need to access the HostPC GUI and USB camera.
    $ xhost +local: && \
      docker run --gpus all -it --rm \
      -v `pwd`:/home/user/workdir \
      -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
      --device /dev/video0:/dev/video0:mwr \
      --net=host \
      -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
      -e DISPLAY=$DISPLAY \
      --privileged \
      ghcr.io/pinto0309/openvino2tensorflow:latest
    
    # If you are using iGPU (OpenCL).
    # And if you need to access the HostPC GUI and USB camera.
    $ xhost +local: && \
      docker run -it --rm \
      -v `pwd`:/home/user/workdir \
      -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
      --device /dev/video0:/dev/video0:mwr \
      --net=host \
      -e LIBVA_DRIVER_NAME=iHD \
      -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
      -e DISPLAY=$DISPLAY \
      --privileged \
      ghcr.io/pinto0309/openvino2tensorflow:latest
    
