Things that went wrong while working through the book "AI Development Starting with PyTorch" (PyTorchではじめるAI開発)
Introduction
Lately I have been studying AI hard through PyTorch.
Of the PyTorch books on my list, only this one is left.
The book came out in June 2021, but, unusually for that time, it does not support Google Colab; everything from setting up the environment to preparing the code is done locally. Unlike on Colab, some parts do not work just by running the code and some parts cannot proceed as written, so I want to summarize those issues here.
Environment
Problems
1. Installing ffmpeg
The book says to install ffmpeg-release-full.zip, but I could not find that file.
I installed ffmpeg-4.2-essentials_build.zip instead.
Also, the "Windows protected your PC" message did not appear, and double-clicking the binary showed no visible change, but running the command below in a command prompt printed ffmpeg's list of options, so it seems to be working fine.
$ ffmpeg -h
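As an extra check, you can also confirm from Python that ffmpeg is callable. This is just a small sketch and only assumes that ffmpeg's bin directory has been added to the PATH:

import subprocess

# Prints the ffmpeg version; fails with FileNotFoundError if ffmpeg is not on the PATH
subprocess.run(['ffmpeg', '-version'], check=True)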
2. Error at "for X, y in data_loader" in chapt02_1.py
(py) C:\Users\jinwa\Desktop\PyTorchではじめるAI開発\chap2>python chapt02_1.py
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to C:\Users\jinwa/.cache\torch\hub\checkpoints\resnet50-19c8e357.pth
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 97.8M/97.8M [00:12<00:00, 8.00MB/s]
test #0 lr=0.001 weight=0.1
test #0 lr=0.001 weight=0.1
Traceback (most recent call last):
File "<string>", line 1, in <module>
Traceback (most recent call last):
File "chapt02_1.py", line 163, in <module>
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\spawn.py", line 105, in spawn_main
for X, y in data_loader: # 画像を読み込んでtensorにする
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\utils\data\dataloader.py", line 355, in __iter__
exitcode = _main(fd)
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\spawn.py", line 114, in _main
return self._get_iterator()
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\utils\data\dataloader.py", line 301, in _get_iterator
prepare(preparation_data)
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\spawn.py", line 225, in prepare
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\utils\data\dataloader.py", line 914, in __init__
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
w.start()
File "C:\Users\jinwa\miniconda3\envs\py\lib\runpy.py", line 263, in run_path
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\process.py", line 112, in start
pkg_name=pkg_name, script_name=fname)
self._popen = self._Popen(self)
File "C:\Users\jinwa\miniconda3\envs\py\lib\runpy.py", line 96, in _run_module_code
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\context.py", line 322, in _Popen
File "C:\Users\jinwa\miniconda3\envs\py\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
return Popen(process_obj)
File "C:\Users\jinwa\Desktop\PyTorchではじめるAI開発\chap2\chapt02_1.py", line 163, in <module>
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
for X, y in data_loader: # 画像を読み込んでtensorにする
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\utils\data\dataloader.py", line 355, in __iter__
reduction.dump(process_obj, to_child)
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\reduction.py", line 60, in dump
return self._get_iterator()
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\utils\data\dataloader.py", line 301, in _get_iterator
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\utils\data\dataloader.py", line 914, in __init__
w.start()
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\popen_spawn_win32.py", line 46, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\Users\jinwa\miniconda3\envs\py\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
I fixed the code by referring to the following article:
Parallel processing using Python's multiprocessing
In short: run any code that uses multiprocessing from inside an if __name__ == '__main__': block.
Since the book's code is not publicly available, I cannot post the fixed version here.
Put everything other than def definitions under if __name__ == '__main__': (that is, move everything except functions and module-level uppercase constants inside the guard), as in the sketch below.
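A minimal sketch of the pattern; the dataset and loader settings here are placeholders, not the book's actual code:

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def make_loader():
    # Placeholder dataset; the book builds its own image dataset here
    dataset = datasets.FakeData(transform=transforms.ToTensor())
    # num_workers > 0 spawns worker processes; on Windows each worker re-imports this module
    return DataLoader(dataset, batch_size=32, shuffle=True, num_workers=2)

def main():
    data_loader = make_loader()
    for X, y in data_loader:  # read the images and turn them into tensors
        print(X.shape, y.shape)
        break

if __name__ == '__main__':
    # On Windows (spawn start method) everything except function and constant
    # definitions has to live under this guard; otherwise each worker re-runs the
    # module top level and triggers the RuntimeError / BrokenPipeError shown above.
    main()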
3. Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same in chapt02_2.py
Running chapt02_2.py produces the following error.
(py) C:\Users\jinwa\Desktop\PyTorchではじめるAI開発\chap2>python chapt02_2.py
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Users\jinwa\miniconda3\envs\py\lib\threading.py", line 926, in _bootstrap_inner
self.run()
File "C:\Users\jinwa\miniconda3\envs\py\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "chapt02_2.py", line 81, in detect
batch_result = model(batch_tensor)
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torchvision\models\resnet.py", line 249, in forward
return self._forward_impl(x)
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torchvision\models\resnet.py", line 232, in _forward_impl
x = self.conv1(x)
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\nn\modules\conv.py", line 399, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\nn\modules\conv.py", line 396, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
When this error occurs the model cannot be used, so no classification results are displayed.
I added one line to the code, following this article:
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
chapt02_2.py
# Load the saved model
model = models.resnet50(pretrained=False)
model.fc = nn.Linear(2048, 2)
model.load_state_dict(torch.load('chapt02-model1.pth', map_location=torch.device(USE_DEVICE)))
model.cuda() # added
# Set the model up for inference
model.eval()
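The cause is that the input batch is moved to the GPU while the freshly loaded weights stay on the CPU. A slightly more device-agnostic variant of the same fix, as a sketch (USE_DEVICE is defined here just for the sketch; the book's script defines its own):

import torch
from torch import nn
from torchvision import models

USE_DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'

# Load the saved model and move it to the same device the input tensors will use
model = models.resnet50(pretrained=False)
model.fc = nn.Linear(2048, 2)
model.load_state_dict(torch.load('chapt02-model1.pth', map_location=USE_DEVICE))
model = model.to(USE_DEVICE)  # works on both CPU-only and GPU machines
model.eval()

# The input batch must be on the same device before the forward pass:
# batch_result = model(batch_tensor.to(USE_DEVICE))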
4. No module named 'seaborn' in chapt04_1
Although the book does not mention it, seaborn needs to be installed.
Install it from the Anaconda Prompt with the following command.
$ pip install seaborn
5. Input shape error in chapt04_2.py
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Users\jinwa\miniconda3\envs\py\lib\threading.py", line 926, in _bootstrap_inner
self.run()
File "C:\Users\jinwa\miniconda3\envs\py\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "chapt04_2.py", line 64, in detect
results = model(img_tensor, size=640)
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\jinwa/.cache\torch\hub\ultralytics_yolov5_master\models\common.py", line 317, in forward
return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\jinwa/.cache\torch\hub\ultralytics_yolov5_master\models\yolo.py", line 126, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "C:\Users\jinwa/.cache\torch\hub\ultralytics_yolov5_master\models\yolo.py", line 149, in _forward_once
x = m(x) # run
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\jinwa/.cache\torch\hub\ultralytics_yolov5_master\models\common.py", line 48, in forward_fuse
return self.act(self.conv(x))
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\nn\modules\conv.py", line 399, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\jinwa\miniconda3\envs\py\lib\site-packages\torch\nn\modules\conv.py", line 396, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [32, 3, 6, 6], expected input[1, 720, 1280, 3] to have 3 channels, but got 720 channels instead
I am asking about this by email.
A reply came, but it still does not behave as expected, and the code after this point does not run either.
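For reference, the error means the frame is handed to the model in height x width x channel order (720 x 1280 x 3), so the first convolution sees 720 "channels". One thing that might be worth trying, purely as a sketch (I have not checked it against the book's code, and the frame-reading part below is an assumption): pass the raw numpy image to the hub model and let its AutoShape wrapper handle resizing and channel ordering.

import torch
import cv2

# Load the small pretrained YOLOv5 model from torch hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

frame = cv2.imread('sample.jpg')      # hypothetical H x W x 3 BGR numpy array
frame_rgb = frame[:, :, ::-1]         # BGR -> RGB
results = model(frame_rgb, size=640)  # AutoShape converts to N x C x H x W internally
results.print()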
6. Cannot extract the COCO dataset for chapt05_1
I do not know why, but Lhaplus failed to extract the archive.
7-Zip was able to extract it.
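Extracting from Python is another option, as a sketch (the archive name below is a placeholder for whichever COCO zip the chapter downloads):

import zipfile

# Python's zipfile supports zip64, so even the large COCO archives extract fine
with zipfile.ZipFile('val2017.zip') as zf:
    zf.extractall('coco')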
Closing
With Colab you only have to run the cells, but this time I used an Anaconda environment, so things did not go as smoothly as the book makes it look.
I would love to see Colab support, but even downloading the code requires a password, so that seems difficult.
I suspect this keeps some people from studying with the book, so I will keep writing up these issues so that readers tackling it for the first time do not stumble.
I do wonder whether I can get through it without giving up; that worries me a little.