EnforceNotMet: Enforce failed error when calling an optimizer's minimize() in PaddlePaddle
When calling the optimizer's minimize() during training, the following error is raised:

/usr/local/lib/python3.5/dist-packages/paddle/fluid/optimizer.py in minimize(self, loss, startup_program, parameter_list, no_grad_set)
253 """
254 params_grads = append_backward(loss, parameter_list, no_grad_set,
--> 255 [error_clip_callback])
256
257 params_grads = sorted(params_grads, key=lambda x: x[0].name)
/usr/local/lib/python3.5/dist-packages/paddle/fluid/backward.py in append_backward(loss, parameter_list, no_grad_set, callbacks)
588 _rename_grad_(root_block, fwd_op_num, grad_to_var, {})
589
--> 590 _append_backward_vars_(root_block, fwd_op_num, grad_to_var, grad_info_map)
591
592 program.current_block_idx = current_block_idx
/usr/local/lib/python3.5/dist-packages/paddle/fluid/backward.py in _append_backward_vars_(block, start_op_idx, grad_to_var, grad_info_map)
424 # infer_shape and infer_type
425 op_desc.infer_var_type(block.desc)
--> 426 op_desc.infer_shape(block.desc)
427 # ncclInit dones't need to set data_type
428 if op_desc.type() == 'ncclInit':
EnforceNotMet: Enforce failed. Expected dy_dims.size() == rank, but received dy_dims.size():1 != rank:2.
Input(Y@Grad) and Input(X) should have the same rank. at [/paddle/paddle/fluid/operators/cross_entropy_op.cc:82]
PaddlePaddle Call Stacks:
The error is caused by the code below: fluid.layers.cross_entropy returns a per-example loss tensor of shape [batch_size, 1], and passing that rank-2 tensor straight to minimize() makes the backward pass fail the rank check in cross_entropy_op.cc:

cost = fluid.layers.cross_entropy(input=model, label=label)
optimizer = fluid.optimizer.AdamOptimizer(learning_rate=0.001)
opts = optimizer.minimize(cost)
The fix is to reduce the loss to a scalar with fluid.layers.mean() before calling minimize():

cost = fluid.layers.cross_entropy(input=model, label=label)
avg_cost = fluid.layers.mean(cost)
optimizer = fluid.optimizer.AdamOptimizer(learning_rate=0.001)
opts = optimizer.minimize(avg_cost)
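To see why the reduction is needed, here is a minimal NumPy sketch (not the PaddlePaddle API; the probabilities and labels are made-up values) that mirrors the shapes involved: cross_entropy produces one loss per example with shape [batch_size, 1], while minimize() needs a rank-0 scalar, which mean() provides.

```python
import numpy as np

batch_size, num_classes = 4, 3

# Hypothetical softmax outputs from a model, shape [batch_size, num_classes]
probs = np.full((batch_size, num_classes), 1.0 / num_classes)
labels = np.array([[0], [1], [2], [0]])  # shape [batch_size, 1], like fluid's label

# Per-example cross-entropy: -log(prob of the true class), kept as [batch_size, 1]
cost = -np.log(probs[np.arange(batch_size), labels.ravel()]).reshape(-1, 1)
print(cost.shape)      # (4, 1) -- rank 2, one loss per example, not a scalar

# Averaging collapses it to the rank-0 scalar that minimize() expects
avg_cost = cost.mean()
print(avg_cost.shape)  # () -- a single number
```

The same distinction explains the fetch_list behavior discussed next: fetching cost returns a whole batch of loss values, while fetching avg_cost returns one number.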
Also note that if the fetch_list parameter is given cost instead of avg_cost, the training output will be the per-example loss values of the whole batch. It is therefore best to pass avg_cost in fetch_list: the output is then the batch's average loss, a single number that is much more convenient for monitoring training.