A summary of Ceph monitor failures and OSD disk breakage caused by an IP change

The company moved offices and every server's IP address changed. After reconfiguring the IPs on the Ceph servers and starting them, the monitor process would not come up: it kept trying to connect to the old IP addresses, which of course could never succeed. At first I assumed the servers' IP configuration was at fault and tried changing the hostname, ceph.conf, and so on, with no result. Digging further, I found that the IP addresses in the monmap were still the old ones; since Ceph reads the monmap to start the monitor process, the monmap itself has to be modified. The procedure is as follows:
#Retrieve the current monitor map (it still holds the old addresses)
# ceph mon getmap -o monmap.bin

#Check its contents
# monmaptool --print monmap.bin

#Create a new monmap with the new monitor addresses
# monmaptool --create --add mon0 192.168.32.2:6789 --add osd1 192.168.32.3:6789 \
  --add osd2 192.168.32.4:6789 --fsid 61a520db-317b-41f1-9752-30cedc5ffb9a \
  --clobber monmap

#Check the new contents
# monmaptool --print monmap

#Inject the new monmap into each monitor (with the monitor daemons stopped)
# ceph-mon -i mon0 --inject-monmap monmap
# ceph-mon -i osd1 --inject-monmap monmap
# ceph-mon -i osd2 --inject-monmap monmap
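
One note, not part of the steps above: if the monitors cannot form a quorum (likely here, since they were still bound to the old addresses), ceph mon getmap may simply hang. In that case the map can be extracted directly from a stopped monitor instead; a rough sketch, using mon0 as the example:
#Extract the monmap from a stopped monitor (alternative to "ceph mon getmap")
# service ceph stop mon.mon0
# ceph-mon -i mon0 --extract-monmap monmap.bin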

Restart the monitors and everything is back to normal.
However, the broken OSD disk problem described in the previous post showed up again. After digging around, the Ceph site says it is a Ceph bug. With no way to repair it, I removed this OSD and set it up again.
# service ceph stop osd.4
# ceph osd crush remove osd.4
# ceph auth del osd.4
# ceph osd rm 4
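
Before reformatting the disk it is worth double-checking that the OSD is really gone from the CRUSH map and the auth database; a quick sanity check (output omitted, it differs per cluster):
# ceph osd tree
# ceph auth list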

# umount /cephmp1
# mkfs.xfs -f /dev/sdc
# mount /dev/sdc /cephmp1
# create osd
# ceph-deploy osd prepare osd2:/cephmp1:/dev/sdf1
# ceph-deploy osd activate osd2:/cephmp1:/dev/sdf1
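
While the new OSD backfills, the rebalancing can be watched live; a sketch of the usual commands (not output from this run):
# ceph -w
# ceph health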
Once this is done, restart the OSD; it came up successfully and Ceph automatically rebalanced the data. The final state is as follows:
[root@osd2 ~]# ceph -s
    cluster 61a520db-317b-41f1-9752-30cedc5ffb9a
     health HEALTH_WARN 9 pgs incomplete; 9 pgs stuck inactive; 9 pgs stuck unclean; 3 requests are blocked > 32 sec
     monmap e3: 3 mons at {mon0=192.168.32.2:6789/0,osd1=192.168.32.3:6789/0,osd2=192.168.32.4:6789/0}, election epoch 76, quorum 0,1,2 mon0,osd1,osd2
     osdmap e689: 6 osds: 6 up, 6 in
      pgmap v189608: 704 pgs, 5 pools, 34983 MB data, 8966 objects
            69349 MB used, 11104 GB / 11172 GB avail
                 695 active+clean
                   9 incomplete

Nine PGs are left in the incomplete state:
[root@osd2 ~]# ceph health detail
HEALTH_WARN 9 pgs incomplete; 9 pgs stuck inactive; 9 pgs stuck unclean; 3 requests are blocked > 32 sec; 1 osds have slow requests
pg 5.95 is stuck inactive for 838842.634721, current state incomplete, last acting [1,4]
pg 5.66 is stuck inactive since forever, current state incomplete, last acting [4,0]
pg 5.de is stuck inactive for 808270.105968, current state incomplete, last acting [0,4]
pg 5.f5 is stuck inactive for 496137.708887, current state incomplete, last acting [0,4]
pg 5.11 is stuck inactive since forever, current state incomplete, last acting [4,1]
pg 5.30 is stuck inactive for 507062.828403, current state incomplete, last acting [0,4]
pg 5.bc is stuck inactive since forever, current state incomplete, last acting [4,1]
pg 5.a7 is stuck inactive for 499713.993372, current state incomplete, last acting [1,4]
pg 5.22 is stuck inactive for 496125.831204, current state incomplete, last acting [0,4]
pg 5.95 is stuck unclean for 838842.634796, current state incomplete, last acting [1,4]
pg 5.66 is stuck unclean since forever, current state incomplete, last acting [4,0]
pg 5.de is stuck unclean for 808270.106039, current state incomplete, last acting [0,4]
pg 5.f5 is stuck unclean for 496137.708958, current state incomplete, last acting [0,4]
pg 5.11 is stuck unclean since forever, current state incomplete, last acting [4,1]
pg 5.30 is stuck unclean for 507062.828475, current state incomplete, last acting [0,4]
pg 5.bc is stuck unclean since forever, current state incomplete, last acting [4,1]
pg 5.a7 is stuck unclean for 499713.993443, current state incomplete, last acting [1,4]
pg 5.22 is stuck unclean for 496125.831274, current state incomplete, last acting [0,4]
pg 5.de is incomplete, acting [0,4]
pg 5.bc is incomplete, acting [4,1]
pg 5.a7 is incomplete, acting [1,4]
pg 5.95 is incomplete, acting [1,4]
pg 5.66 is incomplete, acting [4,0]
pg 5.30 is incomplete, acting [0,4]
pg 5.22 is incomplete, acting [0,4]
pg 5.11 is incomplete, acting [4,1]
pg 5.f5 is incomplete, acting [0,4]
2 ops are blocked > 8388.61 sec
1 ops are blocked > 4194.3 sec
2 ops are blocked > 8388.61 sec on osd.0
1 ops are blocked > 4194.3 sec on osd.0
1 osds have slow requests
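
For the "requests are blocked" warning, the OSD admin socket can show which operations are stuck; a sketch for osd.0, run on the node that hosts it and assuming the default admin socket setup:
# ceph daemon osd.0 dump_ops_in_flight
# ceph daemon osd.0 dump_historic_ops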

Another round of searching turned up nothing. A remark from someone who hit the same problem:
I already tried "ceph pg repair 4.77", stop/start OSDs, "ceph osd lost", "ceph pg force_create_pg 4.77".
Most scary thing is "force_create_pg" does not work. At least it should be a way to wipe out a incomplete PG
without destroying a whole pool.

I tried all of the methods above and none of them worked. It is unresolved for now, which is a real pitfall.
PS: common PG operations
[root@osd2 ~]# ceph pg map 5.de
osdmap e689 pg 5.de (5.de) -> up [0,4] acting [0,4]
[root@osd2 ~]# ceph pg 5.de query
[root@osd2 ~]# ceph pg scrub 5.de
instructing pg 5.de on osd.0 to scrub
[root@osd2 ~]# ceph pg 5.de mark_unfound_lost revert
pg has no unfound objects
#ceph pg dump_stuck stale
#ceph pg dump_stuck inactive
#ceph pg dump_stuck unclean
[root@osd2 ~]# ceph osd lost 1
Error EPERM: are you SURE?  this might mean real, permanent data loss.  pass --yes-i-really-mean-it if you really do.
[root@osd2 ~]# 
[root@osd2 ~]# ceph osd lost 4 --yes-i-really-mean-it
osd.4 is not down or doesn't exist
[root@osd2 ~]# service ceph stop osd.4
=== osd.4 === 
Stopping Ceph osd.4 on osd2...kill 22287...kill 22287...done
[root@osd2 ~]# ceph osd lost 4 --yes-i-really-mean-it
marked osd lost in epoch 690
[root@osd1 mnt]# ceph pg repair 5.de
instructing pg 5.de on osd.0 to repair
[root@osd1 mnt]# ceph pg repair 5.de
instructing pg 5.de on osd.0 to repair
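
For reference, the force_create_pg command mentioned in the quote above takes a PG id; a sketch against one of the incomplete PGs here (as the quote says, it did not help in this case):
# ceph pg force_create_pg 5.de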
