Some Ceph problems and solutions
1. Question:
# ceph health
HEALTH_WARN application not enabled on 1 pool(s)
Solution:
# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
application not enabled on pool 'kube'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
# ceph osd pool application enable kube rbd
enabled application 'rbd' on pool 'kube'
# ceph health
HEALTH_OK
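To double-check the result, the application tag on the pool can be queried directly; a minimal sketch, using the 'kube' pool from above:
#### list the application tags now set on the pool:
# ceph osd pool application get kube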
2. Question:
# ceph -s
cluster:
id: e781a2e4-097d-4867-858d-bdbd3a264435
health: HEALTH_WARN
clock skew detected on mon.ceph02, mon.ceph03
Solution:
#### Check that NTP is running and the clocks are synchronized on every node:
# systemctl status ntpd
#### Relax the clock-skew thresholds in the ceph configuration:
# vim /etc/ceph/ceph.conf
### add under the [global] section:
mon clock drift allowed = 2
mon clock drift warn backoff = 30
#### Push the configuration file to the mon nodes:
# cd /etc/ceph/
# ceph-deploy --overwrite-conf config push ceph{01..03}
#### Restart the mon service on each mon node:
# systemctl restart ceph-mon.target
# ceph -s
cluster:
id: e781a2e4-097d-4867-858d-bdbd3a264435
health: HEALTH_OK
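If the skew warning comes back after the restart, it can help to check time synchronization from both the NTP side and the Ceph side. A minimal sketch, assuming ntpd/ntpq are in use as above:
#### confirm each node is actually synced to an NTP peer:
# ntpq -p
#### ask the monitors for their own view of clock skew:
# ceph time-sync-status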
3. Question:
# rbd map abc/zhijian --id admin
rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
Solution:
The map failed because the kernel rbd module does not support some of the features enabled on the image (exclusive-lock, object-map, fast-diff, deep-flatten). Disable the unsupported features, leaving only layering:
# rbd feature disable abc/zhijian exclusive-lock object-map fast-diff deep-flatten
# rbd info abc/zhijian
rbd image 'zhijian':
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.1011074b0dc51
format: 2
features: layering
flags:
create_timestamp: Sun May 6 13:35:21 2018
# rbd map abc/zhijian --id admin
/dev/rbd0
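To avoid the mismatch on images created later, the unsupported features can simply be left off at creation time. A sketch, assuming the same 'abc' pool; 'newimage' is a placeholder name:
#### create an image with only the layering feature ('newimage' is a placeholder):
# rbd create abc/newimage --size 1024 --image-feature layering
#### or make it the default for new images in /etc/ceph/ceph.conf, under [global] (1 = layering only):
rbd default features = 1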
4. Question:
# ceph osd pool delete cephfs_data
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool cephfs_data. If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
# ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
Solution:
# tail -n 2 /etc/ceph/ceph.conf
[mon]
mon allow pool delete = true
Push the configuration file to the mon nodes so they are in sync:
# cd /etc/ceph/
# ceph-deploy --overwrite-conf config push ceph{01..03}
Restart the mon service and verify:
# systemctl restart ceph-mon.target
# ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
pool 'cephfs_data' removed
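As an alternative to editing ceph.conf and restarting, the option can usually be injected into the running monitors; a sketch (the injected value is lost on the next monitor restart unless it is also written to ceph.conf):
# ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'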
5. Question:
# ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
Error EBUSY: pool 'cephfs_data' is in use by CephFS
Solution:
# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
# ceph fs rm cephfs --yes-i-really-mean-it
Error EINVAL: all MDS daemons must be inactive before removing filesystem
# systemctl stop ceph-mds.target
# ceph fs rm cephfs
Error EPERM: this is a DESTRUCTIVE operation and will make data in your filesystem permanently inaccessible. Add --yes-i-really-mean-it if you are sure you wish to continue.
# ceph fs rm cephfs --yes-i-really-mean-it
# ceph fs ls
No filesystems enabled
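With the filesystem gone, the pools from this problem can now be removed (this still requires mon_allow_pool_delete = true as set up in problem 4):
# ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
# ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it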
6. Question:
Create a pod using a static PV; the pod stays stuck in the ContainerCreating state:
# kubectl get pod ceph-pod1
NAME READY STATUS RESTARTS AGE
ceph-pod1 0/1 ContainerCreating 0 10s
......
# kubectl describe pod ceph-pod1
Warning FailedMount 41s (x8 over 1m) kubelet, node01 MountVolume.WaitForAttach failed for volume "ceph-pv" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: ()
Warning FailedMount 0s kubelet, node01 Unable to mount volumes for pod "ceph-pod1_default(14e3a07d-93a8-11e8-95f6-000c29b1ec26)": timeout expired waiting for volumes to attach or mount for pod "default"/"ceph-pod1". list of unmounted volumes=[ceph-vol1]. list of unattached volumes=[ceph-vol1 default-token-v9flt]
Solution: Install the latest version of ceph-common on the worker node (node01). The Ceph cluster runs the latest Mimic release, while the ceph-common package shipped in the distribution's base repository is too old, which is what causes this error.
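A quick way to spot the version mismatch on the node; a sketch, assuming an RPM-based system with the Ceph Mimic repository already configured:
#### version of ceph-common currently installed on the node:
# rpm -q ceph-common
#### version the cluster is running (run on a ceph node):
# ceph --version
#### upgrade ceph-common from the Mimic repository:
# yum install -y ceph-common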
7. Question:
Create a dynamic PV; the PVC stays in the Pending state:
# kubectl get pvc -n ceph
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ceph-pvc Pending ceph-rbd 2m
# kubectl describe pvc -n ceph
......
Warning ProvisioningFailed 27s persistentvolume-controller Failed to provision volume with StorageClass "ceph-rbd": failed to create rbd image: exit status 1, command output: 2018-07-31 11:10:33.395991 7faa3558b7c0 -1 did not load config file, using default settings.
rbd: extraneous parameter --image-feature
Solution: The persistentvolume-controller runs as part of kube-controller-manager on the master node, so the master node also needs the up-to-date ceph-common package installed.
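The same check applies on the master; a sketch, under the same assumptions as above:
#### install the matching ceph-common on the master, then confirm the rbd binary is on the PATH:
# yum install -y ceph-common
# which rbd
# rbd --version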