
I have set up a new cluster on a single NIC using cephadm, but I have added an extra NIC for a replication cluster_network. This is what I did to configure it, but it didn't work.

$ ceph config set global cluster_network 192.168.1.0/24

View config

$ ceph config get mon cluster_network
192.168.1.0/24
$ ceph config get mon public_network
10.73.3.0/24
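To see what a running daemon has actually picked up (assuming OSD id 1), something like this should also work:

$ ceph config show osd.1 | grep network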

Validate

$ ceph osd metadata 1 | grep addr
    "back_addr": "[v2:10.73.3.191:6812/1317996473,v1:10.73.3.191:6813/1317996473]",
    "front_addr": "[v2:10.73.3.191:6810/1317996473,v1:10.73.3.191:6811/1317996473]",
    "hb_back_addr": "[v2:10.73.3.191:6816/1317996473,v1:10.73.3.191:6817/1317996473]",
    "hb_front_addr": "[v2:10.73.3.191:6814/1317996473,v1:10.73.3.191:6815/1317996473]",

Restarted the OSD daemon via the orchestrator

$ ceph orch restart osd.1

Still no impact

$ ceph osd metadata 1 | grep back_addr
    "back_addr": "[v2:10.73.3.191:6812/1317996473,v1:10.73.3.191:6813/1317996473]",
    "hb_back_addr": "[v2:10.73.3.191:6816/1317996473,v1:10.73.3.191:6817/1317996473]",
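For comparison, restarting the single daemon (rather than the osd service) would presumably be:

$ ceph orch daemon restart osd.1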

1 Answer


Try reconfiguring the daemons:

ceph orch daemon reconfig mon
ceph orch daemon reconfig osd
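If the orchestrator expects a specific daemon name or a service name here, the per-daemon and per-service variants would presumably look like this (assuming a daemon named osd.1):

ceph orch daemon reconfig osd.1
ceph orch reconfig osd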

If the issue persists, you should run this command:

cephadm bootstrap --mon-ip <public_ip> --cluster-network <internal_ip_range>
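With the subnets from the question, that would look roughly like the following; the host IP here is just the one visible in the OSD metadata above, not necessarily the right monitor host:

cephadm bootstrap --mon-ip 10.73.3.191 --cluster-network 192.168.1.0/24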

  • Is it safe to run bootstrap in production? I thought bootstrap was only for initial cluster setup. Commented Jan 23, 2023 at 14:05
  • Well, I haven't done this before. I don't think it wipes OSDs, but don't run it in a production environment; test in a lab first. If you only have RBD pools, you can migrate them easily to a new cluster. Did you try re-configuring? Commented Jan 24, 2023 at 5:37
