Replies: 3 comments 1 reply
-
Multipath is not related to the network setup; it is used for FC and iSCSI at the same time. iscsiadm can be configured in two ways.
Example: cat /etc/network/interfaces
auto ens1f2np2
iface ens1f2np2 inet manual
auto ens1f3np3
iface ens1f3np3 inet manual
auto bond0
iface bond0 inet manual
bond-slaves ens1f2np2 ens1f3np3
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer3+4
mtu 9000
bond_lacp_rate fast
bond_updelay 200
bond_downdelay 200
auto bond0.1
iface bond0.1 inet manual
mtu 9000
auto bond0.2
iface bond0.2 inet manual
mtu 9000
auto bond0.3
iface bond0.3 inet manual
mtu 9000
auto bond0.23
iface bond0.23 inet manual
mtu 9000
auto vmbr1
iface vmbr1 inet static
address xx.xx.1.1/24
bridge-ports bond0.1
bridge-stp off
bridge-fd 0
mtu 9000
auto vmbr2
iface vmbr2 inet static
address xx.xx.2.1/24
bridge-ports bond0.2
bridge-stp off
bridge-fd 0
mtu 9000
auto vmbr3
iface vmbr3 inet static
address xx.xx.3.1/24
bridge-ports bond0.3
bridge-stp off
bridge-fd 0
mtu 9000
auto vmbr4
iface vmbr4 inet static
address xx.xx.4.1/24
bridge-ports bond0.23
bridge-stp off
bridge-fd 0
mtu 9000
PS: vmbr can be replaced with an alias or a virtual interface.
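As a quick sanity check of a setup like the one above (assuming the bond and VLAN names from this example, and an assumed peer address in place of the xx.xx placeholders), the LACP state, VLAN tagging, and jumbo frames can be verified with standard tools:

```shell
# Show LACP negotiation state and the status of both slave NICs
cat /proc/net/bonding/bond0

# Confirm the VLAN ID and MTU on one of the tagged sub-interfaces
ip -d link show bond0.1

# Verify jumbo frames actually pass end to end:
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids fragmentation
ping -M do -s 8972 xx.xx.1.2
```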
It is recommended to use a different subnet for each interface. There are many issues when everything is in one subnet; it is hard for the network devices to hash and balance that kind of traffic.
PSS: fio -ioengine=libaio -direct=1 -sync=1 -name=test -bs=4k -iodepth=1 -rw=randwrite -runtime=600 -filename=/dev/disk/by-id/wwn-0x624a93702xxxxxxxx --size=100G
I'm always testing speed both on the hypervisor and inside QEMU, because of these layers and how the disk semaphores work.
-
If the VM is off, the disk becomes unmapped. This is done to support live migration and to avoid the LUN limit.
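Once the disk is mapped again (VM started), the path state can be inspected with the usual multipath tooling; this is a generic sketch, not output from this specific setup:

```shell
# List multipath devices and the state of every path (active/enabled, ready/failed)
multipath -ll

# Re-scan existing iSCSI sessions for newly mapped LUNs without rebooting
iscsiadm -m session --rescan
```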
-
Hello, not directly related to the plugin, but maybe something the plugin can help set up.
Currently, when using multipathing, it only uses one of the available interfaces. Is there a way to get same-subnet multipathing working easily?
Following this guide: https://www.suse.com/c/configure-multiple-iscsi-ifaces-and-use-multipathing/
(And this) https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/iface-config-software-iscsi#iface-config-software-iscsi
I have successfully got multipath to see both interfaces, but I'm not 100% confident it is actually utilizing them, as IOPS and bandwidth are both the same before and after making the change.
I'd expect a significant performance increase if this worked. We have a physically identical machine we are using to test ESXi vs. Proxmox, and due to the way ESXi can do port binding, it gets almost a 2x speed and IOPS increase over Proxmox on identical hardware and workload tests.
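For reference, the iface binding described in those two guides boils down to something like the following. The NIC names, iface names, and target portal address here are placeholders, and for same-subnet paths reverse-path filtering usually has to be relaxed, as the SUSE article describes:

```shell
# Create one iscsiadm iface per physical NIC and bind each to its device
iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface1 --op=new
iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1

# Same-subnet paths: loosen reverse-path filtering so replies can leave
# through the interface they are bound to
sysctl -w net.ipv4.conf.eth0.rp_filter=2
sysctl -w net.ipv4.conf.eth1.rp_filter=2

# Discover and log in through both ifaces; each login creates its own
# session, which shows up as a separate path in `multipath -ll`
iscsiadm -m discovery -t st -p 192.168.1.100:3260 -I iface0 -I iface1
iscsiadm -m node -L all
```

With two sessions established, `multipath -ll` should then show two paths per LUN; whether both carry traffic depends on the path grouping policy in multipath.conf.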