Work doc and bgp scalability test (#5) (sonic-net#9214)
What is the motivation for this PR?
The primary motivation for these modifications was to set up a virtual machine (VM) topology and ensure that 'test_bgp_scalability.py' runs smoothly on both VM and hardware setups.

How did you do it?
These improvements and fixes were achieved by adjusting the indexing in the 'bgp_test_gap_helper.py' file, adding logic to disable the 'EnableDataPlaneEventsRateMonitor' for VM setups, and fixing attribute errors.

How did you verify/test it?
The 'test_bgp_scalability.py' was run on both hardware and VM setups. Both setups passed the test as expected.

Any platform specific information?
IxNetwork Application Version: 9.30.2212.22 on a virtual machine, used for both VM and HW setups.
IxChassis Application Version: 9.21 on a virtual machine and for the hardware setup.
Application Version: 9.30 on two Load Module virtual machines attached to a virtual IxChassis.
DUT SONiC-OS-202205.302215-e8e8c019c for VM setup.
DUT SONiC-OS-202205.262571-f2a687b33 for the hardware setup.

Documentation
Instructions on setting up a VM topology for testing have been added.
docs/testbed/README.testbed.Keysight.md
docs/testbed/README.testbed.Setup.md
tigrmeli authored Aug 3, 2023
1 parent d94858c commit 05529ef
Showing 5 changed files with 198 additions and 37 deletions.
147 changes: 147 additions & 0 deletions docs/testbed/README.testbed.Keysight.md
@@ -17,6 +17,8 @@ Based on test need there may be multiple topologies possible as shown below :
- Multiple IxNetwork Topology
![](img/multiple-ixnetwork.PNG)

## Virtual Topology
![](img/IxNetwork_Virtual_Topology.png)
## Topology Description

### Ixia Chassis (IxNetwork)
@@ -91,3 +93,148 @@ Note: The folders within /opt/container/one/ should be created with read and
```

6. Launch IxNetworkWeb in a browser: `https://<container ip>`
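If you are unsure which IP address the IxNetworkWeb container ended up with, a quick check from the host is sketched below; `ixnetworkweb` is a placeholder for whatever name was given to the container when it was created.
```
# List running containers and find the IxNetworkWeb one
docker ps
# Print the container's IP address (replace "ixnetworkweb" with your container name)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ixnetworkweb
```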


## Additional steps required for the Virtual Topology
### Deploy Ixia_Virtual_Chassis

1. Download Ixia_Virtual_Chassis image from:
https://downloads.ixiacom.com/support/downloads_and_updates/public/IxVM/9.30/9.30.0.328/Ixia_Virtual_Chassis_9.30_KVM.qcow2.tar.bz2
2. Start the VM:

The example assumes the image is located in /vms:
```
cd /vms
sudo tar xjf Ixia_Virtual_Chassis_9.30_KVM.qcow2.tar.bz2
virt-install --name IxChassis --memory 16000 --vcpus 8 --disk /vms/Ixia_Virtual_Chassis_9.30_KVM.qcow2,bus=sata --import --os-variant centos7.0 --network bridge=br1,model=virtio --noautoconsole
```
3. If a DHCP server is present, the assigned IP address can be observed on the VM console:
```
Welcome to Ixia Virtual Chassis
CentOS Linux 7
Kernel 3.10 on x86_64
Management IPv4: 10.36.78.217/22
IxOS Version: 9.30.3001.12
IxNetwork Protocol Version: 9.30.2212.1
```
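The banner above is printed on the VM console. A minimal sketch of how to reach that console, or to ask libvirt which address the VM picked up (the domain name `IxChassis` matches the virt-install command above; `--source arp` requires a reasonably recent libvirt):
```
# Attach to the VM console to watch it boot (detach with Ctrl+])
virsh console IxChassis

# Ask libvirt which IP addresses it has observed for the VM's interfaces
virsh domifaddr IxChassis --source arp
```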
Note: If the Ixia Virtual Chassis does not obtain an IP address from the DHCP server, the following solutions might help:
- Disable the firewall
```
sudo ufw disable
```
- Instead of the command in step 2
```
virt-install --name IxChassis --memory 16000 --vcpus 8 --disk /vms/Ixia_Virtual_Chassis_9.30_KVM.qcow2,bus=sata --import --os-variant centos7.0 --network bridge=br1,model=virtio --noautoconsole
```
Try this instead
```
virt-install --name IxChassis --memory 16000 --vcpus 8 --disk /vms/Ixia_Virtual_Chassis_9.30_KVM.qcow2,bus=sata --import --osinfo detect=on,require=off --network bridge=br1,model=virtio --noautoconsole
```

### Deploy two Ixia Virtual Load Modules
#### Prerequisite
1. For PCI passthrough, SR-IOV and IOMMU must be enabled in the BIOS.
2. On an Ubuntu server, edit the file /etc/default/grub and add the arguments "intel_iommu=on iommu=pt" to the GRUB_CMDLINE_LINUX_DEFAULT line:
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt"
```
Example of the resulting file:
```
GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt"
GRUB_CMDLINE_LINUX=""
```
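Editing /etc/default/grub alone is not enough; the change has to be applied and the host rebooted. The remaining steps, with a quick sanity check that the IOMMU is active:
```
# Regenerate the GRUB configuration and reboot the host
sudo update-grub
sudo reboot

# After the reboot, confirm the IOMMU is enabled
dmesg | grep -e DMAR -e IOMMU
ls /sys/kernel/iommu_groups/
```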

#### Identify the PCI device designated for passthrough to the Load Modules
1. Get the PCI address of the device designated for passthrough:
```
lspci | grep Ethernet
```
Example output:
```
04:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
05:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
21:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
21:00.1 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
```
So in this case the devices designated for passthrough to the Load Modules are:

21:00.0 for Load Module 1 (virt-install requires a different syntax: 21:00.0 -> pci_0000_21_00_0)

21:00.1 for Load Module 2 (virt-install requires a different syntax: 21:00.1 -> pci_0000_21_00_1)
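Before running virt-install, you can confirm that libvirt already knows the devices under these names; a quick check, using the example addresses above:
```
# List PCI node devices known to libvirt and filter for the ConnectX-4 ports
virsh nodedev-list --cap pci | grep pci_0000_21_00

# Show details (driver, IOMMU group, etc.) of one of the devices
virsh nodedev-dumpxml pci_0000_21_00_0
```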


#### Load Module 1
1. Download Ixia_Load_Module image from:
https://downloads.ixiacom.com/support/downloads_and_updates/public/IxVM/9.30/9.30.0.328/Ixia_Virtual_Load_Module_IXN_9.30_KVM.qcow2.tar.bz2
2. Start the VM:

The example assumes the image is located in /vms:
```
cd /vms
sudo tar xjf Ixia_Virtual_Load_Module_IXN_9.30_KVM.qcow2.tar.bz2
mv Ixia_Virtual_Load_Module_IXN_9.30_KVM.qcow2 IxLM1.qcow2
# Change pci_0000_21_00_0 below to your device from the "Identify the PCI device designated for passthrough to the Load Modules" section
sudo virt-install --name IxLM1 \
--ram 4096 \
--vcpus 4 \
--network bridge=br1,model=virtio \
--host-device=pci_0000_21_00_0 \
--serial pty \
--serial unix,path=/tmp/Virtual_Load_Module_1 \
--disk path=/vms/IxLM1.qcow2,device=disk,bus=sata,format=qcow2 \
--channel unix,target_type=virtio,name=org.qemu.guest_agent.0 \
--boot hd \
--vnc \
--noautoconsole \
--osinfo detect=on,require=off \
--force
```
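Once the command completes, it is worth checking that the VM was defined with the PCI device attached; a short sketch using the names from above:
```
# Confirm the VM is defined and running
virsh list --all | grep IxLM1

# Verify that the passthrough PCI device appears in the domain XML
virsh dumpxml IxLM1 | grep -A 5 hostdev
```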
3. If a DHCP server is present, the assigned IP address can be observed on the VM console:
```
Welcome to Ixia Virtual Load Module
CentOS Linux 7
Kernel 3.10 on x86_64
Management IPv4: 10.36.78.31/22
IxOS Version: 9.30.3001.12
IxVM Status: Active: activating (start) since Fri 2023-06-16 13:54:35 PDT; 1s ago
```

#### Load Module 2
1. Start the VM:

The example assumes the image is located in /vms:
```
cd /vms
sudo tar xjf Ixia_Virtual_Load_Module_IXN_9.30_KVM.qcow2.tar.bz2
mv Ixia_Virtual_Load_Module_IXN_9.30_KVM.qcow2 IxLM2.qcow2
# Change pci_0000_21_00_1 below to your device from the "Identify the PCI device designated for passthrough to the Load Modules" section
sudo virt-install --name IxLM2 \
--ram 4096 \
--vcpus 4 \
--network bridge=br1,model=virtio \
--host-device=pci_0000_21_00_1 \
--serial pty \
--serial unix,path=/tmp/Virtual_Load_Module_2 \
--disk path=/vms/IxLM2.qcow2,device=disk,bus=sata,format=qcow2 \
--channel unix,target_type=virtio,name=org.qemu.guest_agent.0 \
--boot hd \
--vnc \
--noautoconsole \
--osinfo detect=on,require=off \
--force
```
2. If a DHCP server is present, the assigned IP address can be observed on the VM console:
```
Welcome to Ixia Virtual Load Module
CentOS Linux 7
Kernel 3.10 on x86_64
Management IPv4: 10.36.78.219/22
IxOS Version: 9.30.3001.12
IxVM Status: Active: activating (start) since Fri 2023-06-16 16:42:40 PDT; 1s ago
```
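With both load modules running, the two PCI functions should be bound to the vfio-pci driver on the host. A quick way to confirm, using the example addresses 21:00.0 and 21:00.1 from earlier:
```
# "Kernel driver in use: vfio-pci" indicates the function is handed to a VM
lspci -k -s 21:00.0
lspci -k -s 21:00.1
```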
9 changes: 7 additions & 2 deletions docs/testbed/README.testbed.Setup.md
@@ -56,6 +56,11 @@ This document describes the steps to set up the testbed and deploy a topology.
- reboot
- at minimum, terminate the ssh connection or log out and log back in
- this is needed for the permissions to be updated, otherwise the next step will fail
- Disable the firewall (optional)
```
sudo ufw disable
```
## Download a cEOS VM image
We use EOS-based VMs or SONiC VMs to simulate neighboring devices in both virtual and physical testbeds. You can use a vEOS or SONiC image for the neighbor devices; these methods are described in [vEOS (KVM-based) image](https://github.com/sonic-net/sonic-mgmt/blob/master/docs/testbed/README.testbed.VsSetup.md#option-1-veos-kvm-based-image) and [SONiC image](https://github.com/sonic-net/sonic-mgmt/blob/master/docs/testbed/README.testbed.VsSetup.md#option-3-use-sonic-image-as-neighboring-devices). For the physical testbed, however, we recommend cEOS because **it consumes less memory and interacts less with the kernel**. Using cEOS as the neighbor devices requires several steps.
1. Pull debian jessie
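The pull step itself is collapsed in this diff; it presumably amounts to something like the following (image tag assumed from the step title):
```
docker pull debian:jessie
```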
@@ -190,8 +195,8 @@ Managing the testbed and running tests requires various dependencies to be insta
5. Create a `docker-sonic-mgmt` container. Note that you must mount your clone of `sonic-mgmt` inside the container to access the deployment and testing scripts:
```
 docker load < docker-sonic-mgmt.gz
-docker run -v $PWD:/data -it docker-sonic-mgmt bash
-cd /data/sonic-mgmt
+docker run -v $PWD:/var/AzDevOps -it docker-sonic-mgmt bash
+cd /var/AzDevOps/sonic-mgmt
```
**NOTE: From this point on, all steps are run inside the `docker-sonic-mgmt` container.**
Binary file added docs/testbed/img/IxNetwork_Virtual_Topology.png
3 changes: 3 additions & 0 deletions tests/conftest.py
@@ -584,6 +584,9 @@ def fanouthosts(ansible_adhoc, conn_graph_facts, creds, duthosts):     # noqa F
elif os_type == 'eos':
fanout_user = creds.get('fanout_network_user', None)
fanout_password = creds.get('fanout_network_password', None)
elif os_type == 'snappi':
fanout_user = creds.get('fanout_network_user', None)
fanout_password = creds.get('fanout_network_password', None)
else:
# when os is mellanox, not supported
pytest.fail("os other than sonic and eos not supported")