Did you test it on the latest FRRouting/frr master branch?
To Reproduce
The issue is intermittent and not easily reproducible.
Expected behavior
We have IPv4 and IPv6 BGP peering between two machines: one is a Linux-based VM running FRR and the other is a hardware router. Per-VRF BGP router instances are configured on both, with two v4 and two v6 peers each. In most cases peering works fine and all peers reach the Established state. However, in rare cases, after restarting FRR, BGP peering does not come up and the peers stay in the Active/Connect state. We have tried increasing logging to see which neighbor events are occurring, but nothing useful shows up there either. We have verified that all interfaces are up, the peers are reachable, and the configuration is correct.
As a recovery step, restarting FRR again brings the BGP peering up successfully.
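For reference, the checks we run when a peer is stuck are roughly along these lines (standard FRR vtysh show commands plus Linux socket inspection; the Linux VRF device is assumed here to be named vrf1, and the neighbor address is just one of our peers as an example):
show bgp vrf vrf1 ipv4 unicast summary
show bgp vrf vrf1 ipv6 unicast summary
show bgp vrf vrf1 neighbors 192.168.40.45
ping -I vrf1 192.168.40.45
ss -tnp | grep ':179'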
Below is a snapshot of the configuration:
zebra.conf
interface BondHostIf.1784 vrf vrf1
ip address 192.168.40.16/24
ip address fdfd:fca4:3ba0:1784:192:168:40:16/64
!
bgpd.conf
router bgp 65000 vrf vrf1
bgp router-id 192.168.40.16
neighbor 192.168.40.45 remote-as 65000
neighbor 192.168.40.46 remote-as 65000
neighbor fdfd:fca4:3ba0:1784:192:168:40:45 remote-as 65000
neighbor fdfd:fca4:3ba0:1784:192:168:40:46 remote-as 65000
no bgp network import-check
!
address-family ipv4 unicast
redistribute connected
network 100.64.0.0/12
neighbor 192.168.40.45 activate
neighbor 192.168.40.45 prefix-list vrf1_DENY_IN_V4 in
neighbor 192.168.40.45 prefix-list vrf1_ALLOW_OUT_V4 out
neighbor 192.168.40.46 activate
neighbor 192.168.40.46 prefix-list vrf1_DENY_IN_V4 in
neighbor 192.168.40.46 prefix-list vrf1_ALLOW_OUT_V4 out
exit-address-family
!
address-family ipv6 unicast
redistribute connected
network 2001:5b0:9800::/44
neighbor fdfd:fca4:3ba0:1784:192:168:40:45 activate
neighbor fdfd:fca4:3ba0:1784:192:168:40:45 prefix-list vrf1_DENY_IN_V6 in
neighbor fdfd:fca4:3ba0:1784:192:168:40:45 prefix-list vrf1_ALLOW_OUT_V6 out
neighbor fdfd:fca4:3ba0:1784:192:168:40:46 activate
neighbor fdfd:fca4:3ba0:1784:192:168:40:46 prefix-list vrf1_DENY_IN_V6 in
neighbor fdfd:fca4:3ba0:1784:192:168:40:46 prefix-list vrf1_ALLOW_OUT_V6 out
exit-address-family
!
ip prefix-list vrf1_DENY_IN_V4 seq 5 deny any
ip prefix-list vrf1_ALLOW_OUT_V4 seq 5 permit 100.64.0.0/12
!
ipv6 prefix-list vrf1_DENY_IN_V6 seq 5 deny any
ipv6 prefix-list vrf1_ALLOW_OUT_V6 seq 5 permit 2001:5b0:9800::/44
!
Screenshots
Versions
OS Version: AlmaLinux release 8.6 (Sky Tiger)
Kernel: 4.18.0-372.32.1.el8_6.x86_64
FRR Version: FRRouting 8.1
Additional context
j3sixvmstm01# show bgp vrf vrf1 ipv4 unicast summary failed
BGP router identifier 192.168.42.15, local AS number 65000 vrf-id 69
BGP table version 2
RIB entries 3, using 552 bytes of memory
Peers 4, using 2892 KiB of memory
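The next time a peer gets stuck in Active/Connect we can also capture per-neighbor FSM details and enable neighbor-event debugging before the restart. A sketch of the commands (standard FRR vtysh; the specific neighbor address is again just an example from our config):
show bgp vrf vrf1 neighbors 192.168.40.45
show bgp vrf vrf1 ipv6 unicast summary failed
debug bgp neighbor-events
debug bgp updates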