How to Advertise Static Route From NetScaler to ZebOS
Source: https://support.citrix.com/article/CTX203868
Objective
This article describes how to advertise a static route from the NetScaler to ZebOS.
Background
Consider the following scenario: there is a static route from the NetScaler to the service provider, and OSPF is running between the NetScaler and the core switches. The requirement is to redistribute this static route into OSPF, and to track the static route to avoid blackholing traffic.
LB1 and LB2 are Citrix NetScalers.
CS01 and CS02 are core switches.
Instructions
Enable monitoring on the static route so that whenever the state of the route changes in the NetScaler packet engine, the change is immediately communicated to ZebOS by NSPPE.
Complete the following steps:
1. To advertise static routes configured on the NetScaler into the OSPF routing domain, run the following command:
enable ns mode sradv
2. Then switch to ZebOS and redistribute kernel routes into OSPF by running the following commands on the NetScaler CLI:
vtysh
ns# configure terminal
ns(config)# router ospf
ns(config-router)# redistribute kernel
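Putting the steps together, a minimal end-to-end sketch (the destination network 203.0.113.0/24 and next hop 192.0.2.1 are hypothetical illustrative values); `-msr ENABLED` turns on monitoring of the static route so that ZebOS can withdraw the route when it goes down:

```
> add route 203.0.113.0 255.255.255.0 192.0.2.1 -msr ENABLED
> enable ns mode sradv
> vtysh
ns# configure terminal
ns(config)# router ospf
ns(config-router)# redistribute kernel
```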
(The original article includes screenshots of the NetScaler CLI routing table, the ZebOS configuration, and the resulting OSPF database.)
===============
Citrix ADC dynamic routing and route health injection
Source: https://www.virtualdesktopdevops.com/netscaler/netscaler-dynamic-routing.html
Route health injection?
Route Health Injection (RHI) delegates control of routing protocol announcements to a server, based on the health of a service and the connectivity of the server to the network:
- If the service is alive, a /32 host route is injected into the network using a dynamic routing protocol announcement.
- When the service goes down because of a software issue, the /32 host route is withdrawn by the server from the network using a dynamic routing protocol triggered update.
- If the server loses its connectivity to the network, the upstream router automatically withdraws all the /32 host routes learned from this disconnected peer.
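On a Citrix ADC, RHI is typically driven by the VIP address configuration. A minimal sketch (the VIP 198.51.100.10 is a hypothetical illustrative address; check the parameters against your firmware's command reference): enable host-route advertisement on the VIP, then redistribute kernel routes in ZebOS:

```
> add ns ip 198.51.100.10 255.255.255.255 -type VIP -hostRoute ENABLED -vserverRHILevel ONE_VSERVER
> vtysh
ns# configure terminal
ns(config)# router ospf
ns(config-router)# redistribute kernel
```

With -vserverRHILevel ONE_VSERVER, the /32 host route is advertised only while at least one virtual server on that VIP is UP.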
Route Health Injection for IP Anycast deployment
Route Health Injection can be used to advertise the same IP address from geographically dispersed locations. Users are directed to the nearest server from the network perspective. This IP Anycast design achieves strong resiliency for critical services.
Route Health Injection in a Layer 3 Citrix ADC HA pair
Citrix ADC High Availability is usually created by pairing two NetScaler nodes located in the same management subnet and sharing the same VLANs. The primary node hosts the active SNIP and VIP addresses; the secondary node is synchronized with the primary node. If the primary node goes down or loses its connectivity, the SNIP and VIP addresses move to the secondary node, which becomes primary. Both NetScaler nodes share the same network configuration, and network resiliency is achieved by using first-hop redundancy protocols (HSRP, HSRP vPC, VRRP, ...), vPC, VSS or multi-chassis EtherChannel, and firewall clustering.
Failure or flapping of one of these redundancy mechanisms breaks network connectivity.
In a layer 3 Citrix ADC HA deployment, each ADC instance is deployed in a separate subnet with independent internet connectivity, routers, and firewalls. All of this equipment is located in separate network rooms. Connectivity within the Citrix ADC HA pair, and between the ADC and users or backend servers, is routed and relies on dynamic routing protocols.

Dynamic IP routing in Citrix ADC
Citrix ADC supports both dynamic and static routing. The main objective of running dynamic routing protocols is to enable route health injection (RHI), allowing the upstream network infrastructure to choose the best path among multiple routes to reach a geographically distributed virtual server.
Multiple routing tables in Citrix ADC
Citrix ADC embeds three different routing tables:
- NS kernel routing table: used by Citrix ADC for packet forwarding. Holds subnet routes corresponding to the NSIP and to each SNIP and MIP, as well as any static routes added through the CLI. This routing table is configured through the GUI or the CLI.
- Network services module routing table: contains the advertisable routes distributed by the dynamic routing protocols to their peers in the network. This routing table is configured in vtysh mode (ZebOS).
- FreeBSD routing table: facilitates initiation and termination of management traffic (telnet, SSH, and so on).
The Citrix ADC routing suite is based on the ZebOS® IP Infusion Layer 2, Layer 3, and MPLS network stack. ZebOS® can be accessed using the vtysh NetScaler CLI command. The supported IPv4 and IPv6 dynamic routing protocols are:
- Routing Information Protocol (RIP): version 2, and RIPng for IPv6
- Open Shortest Path First (OSPF): version 2, and version 3 for IPv6
- Border Gateway Protocol (BGP)
- Intermediate System to Intermediate System (IS-IS)
Routing tables synchronization
The NS kernel routing table (GUI and CLI configuration) and the network services module (ZebOS) are synchronized using the following vtysh commands:
- Use the ns route-install commands to push routes from ZebOS into the NS kernel routing table.
- Use the redistribute kernel command in the ZebOS dynamic routing protocol configuration to advertise routes configured in the NS kernel routing table.
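As a sketch of both synchronization directions in vtysh (OSPF is used as the illustrative protocol):

```
ns# configure terminal
ns(config)# ns route-install ospf
ns(config)# router ospf
ns(config-router)# redistribute kernel
```

The first command installs OSPF-learned routes into the NS kernel routing table; the last advertises NS kernel routes (including static routes) to OSPF peers.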
===================
How to Use Bidirectional Forwarding Detection (BFD) in NetScaler
Source: https://support.citrix.com/article/CTX224307/how-to-use-bidirectional-forwarding-detection-bfd-in-netscaler
Objective
This article describes how to use Bidirectional Forwarding Detection (BFD) in NetScaler.
Background
Bidirectional Forwarding Detection, commonly referred to as BFD, is a simple hello protocol that provides a fast failure detection mechanism between two routers or nodes. It detects failures on any bidirectional forwarding path, such as direct physical links, tunnels, virtual links, and multi-hop paths across network devices. BFD is independent of the media, routing protocol, and data protocol used, and validates the operation of the forwarding plane. Since it is not tied to any specific routing protocol, it can be used as a generic failure detection method between network devices.
Why BFD?
With routing protocols alone, fault resolution happens on the order of tens of seconds. Reducing the OSPF and BGP timers can bring failure detection down to four to five seconds, whereas BFD can do the trick in less than one second. This results in faster network convergence, shorter application interruptions, and enhanced network reliability. Another advantage is that BFD is lightweight and can run completely on the data plane, offloading the control plane CPU.
How does BFD work?
BFD session establishment happens in a series of steps. Let us assume the routing protocol used here is OSPF (BGP and other routing protocols follow similar steps).
- OSPF discovers neighbours by sending Hello packets and establishes adjacencies with the neighbouring nodes.
- OSPF notifies BFD of the neighbour to be monitored by providing the source and destination addresses.
- BFD uses this information to establish a session and sends control packets at regular intervals to the peer nodes.
BFD can operate in the following modes:
- Asynchronous: control packets flow in both directions periodically. (Note: this is supported in NetScaler.)
- Demand: periodic control packets are not sent. After BFD session establishment, a node can ask the other system to stop sending BFD control packets, except when it explicitly needs to verify connectivity. (Note: this is not supported in NetScaler.)
- Echo mode: a stream of Echo packets is transmitted in such a way that it loops back through the forwarding path to the same node to verify connectivity. (Note: this is not supported in NetScaler.)
Failure detection happens in a series of steps, as follows:
- A link goes down between NetScaler and an adjacent node.
- The BFD session goes down immediately based on timer expiry.
- BFD notifies OSPF that the neighbour is unreachable.
- OSPF terminates the neighbour adjacency.
Instructions
Enabling BFD for OSPF
The following configuration must be done in VTYSH to enable BFD for OSPF.
Enable or disable BFD for all interfaces at the OSPF router level:
ZebOS(config-router)#bfd all-interfaces
ZebOS(config-router)#no bfd all-interfaces
Enable or disable BFD at the interface level for OSPF:
ZebOS(config-if)#ip ospf bfd
ZebOS(config-if)#ip ospf bfd disable
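For example, to enable BFD for OSPF on a single interface (vlan10 is a hypothetical interface name):

```
ZebOS#configure terminal
ZebOS(config)#interface vlan10
ZebOS(config-if)#ip ospf bfd
```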
Enabling BFD for OSPFv3
Enable or disable BFD for all interfaces at the OSPFv3 router level:
ZebOS(config-router)#bfd all-interfaces
ZebOS(config-router)#no bfd all-interfaces
Enable or disable BFD at the interface level for OSPFv3:
ZebOS(config-if)#ipv6 ospf bfd
ZebOS(config-if)#ipv6 ospf bfd disable
Enabling BFD for BGP
BGP IPv4 Singlehop-peer
ns(config-router)#neighbor <ipv4addr> fall-over bfd
ns(config-router)#no neighbor <ipv4addr> fall-over bfd
BGP IPv4 Multihop-peer
ns(config-router)#neighbor <mh-neighbor-ipv4addr> fall-over bfd multihop
ns(config-router)#no neighbor <mh-neighbor-ipv4addr> fall-over bfd multihop
BGP IPv6 Singlehop-peer
ns(config-router)#neighbor <ipv6addr> fall-over bfd
ns(config-router)#no neighbor <ipv6addr> fall-over bfd
BGP IPv6 Multihop-peer
ns(config-router)#neighbor <mh-neighbor-ipv6addr> fall-over bfd multihop
ns(config-router)#no neighbor <mh-neighbor-ipv6addr> fall-over bfd multihop
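Putting it together, a minimal sketch of a BGP single-hop peering protected by BFD (the AS numbers 65001/65002 and the peer address 192.0.2.2 are hypothetical illustrative values):

```
ns(config)# router bgp 65001
ns(config-router)# neighbor 192.0.2.2 remote-as 65002
ns(config-router)# neighbor 192.0.2.2 fall-over bfd
```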
Configuring BFD Timers
BFD Single-hop Session Timer
Configure the BFD single-hop session transmit and reception intervals in milliseconds, and the Hello multiplier, in interface mode:
bfd singlehop-peer interval <100-30000> minrx <100-30000> multiplier <1-20>
- interval is the time interval at which NetScaler sends hello packets.
- minrx is the value advertised to other nodes as the minimum reception interval supported by NetScaler.
- multiplier indicates the number of packets that can be missed before declaring the session down.
Unset the timers using the no command:
no bfd singlehop-peer
Example:
ZebOS(config)#interface vlan10
ZebOS(config-if)#bfd singlehop-peer interval 100 minrx 100 multiplier 4
BFD Multihop Peer Timer
Configure the BFD multihop-peer transmit and reception intervals in milliseconds, and the Hello multiplier, in config mode:
bfd multihop-peer A.B.C.D interval <100-30000> minrx <100-30000> multiplier <1-20>
Unset the timers using the no command:
no bfd multihop-peer A.B.C.D
BFD IPV6 Multihop Peer Timer
bfd multihop-peer ipv6 X:X::X:X interval <100-30000> minrx <100-30000> multiplier <1-20>
Unset the timers using no command:
no bfd multihop-peer ipv6 X:X::X:X
Example: ZebOS(config)#bfd multihop-peer ipv6 3001:20::4 interval 100 minrx 100 multiplier 3
BFD Passive mode
NetScaler does not initiate BFD control packets when passive mode is enabled. This is used where the device should not initiate control packets first. Once it receives a control packet from the peer, it triggers the state change.
BFD Passive mode can be configured as follows in interface mode:
ZebOS(config)#interface <vlanid>
ZebOS(config-if)#bfd passive
To unset the passive setting:
ZebOS(config-if)#no bfd passive
BFD in NetScaler is supported for both IPv4 and IPv6. It is supported for routing protocols in the context of admin partitions, on spotted SNIPs in the context of a cluster, and on SNIPs in HA-INC mode.