Fabric#
The Fabric is an abstracted representation of a datacenter that uses a Clos architecture. It manages the nodes in their different roles (leafs, spines, borderleafs, and so on), the links that interconnect them, and the protocols that facilitate the exchange of routing information.
Upon deployment, the Fabric resource initiates several supporting resources including ISLs (Inter-Switch Links), DefaultRouters, DefaultInterfaces, and DefaultBGPPeers, among others. These resources, in turn, generate node configurations. The operational state of the Fabric is determined by the collective status of these underlying resources.
Fabric nodes#
The Fabric application home page gives an overview of some common fabric topologies. Within an EDA namespace, multiple fabrics may exist, which can be standalone or interconnected. Each Fabric manages the set of nodes that make up the datacenter network, and creates ISL resources for all Links that interconnect these nodes.
LAGs in Fabrics
Typically, datacenters don't use LAGs. If there are multiple links between a pair of nodes (whether for redundancy or for increased bandwidth), a BGP or OSPF session is set up for each of them to exchange routes, relying on ECMP rather than link hashing.
The Fabric resource does not enforce this: an ISL is created for every Link, which in turn connects two Interfaces. The Interface resource references one or more physical ports.
Labels#
Nodes and node roles are identified using labels: this ensures that a Fabric can easily grow over time when new racks are added, simply by onboarding new nodes in EDA and assigning them the label that corresponds with their role. If multiple Fabrics are used, a secondary label should be added to the nodes that identifies the Fabric that the switch belongs to.
Label selectors
Most Fabric selector properties allow multiple labels to be defined. Use a comma (,) to indicate that both labels must be present on a node before it is selected. For example, the following label selector indicates that a node must have both the leaf role and belong to the London region:
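A selector along these lines would express that requirement (the eda.nokia.com/role key appears in the examples on this page; the region key and its value are illustrative, not prescribed by EDA):

```yaml
leafNodeSelectors:
  - eda.nokia.com/role = leaf, region = london
```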
Initial Labeling: If TopoNodes, TopoLinks, and adjacent Fabric instances are labeled before the creation of the Fabric instance, the application will automatically generate all necessary configurations for these components during the transaction associated with the addition of the Fabric instance.
Post-Deployment Labeling: If new labels that match the Fabric's selection criteria are added to TopoNodes, TopoLinks, or adjacent Fabric instances after the Fabric instance has been deployed, these components will automatically be configured by the Fabric application during the transaction that handles the addition of the labels. This ensures that changes in network topology, roles, or Fabric interconnections are dynamically incorporated into the Fabric's configuration.
Fabric nodes#
Different network switches fulfill different roles in the network, with different requirements in terms of port speed capabilities, forwarding throughput, and control plane capabilities. The roles that the Fabric resource supports are listed below.
Leaf nodes#
Leaf nodes are also referred to as top-of-rack switches, or ToR for short, and facilitate connectivity between computes in the same rack. As the name implies, these are typically installed in every rack, and duplicated for redundancy. They interconnect physical computes such as servers and firewalls to the datacenter Fabric, which will facilitate inter- and intra-rack connectivity, as well as connectivity to the WAN.
Each leaf node gets its own system IP address from the system IP allocation pool, and its own autonomous system (AS) number.
Why one ASN per leaf switch?
When the Fabric is configured to use eBGP for the exchange of IP addresses in the underlay, each leaf requires its own Autonomous System Number: eBGP loop prevention rejects any route whose AS path already contains the receiver's own ASN, so leafs sharing an ASN would reject each other's routes.
Spine nodes#
Spine nodes interconnect leaf nodes and are used to establish inter-rack connectivity. For redundancy, there are two or more spines per pod, and all leaf nodes in the pod are connected to all spines. When iBGP is used for the overlay, the spine nodes are typically used as route reflectors for the EVPN routes.
Each spine node (within a pod) uses the same autonomous system (AS) number.
Why one ASN for all spine switches?
In typical datacenters, there is no crosslink between the spines. Inter-rack traffic may use either spine, and in case of a link failure the affected spine should stop advertising reachability information for the affected leaf. This way, traffic latency is minimized and tromboning is avoided.
As a consequence of spines sharing the same autonomous system number, the spines are not aware of each other, and you won't be able to ping between spines!
Superspine nodes#
Superspines are used in highly scaled datacenters, where it is no longer feasible to connect all leafs to every spine. The Fabric is subdivided into pods, where the spines of each pod are connected to every superspine. This trades off higher scale for increased latency (more hops for inter-pod traffic).
Borderleaf nodes#
Borderleaf nodes are very similar in definition to leaf nodes, but often differ in port capabilities: while the focus for leaf switches is typically on supporting as many different port speeds as possible, the borderleaf is chosen for its high-bandwidth ports and control plane capabilities. It is used to advertise the Fabric to the WAN network (via a DCGW or an internet gateway), enabling connectivity between the Fabric and the network elements outside of the datacenter.
In smaller networks, the role of the spine and the borderleaf is often collapsed: spines already have high-throughput ports to interconnect leaf switches, and if the switch can terminate EVPN services (becoming aware of the IP routes used in virtual networking services), it can establish the (MP-)BGP session to the WAN network.
In larger networks, higher throughput requirements often come at the expense of EVPN capabilities: if the spine switches are not capable of terminating EVPN services, dedicated borderleaf nodes are required to connect to the WAN and/or internet.
Inter-switch links#
Once nodes are configured in the Fabric, they need to be interconnected using inter-switch links (ISLs). These ISL resources configure routing protocols such as eBGP and OSPF for the exchange of system IP addresses.
The linkSelectors property of the interSwitchLinks context of the Fabric resource selects all Link resources that are used for inter-switch (underlay) connectivity. If both ends of the Link correspond with a node of the Fabric, an ISL is created for that Link.
Don't forget this property!
If the linkSelectors property is omitted, or does not select the right Links, there will be no connectivity between the nodes of your Fabric!
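In the Fabric resource this looks as follows (values taken from the example at the end of this page):

```yaml
spec:
  interSwitchLinks:
    linkSelectors:
      - eda.nokia.com/role = interSwitch
```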
Allocation pools#
Several allocation pools are required to distribute IP addresses to the resources that the Fabric creates:
- System IP addresses for the SystemInterface resource on each node in the Fabric, drawn from an IPAllocationPool
- Point-to-point IP subnets for the ISL, drawn from a SubnetAllocationPool
- Autonomous System Numbers (ASNs) for the DefaultRouter resource on each node in the Fabric, drawn from an IndexAllocationPool
System IP address pools can be configured globally and/or per role. The system IP address pool configured under the node role context overrides the globally configured pools.
Warning
An IPv4 pool is mandatory: it must always be configured, even if routes are exchanged exclusively over IPv6.
Autonomous system pools can be specified under the underlay protocol and/or per role. The autonomous system pool configured under the node role context overrides the one specified in the underlay section.
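Pulling these together, the pool references appear in the Fabric spec roughly as follows (pool names are taken from the example at the end of this page; the per-role asnPool overrides the one under the underlay protocol):

```yaml
spec:
  systemPoolIPv4: systemipv4-pool   # global system IP pool
  interSwitchLinks:
    poolIPv4: ipv4-pool             # pool for ISL addressing
  leafs:
    asnPool: asn-pool               # overrides spec.underlayProtocol.bgp.asnPool
  underlayProtocol:
    bgp:
      asnPool: asn-pool
```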
Underlay protocols#
The underlay of a datacenter refers to the exchange of reachability information that enables the overlay routes to be exchanged. The Fabric resource uses the underlay protocol for the exchange of system IP addresses, which will be used to establish the MP-BGP session for the exchange of EVPN routes. Currently the following protocols are supported for the exchange of system IP addresses:
- eBGP (iBGP not supported)
- OSPFv2
- OSPFv3
Is exchange of EVPN routes via the underlay supported?
Yes, exchange of EVPN routes is supported over eBGP. Note however that exchanging EVPN routes via OSPF is not possible. There are scale considerations that favor the separation of underlay and overlay (using iBGP), however these discussions are beyond the scope of this article.
MTU considerations for OSPF interoperability
Different network operating systems have different default port MTUs. OSPF is notoriously specific when it comes to MTU, and will not establish a session if the signaled MTU is mismatched.
If the MTU is not set using the DefaultMTU resource, it is important to set the ipMTU property of the interSwitchLinks container in the Fabric resource. An MTU value of 8922 works for most interop scenarios.
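For example, combining an OSPFv2 underlay with a fixed ISL MTU (mirroring the example at the end of this page):

```yaml
spec:
  interSwitchLinks:
    ipMTU: 8922
  underlayProtocol:
    protocols:
      - OSPFv2
```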
Overlay protocols#
The overlay of a datacenter refers to the exchange of service routes. In a datacenter context, EVPN is most commonly used as a way of ensuring traffic isolation between (virtual) distributed networks belonging to different tenants. The inner workings of EVPN are beyond the scope of this article.
Both eBGP and iBGP are supported: in case eBGP is used, service routes are exchanged between the IP addresses of the individual links between the nodes. If iBGP is used, service routes are exchanged between the system IP addresses of the nodes.
Which protocol to choose?
eBGP is easiest to set up, but there are scale implications that should be investigated before choosing this route, which are beyond the scope of this article.
From a technical point of view, iBGP requires the addition of route reflectors. These are typically configured on the spines or superspines, but can also exist outside of the datacenter. The following elements are required when using iBGP for the exchange of service routes:
- Specify the autonomous system number that all switches will use to communicate with the route reflectors.
- If the route reflectors are configured on nodes in the Fabric, the rrNodeSelectors and clusterID properties are required.
- If the route reflectors are not configured on nodes in the Fabric, their IP addresses must be listed in the rrIpAddresses property.
- Select the nodes that will peer with the route reflectors by populating the rrClientNodeSelectors label selector.
Don't forget!
When using iBGP, don't forget to select route reflector clients using the rrClientNodeSelectors label selector! Without it, no overlay BGP sessions will be established.
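The example configuration at the end of this page puts these pieces together: spines act as route reflectors, leafs peer with them as clients:

```yaml
spec:
  overlayProtocol:
    bgp:
      autonomousSystem: 65500
      clusterID: 10.0.0.1
      rrNodeSelectors:
        - eda.nokia.com/role=spine
      rrClientNodeSelectors:
        - eda.nokia.com/role=leaf
    protocol: IBGP
```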
Routing Policies#
If not explicitly specified, the Fabric will automatically generate the required Policy resources. These policies are used in the BGP peering sessions to ensure IP reachability across the fabric.
If routing policies are defined independently of the Fabric through the importPolicies or exportPolicies properties, they will be used instead.
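If you prefer to manage the policies yourself, the references are supplied along these lines (the policy names are hypothetical, and the exact placement should be verified against the CRD reference below):

```yaml
spec:
  underlayProtocol:
    bgp:
      importPolicies:
        - my-underlay-import   # hypothetical Policy resource
      exportPolicies:
        - my-underlay-export   # hypothetical Policy resource
```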
Route leaking#
Route leaking is used to establish connectivity between the DefaultRouter and virtual networking services. For example, it may be done on the borderleaf nodes to expose an isolated in-band management network to the WAN.
Route leaking relies on routing policies and can be specified globally in the Fabric resource or overridden under each node role container.
Fabric of Fabrics#
To establish connectivity between multiple datacenters, several options may be considered. Each has its own use case:
- Connect each Fabric to DCGW routers using (MP-)BGP
- Interconnect each fabric using superspines
- Create a "Fabric of fabrics" resource
A fabric of fabrics is a separate Fabric resource that interconnects other Fabric resources. It is configured and works largely the same as a regular Fabric resource, with the addition of the fabricSelectors property, which is a label selector that identifies the Fabrics that this resource will interconnect. ISL resources will be created for each Link that is connected to:
- A node in the fabric of fabrics
- A node in a Fabric selected by the fabricSelectors label selector
A note on inter-fabric Links
The Link resources that the linkSelectors property selects must be configured with the remote side pointing to the child Fabrics.
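A minimal sketch of the extra property (the label key and value are illustrative; everything else follows the regular Fabric schema):

```yaml
spec:
  fabricSelectors:
    - eda.nokia.com/fabric = pod   # hypothetical label on the child Fabric resources
```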
Dependencies#
IPAllocationPool#
IP allocation pools are resource pools that hand out single IP addresses from a pool. The Fabric resource uses them to provision system IP addresses for the nodes in the Fabric.
SubnetAllocationPool#
Subnet allocation pools are resource pools that hand out IP subnets with a specific length from a pool. The Fabric resource uses them to provision point-to-point IP addresses for derived ISL resources.
IndexAllocationPool#
Index allocation pools are resource pools that hand out indices (whole numbers) from a pool. The Fabric resource uses them to assign autonomous system (AS) numbers to the nodes in the Fabric.
Referenced resources#
Policy#
Routing policies determine which IP prefixes are advertised to neighbors. In the Fabric resource, they can optionally be specified:
- In the underlay protocol if BGP is selected
- In the route leaking context
IngressPolicy#
Quality of Service policies can optionally be specified in the interSwitchLinks container of the Fabric resource. An IngressPolicy is used to assign priorities to incoming traffic, and optionally to rate-limit traffic with a particular priority.
EgressPolicy#
Quality of Service policies can optionally be specified in the interSwitchLinks container of the Fabric resource. An EgressPolicy is used to assign packets to Queues depending on their priority and to modify the priority bits in the headers of outgoing traffic.
Examples#
```yaml
apiVersion: fabrics.eda.nokia.com/v1
kind: Fabric
metadata:
  name: fabric
  namespace: eda
spec:
  interSwitchLinks:
    ipMTU: 8922
    linkSelectors:
      - eda.nokia.com/role = interSwitch
    poolIPv4: ipv4-pool
  leafs:
    asnPool: asn-pool
    leafNodeSelectors:
      - eda.nokia.com/role = leaf
  overlayProtocol:
    bgp:
      autonomousSystem: 65500
      clusterID: 10.0.0.1
      rrClientNodeSelectors:
        - eda.nokia.com/role=leaf
      rrNodeSelectors:
        - eda.nokia.com/role=spine
    protocol: IBGP
  spines:
    asnPool: asn-pool
    spineNodeSelectors:
      - eda.nokia.com/role = spine
  systemPoolIPv4: systemipv4-pool
  underlayProtocol:
    bfd:
      desiredMinTransmitIntMs: 1000
      detectionMultiplier: 3
      enabled: true
      requiredMinEchoReceiveIntMs: 1000
      requiredMinReceiveIntMs: 1000
    bgp:
      asnPool: asn-pool
    protocols:
      - OSPFv2
```
```shell
cat << 'EOF' | kubectl apply -f -
apiVersion: fabrics.eda.nokia.com/v1
kind: Fabric
metadata:
  name: fabric
  namespace: eda
spec:
  interSwitchLinks:
    ipMTU: 8922
    linkSelectors:
      - eda.nokia.com/role = interSwitch
    poolIPv4: ipv4-pool
  leafs:
    asnPool: asn-pool
    leafNodeSelectors:
      - eda.nokia.com/role = leaf
  overlayProtocol:
    bgp:
      autonomousSystem: 65500
      clusterID: 10.0.0.1
      rrClientNodeSelectors:
        - eda.nokia.com/role=leaf
      rrNodeSelectors:
        - eda.nokia.com/role=spine
    protocol: IBGP
  spines:
    asnPool: asn-pool
    spineNodeSelectors:
      - eda.nokia.com/role = spine
  systemPoolIPv4: systemipv4-pool
  underlayProtocol:
    bfd:
      desiredMinTransmitIntMs: 1000
      detectionMultiplier: 3
      enabled: true
      requiredMinEchoReceiveIntMs: 1000
      requiredMinReceiveIntMs: 1000
    bgp:
      asnPool: asn-pool
    protocols:
      - OSPFv2
EOF
```
Custom Resource Definition#
To browse the Custom Resource Definition go to crd.eda.dev.
Fabric
SPEC
The Fabric defines the desired state of a Fabric resource, enabling the automation and management of data center network fabrics. It includes configurations for IP address allocation pools, network topology roles (Leafs, Spines, SuperSpines, BorderLeafs), inter-switch links, and network protocols (underlay and overlay). The specification allows for detailed control over routing strategies, including ASN allocations for BGP-based protocols, and supports advanced features like BFD.
-
-
Reference to an IndexAllocationPool pool to use for Autonomous System Number allocations. Used when eBGP is configured as an underlay protocol. This reference will take precedence over the spec.underlayProtocol.asnPool.
-
Label selector used to select Toponodes to configure as Borderleaf nodes.
-
Route leaking controlled by routing policies in and out of the DefaultRouters on each node. If specified under the Leafs, Spines, SuperSpines, or BorderLeafs those will take precedence.
-
Reference to a Policy resource to use when evaluating route exports from the DefaultRouter.
-
Reference to a Policy resource to use when evaluating route imports into the DefaultRouter.
-
-
Reference to an IPAllocationPool used to dynamically allocate an IPv4 address to system/lo0 interfaces. This reference will take precedence over the spec.systemPoolIPV4. Both IPv4 and IPv6 pools can be configured simultaneously for dual-stack system/lo0 interfaces.
-
Reference to an IPAllocationPool used to dynamically allocate an IPv6 address to system/lo0 interfaces. This reference will take precedence over the spec.systemPoolIPV6. Both IPv4 and IPv6 pools can be configured simultaneously for dual-stack system/lo0 interfaces.
-
-
Selects Fabric resources when connecting multiple Fabrics together. Only one Fabric needs the selector, typically the upper layer (e.g., Superspine) selecting the lower layer (e.g., a pod fabric of leafs and spines). This helps build complete Fabrics from smaller instances of the Fabric resource. The instance that selects other fabrics must also select the inter-switch Links connecting it to the selected Fabrics.
-
-
Sets the IP MTU for the DefaultInterface.
range: 1280 to 9486
-
Selects TopoLinks to include in this Fabric, creating an ISL resource if both Nodes in the TopoLink are part of this Fabric or a selected Fabric.
-
Reference to an IPAllocationPool used to dynamically allocate an IPv4 address to DefaultInterfaces which are members of the ISLs. Both IPv4 and IPv6 pools can be configured simultaneously for dual-stack DefaultInterfaces.
-
Reference to an IPAllocationPool used to dynamically allocate an IPv6 address to DefaultInterfaces which are members of the ISLs. Both IPv4 and IPv6 pools can be configured simultaneously for dual-stack DefaultInterfaces.
-
Enables unnumbered interfaces on the ISL; for IPv6, only link-local addresses are used unless a PoolIPV6 is also specified. DefaultInterfaces in the ISL are added to the DefaultBGPPeer dynamic neighbor list when using an eBGP underlay.
enum: "IPv6"
-
Configures the provided VLAN on the DefaultInterfaces which are members of the ISLs.
range: 1 to 4094
-
-
-
Reference to an IndexAllocationPool pool to use for Autonomous System Number allocations. Used when eBGP is configured as an underlay protocol. This reference will take precedence over the spec.underlayProtocol.asnPool.
-
Label selector used to select Toponodes to configure as Leaf nodes.
-
Route leaking controlled by routing policies in and out of the DefaultRouters on each node. If specified under the Leafs, Spines, SuperSpines, or BorderLeafs those will take precedence.
-
Reference to a Policy resource to use when evaluating route exports from the DefaultRouter.
-
Reference to a Policy resource to use when evaluating route imports into the DefaultRouter.
-
-
Reference to an IPAllocationPool used to dynamically allocate an IPv4 address to system/lo0 interfaces. This reference will take precedence over the spec.systemPoolIPV4. Both IPv4 and IPv6 pools can be configured simultaneously for dual-stack system/lo0 interfaces.
-
Reference to an IPAllocationPool used to dynamically allocate an IPv6 address to system/lo0 interfaces. This reference will take precedence over the spec.systemPoolIPV6. Both IPv4 and IPv6 pools can be configured simultaneously for dual-stack system/lo0 interfaces.
-
-
Set the overlay protocol used
-
Enable BFD on overlay protocol
-
The minimum interval in milliseconds between transmission of BFD control packets.
default: 1000, range: 10 to 100000
-
The number of packets that must be missed to declare this session as down.
default: 3, range: 3 to 20
-
Enables Bidirectional Forwarding Detection (BFD).
default: false
-
The minimum interval in milliseconds between echo packets the local node can receive.
default: 1000, range: 0 to 100000
-
The minimum interval in milliseconds between received BFD control packets that this system should support.
default: 1000, range: 10 to 100000
-
Sets a custom IP TTL or hop limit for packets of multi-hop BFD sessions. Not applicable to single-hop BFD sessions.
range: 2 to 255
-
-
Overlay specific BGP properties.
-
Autonomous System used for the iBGP peering session; when protocol is set to IBGP, providing an autonomousSystem is required.
format: int32
-
Sets the cluster ID used by DefaultRouteReflectors; when protocol is set to IBGP, providing a clusterID is required.
-
Reference to a Policy, when left empty or not specified the Fabric will automatically generate a policy for the specified protocols.
-
Reference to a Policy, when left empty or not specified the Fabric will automatically generate a policy for the specified protocols.
-
Keychain to be used for authentication when overlay protocol is IBGP, ignored otherwise
-
Label selector used to select Toponodes to configure as DefaultRouteReflectorClients, these are typically Leaf or Borderleaf nodes. Used in conjunction with rrNodeSelector in order to configure the DefaultBGPPeers for both the DefaultRouteReflectors and DefaultRouteReflectorClients.
-
List of route reflector IP addresses not provisioned by this instance of a Fabric resource. Used with rrClientNodeSelector to configure the DefaultBGPPeers on the selected nodes to peer the list of external route reflector IPs.
-
Label selector used to select Toponodes to configure as DefaultRouteReflectors, these are typically Spine, Superspine or Borderleaf nodes. Used in conjunction with rrClientNodeSelector in order to configure the DefaultBGPPeers for both the DefaultRouteReflectors and DefaultRouteReflectorClients.
-
Timer configurations
-
The time interval in seconds between successive attempts to establish a session with a peer.
range: 1 to 65535
-
The hold-time interval in seconds that the router proposes to the peer in its OPEN message.
range: 0 to 65535
-
The interval in seconds between successive keepalive messages sent to the peer.
range: 0 to 21845
-
The value assigned to the MinRouteAdvertisementIntervalTimer of RFC 4271, for both EBGP and IBGP sessions.
range: 1 to 255
-
-
-
List of routing protocols to use to advertise EVPN routes for overlay services. When EBGP is used, the BGP properties configured under the spec.underlayProtocol will be used.
enum: "IBGP", "EBGP"
-
-
Route leaking controlled by routing policies in and out of the DefaultRouters on each node. If specified under the Leafs, Spines, SuperSpines, or BorderLeafs those will take precedence.
-
Reference to a Policy resource to use when evaluating route exports from the DefaultRouter.
-
Reference to a Policy resource to use when evaluating route imports into the DefaultRouter.
-
-
-
Reference to an IndexAllocationPool pool to use for Autonomous System Number allocations. Used when eBGP is configured as an underlay protocol. This reference will take precedence over the spec.underlayProtocol.asnPool.
-
Route leaking controlled by routing policies in and out of the DefaultRouters on each node. If specified under the Leafs, Spines, SuperSpines, or BorderLeafs those will take precedence.
-
Reference to a Policy resource to use when evaluating route exports from the DefaultRouter.
-
Reference to a Policy resource to use when evaluating route imports into the DefaultRouter.
-
-
Label selector used to select Toponodes to configure as Spine nodes.
-
Reference to an IPAllocationPool used to dynamically allocate an IPv4 address to system/lo0 interfaces. This reference will take precedence over the spec.systemPoolIPV4. Both IPv4 and IPv6 pools can be configured simultaneously for dual-stack system/lo0 interfaces.
-
Reference to an IPAllocationPool used to dynamically allocate an IPv6 address to system/lo0 interfaces. This reference will take precedence over the spec.systemPoolIPV6. Both IPv4 and IPv6 pools can be configured simultaneously for dual-stack system/lo0 interfaces.
-
-
-
Reference to an IndexAllocationPool pool to use for Autonomous System Number allocations. Used when eBGP is configured as an underlay protocol. This reference will take precedence over the spec.underlayProtocol.asnPool.
-
Route leaking controlled by routing policies in and out of the DefaultRouters on each node. If specified under the Leafs, Spines, SuperSpines, or BorderLeafs those will take precedence.
-
Reference to a Policy resource to use when evaluating route exports from the DefaultRouter.
-
Reference to a Policy resource to use when evaluating route imports into the DefaultRouter.
-
-
Label selector used to select Toponodes to configure as Superspine nodes.
-
Reference to an IPAllocationPool used to dynamically allocate an IPv4 address to system/lo0 interfaces. This reference will take precedence over the spec.systemPoolIPV4. Both IPv4 and IPv6 pools can be configured simultaneously for dual-stack system/lo0 interfaces.
-
Reference to an IPAllocationPool used to dynamically allocate an IPv6 address to system/lo0 interfaces. This reference will take precedence over the spec.systemPoolIPV6. Both IPv4 and IPv6 pools can be configured simultaneously for dual-stack system/lo0 interfaces.
-
-
Reference to an IPAllocationPool used to dynamically allocate an IPv4 address to system/lo0 interfaces. If specified under the Leaf/Spine/Superspine/Borderleaf those will take precedence. Both IPv4 and IPv6 pools can be configured simultaneously for dual-stack system/lo0 interfaces.
-
Reference to an IPAllocationPool used to dynamically allocate an IPv6 address to system/lo0 interfaces. If specified under the Leaf/Spine/Superspine/Borderleaf those will take precedence. Both IPv4 and IPv6 pools can be configured simultaneously for dual-stack system/lo0 interfaces.
-
Set the underlay protocol used
-
Enable BFD on underlay protocol
-
The minimum interval in milliseconds between transmission of BFD control packets.
default: 1000, range: 10 to 100000
-
The number of packets that must be missed to declare this session as down.
default: 3, range: 3 to 20
-
Enables Bidirectional Forwarding Detection (BFD).
default: false
-
The minimum interval in milliseconds between echo packets the local node can receive.
default: 1000, range: 0 to 100000
-
The minimum interval in milliseconds between received BFD control packets that this system should support.
default: 1000, range: 10 to 100000
-
Sets a custom IP TTL or hop limit for packets of multi-hop BFD sessions. Not applicable to single-hop BFD sessions.
range: 2 to 255
-
-
Underlay specific BGP properties.
-
Reference to an IndexAllocationPool pool to use for Autonomous System Number allocations. Used when eBGP is configured as an underlay protocol. If specified under the Leaf/Spine/Superspine/Borderleaf those will take precedence.
-
Reference to a Policy, when left empty or not specified the Fabric will automatically generate a policy for the specified protocols.
-
Reference to a Policy, when left empty or not specified the Fabric will automatically generate a policy for the specified protocols.
-
Keychain to be used for authentication
-
Timer configurations
-
The time interval in seconds between successive attempts to establish a session with a peer.
range: 1 to 65535
-
The hold-time interval in seconds that the router proposes to the peer in its OPEN message.
range: 0 to 65535
-
The interval in seconds between successive keepalive messages sent to the peer.
range: 0 to 21845
-
The value assigned to the MinRouteAdvertisementIntervalTimer of RFC 4271, for both EBGP and IBGP sessions.
range: 1 to 255
-
-
-
List of routing protocols to use between peers of an ISL. Multiple protocols may be listed; if so, all listed protocols will be used.
-
STATUS
FabricStatus defines the observed state of Fabric
-
Indicates the health score of the Fabric. The health score of the Fabric is determined by the aggregate health score of the resources emitted by the Fabric such as ISL, DefaultRouteReflectors etc.
-
Indicates the reason for the health score.
-
The time when the state of the resource last changed.
-
Operational state of the Fabric. The operational state of the fabric is determined by monitoring the operational state of the following resources (if applicable): DefaultRouters, ISLs.
enum: "Up", "Down", "Degraded", "Unknown"