Node Management#

In contrast to some automation frameworks that either only generate configuration snippets or push configurations in a one-off manner, EDA maintains continuous control over the network devices, referred to as topology nodes. The closed-loop management of these nodes is a fundamental aspect of EDA and is essential to achieving reliable and consistent infrastructure automation.

When a node is successfully onboarded into EDA the platform starts to act as the single source of truth for the configuration of that node and continuously monitors its state. From the operations perspective this means that any changes to the configuration of the managed nodes should be done through EDA, as any out-of-band changes will be classified as deviations and the platform will offer to remediate them to the desired state.

The simplified device onboarding flow is illustrated below:

Simplified device onboarding flow

The device onboarding procedure is covered in the user guide and is not part of this tour.

You don't need to have the real hardware network devices to experience EDA capabilities thanks to the Digital Twin component that is part of the EDA platform. EDA Digital Twin allows users to run virtualized network topologies of arbitrary complexity and size and interact with them as if they were real devices.

The Try EDA environment that you set up earlier comes with a Kubernetes cluster powered by KinD that hosts the EDA platform and a three-node Digital Twin topology, pre-installed and ready to use:

To learn more about EDA's Digital Twin and how to define virtualized topologies, refer to the Digital Twin documentation.

Regardless of whether you use the web UI or any of the automation/CLI tools, interaction with the EDA system is carried out over the REST API exposed by the EDA API server or the Kubernetes API server, respectively. Users, however, don't need to interact with the nodes directly, as all node management operations are performed by EDA. This connectivity is presented in simplified form below:

The topology that Try EDA provides consists of three Nokia SR Linux virtual routers connected in a leaf-spine architecture, with a distinct hardware type emulated for each layer. The topology also describes the links between the devices as well as from the leaf nodes to emulated hosts. The node names in the physical topology and their corresponding emulated hardware types are illustrated below:

Physical topology

Node List#

The simulator nodes are onboarded by EDA during the initial platform setup, so they are already managed by EDA out of the box. In the EDA UI you can navigate to the Nodes section under the Targets category to see the list of nodes known to the EDA system and their status:

Nodes list in EDA UI

The table lists the three nodes along with their current configuration and state values. Selecting a node 1 from the list and opening the information panel on the right side of the screen makes it easy to see the node details:

Node details

such as:

2 NPP connection state - shows whether the EDA control node is connected to the managed node via gNMI1.

3 Node synchronization state - a Synced status indicates that EDA has successfully applied the intended configuration to the node and its current state matches the desired state.

4 Node management address that EDA uses to connect to the node.

5 Node platform and OS details.

In the table view you can also see the Simulate = True value reported for all three nodes, indicating that these are virtualized nodes running in Digital Twin mode and not physical devices.

For CLI enthusiasts, the list of nodes can also be retrieved using edactl or kubectl:

edactl -n eda get toponode
NAME     PLATFORM       VERSION   OS    ONBOARDED   MODE     NPP         NODE
leaf1    7220 IXR-D3L   25.10.1   srl   true        normal   Connected   Synced
leaf2    7220 IXR-D3L   25.10.1   srl   true        normal   Connected   Synced
spine1   7220 IXR-D5    25.10.1   srl   true        normal   Connected   Synced
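
Since toponode is a regular Kubernetes custom resource, a single node can also be inspected in full with `kubectl -n eda get toponode leaf1 -o yaml`. As a rough, illustrative sketch only (the exact apiVersion and field names may differ between EDA releases), a TopoNode manifest for leaf1 looks along these lines:

```yaml
# Illustrative sketch -- the exact apiVersion and spec field
# names may differ between EDA releases.
apiVersion: core.eda.nokia.com/v1
kind: TopoNode
metadata:
  name: leaf1
  namespace: eda
  labels:
    eda.nokia.com/role: leaf
    eda.nokia.com/security-profile: managed
spec:
  operatingSystem: srl     # OS column in the edactl output above
  version: 25.10.1         # VERSION column
  platform: 7220 IXR-D3L   # PLATFORM column
```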

Node Labels#

EDA allows its users to label3 any resource under its management to facilitate resource organization, filtering, and selection. Nodes are no exception, and you will find labels assigned to each node in the Try EDA topology:

Node labels in EDA UI

Labels are free-form key-value pairs that can be assigned to any resource in EDA.

As the screenshot shows, each node has a couple of labels assigned to it:

  • eda.nokia.com/role=leaf | spine - denotes the role of the node in the topology. Since our Try EDA topology uses a leaf-spine architecture, the leaf nodes are labeled with role=leaf and the spine node with role=spine. Note that eda.nokia.com/ is just a prefix that by convention uses a domain-like format to denote the ownership of the label. When you create your own labels, you can use any prefix you like or none at all.
  • eda.nokia.com/security-profile=managed - indicates that the TLS certificates for these nodes are managed by EDA.

During the tour, you will see how role label is used in resource definitions to target specific nodes based on their role in the topology.
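
To give a flavor of that pattern, here is a purely hypothetical fragment of a resource spec using a label selector; the actual resource kinds and their schemas are introduced later in the tour, so only the selector idea matters here:

```yaml
# Hypothetical fragment -- real resource kinds and field names
# are covered in the next section of the tour.
spec:
  nodeSelector:
    - eda.nokia.com/role=leaf   # match every node labeled role=leaf
```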

Physical Topology#

Physical Topology view

Refer to the Physical Topology section in the EDA UI page to read more about the topology view and how to interact with it.

Node Configuration#

As was discussed earlier, EDA is the authoritative source of truth for the configuration of the managed nodes. And while EDA does something smarter than plain config templating and "replace at root" on each transaction, it can still display the full running configuration of any managed node:

Displaying node configuration from the Node List view

The configuration view has a toggle 1 that lets you switch from a simple configuration view to a "blame" mode where each configuration region is annotated with the name of the resource that generated it.

Node configuration with resource annotations

The Interface resource2 named leaf1-ethernet-1-1 and highlighted with the 2 icon above is responsible for generating the configuration snippet shown next to it. This makes it easy to trace any part of the node configuration back to the resource that created it.

The node configuration can also be displayed from the terminal using the edactl CLI tool:

edactl -n eda node get-config leaf1

Node CLI#

While EDA UI/API is the primary interface for all interactions with the managed nodes, there are scenarios where direct CLI access to the nodes is required for troubleshooting or verification purposes. Users can access the CLI of any managed node using the node-ssh script that opens the SSH session from one of the EDA pods to the target node.

Set up the node-ssh script as described in the referenced section, and then you can connect to any of the nodes in the topology. For example, to connect to the leaf1 node, run:

node-ssh leaf1
Warning: Permanently added '10.254.32.151' (ED25519) to the list of known hosts.
([email protected]) Password:
Last login: Sun Jan  4 15:00:55 2026 from 10.254.12.87
Loading environment configuration file(s): ['/etc/opt/srlinux/srlinux.rc']
Welcome to the Nokia SR Linux CLI.

--{ + running }--[  ]--
A:admin@leaf1#

The username is set to admin and the password is NokiaSrl1!.

  • Where to next?


    Now that we know how EDA manages network nodes, let's learn how EDA leverages declarative abstractions to ensure reliable and consistent network operations. Because imperative configuration management is a thing of the past!

    EDA Resource Model


  1. gNMI is currently the only supported protocol for device management in EDA, with support for other management protocols planned for future releases.

  2. Resources are the declarative units of automation and core building blocks of EDA. They represent the desired state that should be enacted on a target device. Refer to the next section of the tour to learn more about resources in EDA.

  3. Kubernetes-style labels are used in EDA.