Containerlab Integration#
To facilitate end-to-end testing and validation of configuration changes, EDA comes equipped with its own multi-vendor network emulation engine, abbreviated as CX. CX is a highly scalable network emulation platform that powers EDA's Digital Twin capabilities.
Acknowledging that EDA CX is a new network emulation platform that is still maturing, we wanted to offer a way to integrate EDA with the multitude of existing network topologies built with Containerlab.
In this section we first cover how to integrate EDA with a lab built with Containerlab in a fully automated way, and then explain how to do the same manually, with a deep dive into everything involved in the onboarding process. To keep things practical, we will take a real lab built with Containerlab - srl-labs/srlinux-vlan-handling-lab - and integrate it with EDA.
This tiny lab consists of two SR Linux nodes and two clients connected to them, which is all we need to demonstrate the integration. Let's deploy it like any other containerlab topology:
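A sketch of the deploy command, assuming Containerlab's support for running remote topologies straight from a GitHub repository:

```bash
sudo containerlab deploy -t srl-labs/srlinux-vlan-handling-lab #(1)!
```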
- The lab will be cloned to the current working directory and deployed.
Containerlab, SR Linux, and EDA versions
For a successful integration you need to ensure the following minimal version requirements:
- Containerlab 0.62.2
- SR Linux 24.10.1
- EDA 24.12.1
This article was validated using the following versions:
- Containerlab: 0.62.2
- SR Linux: 24.10.1
- EDA: 24.12.1
Our end game is to install EDA and integrate it with the Containerlab topology so that we can manage the lab nodes using EDA. The integration scenario is depicted in the diagram below.
Installing EDA#
The reason we started this section with a mention of EDA CX is that, as of EDA v24.12.1, the platform is installed with the CX engine enabled by default. This means that EDA will spin up virtual simulator nodes using the CX engine for every topology node.
To let EDA manage external nodes (either real hardware or virtual nodes spawned outside of EDA) we need to provide a specific installation option.
Clone the EDA Playground repository if you haven't already and uncomment the following line in the preferences (prefs.mk) file:
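At the time of writing, the relevant knob in prefs.mk looked like the following; treat the exact variable name as an assumption and verify it against the comments in your copy of the file:

```makefile
# disable the CX-based simulation so that EDA can manage external nodes
SIMULATE = false
```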
Set the other installation parameters in the prefs.mk file as explained in the Try EDA section and deploy EDA:
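A sketch of the deploy step, assuming the Playground's make target is unchanged:

```bash
make try-eda #(1)!
```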
- The necessary installation parameters, like EXT_DOMAIN_NAME in this case, are provided in the preferences file. See the Try EDA Like a Pro post to learn some neat installation tricks.
With simulation mode disabled, EDA will be installed without the eda-cx deployment present and with no topology loaded.
License required
Unfortunately, nodes spawned outside of EDA CX are currently considered hardware nodes and require a license; this includes the virtual SR Linux nodes spawned by Containerlab.
You can reach out to an EDA PLM team member on Discord to check if they can help you acquire one.
If you have a license, apply it to your cluster like this:
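The exact procedure depends on the license artifact you receive; as a sketch, assuming you were handed a license manifest file (the filename and namespace here are placeholders):

```bash
# apply the license manifest provided to you
kubectl -n eda-system apply -f eda-license.yaml
```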
Reachability requirements#
For EDA to manage nodes spawned outside of the Kubernetes cluster it is deployed in, it must be able to reach them. In this tutorial we are installing EDA in the KinD cluster that comes by default with the EDA Playground installation, so our EDA installation runs alongside the Containerlab topology on the same host machine.
Yet, even though KinD and Containerlab are running on the same host, the two environments are isolated from each other, as prescribed by the Docker networking model and enforced by iptables. To allow the KinD cluster to communicate with the Containerlab nodes, the Containerlab 0.62.2 release installs permitting iptables rules in the DOCKER-USER chain for both the v4 and v6 families.
To confirm that the communication is indeed allowed, we can take the management IP of one of our Containerlab nodes and ping it from the eda-bsvr pod, which is one of the pods requiring connectivity with the Containerlab nodes.
Let's issue a ping from the eda-bsvr pod to the clab-vlan-srl1 node:
```bash
kubectl -n eda-system exec -i \
  $(kubectl -n eda-system get pods -l eda.nokia.com/app=bootstrapserver \
  -o=jsonpath='{.items[*].metadata.name}') \
  -- ping -c 2 $(sudo docker inspect -f '{{.NetworkSettings.Networks.clab.IPAddress}}' clab-vlan-srl1)
```
If you managed to copy-paste things right, you should see packets happily flying between EDA and Containerlab nodes. OK, now, with the containerlab topology running, EDA installed and connectivity requirements satisfied, we can proceed with the actual integration.
Automated integration#
In pursuit of a one-click integration experience, we have created the clab-connector CLI tool that automates the integration process.
Installation#
The clab-connector tool is easily installable using the uv package manager, so start by installing uv and then clab-connector:
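A sketch of both steps, assuming clab-connector is published as a uv-installable tool:

```bash
# install the uv package manager
curl -LsSf https://astral.sh/uv/install.sh | sh

# install the clab-connector tool
uv tool install clab-connector
```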
Usage#
Clab-connector leverages the Kubernetes API, the EDA API, and the Containerlab topology export data to automate the integration process. Consequently, the host machine where the clab-connector tool is installed must have access to the kube config file, the EDA API endpoint, and Containerlab's topology-data.json1 file.
Integrate#
If you haven't changed any of the default credentials in your EDA installation, you can integrate EDA with Containerlab as simply as:
```bash
clab-connector integrate \
  --eda-url https://your.eda.host \
  -t ~/path/to/your-lab/clab-yourlab/topology-data.json #(1)!
```
- The topology-data.json file is located in the Containerlab Lab Directory, which is created next to the lab's topology file.
If you happen to change the default user credentials, you can provide them with the --eda-user and --eda-password flags. Run clab-connector integrate --help to see all the available flags.
The connector tool will create a new EDA namespace matching the Containerlab lab name and will create the required resources in it. This allows you to manage as many distinct labs as you want, without resources clashing between them.
Remove#
To remove the EDA integration, run:
```bash
clab-connector remove \
  --eda-url https://your.eda.host \
  -t ~/path/to/your-lab/clab-yourlab/topology-data.json
```
This will remove the previously created namespace and all the resources inside it.
Manual integration#
TLDR
To integrate SR Linux nodes spawned by Containerlab with EDA in manual mode, you need to:
- Apply an EDA license to be able to integrate with SR Linux nodes spawned outside of EDA CX
- optional Change the default NodeUser resource to use the NokiaSrl1! password
password - Create a NodeProfile resource with the OS/version/yang fields set to the corresponding values
- Create a TopoNode resource for each SR Linux node
- Create an Interface resource for each endpoint of the SR Linux nodes.
- Create a TopoLink resource for each link referencing the created Interface resources
Copy/Paste snippets
If you want to quickly onboard the SR Linux nodes after spawning the srl-labs/srlinux-vlan-handling-lab containerlab topology, you can copy-paste the following snippets into your terminal in their entirety.
```bash
cat << EOF | kubectl -n eda apply -f -
---
apiVersion: core.eda.nokia.com/v1
kind: NodeUser
metadata:
  name: admin
  namespace: eda
spec:
  groupBindings:
    - groups:
        - sudo
      nodeSelector:
        - ""
  username: admin
  password: NokiaSrl1!
  sshPublicKeys:
    # an optional list of ssh public keys for the node user
    # - "ssh-ed25519 AAAAC3NzaC1lZYOURKEYHEREYOURKEYHEREYOURKEYHEREYOURKEYHEREHDLeDteKN74"
$(ssh-add -L | awk '{print "    - \""$0"\""}')
EOF
```

```bash
cat << 'EOF' | kubectl -n eda apply -f -
---
apiVersion: core.eda.nokia.com/v1
kind: NodeProfile
metadata:
  name: srlinux-clab-24.10.1
  namespace: eda
spec:
  operatingSystem: srl
  version: 24.10.1
  versionPath: .system.information.version
  versionMatch: v24\.10\.1.*
  images:
    - image: fake.bin
      imageMd5: fake.bin.md5
  port: 57410
  yang: https://eda-asvr.eda-system.svc/eda-system/schemaprofiles/srlinux-ghcr-24.10.1/srlinux-24.10.1.zip
  onboardingUsername: admin
  onboardingPassword: NokiaSrl1!
  nodeUser: admin
  annotate: true
EOF
```

```bash
cat << EOF | kubectl -n eda apply -f -
---
apiVersion: core.eda.nokia.com/v1
kind: TopoNode
metadata:
  name: clab-vlan-srl1
  labels:
    eda.nokia.com/security-profile: managed
  namespace: eda
spec:
  nodeProfile: srlinux-clab-24.10.1
  operatingSystem: srl
  platform: 7220 IXR-D2L
  version: 24.10.1
  productionAddress:
    ipv4: $(sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' clab-vlan-srl1)
---
apiVersion: core.eda.nokia.com/v1
kind: TopoNode
metadata:
  name: clab-vlan-srl2
  labels:
    eda.nokia.com/security-profile: managed
  namespace: eda
spec:
  nodeProfile: srlinux-clab-24.10.1
  operatingSystem: srl
  platform: 7220 IXR-D2L
  version: 24.10.1
  productionAddress:
    ipv4: $(sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' clab-vlan-srl2)
EOF
```

```bash
cat << 'EOF' | kubectl -n eda apply -f -
#########################
#### srl1 Interface #####
#########################
---
apiVersion: interfaces.eda.nokia.com/v1alpha1
kind: Interface
metadata:
  labels:
    role: access
  name: clab-vlan-srl1-ethernet-1-1
  namespace: eda
spec:
  enabled: true
  encapType: "null"
  lldp: true
  members:
    - enabled: true
      interface: ethernet-1-1
      node: clab-vlan-srl1
  type: interface
---
apiVersion: interfaces.eda.nokia.com/v1alpha1
kind: Interface
metadata:
  labels:
    role: interSwitch
  name: clab-vlan-srl1-ethernet-1-10
  namespace: eda
spec:
  enabled: true
  encapType: "null"
  lldp: true
  members:
    - enabled: true
      interface: ethernet-1-10
      node: clab-vlan-srl1
  type: interface
#########################
#### srl2 Interface #####
#########################
---
apiVersion: interfaces.eda.nokia.com/v1alpha1
kind: Interface
metadata:
  labels:
    role: access
  name: clab-vlan-srl2-ethernet-1-1
  namespace: eda
spec:
  enabled: true
  encapType: "null"
  lldp: true
  members:
    - enabled: true
      interface: ethernet-1-1
      node: clab-vlan-srl2
  type: interface
---
apiVersion: interfaces.eda.nokia.com/v1alpha1
kind: Interface
metadata:
  labels:
    role: interSwitch
  name: clab-vlan-srl2-ethernet-1-10
  namespace: eda
spec:
  enabled: true
  encapType: "null"
  lldp: true
  members:
    - enabled: true
      interface: ethernet-1-10
      node: clab-vlan-srl2
  type: interface
EOF
```

```bash
cat << 'EOF' | kubectl -n eda apply -f -
---
apiVersion: core.eda.nokia.com/v1
kind: TopoLink
metadata:
  name: srl1-client1
  namespace: eda
spec:
  links:
    - local:
        node: clab-vlan-srl1
        interface: ethernet-1/1
        interfaceResource: clab-vlan-srl1-ethernet-1-1
      remote:
        node: clab-vlan-client1
        interface: eth1
        interfaceResource: eth1
      type: edge
---
apiVersion: core.eda.nokia.com/v1
kind: TopoLink
metadata:
  name: srl1-srl2
  namespace: eda
spec:
  links:
    - local:
        node: clab-vlan-srl1
        interface: ethernet-1/10
        interfaceResource: clab-vlan-srl1-ethernet-1-10
      remote:
        node: clab-vlan-srl2
        interface: ethernet-1/10
        interfaceResource: clab-vlan-srl2-ethernet-1-10
      type: interSwitch
---
apiVersion: core.eda.nokia.com/v1
kind: TopoLink
metadata:
  name: srl2-client2
  namespace: eda
spec:
  links:
    - local:
        node: clab-vlan-srl2
        interface: ethernet-1/1
        interfaceResource: clab-vlan-srl2-ethernet-1-1
      remote:
        node: clab-vlan-client2
        interface: eth1
        interfaceResource: eth1
      type: edge
EOF
```
Even though the automated integration makes things easy, it is the manual procedure that explains the moving parts and the underlying concepts. By completing this section you will gain a decent understanding of the onboarding process and will breeze through the automated integration later on.
SR Linux configuration#
Our goal is to have EDA discover and onboard the SR Linux nodes running as part of the Containerlab topology. When Containerlab2 spins up the SR Linux nodes, it adds two EDA-specific gRPC servers to the default config; these servers allow EDA to discover and later manage the nodes.
You will find the eda-discovery gRPC server, which EDA uses to discover the node and set up the TLS certificates, and the eda-mgmt gRPC server, which EDA uses to manage the node after the initial discovery using the provisioned TLS certificates.
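You can inspect these servers on a running node; as a sketch, assuming the default admin credentials, query them with sr_cli:

```bash
# show the EDA-specific gRPC server configuration on srl1
docker exec clab-vlan-srl1 sr_cli "info system grpc-server eda-discovery"
docker exec clab-vlan-srl1 sr_cli "info system grpc-server eda-mgmt"
```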
TopoNode#
It is time to let EDA know about the Containerlab topology and onboard the two SR Linux nodes that are running under the names clab-vlan-srl1 and clab-vlan-srl2. But how do we do it?
It all starts with the TopoNode resource. The TopoNode resource is part of the EDA core and describes an abstracted node in the topology. In order to let EDA know about a node it needs to manage, we need to create a TopoNode resource for each SR Linux node in our Containerlab topology.
TopoNode's Custom Resource Definition (CRD) documentation describes the fields a resource of this type might have, but we need only a subset of them. Here are our two TopoNode resources named according to the container names of our SR Linux nodes in the topology:
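```yaml
---
apiVersion: core.eda.nokia.com/v1
kind: TopoNode
metadata:
  name: clab-vlan-srl1
  labels:
    eda.nokia.com/security-profile: managed
  namespace: eda
spec:
  nodeProfile: srlinux-clab-24.10.1
  operatingSystem: srl
  platform: 7220 IXR-D2L
  version: 24.10.1
  productionAddress:
    ipv4: # IP address of the clab-vlan-srl1 node
---
apiVersion: core.eda.nokia.com/v1
kind: TopoNode
metadata:
  name: clab-vlan-srl2
  labels:
    eda.nokia.com/security-profile: managed
  namespace: eda
spec:
  nodeProfile: srlinux-clab-24.10.1
  operatingSystem: srl
  platform: 7220 IXR-D2L
  version: 24.10.1
  productionAddress:
    ipv4: # IP address of the clab-vlan-srl2 node
```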
If you feel lost, don't worry, we will explain what these fields mean in a moment.
Metadata#
Following the Kubernetes Resource Model (KRM), we specify the apiVersion and kind of the resource we are describing in YAML format. The TopoNode resource belongs to the core.eda.nokia.com API group of the v1 version, and the resource kind is TopoNode.
Next comes the metadata section. There, we specify the desired resource name. The name of the TopoNode resource does not have to match anything specific, but to keep things consistent with the Containerlab topology, we will use the corresponding container name of the SR Linux node.
In the labels section we need to add a label that describes how the node's TLS certificates should be handled. EDA is a secure-first platform where all communications are secured by default, and interactions with the network nodes are no exception. With the eda.nokia.com/security-profile: managed label we tell EDA that it needs to manage the certificate lifecycle for the node.
Without going into the details, this mode ensures fully automated certificate management for the node.
The EDA Playground installation comes with a pre-created user namespace called eda. This pre-provisioned namespace should contain all user-provided resources, like TopoNode. Hence, we set the namespace to eda in the metadata section.
System information#
Jumping over to the .spec object of the TopoNode resource, we can spot a block with system information data:
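```yaml
operatingSystem: srl
platform: 7220 IXR-D2L
version: 24.10.1
```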
- The operatingSystem field is set to srl for the Nokia SR Linux nodes we are about to onboard.
- The platform field should contain the SR Linux platform name in its full text form. Since we did not specify the SR Linux platform type in the Containerlab topology, it defaults to 7220 IXR-D2L.
- The version field must match the version of the SR Linux container image we are using in our topology.
Address#
Since our SR Linux nodes were deployed with Containerlab, EDA can't possibly know the nodes' IP addresses. We need to provide this information, and the TopoNode resource has a field for that:
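```yaml
productionAddress:
  ipv4: # IP address assigned to the node by Containerlab
```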
We chose to use the IPv4 address assigned by Containerlab, but IPv6 addresses are also supported. EDA will use this IP address to reach the node and start the onboarding process once the TopoNode resource is created in the cluster.
Providing the production address information disables the whole DHCP/ZTP workflow on the bootstrap server side, as the node is considered to be bootstrapped by an external system (like Containerlab).
The Bootstrap server in this case will just ensure that the node is reachable and set up a valid TLS certificate.
Node profile#
The last piece in the TopoNode resource that we must set is a NodeProfile.
The NodeProfile, surprise-surprise, defines the profile that a particular TopoNode resource is going to use.
It contains further details about the node's transport parameters, imaging information, and YANG schema. Let's cover the most important fields of this resource.
OS and version#
The first thing you see in the NodeProfile spec is the OS and version information. It has to match the OS and version provided in the associated TopoNode resource. Besides that, it also has to specify the JSPath used to fetch the version value from the node, and the regex to match the fetched value against.
```yaml
operatingSystem: srl
version: 24.10.1
versionPath: .system.information.version
versionMatch: v24\.10\.1.*
```
Image#
When a hardware node running SR Linux uses the ZTP process, the Bootstrap server provides a ZTP script that contains the initial bootstrap configuration and the target image URL.
The URL that the Bootstrap server uses is provided in the .spec.images[].image field of the NodeProfile resource.
You might ask why we need this for a Containerlab-spawned virtual node that does not need to be imaged. Good question. Since this field is marked as required in the CRD, we have to provide some value, but for virtual nodes a dummy URL will do.
This is exactly what we did in our NodeProfile resource:
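```yaml
images:
  - image: fake.bin
    imageMd5: fake.bin.md5
```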
gRPC port#
With the port field we specify the gRPC port number for the server that EDA will use to manage the node.
If you remember, in the SR Linux configuration section we mentioned that Containerlab adds the eda-mgmt gRPC server listening on port 57410. This port is set in the NodeProfile resource, and EDA will use it to connect to the node once the onboarding process is done.
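In our NodeProfile this is a single line:

```yaml
port: 57410
```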
YANG schema#
One of EDA's core features is its ability to validate intents before applying them to the nodes. The validation piece is crucial for mission-critical networks, and EDA takes care of it.
To validate the intents, no matter how complex they are, EDA needs to know the YANG schema of the node it talks to. This requirement makes YANG schema a mandatory field in the NodeProfile resource; it should point to an HTTP location where EDA can fetch the YANG schema. We call this YANG bundle a Schema Profile.
```yaml
yang: https://eda-asvr.eda-system.svc/eda-system/schemaprofiles/srlinux-ghcr-24.10.1/srlinux-24.10.1.zip
```
As part of the EDA Playground installation, the schema profile for SR Linux version 24.10.1 is already provided and is served by EDA's Artifact server.
User#
EDA uses the gNMI protocol to communicate with the nodes, starting from discovery and onboarding. The gNMI server authenticates the client using the username and password pair provided in the gRPC metadata.
For the onboarding step to be successful, a pair of credentials needs to be provided via the onboardingUsername and onboardingPassword fields.
When EDA reaches out to the node's discovery gRPC server over the predefined port 50052, it supplies these credentials in the gRPC metadata. The provided credentials must be valid, and since we are using the default admin credentials in these fields, we can rest assured that the authentication will succeed.
But the onboarding user might not be the same as the one used for the ongoing management of the node. When EDA creates the Node Push/Pull (NPP) pods that are responsible for the configuration push and pull operations, these pods will use the credentials of a user defined in the NodeUser resource that we refer to in the NodeProfile as well:
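```yaml
nodeUser: admin
```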
The admin NodeUser resource has been created as part of the EDA Playground installation, but it uses a non-default SR Linux password that we would like to change. To do that, we will craft a resource manifest that uses the default NokiaSrl1! password, as well as adds a public key3 to enable typing-free SSH access.
The NodeUser resource references the NodeGroup resource that contains the AAA parameters this user should inherit.
TopoLink#
If we were to apply the TopoNode resource right now, we would end up getting the following topology diagram in EDA UI:
There is obviously a piece missing - the topology doesn't have any links! And the reason is simple - we haven't defined any topology link resources.
The TopoLink resource is responsible for defining the topology links. As the CRD description says:
TopoLink represents a logical link between two TopoNodes. It may include more than one physical link, being used to represent a LAG or multihomed link.
Looking at our lab diagram we can identify three topology links (highlighted in cyan):
In EDA, we call links between the switches inter switch links, links between the switches and the clients edge links, and loopback links are called just loopback. So our three topology links will be:
- The link between srl1 and client1 - an edge link
- The link between the srl1 and srl2 switches - an interSwitch link
- The link between srl2 and client2 - an edge link
The TopoLink resource definition has a straightforward specification:
```yaml
spec:
  links:
    - local: # required
        interface:
        interfaceResource: # required
        node: # required
      remote: # same as local
      speed:
      type: # required
```
A TopoLink, like any other link-like object, is identified by a local endpoint and an optional remote endpoint. The local/remote endpoints "connect" to the TopoNode objects via the node field.
But this is not everything a TopoLink needs. It also requires us to provide a link to the Interface resource via the interfaceResource field, as this is the bind point for the link on a particular node.
Interface#
The Interface resource creates a physical interface on the node. In our topology we have two physical interfaces on each managed SR Linux node:
Interface CRD
The Interface resource is part of the interfaces.eda.nokia.com application, and its CRD is currently not published, so we can't link you to doc.crds.dev.
For a TopoLink resource to be valid, the Interface resources must be created first and then referenced in the TopoLink specification.
Here is how you would define the ethernet-1/1 interface on the SR Linux node srl1 that connects it to the client1 node:
```yaml
apiVersion: interfaces.eda.nokia.com/v1alpha1
kind: Interface
metadata:
  labels:
    role: access
  name: clab-vlan-srl1-ethernet-1-1
  namespace: eda
spec:
  enabled: true
  encapType: "null"
  lldp: true
  members:
    - enabled: true
      interface: ethernet-1-1
      node: clab-vlan-srl1
  type: interface
```
As indicated in the spec, the Interface resource has a members field that can contain one (for a single interface) or multiple (for a LAG) interface objects. An implementation detail worth calling out is that the physical interface name should be normalized, i.e. SR Linux's ethernet-1/1 becomes ethernet-1-1.
As we do not have LAG interfaces in our lab topology, all our interfaces will have identical configuration.
Applying the resources#
Let's summarize what we have learned so far:
- The TopoNode resource defines the node in the EDA topology.
- Creation of a TopoNode resource triggers the onboarding process for the node.
- The TopoNode resource references the NodeProfile resource that defines the lower-level node parameters used in the bootstrapping/onboarding and management workflows.
- Onboarding happens over the well-known gRPC port 50052; this gRPC server is configured by Containerlab2 automatically for the SR Linux nodes.
- The onboarding/bootstrapping procedure sets up the EDA TLS profile using gNSI for SR Linux nodes. Once the certificate is installed, the node is marked as onBoarded=true.
- The onboarding user and the user used for ongoing EDA management might be different. The "permanent" user is declaratively defined by the NodeUser resource.
- The gRPC server used for the management of the node is tied to the NodeProfile resource and is identified by the port field. This server should reference the dynamic EDA TLS profile that EDA's bootstrap server sets up during the onboarding workflow.
TLS profile that EDA's bootstrap server sets up during the onboarding workflow. - When the node is onboarded, the NPP pod is spawned and connects to the node; it replaces the existing node configuration with the configuration calculated by EDA based on the defined intents.
- To create TopoLink resources, we need to create Interface resources first and then reference them in the TopoLink resource.
Before we rush to apply the resources, let's capture the state of the current config present on our SR Linux nodes and verify that the configuration will be wiped out and replaced once EDA starts to manage the nodes.
If you take a look at the lab's topology file, you will notice that the two SR Linux nodes are defined with startup config blobs that create a pair of interfaces and attach them to a bridge domain bridge-1. It is easy to verify that:
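One way to do it, as a sketch, is to dump the bridge-1 configuration with sr_cli inside the node containers:

```bash
# show the bridge-1 mac-vrf configuration on both nodes
docker exec clab-vlan-srl1 sr_cli "info network-instance bridge-1"
docker exec clab-vlan-srl2 sr_cli "info network-instance bridge-1"
```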
NodeUser#
With the initial state captured, let's start applying the resources in the bottom-up order, starting with the NodeUser resource:
```yaml
---
apiVersion: core.eda.nokia.com/v1
kind: NodeUser
metadata:
  name: admin
  namespace: eda
spec:
  groupBindings:
    - groups:
        - sudo
      nodeSelector:
        - ""
  username: admin
  password: NokiaSrl1!
  sshPublicKeys:
    # an optional list of ssh public keys for the node user
    # - "ssh-ed25519 AAAAC3NzaC1lZYOURKEYHEREYOURKEYHEREYOURKEYHEREYOURKEYHEREHDLeDteKN74"
```

In this command we retrieve the public keys from the SSH agent and add them to the NodeUser resource.

```bash
cat << EOF | kubectl -n eda apply -f -
---
apiVersion: core.eda.nokia.com/v1
kind: NodeUser
metadata:
  name: admin
  namespace: eda
spec:
  groupBindings:
    - groups:
        - sudo
      nodeSelector:
        - ""
  username: admin
  password: NokiaSrl1!
  sshPublicKeys:
    # an optional list of ssh public keys for the node user
    # - "ssh-ed25519 AAAAC3NzaC1lZYOURKEYHEREYOURKEYHEREYOURKEYHEREYOURKEYHEREHDLeDteKN74"
$(ssh-add -L | awk '{print "    - \""$0"\""}')
EOF
```
NodeProfile#
With the admin NodeUser modified to feature the NokiaSrl1! password, let's create the NodeProfile resource named srlinux-clab-24.10.1:
```yaml
---
apiVersion: core.eda.nokia.com/v1
kind: NodeProfile
metadata:
  name: srlinux-clab-24.10.1
  namespace: eda
spec:
  operatingSystem: srl
  version: 24.10.1
  versionPath: .system.information.version
  versionMatch: v24\.10\.1.*
  images:
    - image: fake.bin
      imageMd5: fake.bin.md5
  port: 57410
  yang: https://eda-asvr.eda-system.svc/eda-system/schemaprofiles/srlinux-ghcr-24.10.1/srlinux-24.10.1.zip
  onboardingUsername: admin
  onboardingPassword: NokiaSrl1!
  nodeUser: admin
  annotate: true
```

```bash
cat << 'EOF' | kubectl -n eda apply -f -
---
apiVersion: core.eda.nokia.com/v1
kind: NodeProfile
metadata:
  name: srlinux-clab-24.10.1
  namespace: eda
spec:
  operatingSystem: srl
  version: 24.10.1
  versionPath: .system.information.version
  versionMatch: v24\.10\.1.*
  images:
    - image: fake.bin
      imageMd5: fake.bin.md5
  port: 57410
  yang: https://eda-asvr.eda-system.svc/eda-system/schemaprofiles/srlinux-ghcr-24.10.1/srlinux-24.10.1.zip
  onboardingUsername: admin
  onboardingPassword: NokiaSrl1!
  nodeUser: admin
  annotate: true
EOF
```
TopoNode#
So far, the resources that we have modified or created did not trigger any activity in our EDA cluster; we just prepared the ground for the next step of creating the TopoNode resources:
When applying the TopoNode resources, the difference between them (besides the resource name) is in the productionAddress field. The kubectl apply tab shows how to programmatically fetch the currently assigned IP address from the Docker state and populate the resources accordingly, so that you can copy and paste the command on the host that runs the containerlab topology.
```yaml
---
apiVersion: core.eda.nokia.com/v1
kind: TopoNode
metadata:
  name: clab-vlan-srl1
  labels:
    eda.nokia.com/security-profile: managed
  namespace: eda
spec:
  nodeProfile: srlinux-clab-24.10.1
  operatingSystem: srl
  platform: 7220 IXR-D2L
  version: 24.10.1
  productionAddress:
    ipv4: # IP address of the clab-vlan-srl1 node
---
apiVersion: core.eda.nokia.com/v1
kind: TopoNode
metadata:
  name: clab-vlan-srl2
  labels:
    eda.nokia.com/security-profile: managed
  namespace: eda
spec:
  nodeProfile: srlinux-clab-24.10.1
  operatingSystem: srl
  platform: 7220 IXR-D2L
  version: 24.10.1
  productionAddress:
    ipv4: # IP address of the clab-vlan-srl2 node
```

```bash
cat << EOF | kubectl -n eda apply -f -
---
apiVersion: core.eda.nokia.com/v1
kind: TopoNode
metadata:
  name: clab-vlan-srl1
  labels:
    eda.nokia.com/security-profile: managed
  namespace: eda
spec:
  nodeProfile: srlinux-clab-24.10.1
  operatingSystem: srl
  platform: 7220 IXR-D2L
  version: 24.10.1
  productionAddress:
    ipv4: $(sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' clab-vlan-srl1)
---
apiVersion: core.eda.nokia.com/v1
kind: TopoNode
metadata:
  name: clab-vlan-srl2
  labels:
    eda.nokia.com/security-profile: managed
  namespace: eda
spec:
  nodeProfile: srlinux-clab-24.10.1
  operatingSystem: srl
  platform: 7220 IXR-D2L
  version: 24.10.1
  productionAddress:
    ipv4: $(sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' clab-vlan-srl2)
EOF
```
Interface#
A topology without links is not a topology. Time to add the links between the nodes. A prerequisite to creating the TopoLink resources is to create the Interface resources.
```yaml
#########################
#### srl1 Interface #####
#########################
---
apiVersion: interfaces.eda.nokia.com/v1alpha1
kind: Interface
metadata:
  labels:
    role: access
  name: clab-vlan-srl1-ethernet-1-1
  namespace: eda
spec:
  enabled: true
  encapType: "null"
  lldp: true
  members:
    - enabled: true
      interface: ethernet-1-1
      node: clab-vlan-srl1
  type: interface
---
apiVersion: interfaces.eda.nokia.com/v1alpha1
kind: Interface
metadata:
  labels:
    role: interSwitch
  name: clab-vlan-srl1-ethernet-1-10
  namespace: eda
spec:
  enabled: true
  encapType: "null"
  lldp: true
  members:
    - enabled: true
      interface: ethernet-1-10
      node: clab-vlan-srl1
  type: interface
#########################
#### srl2 Interface #####
#########################
---
apiVersion: interfaces.eda.nokia.com/v1alpha1
kind: Interface
metadata:
  labels:
    role: access
  name: clab-vlan-srl2-ethernet-1-1
  namespace: eda
spec:
  enabled: true
  encapType: "null"
  lldp: true
  members:
    - enabled: true
      interface: ethernet-1-1
      node: clab-vlan-srl2
  type: interface
---
apiVersion: interfaces.eda.nokia.com/v1alpha1
kind: Interface
metadata:
  labels:
    role: interSwitch
  name: clab-vlan-srl2-ethernet-1-10
  namespace: eda
spec:
  enabled: true
  encapType: "null"
  lldp: true
  members:
    - enabled: true
      interface: ethernet-1-10
      node: clab-vlan-srl2
  type: interface
```

```bash
cat << 'EOF' | kubectl -n eda apply -f -
#########################
#### srl1 Interface #####
#########################
---
apiVersion: interfaces.eda.nokia.com/v1alpha1
kind: Interface
metadata:
  labels:
    role: access
  name: clab-vlan-srl1-ethernet-1-1
  namespace: eda
spec:
  enabled: true
  encapType: "null"
  lldp: true
  members:
    - enabled: true
      interface: ethernet-1-1
      node: clab-vlan-srl1
  type: interface
---
apiVersion: interfaces.eda.nokia.com/v1alpha1
kind: Interface
metadata:
  labels:
    role: interSwitch
  name: clab-vlan-srl1-ethernet-1-10
  namespace: eda
spec:
  enabled: true
  encapType: "null"
  lldp: true
  members:
    - enabled: true
      interface: ethernet-1-10
      node: clab-vlan-srl1
  type: interface
#########################
#### srl2 Interface #####
#########################
---
apiVersion: interfaces.eda.nokia.com/v1alpha1
kind: Interface
metadata:
  labels:
    role: access
  name: clab-vlan-srl2-ethernet-1-1
  namespace: eda
spec:
  enabled: true
  encapType: "null"
  lldp: true
  members:
    - enabled: true
      interface: ethernet-1-1
      node: clab-vlan-srl2
  type: interface
---
apiVersion: interfaces.eda.nokia.com/v1alpha1
kind: Interface
metadata:
  labels:
    role: interSwitch
  name: clab-vlan-srl2-ethernet-1-10
  namespace: eda
spec:
  enabled: true
  encapType: "null"
  lldp: true
  members:
    - enabled: true
      interface: ethernet-1-10
      node: clab-vlan-srl2
  type: interface
EOF
```
The moment we create the Interface resources, EDA configures the associated physical interfaces on the SR Linux nodes. We will see this in the Verification section. Yet, the topology UI will not show the interfaces until we create the TopoLink resources.
TopoLink#
Now that we have the Interfaces created, we can create the last resource type - TopoLink.
```yaml
---
apiVersion: core.eda.nokia.com/v1
kind: TopoLink
metadata:
  name: srl1-client1
  namespace: eda
spec:
  links:
    - local:
        node: clab-vlan-srl1
        interface: ethernet-1/1
        interfaceResource: clab-vlan-srl1-ethernet-1-1
      remote:
        node: clab-vlan-client1
        interface: eth1
        interfaceResource: eth1
      type: edge
---
apiVersion: core.eda.nokia.com/v1
kind: TopoLink
metadata:
  name: srl1-srl2
  namespace: eda
spec:
  links:
    - local:
        node: clab-vlan-srl1
        interface: ethernet-1/10
        interfaceResource: clab-vlan-srl1-ethernet-1-10
      remote:
        node: clab-vlan-srl2
        interface: ethernet-1/10
        interfaceResource: clab-vlan-srl2-ethernet-1-10
      type: interSwitch
---
apiVersion: core.eda.nokia.com/v1
kind: TopoLink
metadata:
  name: srl2-client2
  namespace: eda
spec:
  links:
    - local:
        node: clab-vlan-srl2
        interface: ethernet-1/1
        interfaceResource: clab-vlan-srl2-ethernet-1-1
      remote:
        node: clab-vlan-client2
        interface: eth1
        interfaceResource: eth1
      type: edge
```

```bash
cat << 'EOF' | kubectl -n eda apply -f -
---
apiVersion: core.eda.nokia.com/v1
kind: TopoLink
metadata:
  name: srl1-client1
  namespace: eda
spec:
  links:
    - local:
        node: clab-vlan-srl1
        interface: ethernet-1/1
        interfaceResource: clab-vlan-srl1-ethernet-1-1
      remote:
        node: clab-vlan-client1
        interface: eth1
        interfaceResource: eth1
      type: edge
---
apiVersion: core.eda.nokia.com/v1
kind: TopoLink
metadata:
  name: srl1-srl2
  namespace: eda
spec:
  links:
    - local:
        node: clab-vlan-srl1
        interface: ethernet-1/10
        interfaceResource: clab-vlan-srl1-ethernet-1-10
      remote:
        node: clab-vlan-srl2
        interface: ethernet-1/10
        interfaceResource: clab-vlan-srl2-ethernet-1-10
      type: interSwitch
---
apiVersion: core.eda.nokia.com/v1
kind: TopoLink
metadata:
  name: srl2-client2
  namespace: eda
spec:
  links:
    - local:
        node: clab-vlan-srl2
        interface: ethernet-1/1
        interfaceResource: clab-vlan-srl2-ethernet-1-1
      remote:
        node: clab-vlan-client2
        interface: eth1
        interfaceResource: eth1
      type: edge
EOF
```
Verifying integration#
Applying the TopoNode resources triggers a lot of activity in EDA, starting with the Bootstrap server setting up the dynamic TLS profile named EDA as part of the bootstrap workflow, and finishing with the NPP pods connecting to the SR Linux nodes and replacing the existing configuration with the configuration calculated by EDA based on the intents defined in the system.
Approximately 30-60 seconds after applying the TopoNode resources, you should see them with their associated state looking like this:
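For instance, by listing the TopoNode resources with kubectl (the exact columns may differ between EDA releases):

```bash
kubectl -n eda get toponodes
```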
If you don't see the same output, check the TransactionResults resource, which will reveal any potential issues with the transactions.
If your TopoNode resources look all good, how about the SR Linux nodes? What has changed there? Remember that we warned you about the config replacement that happens the moment the nodes become managed by EDA? Let's check the configuration on the clab-vlan-srl1 node using the same sr_cli commands as we did at the beginning of the Applying the resources section.
Note that the interfaces are all up, because they were configured when we created the Interface resources.
But contrary to the interfaces, the bridge-1 mac-vrf network instance is completely gone, because we have not created any resources that would trigger a network instance creation.
Besides the things that were removed, EDA added a new dynamic TLS profile named EDA. The Bootstrap server created it, and the eda-mgmt gRPC server refers to it as part of the default configuration of SR Linux.
This completes the manual integration of EDA with a topology created by Containerlab. You witnessed a process that is, well, manual, but the good news is that you can use the clab-connector to automate it.
- The topology-data.json file is generated by Containerlab when the lab is deployed. It can be found in Containerlab's Lab Directory. ↩
- Set your own public key; this one is for demonstration purposes only. ↩