Units of automation#
EDA is an automation framework that follows declarative principles. An operator provides the desired state of a resource, and EDA takes care of its deployment, provisioning, configuration, and reconciliation.
In other words, you tell EDA what state you want your infra to be in, and EDA carries out the "how" for you in a reliable and efficient way.
What is a Resource?
In EDA, a resource is a unit of automation and can represent virtually anything: a physical interface, an entire fabric[1], or a virtual network[2] running on top of it.
As a Kubernetes citizen, EDA represents its resources as Kubernetes Custom Resources (CRs), which can be created in multiple ways: via the Kubernetes (K8s) API, the EDA API, or the User Interface (UI).
You probably wonder what resources are available in EDA and how to interact with them. Great question!
EDA resources become available as soon as you install an EDA Application which is a way to extend EDA with new resources and capabilities on the fly. Applications may be provided by anyone: Nokia, our partners or indie developers - EDA is an open platform!
Nothing beats a hands-on experience, so let's learn more about Resources by following a short but powerful example of configuring a fabric on top of our 3-node topology deployed as part of our playground.
A Fabric resource#
You heard it right! We will configure a DC fabric using a single EDA resource in a fully declarative and reliable way. The Fabric resource is a high-level abstraction that allows you to define a fabric configuration suitable for environments ranging from small, single-node edge configurations to large, complex multi-tier and multi-pod networks.
What is a Fabric?
To put it simply, a Fabric resource represents a DC fabric configuration with all its components like:
- a set of leaf and spine devices
- allocation pools for system IPs and ASNs
- inter-switch link flavor (numbered, unnumbered, VLANs)
- underlay protocol (eBGP, IGP)
- overlay protocol
At the end of the day, a Fabric resource defines and configures everything a DC fabric needs to support overlay networks or L2/L3 services.
The Fabric resource documentation provides a detailed description of the resource, its attributes and behavior. To avoid repeating ourselves, we will proceed with creating a Fabric resource and leave the exploration of its attributes to the reader.
Recall that you can create EDA resources using the Kubernetes API, the EDA API or the User Interface (UI). Let's start with the Kubernetes API.
Creating a resource with the Kubernetes API#
To create a resource via the Kubernetes API, you must first define a Kubernetes Custom Resource (CR) specific to your needs. Since we set out to create a Fabric resource, we need to define a Fabric CR using our Fabric resource documentation.
To create the abstracted, declarative definition of our Fabric in EDA we will use the kubectl[3] CLI tool. Paste the command below into your terminal to create a Fabric resource named myfabric-1 in the eda namespace.
Have a look at the Fabric CR input below, as it highlights the power of abstraction and declarative configuration. In twenty lines of simple YAML we defined an entire fabric configuration: which leafs and spines to use, which inter-switch links to select, and which underlay and overlay protocols to run.
cat << 'EOF' | tee my-fabric.yaml | kubectl -n eda apply -f -
apiVersion: fabrics.eda.nokia.com/v1alpha1
kind: Fabric
metadata:
  name: myfabric-1
spec:
  leafs:
    leafNodeSelector:
      - eda.nokia.com/role=leaf
  spines:
    spineNodeSelector:
      - eda.nokia.com/role=spine
  interSwitchLinks:
    linkSelector:
      - eda.nokia.com/role=interSwitch
    unnumbered: IPV6
  systemPoolIPV4: systemipv4-pool
  underlayProtocol:
    protocol:
      - EBGP
    bgp:
      asnPool: asn-pool
  overlayProtocol:
    protocol: EBGP
EOF
Just like that, in a single command we deployed the Fabric resource. Let's verify it right away.
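A minimal check with kubectl, assuming the Fabric CRD registers the standard plural name fabrics:

kubectl -n eda get fabrics myfabric-1
# the resource should be listed; append -o yaml to inspect its full spec and status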
OK, we see that the Fabric resource named myfabric-1 has been created in our cluster, but what exactly has happened? Let's find out.
Without overwhelming you with details, let's just say that EDA immediately recognized the presence of the Fabric resource and turned the abstracted, declarative Fabric definition into dozens of important fabric-related sub-resources.
The sub-resources, in turn, were translated into node-specific configuration blobs and pushed in an all-or-nothing, transactional manner to all of the nodes in our virtual topology; all of this in a split second.
You see the power of abstraction and automation in action: a complex configuration task is reduced to a single declarative statement that is reliably transacted to the nodes, just as it should be.
"I don't think that the Fabric should be abstracted like that"
It is absolutely fine if your view of how the Fabric abstraction should look differs from ours. EDA doesn't tell you how to do your infrastructure automation; EDA is here to help you do it.
Leveraging the power of pluggable applications, you can create your own Fabric abstraction and use it to configure your fabric in the way that is most convenient for you.
Now that the abstracted, declarative input has been processed by EDA, a fully functional fabric configuration has been deployed on the nodes of our virtual topology.
Don't take our word for it; let's connect to the nodes and check what config they have now. Do you remember that all the nodes in our fabric had no configuration at all? Let's see what changed after we applied the Fabric resource:
Checking the running configuration on leaf1
We can connect to a node with a single command like make leaf1-ssh and check the running configuration with the info command.
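In the playground this boils down to two steps, with the second command entered at the SR Linux prompt once the SSH session opens:

make leaf1-ssh
info

The resulting running configuration on leaf1 looks like this: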
interface ethernet-1/1 {
admin-state enable
subinterface 0 {
admin-state enable
ipv6 {
admin-state enable
router-advertisement {
router-role {
admin-state enable
max-advertisement-interval 10
min-advertisement-interval 4
}
}
}
}
}
interface ethernet-1/2 {
admin-state enable
subinterface 0 {
admin-state enable
ipv6 {
admin-state enable
router-advertisement {
router-role {
admin-state enable
max-advertisement-interval 10
min-advertisement-interval 4
}
}
}
}
}
interface ethernet-1/3 {
admin-state enable
vlan-tagging true
}
interface ethernet-1/4 {
admin-state enable
vlan-tagging true
}
interface ethernet-1/5 {
admin-state enable
vlan-tagging true
}
interface ethernet-1/6 {
admin-state enable
vlan-tagging true
}
interface ethernet-1/7 {
admin-state enable
vlan-tagging true
}
interface ethernet-1/8 {
admin-state enable
vlan-tagging true
}
interface ethernet-1/9 {
admin-state enable
vlan-tagging true
}
interface ethernet-1/10 {
description lag-leaf1-e1011-local
admin-state enable
ethernet {
aggregate-id lag1
lacp-port-priority 32768
}
}
interface ethernet-1/11 {
description lag-leaf1-e1011-local
admin-state enable
ethernet {
aggregate-id lag1
lacp-port-priority 32768
}
}
interface ethernet-1/12 {
description lag-leaf1-2-e1212-local
admin-state enable
ethernet {
aggregate-id lag2
lacp-port-priority 32768
}
}
interface lag1 {
description lag-leaf1-e1011-local
admin-state enable
vlan-tagging true
lag {
lag-type lacp
min-links 1
lacp-fallback-mode static
lacp-fallback-timeout 60
lacp {
interval FAST
lacp-mode ACTIVE
admin-key 1
system-id-mac FE:2F:AA:00:00:01
system-priority 32768
}
}
}
interface lag2 {
description lag-leaf1-2-e1212-local
admin-state enable
vlan-tagging true
lag {
lag-type lacp
min-links 1
lacp-fallback-mode static
lacp-fallback-timeout 60
lacp {
interval FAST
lacp-mode ACTIVE
admin-key 2
system-id-mac FE:2F:AA:00:00:02
system-priority 32768
}
}
}
interface mgmt0 {
admin-state enable
subinterface 0 {
admin-state enable
ipv4 {
admin-state enable
dhcp-client {
trace-options {
trace [
messages
]
}
}
}
ipv6 {
admin-state enable
dhcp-client {
trace-options {
trace [
messages
]
}
}
}
}
}
interface system0 {
subinterface 0 {
admin-state enable
ipv4 {
admin-state enable
address 11.0.0.1/32 {
}
}
}
}
system {
configuration {
role sudo {
}
}
aaa {
authentication {
authentication-method [
local
]
admin-user {
password $y$j9T$2efd42dad2479d9f$nGS3iroL4eaDjeQBcoj.A8C8gcLddS5sSHM05UexSQ/
}
}
authorization {
role sudo {
superuser true
services [
cli
gnmi
netconf
]
}
}
server-group local {
type local
}
}
ssh-server mgmt {
admin-state enable
network-instance mgmt
}
boot {
autoboot {
admin-state enable
}
}
lldp {
interface ethernet-1/1 {
admin-state enable
}
interface ethernet-1/2 {
admin-state enable
}
interface ethernet-1/3 {
admin-state enable
}
interface ethernet-1/4 {
admin-state enable
}
interface ethernet-1/5 {
admin-state enable
}
interface ethernet-1/6 {
admin-state enable
}
interface ethernet-1/7 {
admin-state enable
}
interface ethernet-1/8 {
admin-state enable
}
interface ethernet-1/9 {
admin-state enable
}
interface ethernet-1/10 {
admin-state enable
}
interface ethernet-1/11 {
admin-state enable
}
interface ethernet-1/12 {
admin-state enable
}
}
name {
host-name leaf1
}
grpc-server mgmt {
admin-state enable
rate-limit 65535
session-limit 1024
metadata-authentication true
tls-profile EDA
network-instance mgmt
port 57400
services [
gnmi
gnoi
gnsi
]
gnmi {
commit-save false
}
}
network-instance {
protocols {
evpn {
ethernet-segments {
bgp-instance 1 {
ethernet-segment lag-leaf1-2-e1212-local {
admin-state enable
esi 00:FE:2F:AA:00:00:02:00:00:00
multi-homing-mode all-active
interface lag2 {
}
df-election {
algorithm {
type default
}
}
}
}
}
}
bgp-vpn {
bgp-instance 1 {
}
}
}
}
}
network-instance default {
type default
admin-state enable
description "fabric: myfabric-1 role: leaf"
router-id 11.0.0.1
ip-forwarding {
receive-ipv4-check false
}
interface ethernet-1/1.0 {
}
interface ethernet-1/2.0 {
}
interface system0.0 {
}
protocols {
bgp {
admin-state enable
autonomous-system 102
router-id 11.0.0.1
dynamic-neighbors {
interface ethernet-1/1.0 {
peer-group bgpgroup-ebgp-myfabric-1
allowed-peer-as [
101
]
}
interface ethernet-1/2.0 {
peer-group bgpgroup-ebgp-myfabric-1
allowed-peer-as [
101
]
}
}
ebgp-default-policy {
import-reject-all true
export-reject-all true
}
afi-safi evpn {
admin-state enable
multipath {
allow-multiple-as true
maximum-paths 64
}
evpn {
inter-as-vpn true
}
}
afi-safi ipv4-unicast {
admin-state enable
multipath {
allow-multiple-as true
maximum-paths 2
}
ipv4-unicast {
advertise-ipv6-next-hops true
receive-ipv6-next-hops true
}
evpn {
rapid-update true
}
}
afi-safi ipv6-unicast {
admin-state enable
multipath {
allow-multiple-as true
maximum-paths 2
}
evpn {
rapid-update true
}
}
preference {
ebgp 170
ibgp 170
}
route-advertisement {
wait-for-fib-install false
}
group bgpgroup-ebgp-myfabric-1 {
admin-state enable
export-policy [
ebgp-isl-export-policy-myfabric-1
]
import-policy [
ebgp-isl-import-policy-myfabric-1
]
afi-safi evpn {
admin-state enable
}
afi-safi ipv4-unicast {
admin-state enable
ipv4-unicast {
advertise-ipv6-next-hops true
receive-ipv6-next-hops true
}
}
afi-safi ipv6-unicast {
admin-state enable
}
}
}
}
}
network-instance mgmt {
type ip-vrf
admin-state enable
description "Management network instance"
interface mgmt0.0 {
}
protocols {
linux {
import-routes true
export-routes true
}
}
}
routing-policy {
prefix-set prefixset-myfabric-1 {
prefix 11.0.0.0/8 mask-length-range 32..32 {
}
}
policy ebgp-isl-export-policy-myfabric-1 {
default-action {
policy-result reject
}
statement 10 {
match {
prefix-set prefixset-myfabric-1
protocol local
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
statement 15 {
match {
protocol bgp
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
statement 20 {
match {
protocol aggregate
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
statement 25 {
match {
bgp {
evpn {
route-type [
1
]
}
}
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
statement 30 {
match {
bgp {
evpn {
route-type [
2
]
}
}
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
statement 35 {
match {
bgp {
evpn {
route-type [
3
]
}
}
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
statement 40 {
match {
bgp {
evpn {
route-type [
4
]
}
}
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
statement 45 {
match {
bgp {
evpn {
route-type [
5
]
}
}
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
}
policy ebgp-isl-import-policy-myfabric-1 {
default-action {
policy-result reject
}
statement 10 {
match {
protocol bgp
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
statement 25 {
match {
bgp {
evpn {
route-type [
1
]
}
}
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
statement 30 {
match {
bgp {
evpn {
route-type [
2
]
}
}
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
statement 35 {
match {
bgp {
evpn {
route-type [
3
]
}
}
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
statement 40 {
match {
bgp {
evpn {
route-type [
4
]
}
}
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
statement 45 {
match {
bgp {
evpn {
route-type [
5
]
}
}
}
action {
policy-result accept
bgp {
local-preference {
set 100
}
}
}
}
}
}
The result of the deployed Fabric app is a fully configured BGP EVPN fabric spanning all of the nodes in our topology.
We can list the BGP neighbors on leaf1 to see that it has established BGP sessions with leaf2 and spine1.
--{ + running }--[ ]--
A:leaf1# show network-instance default protocols bgp neighbor *
--------------------------------------------------------------------------------------------------------------------------------------------
BGP neighbor summary for network-instance "default"
Flags: S static, D dynamic, L discovered by LLDP, B BFD enabled, - disabled, * slow
+----------------+-----------------------+----------------+------+---------+-------------+-------------+-----------+-----------------------+
| Net-Inst | Peer | Group | Flag | Peer-AS | State | Uptime | AFI/SAFI | [Rx/Active/Tx] |
| | | | s | | | | | |
+================+=======================+================+======+=========+=============+=============+===========+=======================+
| default | fe80::53:beff:feff:1% | bgpgroup-ebgp- | D | 101 | established | 0d:0h:16m:2 | evpn | [0/0/0] |
| | ethernet-1/1.0 | myfabric-1 | | | | 2s | ipv4- | [2/2/1] |
| | | | | | | | unicast | [0/0/0] |
| | | | | | | | ipv6- | |
| | | | | | | | unicast | |
| default | fe80::53:beff:feff:2% | bgpgroup-ebgp- | D | 101 | established | 0d:0h:16m:2 | evpn | [0/0/0] |
| | ethernet-1/2.0 | myfabric-1 | | | | 2s | ipv4- | [3/2/3] |
| | | | | | | | unicast | [0/0/0] |
| | | | | | | | ipv6- | |
| | | | | | | | unicast | |
+----------------+-----------------------+----------------+------+---------+-------------+-------------+-----------+-----------------------+
Summary:
0 configured neighbors, 0 configured sessions are established, 0 disabled peers
2 dynamic peers
Everything a fabric needs has been provisioned and configured on the nodes in a declarative way, taking the inputs from the abstracted, sweet and short Fabric CR that everyone can understand.
State of a resource#
Too often, automation platforms are built solely around the configuration problem, leaving state handling to a different set of applications. In EDA we believe that the state of a resource is as important as its configuration, and state-triggered automation is a key part of EDA's philosophy.
Be it a higher-level abstracted resource such as Fabric or a lower-level Interface, you will find state reported for every EDA-managed resource. The relationship between a resource's specification and its state allows us to work with abstracted configuration and abstracted state alike.
Take the recently deployed Fabric resource, which spans multiple nodes and consists of multiple sub-resources. How do we know that the Fabric is healthy? Checking the operational status of every BGP peer and every inter-switch link is not a practical approach.
In EDA, the application developer can define the rules to calculate the state of a resource and populate the resource with this information. By looking at the Fabric's state field an operator can confidently determine the health of the Fabric, without having to inspect the configuration of every single node.
Users can access the status of a resource using edactl, kubectl, or the UI.
edactl
Not all resources are published into K8s, therefore it is recommended to use edactl to view the status of resources. edactl is a CLI tool that runs in the toolbox pod of a cluster and provides a way to interact with the EDA API.
To leverage edactl, paste the following command into your terminal to install a shell alias that executes edactl in the toolbox pod each time you call it.
edactl alias
alias edactl='kubectl -n eda-system exec -it $(kubectl -n eda-system get pods \
-l eda.nokia.com/app=eda-toolbox -o jsonpath="{.items[0].metadata.name}") \
-- edactl'
Now we can inspect the created Fabric resource using both edactl and kubectl.
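Here is a short sketch of both paths; the exact edactl syntax may differ between EDA releases, and the fabrics plural name is assumed from the CRD we applied earlier:

edactl get fabrics                              # abstracted view served by the EDA API, via the alias above
kubectl -n eda get fabrics myfabric-1 -o yaml   # the same resource through the Kubernetes API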
Note how the health and operationalState of the whole Fabric resource are reported in the status field. Having the abstracted state is as important as having the abstracted configuration, since it allows operators to focus on the important information without having to inspect every single node or component of a composite resource.
The operational state can take different values that help an operator determine the health of the Fabric. It is entirely up to the application developer to define what status is reported for a given resource, based on what is relevant for the application.
And, of course, the same information can be laid out nicely in the UI using resource dashboards (you guessed it, they are also customizable by the application developer).
With a glance at the Fabric's dashboard, an operator can determine the state of the whole fabric without having to inspect a dozen dashboards in a separate system.
Transactions#
The Kubernetes reconciliation loop mechanism offers a way to enable the declarative approach to infrastructure management. Define the desired state of the infrastructure stack in the Kubernetes Resource Model, apply it, and the corresponding controllers start reconciling the actual state of the infrastructure with the desired one.
Sounds great, but what if the desired state is not achievable? What if you deploy a workload with three replicas, but your infrastructure at the moment can only host two of them? The reconciliation loop will keep trying to reconcile the desired state with the actual state, and while it reconciles, you end up living with just two replicas running.
Doesn't sound like a big deal? Maybe not: if you deploy three web servers and have only two of them running for some time, it is not the end of the world. But in the networking world, having a partially deployed service is a big, big problem.
- What if your CPM filter is partially deployed?
- What if your routing policy has been deployed on a subset of edge nodes?
- What if your service has been added to 100 out of 110 leafs?
In EDA every configuration change is done in an all-or-nothing fashion and transacted in Git. Always.
If the desired state is not achievable even on a single target node, the whole transaction is declared failed and the changes are immediately reverted from all of the nodes.
The transactions are network-wide.
When you create any resource, EDA automatically initiates a transaction and publishes its result. To view the list of existing transactions, use edactl.
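Here is a sketch of the call; the subcommand name is an assumption that may vary between EDA releases:

edactl transaction
# one line per transaction, in the order they were initiated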
The transaction list shows the transaction ID, the status and the user who initiated the transaction. The transaction ID can be used to view the details of the transaction, including the changes made to the resources. Transactions are sequential and can be viewed in the order they were initiated.
What did we do last in this quickstart? Created a Fabric resource!
Let's see what the latest transaction has to say about it:
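Assuming the same subcommand accepts a transaction ID as an argument, the details of the latest transaction can be pulled up like this (substitute the highest ID from the list above):

edactl transaction <transaction-id>
# shows the input resource, the generated output CRs and the affected nodes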
You will see a lot of details, some of which we clipped from the output to keep it short, but essentially the transaction logged the input resource (kind: Fabric, name: myfabric-1) and the output resources produced from it (output-crs). Together, these outputs constitute the deployed Fabric.
At the very end of the transaction output you will see the nodes identified as affected by this change and the result of the transaction. Since the result is OK, we rightfully see the resulting configs applied to the nodes in our virtual network.
Creating a resource in the UI#
You've seen how to create a resource using the k8s API, and were introduced to the concept of transactions. Now, let's see how we can change an existing resource, perform a dry-run and finally commit the changes.
Usually quickstarts show some simple operations to keep the flow clean and simple, like adding a VLAN to a switch. We won't bother you with these basics; instead, let's swap the overlay protocol for every node in our Fabric from eBGP to iBGP in a single operation.
Here is what happened in these 60 seconds:
- We found the myfabric-1 Fabric resource created earlier with kubectl in the UI under the Fabrics section.
- We opened the resource and navigated its configuration schema all the way to the Overlay Protocol section.
- We changed the overlay protocol from eBGP to iBGP and provided the required iBGP bits, such as the ASN and the router ID.
- We also provided the labels for the nodes that should be used as the RR (route reflector) and RR clients; our topology was labeled with role: spine and role: leaf when we deployed it.
- Instead of applying the change right away, we added it to the transaction basket. We could have added more changes to it, but for now we were OK with a single change.
- Before applying the change we ran the Dry-Run, which started the process of unwrapping the abstracted high-level Fabric resource into the sub-resources and dependent resources.
- The dry-run provided us with an extensive diff view of the planned changes to the nodes and all sub-resources touched by our single protocol change.
- We reviewed the diff and decided it was good to commit the change.
- Once we committed the change, we verified that it was immediately applied to the nodes by looking at the leaf1 show output and seeing iBGP appear in the list of BGP neighbors (the same show command is repeated below).
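To repeat that last check yourself, run the same SR Linux show command we used right after the fabric was first created; once the overlay has converged, the iBGP sessions appear in the neighbor list alongside the eBGP underlay sessions:

show network-instance default protocols bgp neighbor *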
Have a look at the Fabric dashboard
Once the change is committed, BGP will take some time to converge. During this period you can see the resource's state in action by opening a Fabric dashboard and observing how the Fabric status transitions from "Degraded" to "Healthy".
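If you prefer the CLI over the dashboard, the same transition can be watched from the resource's status field; a small sketch, assuming the operationalState field shown earlier is populated by the Fabric app:

kubectl -n eda get fabrics myfabric-1 -o jsonpath='{.status.operationalState}{"\n"}'
# rerun (or wrap in watch) until it reports the healthy state again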
Transactions made by a user in the UI[5] are also visible in the Transactions UI[4].
Congratulations, your fabric is now using iBGP as its overlay protocol! A tiny change in the Fabric's declarative abstraction was transformed into sub-resources and, eventually, into node-level configurations that were reliably transacted and pushed to the constituent nodes. How cool is that?
1. Like the Fabric resource documented in the Apps section.
2. Like the Virtual Network resource documented in the Apps section.
3. You can find the kubectl CLI tool in the tools folder of your playground repository. You can copy it to the /usr/local/bin dir to make it globally available.
4. Soon you will be able to see the transactions made via the k8s API as well, when the relevant permissions are granted.
5. Transactions made via kubectl will be visible in the UI in a later release.