
Setting up the EDA virtual machine nodes#

This section describes how to prepare the EDAADM configuration file, generate the Talos machine configuration files, and deploy the Talos virtual machines.

Preparing the EDAADM configuration file#

The edaadm tool generates the machine configuration files needed for the Talos VMs that are part of your deployment.

EDAADM configuration file fields#

The EDAADM configuration file is a YAML file that describes your Talos Kubernetes environment. You can use it to configure the different nodes and the general Kubernetes cluster environment.

The file supports the following top-level parameters:

version

The version of the EDA environment to be deployed.
Example: 25.4.1

clusterName

The name of your EDA environment.
Example: eda-production-cluster

machines

A list of Kubernetes nodes. Each Kubernetes node has the following settings:

name

The name of a node.
Example: eda-node01

endpoint

The IP address on which Talos can reach the node to control it. Optional.

interfaces

A list of interfaces present in the node, each with the following settings:

  • name: the name of the interface. Example: eth0

  • dhcp: indicates if DHCP is to be used for the interface. Values: true or false. For production environments, set to false.

  • mtu: the MTU setting for the interface. For an interface used to connect to nodes under management, set to 9000 for best practice. Optional.

  • interface: the interface name as it appears in Linux. Typically, eth0, eth1, and so forth. Optional.

  • addresses: a list of IP addresses; for dual-stack deployments, you can specify both IPv4 and IPv6 addresses. If DHCP is not enabled, specify at least one address.

  • routes: a list of static routes to configure, including the default route. Optional. Routes have the following components:

    • gateway: the next-hop or gateway for the route.

    • metric: a metric to indicate the priority of the route. Optional.

    • mtu: a specific MTU for the route. Optional.

    • network: the destination CIDR of the route.

    • source: a source interface for the route to apply to. Optional.

  • deviceSelector: specifies how to select the device associated with this interface.

    • busPath: a PCI buspath that can contain wildcards. Optional.

    • hardwareAddr: a MAC address that can contain wildcards. Optional.
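
For example, an entry in a machine's interfaces list that selects the NIC by MAC address instead of a fixed Linux interface name could look like the following sketch (the MAC value is illustrative):

    - name: eth0
      dhcp: false
      addresses:
        - 192.0.2.11/24
      deviceSelector:
        hardwareAddr: "52:54:00:*"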

disks

Identifies the disks available in the node:

  • os: Specifies which disk to use for the OS. Required setting.

    Typically /dev/sda or /dev/vda, depending on the hypervisor platform.

  • storage: Optional disk for use with nodes that are to be part of the storage cluster.

k8s

The Kubernetes-specific configuration. The following parameters define the Kubernetes cluster:

stack

Indicates the network stack to support. Values: ipv4, ipv6, or dual

primaryNode

The first control plane node in the cluster to be used for bootstrapping the Kubernetes cluster.

Specify the name of one of the machines.

endpointUrl

The URL on which to reach the Kubernetes control plane. This setting uses the Kubernetes VIP address. Example: https://192.0.2.10:6443

allowSchedulingOnControlPlanes

Specifies if workloads can be deployed on the control plane node. Values: true or false. For best practice, set to true.

control-plane

A list of control plane nodes. Specify a machine name.

worker

A list of worker nodes. Specify a machine name.

vip

The VIP addresses used for Kubernetes and the interfaces to which they should be attached in the control plane nodes. Depending on the IP stack in use, some values are required:

  • interface: the interface to which the VIP is attached on the nodes.

    Example: eth0

  • ipv4: the IPv4 VIP address.

    Example: 192.0.2.10

  • ipv6: the IPv6 VIP address.

env

Section that includes the optional proxy settings for the Kubernetes nodes:

  • http_proxy: the HTTP proxy URL to use.

    Example: http://192.0.2.254:8080

  • https_proxy: the HTTPS proxy URL to use.

    Example: http://192.0.2.254:8080

  • no_proxy: the no-proxy list of IP addresses, IP ranges, and hostnames.

time

Defines NTP settings.

  • disabled: specifies whether NTP is disabled. For production environments, set to false so that NTP is used.
  • servers: a list of NTP servers; required for production environments.

nameservers

A list of DNS servers specified under the following sub-element:

  • servers: the list of DNS servers

certBundle

An optional set of PEM-formatted certificates that need to be trusted; use this setting to trust external services.

mirror

Needed only for air-gapped environments. The following settings can be configured:

  • name: the name of the mirror
  • url: the URL of the mirror
  • insecure: set to true
  • overridePath: set to false
  • skipFallback: set to true
  • mirrors: a list of online registry domain names for which the mirror is used. The list typically looks like:

    - docker.io
    - gcr.io
    - ghcr.io
    - registry.k8s.io
    - quay.io
    

Example EDAADM configuration file#

The following examples show an EDAADM configuration file for a 6-node Kubernetes cluster, first for a standard Internet-based installation and then for an air-gapped installation. The two files are identical except for the mirror section added to the air-gapped example.

Internet-based installation:

version: 25.4.1
clusterName: eda-compute-cluster
machines:
  - name: eda-node01
    endpoint: "192.0.2.11"
    interfaces:
      - name: eth0
        dhcp: false
        interface: eth0
        addresses:
          - 192.0.2.11/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.0.2.1
        mtu: 9000
      - name: eth1
        dhcp: false
        interface: eth1
        addresses:
          - 203.0.113.11/24
        mtu: 9000
    disks:
      os: /dev/vda
      storage: /dev/vdb
  - name: eda-node02
    endpoint: "192.0.2.12"
    interfaces:
      - name: eth0
        dhcp: false
        interface: eth0
        addresses:
          - 192.0.2.12/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.0.2.1
        mtu: 9000
      - name: eth1
        dhcp: false
        interface: eth1
        addresses:
          - 203.0.113.12/24
        mtu: 9000
    disks:
      os: /dev/vda
      storage: /dev/vdb
  - name: eda-node03
    endpoint: "192.0.2.13"
    interfaces:
      - name: eth0
        dhcp: false
        interface: eth0
        addresses:
          - 192.0.2.13/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.0.2.1
        mtu: 9000
      - name: eth1
        dhcp: false
        interface: eth1
        addresses:
          - 203.0.113.13/24
        mtu: 9000
    disks:
      os: /dev/vda
      storage: /dev/vdb
  - name: eda-node04
    endpoint: "192.0.2.14"
    interfaces:
      - name: eth0
        dhcp: false
        interface: eth0
        addresses:
          - 192.0.2.14/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.0.2.1
        mtu: 9000
      - name: eth1
        dhcp: false
        interface: eth1
        addresses:
          - 203.0.113.14/24
        mtu: 9000
    disks:
      os: /dev/vda
  - name: eda-node05
    endpoint: "192.0.2.15"
    interfaces:
      - name: eth0
        dhcp: false
        interface: eth0
        addresses:
          - 192.0.2.15/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.0.2.1
        mtu: 9000
      - name: eth1
        dhcp: false
        interface: eth1
        addresses:
          - 203.0.113.15/24
        mtu: 9000
    disks:
      os: /dev/vda
  - name: eda-node06
    endpoint: "192.0.2.16"
    interfaces:
      - name: eth0
        dhcp: false
        interface: eth0
        addresses:
          - 192.0.2.16/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.0.2.1
        mtu: 9000
      - name: eth1
        dhcp: false
        interface: eth1
        addresses:
          - 203.0.113.16/24
        mtu: 9000
    disks:
      os: /dev/vda
k8s:
  stack: ipv4
  primaryNode: eda-node01
  endpointUrl: https://192.0.2.5:6443
  allowSchedulingOnControlPlanes: true
  control-plane:
    - eda-node01
    - eda-node02
    - eda-node03
  worker:
    - eda-node04
    - eda-node05
    - eda-node06
  vip:
    ipv4: 192.0.2.5
    interface: eth0
  env:
    http_proxy: http://192.0.2.254:8080
    https_proxy: http://192.0.2.254:8080
    no_proxy: 192.0.2.0/24,203.0.113.0/24,.domain.tld,172.22.0.0/16,localhost,127.0.0.1,10.0.1.0/24,0.0.0.0,169.254.116.108
  time:
    disabled: false
    servers:
      - 192.0.2.253
      - 192.0.2.254
  nameservers:
    servers:
      - 192.0.2.253
      - 192.0.2.254
Air-gapped installation:

version: 25.4.1
clusterName: eda-compute-cluster
machines:
  - name: eda-node01
    endpoint: "192.0.2.11"
    interfaces:
      - name: eth0
        dhcp: false
        interface: eth0
        addresses:
          - 192.0.2.11/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.0.2.1
        mtu: 9000
      - name: eth1
        dhcp: false
        interface: eth1
        addresses:
          - 203.0.113.11/24
        mtu: 9000
    disks:
      os: /dev/vda
      storage: /dev/vdb
  - name: eda-node02
    endpoint: "192.0.2.12"
    interfaces:
      - name: eth0
        dhcp: false
        interface: eth0
        addresses:
          - 192.0.2.12/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.0.2.1
        mtu: 9000
      - name: eth1
        dhcp: false
        interface: eth1
        addresses:
          - 203.0.113.12/24
        mtu: 9000
    disks:
      os: /dev/vda
      storage: /dev/vdb
  - name: eda-node03
    endpoint: "192.0.2.13"
    interfaces:
      - name: eth0
        dhcp: false
        interface: eth0
        addresses:
          - 192.0.2.13/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.0.2.1
        mtu: 9000
      - name: eth1
        dhcp: false
        interface: eth1
        addresses:
          - 203.0.113.13/24
        mtu: 9000
    disks:
      os: /dev/vda
      storage: /dev/vdb
  - name: eda-node04
    endpoint: "192.0.2.14"
    interfaces:
      - name: eth0
        dhcp: false
        interface: eth0
        addresses:
          - 192.0.2.14/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.0.2.1
        mtu: 9000
      - name: eth1
        dhcp: false
        interface: eth1
        addresses:
          - 203.0.113.14/24
        mtu: 9000
    disks:
      os: /dev/vda
  - name: eda-node05
    endpoint: "192.0.2.15"
    interfaces:
      - name: eth0
        dhcp: false
        interface: eth0
        addresses:
          - 192.0.2.15/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.0.2.1
        mtu: 9000
      - name: eth1
        dhcp: false
        interface: eth1
        addresses:
          - 203.0.113.15/24
        mtu: 9000
    disks:
      os: /dev/vda
  - name: eda-node06
    endpoint: "192.0.2.16"
    interfaces:
      - name: eth0
        dhcp: false
        interface: eth0
        addresses:
          - 192.0.2.16/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.0.2.1
        mtu: 9000
      - name: eth1
        dhcp: false
        interface: eth1
        addresses:
          - 203.0.113.16/24
        mtu: 9000
    disks:
      os: /dev/vda
k8s:
  stack: ipv4
  primaryNode: eda-node01
  endpointUrl: https://192.0.2.5:6443
  allowSchedulingOnControlPlanes: true
  control-plane:
    - eda-node01
    - eda-node02
    - eda-node03
  worker:
    - eda-node04
    - eda-node05
    - eda-node06
  vip:
    ipv4: 192.0.2.5
    interface: eth0
  env:
    http_proxy: http://192.0.2.254:8080
    https_proxy: http://192.0.2.254:8080
    no_proxy: 192.0.2.0/24,203.0.113.0/24,.domain.tld,172.22.0.0/16,localhost,127.0.0.1,10.0.1.0/24,0.0.0.0,169.254.116.108
  time:
    disabled: false
    servers:
      - 192.0.2.253
      - 192.0.2.254
  nameservers:
    servers:
      - 192.0.2.253
      - 192.0.2.254
  mirror:
    name: 192.0.2.228
    url: https://192.0.2.228
    insecure: true
    overridePath: false
    skipFallback: true
    mirrors:
      - docker.io
      - gcr.io
      - ghcr.io
      - registry.k8s.io
      - quay.io

Generating the Talos machine configurations#

After creating the EDAADM configuration file, the next step is to generate all the configuration files that are necessary to deploy the Kubernetes environment using Talos.

Use the edaadm tool to generate the deployment files.

$ edaadm generate -c eda-input-6-node.yaml
ConfigFile is eda-input-6-node.yaml
...
[1/4] Validating Machines
[1/4] Validated Machines
[2/4] Validating PrimaryNode
[2/4] Validated PrimaryNode
[3/4] Validating Endpoint URL
[3/4] Validated Endpoint URL
[4/4] Validating Virtual IP
[4/4] Validated Virtual IP
[  OK  ] Spec is validated
Generating secrets for eda-compute-cluster
Created eda-compute-cluster/secrets.yaml
generating PKI and tokens
Created eda-compute-cluster/eda-node01.yaml
Created eda-compute-cluster/talosconfig.yaml
generating PKI and tokens
Created eda-compute-cluster/eda-node02.yaml
generating PKI and tokens
Created eda-compute-cluster/eda-node03.yaml
generating PKI and tokens
Created eda-compute-cluster/eda-node04.yaml
generating PKI and tokens
Created eda-compute-cluster/eda-node05.yaml
generating PKI and tokens
Created eda-compute-cluster/eda-node06.yaml

The configuration files created by the edaadm tool are used in the next steps when you deploy the virtual machines.
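
For example, listing the output directory shows the secrets file, the per-node machine configurations, and the Talos client configuration:

$ ls eda-compute-cluster/
eda-node01.yaml  eda-node03.yaml  eda-node05.yaml  secrets.yaml
eda-node02.yaml  eda-node04.yaml  eda-node06.yaml  talosconfig.yaml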

Note

Nokia strongly recommends that you store these files securely and keep a backup.
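
For example, a simple way to keep a backup is to archive the entire output directory (the archive name is illustrative):

tar czf eda-compute-cluster-config-backup.tar.gz eda-compute-cluster/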

Deploying the Talos virtual machines#

This section provides the procedures for deploying an EDA node as a virtual machine on KVM or VMware vSphere.

Creating the VM on bridged networks on KVM#

Complete the following steps to deploy an EDA node as a virtual machine on KVM. Execute these steps directly on the Red Hat Enterprise Linux or Rocky Linux hypervisor. The steps assume the deployment of the eda-node01 virtual machine as defined in the example configuration file above. Ensure that you use the correct machine configuration file generated by the edaadm tool.

Note

This procedure expects two networks to be available on the KVM hypervisors. The OAM network is referred to as br0 and the fabric management network is referred to as br1. Both of these networks are standard Linux bridge networks. If you use only one interface, adapt Step 7 to use only the br0 network.
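
Before you begin, you can confirm that both bridges exist on the hypervisor, for example:

ip -br link show type bridge

The output should list br0 and br1 (or your equivalent bridge names).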

  1. Ensure that the virt-install tool is installed on the KVM hypervisor.
    If you need to install the tool, use the following command:

    yum install virt-install
    
  2. Verify that the ISO image downloaded in Downloading the KVM image is available on the hypervisor.

  3. Copy the machine configuration file generated for this specific node to a file called user-data.

    cp eda-node01.yaml user-data
    
  4. Create a file called meta-data for the node. Use the appropriate instance-id and local-hostname values.

    instance-id: eda-node01 
    local-hostname: eda-node01 
    
  5. Create a file called network-config for the node.

    The file should have the following content:

    version: 2
    
  6. Create an ISO file containing the newly created files. For ease of use, name the ISO file with the name of the node for which you are creating the ISO.

    mkisofs -o eda-node01-data.iso -V cidata -J -r meta-data network-config user-data 
    
  7. Create the virtual machine. This step uses both the newly created ISO file and the ISO file downloaded from the Talos Machine Factory.

    virt-install -n eda-node01 \ 
    --description "Talos 1.9.2 vm for node eda-node01" \ 
    --noautoconsole --os-type=generic \ 
    --memory 65536 --vcpus 32 --cpu host \ 
    --disk eda-node01-rootdisk.qcow2,format=qcow2,bus=virtio,size=100 \ 
    --disk eda-node01-storagedisk.qcow2,format=qcow2,bus=virtio,size=300 \ 
    --cdrom nocloud-amd64.iso \ 
    --disk eda-node01-data.iso,device=cdrom \ 
    --network bridge=br0,model=virtio \ 
    --network bridge=br1,model=virtio
    

    Note

    If the node is not a storage node, you can remove the second --disk line.
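
After the virt-install command completes, you can verify that the VM was created and is running, for example:

virsh list --all

The eda-node01 VM should be listed in the running state.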

Creating the VM on bridged networks on VMware vSphere#

Complete the following steps to deploy an EDA node as a virtual machine on VMware vSphere. The steps assume the deployment of the eda-node01 virtual machine as defined in the example configuration file above. Ensure that you use the correct machine configuration file generated by the edaadm tool.

You can use one of the following methods to deploy the VM on VMware vSphere:

  • the VMware vSphere vCenter or ESXi UI

    For instructions, see Deploy an OVF or OVA Template in the VMware vSphere documentation.

  • the VMware Open Virtualization Format Tool CLI (VMware OVF Tool CLI)

    This procedure provides an example of how to use the VMware OVF Tool CLI.

Note

This procedure expects two networks (port groups) to be available on the ESXi hypervisors. The OAM network is referred to as OAM and the fabric management network is referred to as FABRIC. Both of these networks can be standard port groups or distributed port groups. If you use only one network, you do not need to create a second interface on the VM.

  1. Download and install the latest version of the VMware OVF Tool from the VMware Developer website.
  2. Display details about the OVA image.

    ovftool vmware-amd64.ova 
    

    OVF version:   1.0
    VirtualApp:    false
    Name:          talos
    
    Download Size:  103.44 MB
    
    Deployment Sizes:
      Flat disks:   8.00 GB
      Sparse disks: Unknown
    
    Networks:
      Name:        VM Network
      Description: The VM Network network
    
    Virtual Machines:
      Name:               talos
      Operating System:   other3xlinux64guest
      Virtual Hardware:
        Families:         vmx-15
        Number of CPUs:   2
        Cores per socket: automatic
        Memory:           2.00 GB
    
        Disks:
          Index:          0
          Instance ID:    4
          Capacity:       8.00 GB
          Disk Types:     SCSI-VirtualSCSI
    
        NICs:
          Adapter Type:   VmxNet3
          Connection:     VM Network
    
    Properties:
      Key:         talos.config
      Label:       Talos config data
      Type:        string
      Description: Inline Talos config
    
    References:
      File:  disk.vmdk
    

  3. Create a base64-encoded string from the Talos machine configuration for the node.

    In this example, the output is stored as an environment variable to make it easy to use in the command to deploy the image using the OVF Tool. Note that GNU coreutils base64 wraps its output at 76 columns by default; on Linux, use base64 -w0 eda-node01.yaml instead to produce a single-line string.

    export NODECONFIG=$(base64 -i eda-node01.yaml)
    
  4. Deploy the OVA image using the OVF Tool. For details about command line arguments, see the OVF Tool documentation from the VMware website.

    Note

    If you prefer using the VMware vCenter UI to create the virtual machines, use the regular method of deploying an OVA/OVF template. In this process, in the Customize template step, when you are prompted to provide the Inline Talos config, provide the base64-encoded data from the Talos machine configuration for the node. This is the very long string returned when you execute the base64 -i eda-node01.yaml command. Copy that string and paste it into the field in the UI, then continue.

    ovftool --acceptAllEulas --noSSLVerify \
    -dm=thick \
    -ds=DATASTORE \
    -n=eda-node01 \
    --net:"VM Network=OAM" \
    --prop:talos.config="${NODECONFIG}" \
    vmware-amd64.ova \
    vi://administrator%[email protected]/My-DC/host/My-Cluster/Resources/My-Resource-Group
    

    Opening OVA source: vmware-amd64.ova
    The manifest validates
    Enter login information for target vi://vcenter.domain.tld/
    Username: administrator%40vsphere.local
    Password: ***********
    Opening VI target: vi://administrator%[email protected]:443/My-DC/host/My-Cluster/Resources/My-Resource-Group
    Deploying to VI: vi://administrator%[email protected]:443/My-DC/host/My-Cluster/Resources/My-Resource-Group
    Transfer Completed
    Completed successfully
    

    This step deploys the VM with the CPU, memory, disk, and NIC configuration of the default OVA image. The next step updates these settings.

  5. In vCenter, edit the VM settings.

    Make the following changes:

    • Increase the number of vCPUs to 32.
    • Increase the memory to 64 GB.
    • Increase the main disk size to 100 GB. On boot, Talos automatically extends the file system.
    • Optionally, if this VM is a storage node, add a new disk with a size of 300 GB.
    • Optionally, add a second network interface and connect it to the FABRIC PortGroup.
    • Enable 100% resource reservation for the CPU, memory, and disk.
  6. Power on the virtual machine.
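
As an alternative to the vCenter UI for steps 5 and 6, the following sketch shows how the same changes could be scripted with VMware's govc CLI, assuming govc is installed and configured to reach your vCenter (the VM name and disk label are assumptions based on the example above):

# Resize CPU and memory (memory is specified in MB).
govc vm.change -vm eda-node01 -c 32 -m 65536
# Grow the main disk to 100G; Talos extends the file system on boot.
govc vm.disk.change -vm eda-node01 -disk.label "Hard disk 1" -size 100G
# Optional: add a 300G disk for storage nodes.
govc vm.disk.create -vm eda-node01 -name eda-node01/storage -size 300G
# Optional: attach a second NIC to the FABRIC port group.
govc vm.network.add -vm eda-node01 -net FABRIC
# Power on the VM.
govc vm.power -on eda-node01

Resource reservations still need to be set in the vSphere UI.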