Cloud Setup

Requirements

Refer to Installation to install EVMBuilder Edge.

Setting up the VM connectivity

Depending on your choice of cloud provider, you may set up connectivity and rules between the VMs using a firewall, security groups, or access control lists.

As the only part of the JUVIDOE-edge node that needs to be exposed to other VMs is the libp2p server, simply allowing all communication between the VMs on the default libp2p port 1478 is enough.
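As a sketch of such a firewall rule, on a VM that uses ufw the following would open the port (the 10.0.0.0/16 subnet is a placeholder assumption; substitute your own VPC or subnet range):

```shell
# Allow inbound libp2p traffic on the default port 1478 from the other VMs.
# 10.0.0.0/16 is a placeholder for your private subnet's CIDR range.
sudo ufw allow proto tcp from 10.0.0.0/16 to any port 1478
```

On cloud providers you would express the same rule as a security-group or firewall-rule entry for TCP port 1478 instead.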

Overview

In this guide, our goal is to establish a working JUVIDOE-edge blockchain network running the IBFT consensus protocol. The network will consist of 4 nodes, all of which are validators, and as such are eligible both for proposing blocks and for validating blocks that come from other proposers. Each of the 4 nodes will run on its own VM, as the idea of this guide is to give you a fully functional EVMBuilder Edge network while keeping the validator keys private to ensure a trustless network setup.

To achieve that, we will guide you through 5 easy steps:

  1. Take a look at the list of Requirements above

  2. Generate the private keys for each of the validators, and initialize the data directory

  3. Prepare the connection string for the bootnode to be put into the shared genesis.json

  4. Create the genesis.json on your local machine, and send/transfer it to each of the nodes

  5. Start all the nodes

Step 1: Initialize data folders and generate validator keys

To get up and running with EVMBuilder Edge, you need to initialize the data folders, on each node:

node-1> JUVIDOE-edge secrets init --data-dir data-dir
node-2> JUVIDOE-edge secrets init --data-dir data-dir
node-3> JUVIDOE-edge secrets init --data-dir data-dir
node-4> JUVIDOE-edge secrets init --data-dir data-dir

Each of these commands will print the node ID. You will need that information for the next step.

Step 2: Prepare the multiaddr connection string for the bootnode

For a node to successfully establish connectivity, it must know which bootnode server to connect to in order to gain information about all the remaining nodes on the network. The bootnode is sometimes also known as the rendezvous server in p2p jargon.

The bootnode is not a special instance of an EVMBuilder Edge node. Every EVMBuilder Edge node can serve as a bootnode, and every EVMBuilder Edge node needs to have a set of bootnodes specified, which will be contacted to provide information on how to connect with all the remaining nodes in the network.

To create the connection string for specifying the bootnode, we will need to conform to the multiaddr format:

/ip4/<ip_address>/tcp/<port>/p2p/<node_id>
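These parts can be stitched together in the shell; a minimal sketch with placeholder values (the IP address and node ID below are examples, not values from your setup):

```shell
# Assemble a bootnode multiaddr from its parts.
BOOTNODE_IP="10.0.0.5"   # IP reachable by the other VMs (placeholder)
LIBP2P_PORT=1478         # default libp2p port
NODE_ID="16Uiu2HAmJxxH1tScDX2rLGSU9exnuvZKNM9SoK3v315azp68DLPW"  # from `secrets init`

multiaddr="/ip4/${BOOTNODE_IP}/tcp/${LIBP2P_PORT}/p2p/${NODE_ID}"
echo "$multiaddr"
```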

In this guide, we will treat the first and second nodes as the bootnodes for all other nodes. What will happen in this scenario is that nodes that connect to node 1 or node 2 will get information on how to connect to one another through the mutually contacted bootnode.

As the first part of the multiaddr connection string is the <ip_address>, you will need to enter the IP address as reachable by the other nodes here. Depending on your setup, this might be a private or a public IP address, but never 127.0.0.1.

For the <port> we will use 1478, since it is the default libp2p port.

And lastly, we need the <node_id>, which we can get from the output of the previously run JUVIDOE-edge secrets init --data-dir data-dir command (which was used to generate keys and the data directory for node 1).

After assembly, the multiaddr connection string to node 1, which we will use as the bootnode, will look something like this (only the <node_id> at the end should differ):

/ip4/<public_or_private_ip>/tcp/1478/p2p/16Uiu2HAmJxxH1tScDX2rLGSU9exnuvZKNM9SoK3v315azp68DLPW

Similarly, we construct the multiaddr for the second bootnode, as shown below:

/ip4/<public_or_private_ip>/tcp/1478/p2p/16Uiu2HAmS9Nq4QAaEiogE4ieJFUYsoH28magT7wSvJPpfUGBj3Hq 

Step 3: Generate the genesis file with the 4 nodes as validators

This step can be run on your local machine, but you will need the public validator keys for each of the 4 validators.

Validators can safely share the Public key (address), as displayed below in the output of their secrets init commands, so that you can securely generate the genesis.json with those validators in the initial validator set, identified by their public keys:

[SECRETS INIT]
Public key (address) = 0xC12bB5d97A35c6919aC77C709d55F6aa60436900
Node ID              = 16Uiu2HAmVZnsqvTwuzC9Jd4iycpdnHdyVZJZTpVC8QuRSKmZdUrf
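Both fields can be pulled out of this output programmatically; a minimal sketch assuming the "<label> = <value>" layout shown above (here the output is inlined as a string, but in practice you would pipe the command's output instead):

```shell
# Sample `secrets init` output, inlined for illustration.
init_output='[SECRETS INIT]
Public key (address) = 0xC12bB5d97A35c6919aC77C709d55F6aa60436900
Node ID              = 16Uiu2HAmVZnsqvTwuzC9Jd4iycpdnHdyVZJZTpVC8QuRSKmZdUrf'

# Split each line on " = " and keep the value column.
pubkey=$(printf '%s\n' "$init_output"  | awk -F' = ' '/Public key/ {print $2}')
node_id=$(printf '%s\n' "$init_output" | awk -F' = ' '/Node ID/ {print $2}')
echo "$pubkey $node_id"
```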

Given that you have received all 4 of the validators' public keys, you can run the following command to generate the genesis.json:

JUVIDOE-edge genesis --consensus ibft --ibft-validator=0xC12bB5d97A35c6919aC77C709d55F6aa60436900 --ibft-validator=<2nd_validator_pubkey> --ibft-validator=<3rd_validator_pubkey> --ibft-validator=<4th_validator_pubkey> --bootnode=<first_bootnode_multiaddr_connection_string_from_step_2> --bootnode=<second_bootnode_multiaddr_connection_string_from_step_2> --bootnode=<optionally_more_bootnodes>

What this command does:

  • The --ibft-validator sets the public key of the validator that should be included in the initial validator set in the genesis block. There can be many initial validators.

  • The --bootnode sets the address of a bootnode that will enable the nodes to find each other. We will use the multiaddr strings of node 1 and node 2, as constructed in step 2, although you can add as many bootnodes as you want, as displayed above.

After specifying the:

  1. Public keys of the validators to be included in the genesis block as the validator set

  2. Bootnode multiaddr connection strings

  3. Premined accounts and balances to be included in the genesis block

and generating the genesis.json, you should copy it over to all of the VMs in the network. Depending on your setup you may copy/paste it, send it to the node operator, or simply SCP/FTP it over.
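The transfer itself can be scripted; a minimal sketch assuming the VMs are reachable over SSH as node-1 through node-4 (both the hostnames and the remote path are placeholders for your own setup):

```shell
# Copy the generated genesis.json to every VM over SSH.
for host in node-1 node-2 node-3 node-4; do
  scp genesis.json "${host}:~/genesis.json"
done
```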

The structure of the genesis file is covered in the CLI Commands section.

Step 4: Run all the clients

NETWORKING ON CLOUD PROVIDERS

Most cloud providers don't expose the IP addresses (especially public ones) as a direct network interface on your VM, but rather set up an invisible NAT proxy.

To allow the nodes to connect to each other in this case, you would need to listen on the 0.0.0.0 IP address to bind to all interfaces, but you would still need to specify the IP address or DNS address that other nodes can use to connect to your instance. This is achieved by using the --nat or --dns argument, where you specify your external IP or DNS address respectively.

Example

The associated IP address that you wish to listen on is 192.0.2.1, but it is not directly bound to any of your network interfaces.

To allow the nodes to connect you would pass the following parameters:

JUVIDOE-edge ... --libp2p 0.0.0.0:10001 --nat 192.0.2.1

Or, if you wish to specify a DNS address dns/example.io, pass the following parameters:

JUVIDOE-edge ... --libp2p 0.0.0.0:10001 --dns dns/example.io

This would make your node listen on all interfaces, but also make it aware that the clients are connecting to it through the specified --nat or --dns address.

To run the first client:

node-1> JUVIDOE-edge server --data-dir ./data-dir --chain genesis.json --libp2p 0.0.0.0:1478 --nat <public_or_private_ip> --seal

To run the second client:

node-2> JUVIDOE-edge server --data-dir ./data-dir --chain genesis.json --libp2p 0.0.0.0:1478 --nat <public_or_private_ip> --seal

To run the third client:

node-3> JUVIDOE-edge server --data-dir ./data-dir --chain genesis.json --libp2p 0.0.0.0:1478 --nat <public_or_private_ip> --seal

To run the fourth client:

node-4> JUVIDOE-edge server --data-dir ./data-dir --chain genesis.json --libp2p 0.0.0.0:1478 --nat <public_or_private_ip> --seal

After running the previous commands, you have set up a 4-node EVMBuilder Edge network, capable of sealing blocks and recovering from node failure.
