Creating a Resilient Consul Cluster for Docker Microservice Discovery with Terraform and AWS

In this article I'm going to show you how to create a resilient Consul cluster, using Terraform and AWS. We can use this cluster for microservice discovery and management. No prior knowledge of the technologies or patterns is required!

Terraform is a technology which allows us to script the provisioning of infrastructure and systems. This allows us to practice the Infrastructure as Code pattern, so the rigour of code control (versioning, history, user access control, diffs, pull requests and so on) can be applied to our systems.


And why AWS? We need to create many servers and build a network to see this system in action. We can simulate parts of this locally with tools such as Vagrant, but we can use the arguably most popular 2 IaaS platform for the job at essentially zero cost, and learn some valuable skills which are readily applicable to other projects at the same time. A lot of what we will learn is not really AWS specific, and the Infrastructure as Code pattern which Terraform helps us apply makes it easy to use these techniques with other providers.

The Goal

• The nodes span more than one availability zone, meaning the system is redundant and can survive the failure of an entire availability zone (i.e. data centre).

As a quick caveat, in reality this setup would typically live in a private subnet, not directly accessible to the outside world except via public-facing load balancers. This adds a bit more complexity to the Terraform setup but not much value to the walk-through. A network diagram of how it might look is below; I invite interested readers to try to move to this model as a great exercise to cement the concepts!

Step 1 – Creating our Network

Creating a VPC and building a subnet is fairly trivial if you have done some network setup before or spent much time working with AWS; if not, you may be a little lost already. There's a good course on Udemy 4 which will take you through the process of setting up a VPC, which I recommend if you are interested in this, as it is quite hands-on. It'll also show you how to build a more 'realistic' network, which also contains a private subnet and NAT, but that's beyond the scope of this write-up. Instead, I'll take you through the big parts.

The Network

As we're using AWS, we need to create a VPC. A VPC is a Virtual Private Cloud. The key thing is that it is isolated. Things you create in this network will be able to talk to each other if you let them, but cannot communicate with the outside world unless you specifically create the parts needed for them to do so.

A private network is probably something you regularly use if you work in a company 5. Most companies have their own internal network – when you use a computer on that network it can talk to other company computers (such as the company mail server). When you are off that network, you might not be able to access your company email (unless it is publicly available, like Gmail, or over a VPN [and by accessing a VPN, you are actually joining the network again, albeit remotely]).

Perhaps the most immediately obvious part of a VPC is that you control the IP addresses. You specify the range of IP addresses which are available to give to machines on the network. When a machine joins, it is given an IP in that range. I'm not going to go into too much detail here; if you are interested, let me know and I'll write up an article on VPCs in detail!

Ensure you have an AWS account, and note your Secret Key and Access Key. We'll need these to remotely control it. Here's the Terraform script to create a VPC:

// Setup the core provider information.
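A minimal sketch of what follows that comment, assuming the access key, secret key and region come from the variables held in variables.tf; the CIDR range here is illustrative:

```
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

// The VPC which will contain our cluster.
resource "aws_vpc" "consul-cluster" {
  // The pool of private IP addresses available to hosts in the VPC.
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}
```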
This script uses Terraform variables, such as var.access_key, which we keep in a variables.tf file. Terraform will use the default values defined in the file if they are present, or ask the user to supply them. Let's build the network:

terraform apply

You don't put hosts directly into a VPC; they need to go into a structure called a 'subnet', which is a part of a VPC. Subnets get their own subset of the VPC's available IP addresses, which you specify.

Subnets are used to build zones in a network. Why would you need this? Typically it is to manage security. You might have a 'public zone' in which all hosts can be accessed from the internet, and a 'private zone' which is inaccessible directly (and therefore a better location for hosts with sensitive data). You might have an 'operator' zone, which only sysadmins can access, but which they can use to get diagnostic information. The defining characteristic of zones is that they are used to create boundaries to isolate hosts. These boundaries are normally secured by firewalls, traversed via gateways or NATs and so on. We're going to create two public subnets, one in each of the availability zones 5:

// Create a public subnet for each AZ.

With Terraform, resources can depend on each other. In this case, the subnets need to reference the ID of the VPC we want to place them in (so we use aws_vpc.consul-cluster.id).

The Internet Gateway, Route Tables and Security Groups

The final parts of the network can be seen in the ./infrastructure/network.tf script. These are the Internet Gateway, Route Table and Security Group resources. Essentially they are for controlling access between hosts and the internet. AWS have a good guide if you are not familiar with these resources; they don't add much to the article so I'll leave you to explore them on your own.

If you want to see the code as it stands now, check the Step 1 branch. Now we need to look at creating the hosts to install Consul on.

Step 2 – Creating the Consul Hosts

The Consul documentation recommends running a cluster of 3 or 5 nodes 7. We want to set up a system which is self-healing: if we lose a node, we want to create a new one. Enter Auto-Scaling Groups. Auto-scaling groups allow us to define a template for an instance, and ask AWS to make sure there are always a certain number of these instances. If we lose an instance, a new one will be created to keep the group at the correct size 8.

The Launch Configuration will define the characteristics of our instances, and the auto-scaling group determines the size of our cluster:

// Launch configuration for the consul cluster auto-scaling group.
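A sketch of that pair of resources. The launch configuration name matches the one referenced later in the article; the AMI variable, instance type, security group and subnet names are illustrative assumptions:

```
resource "aws_launch_configuration" "consul-cluster-lc" {
  name_prefix     = "consul-node-"
  image_id        = "${var.ami}" // e.g. an Amazon Linux AMI for the region
  instance_type   = "t2.micro"
  security_groups = ["${aws_security_group.consul-cluster.id}"]
}

// Keep five instances running, spread across both public subnets.
resource "aws_autoscaling_group" "consul-cluster-asg" {
  launch_configuration = "${aws_launch_configuration.consul-cluster-lc.name}"
  min_size             = 5
  max_size             = 5
  desired_capacity     = 5
  vpc_zone_identifier  = ["${aws_subnet.public-a.id}", "${aws_subnet.public-b.id}"]
}
```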
Once we run terraform apply, we'll see our auto-scaling group, which references the new launch configuration and works over multiple availability zones.

To set up our instances we use a 'userdata' script. A userdata script runs once, when an instance is created. We can create a script in our repository and reference it in our Terraform files. We add a new file called consul-node.sh to a files folder. This script installs Docker and runs Consul:

yum install -y docker

• Install Docker. These scripts run as root, so we also add the ec2-user to the Docker group, meaning that when we log in later on via SSH, we can run Docker.
• Get our IP address. AWS provide a magic address (169.254.169.254) which lets you query data about your instance; see Instance Metadata & User Metadata.

The actual script contains more! Getting userdata scripts right, and testing and debugging them, is tricky. See how I do it in detail in Appendix 1: Logging.
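A stripped-down sketch of consul-node.sh, assuming the official consul Docker image; the real script in the repository does more (including the logging described in Appendix 1), and the Consul flags change below when we join the nodes into one cluster:

```
#!/bin/bash
# Install Docker, start it, and let the ec2-user run it over SSH later.
yum install -y docker
service docker start
usermod -a -G docker ec2-user

# Ask the instance metadata service for this node's private IP.
IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

# Run a Consul server bound to that IP, with the admin UI enabled.
# (For now each node bootstraps alone; later we change these flags so
# the nodes join a single cluster.)
docker run -d --net=host consul agent -server -ui \
  -bind="$IP" -client="0.0.0.0" -bootstrap
```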
Now we need to tell Terraform to include this script as part of the instance metadata. Here's how we do that:

resource "aws_launch_configuration" "consul-cluster-lc" {

When Consul is running with the -ui option, it provides an admin UI. You can try it by running Consul locally with docker run -p8500:8500 consul and navigating to http://localhost:8500/ui.

We can install a load balancer in front of our auto-scaling group, to automatically forward incoming traffic to a host. Here's the config:

resource "aws_elb" "consul-lb" {

The final change we make is to add an outputs.tf file, which lists all of the properties Terraform knows about which we want to save. All it includes is:

output "consul-dns" {

Every time we refresh we will likely see a different node. We've actually created five clusters of one node each; what we now need to do is connect them all together into a single cluster of five nodes.

Creating the cluster is now not too much of a challenge. We will update the userdata script to tell the consul process we are expecting 5 nodes (via the bootstrap-expect flag).

The problem is this won't work... We need to tell each node the address of another server in the cluster. For example, if we start five nodes, we should tell nodes 2-5 the address of node 1, so that the nodes can discover each other.

The challenge is: how do we get the IP of node 1? The IP addresses are determined by the network; we don't preset them, so we cannot hard-code them. Also, we can expect nodes to occasionally die and get recreated, so the IP addresses of nodes will in fact change over time.

Getting the IP addresses of nodes in the cluster

There's a nice trick we can use here. We can ask AWS to give us the IP addresses of each host in the auto-scaling group. If we tell each node the addresses of the other nodes, then they will elect a leader themselves 14.

There are a couple of things we need to do to get this right. First, update the userdata script to provide the IPs of the other nodes when we're starting up; then, update the role of our nodes so that they have permissions to use the APIs we're going to call.

Getting the Cluster IPs

This is actually fairly straightforward. We update our userdata script to the below:

# A few variables we will refer to later...
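A sketch of the idea, assuming the AWS CLI is available on the instance (it is preinstalled on Amazon Linux) and hard-coding the region for brevity; the script in the repository is the definitive version:

```
REGION="eu-west-1" # illustrative
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

# Find the auto-scaling group this instance belongs to...
ASG=$(aws autoscaling describe-auto-scaling-instances \
  --instance-ids "$INSTANCE_ID" --region "$REGION" \
  --query 'AutoScalingInstances[0].AutoScalingGroupName' --output text)

# ...get the IDs of every instance in that group...
IDS=$(aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names "$ASG" --region "$REGION" \
  --query 'AutoScalingGroups[0].Instances[*].InstanceId' --output text)

# ...and turn those IDs into private IP addresses.
IPS=$(aws ec2 describe-instances --instance-ids $IDS --region "$REGION" \
  --query 'Reservations[*].Instances[*].PrivateIpAddress' --output text)

# Start Consul, expecting five servers and retrying the join to each peer.
JOIN_FLAGS=""
for ip in $IPS; do JOIN_FLAGS="$JOIN_FLAGS -retry-join=$ip"; done
docker run -d --net=host consul agent -server -ui \
  -bind="$IP" -client="0.0.0.0" -bootstrap-expect=5 $JOIN_FLAGS
```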
The problem is, if we try to run the script, we will fail, because calling the AWS APIs requires some permissions we don't have. Let's fix that.

Creating a Role for our nodes

Our nodes now have a few special requirements. They need to be able to query the details of an auto-scaling group and get the IP of an instance 15. We will need to create a policy which describes the permissions we need, create a role, attach the policy to the role and then ensure our instances are assigned the correct role. This is the consul-node-role.tf file:

// This policy allows an instance to discover a consul cluster leader.

Terraform is a little verbose here! Finally, we update our launch configuration to ensure that the instances assume this role:

resource "aws_launch_configuration" "consul-cluster-lc" {

If you are familiar with Consul, this may be all you need. If not, you might be interested in seeing how we actually create a new instance to host a service, register it with Consul and query its address.

Step 4 – Adding a Microservice

I've created a Docker image for as simple a microservice as you can get. It returns a quote from Futurama's Zapp Brannigan. The image is tagged as dwmkerr/zapp-service. On a new EC2 instance, running in either subnet, with the same roles as the Consul nodes, we run the following commands:

# Install Docker
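A sketch of those commands, assuming the zapp-service container listens on port 5000 (the port reported below) and that we register it by calling the HTTP API of one of the Consul nodes; CONSUL_ADDR is a placeholder for such a node's address, and the original approach may instead run a local Consul agent in client mode:

```
# Install Docker and start the microservice.
yum install -y docker bind-utils # bind-utils provides dig
service docker start
docker run -d -p 5000:5000 dwmkerr/zapp-service

# Register the service with Consul via the HTTP API.
IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
curl -s -X PUT "http://$CONSUL_ADDR:8500/v1/agent/service/register" \
  -d "{\"Name\": \"zapp-service\", \"Address\": \"$IP\", \"Port\": 5000}"

# Ask Consul where zapp-service lives, with a DNS SRV query on port 8600.
dig @"$CONSUL_ADDR" -p 8600 zapp-service.service.consul SRV
```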
In this example, I used a DNS SRV query to ask where the zapp-service is, was told it was at 10.0.2.158 on port 5000, then called the service, receiving a response. I can discover any service using this method, from any node. As services are added, removed, moved and so on, I can ask Consul for accurate information on where to find them.

According to the Deployment Table from the Consul documentation, a cluster of five nodes means we have a quorum of three nodes (i.e. a minimum of three nodes are needed for a working system). This means we can tolerate the failure of two nodes. If we pick two random nodes, as above, and terminate them, we see that the cluster determines that we have two failed nodes but will still function (if one was the leader, a new leader will be automatically elected).

What's nice about this setup is that no manual action is needed to recover. Our load balancer will notice the nodes are unhealthy and stop forwarding traffic to them. Our auto-scaling group will see the nodes have terminated and create two new ones, which will join the cluster in the same way as the original nodes. Once they join, the load balancer will find them healthy and bring them back into rotation.

The nodes which were terminated are still listed as failing. After 72 hours Consul will stop trying to periodically reconnect to these nodes and will completely remove them 16.

Wrapping Up

Small typos or mistakes in the userdata script are almost impossible to effectively diagnose. The scripts were actually built in the following way:

I've included CloudWatch logging in the code. In this write-up I've omitted this code as it is purely for diagnostics and doesn't contribute to the main topic. The setup is in the consul-node.sh and consul-node-role.tf files. If you want more details, let me know, or just check the code. I would heartily recommend setting up logging like this for all but the most straightforward projects.

This kind of pattern is critical in the world of microservices, where many small services will be running on a cluster. Services may die, due to errors or failing hosts, and be recreated on new hosts. Their IPs and ports may be ephemeral. It is essential that the system as a whole has a registry of where each service lives and how to access it. Such a registry must be resilient, as it is an essential part of the system. ↩

Most popular is a fairly loose term. Well ranked by Gartner and anecdotally with the largest infrastructure footprint. https://www.gartner.com/doc/reprints?id=1-2G2O5FC&ct=150519&st=sb ↩

This is AWS parlance again. An availability zone is an isolated data centre. Theoretically, spreading nodes across AZs increases resilience, as catastrophic failures or outages are less likely to affect multiple zones. ↩

I don't get money from Udemy or anyone else for writing anything on this blog. All opinions are purely my own and influenced by my own experience, not sponsorship. Your mileage may vary (yada yada) but I found the course quite good: https://www.udemy.com/aws-certified-solutions-architect-associate/. ↩

For more expert readers that may sound horribly patronising; I don't mean it to be. For many less experienced technologists the basics of networking might be more unfamiliar! ↩

A common pattern is to actually make the group size dynamic, responding to events. For example, we could have a group of servers which increases in size if the average CPU load of the hosts stays above 80% for five minutes, and scales down if it goes below 10% for ten minutes. This is more common for app and web servers and not needed for our system. ↩
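We don't need this for the Consul cluster, but for reference, such a rule can be expressed as a CloudWatch alarm driving a scaling policy. The resource and group names below are illustrative (reusing the auto-scaling group name assumed earlier), and a mirror-image alarm and policy would handle scaling back down:

```
// Add one instance when average CPU stays above 80% for five minutes.
resource "aws_autoscaling_policy" "scale-up" {
  name                   = "scale-up"
  autoscaling_group_name = "${aws_autoscaling_group.consul-cluster-asg.name}"
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = 1
  cooldown               = 300
}

resource "aws_cloudwatch_metric_alarm" "cpu-high" {
  alarm_name          = "cpu-high"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80
  period              = 300
  evaluation_periods  = 1
  dimensions {
    AutoScalingGroupName = "${aws_autoscaling_group.consul-cluster-asg.name}"
  }
  alarm_actions = ["${aws_autoscaling_policy.scale-up.arn}"]
}
```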
Check the admin UI every 30 seconds; a response taking more than 3 seconds indicates a timeout and a failed check. Two failures in a row mean an unhealthy host, which will be destroyed; two successes in a row for a new host mean it is healthy, which means it will receive traffic. ↩
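In ELB terms that corresponds to a health_check block along these lines; the target URL is an assumption (any page served by the Consul UI on port 8500 would do):

```
health_check {
  target              = "HTTP:8500/ui/" // probe the Consul admin UI
  interval            = 30              // check every 30 seconds
  timeout             = 3               // longer than 3 seconds is a failure
  unhealthy_threshold = 2               // two failures in a row: unhealthy
  healthy_threshold   = 2               // two successes in a row: healthy
}
```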
