Using Bosh Multi-CPI Feature to Deploy to Different IaaS

Benjamin Guttmann

Published at 24.09.2019

How-To’s & Tutorials

Table of Contents

  • Introduction
  • CPI
  • Cloud Config
  • CPI Config
Introduction

We have a vSphere installation with two data centers and had been thinking for a while about adding a third availability zone without requiring an additional data center. So we considered moving the third availability zone to AWS. With the multi-CPI feature introduced in BOSH v261+, the initial blocker was removed.

In theory, we are now able to deploy to different infrastructures. But while setting up some test deployments, I ran into a few interesting questions that were not clearly answered in the BOSH docs or in the blog posts I found on the topic, so I decided to share my struggles and possible solutions with you.

CPI

As I already said, BOSH supports configuring several CPIs, but the documentation only seems to cover the case where you deploy to different regions (AWS, GCP) or different data centers (vSphere) of the same infrastructure. Still, it shouldn’t be that hard to deploy to two different infrastructures, right?

The first thing we’ll need is the correct CPIs packed onto our BOSH director. As we are using bosh-deployment to deploy the director, this should not be too hard: we just add the correct ops files and run bosh create-env. But for ops files, the order matters. If you apply the CPI ops files in the wrong order, your cloud_provider block will end up with the wrong configuration.

Question 1: Which CPI ops file to apply first? 

As the CPI also includes the information needed to create the director, I decided to apply the ops file for the infrastructure the director is deployed to last. This ensures that all the information the CPI needs to deploy the director is available. An example create-env command could look like this:

Code Example

bosh create-env bosh.yml \
  -o uaa.yml \
  -o credhub.yml \
  -o jumpbox-user.yml \
  -o aws/cpi.yml \
  -o vsphere/cpi.yml \
  --vars-store creds.yml \
  --vars-file vars.yml \
  --state state.yml
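
Because vsphere/cpi.yml is applied last, the director’s cloud_provider block ends up pointing at the vSphere CPI. A rough sketch of what the resulting block contains (the exact values depend on your bosh-deployment version, so treat these as illustrative):

Code Example

cloud_provider:
  template:
    name: vsphere_cpi
    release: bosh-vsphere-cpi
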
Cloud Config

After we have our BOSH director up and running with all CPIs in place, we need to check our cloud config for possible adjustments. In my case, this base cloud config was used to deploy to a vSphere environment:

Code Example

azs:
- cloud_properties:
    datacenters:
    - clusters:
      - Cluster01:
          resource_pool: Test_Cluster01
      name: nameme
  name: z1
- cloud_properties:
    datacenters:
    - clusters:
      - Cluster02:
          resource_pool: Test_Cluster02
      name: nameme
  name: z2
- cloud_properties:
    datacenters:
    - clusters:
      - Cluster03:
          resource_pool: Test_Cluster03
      name: nameme
  name: z3
compilation:
  az: z1
  network: compilation
  reuse_compilation_vms: true
  vm_type: compilation
  workers: 2
disk_types:
- cloud_properties:
    type: thin
  disk_size: 2048
  name: small
- cloud_properties:
    type: thin
  disk_size: 4096
  name: medium
- cloud_properties:
    type: thin
  disk_size: 6144
  name: big
- cloud_properties:
    type: thin
  disk_size: 10144
  name: large
- cloud_properties:
    type: thin
  disk_size: 20124
  name: xlarge
networks:
- name: net
  subnets:
  - az: z1
    cloud_properties:
      name: Cluster01_TEST-1
    dns:
    - 8.8.8.8
    - 8.8.4.4
    gateway: 10.0.1.1
    range: 10.0.1.0/24
    reserved:
    - 10.0.1.1 - 10.0.1.10
    - 10.0.1.200 - 10.0.1.255
  - az: z2
    cloud_properties:
      name: Cluster02_TEST-1
    dns:
    - 8.8.8.8
    - 8.8.4.4
    gateway: 10.0.2.1
    range: 10.0.2.0/24
    reserved:
    - 10.0.2.1 - 10.0.2.16
    - 10.0.2.18 - 10.0.2.254
  - az: z3
    cloud_properties:
      name: Cluster03_TEST-1
    dns:
    - 8.8.8.8
    - 8.8.4.4
    gateway: 10.0.3.1
    range: 10.0.3.0/24
    reserved:
    - 10.0.3.1 - 10.0.3.16
    - 10.0.3.18 - 10.0.3.254
  type: manual
- name: compilation
  subnets:
  - az: z1
    cloud_properties:
      name: Cluster01_TEST-1
    dns:
    - 8.8.8.8
    - 8.8.4.4
    gateway: 10.0.1.1
    range: 10.0.1.0/24
    reserved:
    - 10.0.1.1 - 10.0.1.200
  type: manual
vm_types:
- cloud_properties:
    cpu: 1
    disk: 4096
    ram: 1024
  name: nano
- cloud_properties:
    cpu: 1
    disk: 10000
    ram: 4096
  name: small
- cloud_properties:
    cpu: 2
    disk: 20000
    ram: 4096
  name: medium
- cloud_properties:
    cpu: 4
    disk: 20000
    ram: 4096
  name: big
- cloud_properties:
    cpu: 4
    disk: 60000
    ram: 8192
  name: large
- cloud_properties:
    cpu: 20
    disk: 60000
    ram: 16384
  name: xlarge
- cloud_properties:
    cpu: 20
    disk: 20000
    ram: 8192
  name: compilation

The first part we check for adjustments is the availability zone (`azs`) definition, which currently looks like this:

Code Example

azs:
- cloud_properties:
    datacenters:
    - clusters:
      - Cluster01:
          resource_pool: Test_Cluster01
      name: nameme
  name: z1
- cloud_properties:
    datacenters:
    - clusters:
      - Cluster02:
          resource_pool: Test_Cluster02
      name: nameme
  name: z2
- cloud_properties:
    datacenters:
    - clusters:
      - Cluster03:
          resource_pool: Test_Cluster03
      name: nameme
  name: z3

What we need to do now is add an availability zone for AWS; in our example, we will add `z4`:

Code Example

azs:
- cloud_properties:
    datacenters:
    - clusters:
      - Cluster01:
          resource_pool: Test_Cluster01
      name: nameme
  name: z1
  cpi: a9s-vsphere
- cloud_properties:
    datacenters:
    - clusters:
      - Cluster02:
          resource_pool: Test_Cluster02
      name: nameme
  name: z2
  cpi: a9s-vsphere
- cloud_properties:
    datacenters:
    - clusters:
      - Cluster03:
          resource_pool: Test_Cluster03
      name: nameme
  name: z3
  cpi: a9s-vsphere
- cloud_properties:
    availability_zone: eu-central-1a
  name: z4
  cpi: aws-a9s

Did you notice that we added a CPI name to each availability zone? This tells BOSH which CPI to use for which zone. The names you can use here are defined via the CPI config, which we will have a look at in a moment.
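
To make the mapping explicit, here is a minimal sketch of the pairing (the full CPI config follows in the next section):

Code Example

# In the cloud config, each AZ references a CPI by name ...
azs:
- name: z4
  cpi: aws-a9s
# ... which must match a name defined in the CPI config:
cpis:
- name: aws-a9s
  type: aws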

But before that, let’s check the remaining parts of the cloud config for adjustments, namely the disk_types and the vm_types:

Code Example

disk_types: # (vsphere)
- cloud_properties:
    type: thin
  disk_size: 2048
  name: small

disk_types: # (aws)
- cloud_properties:
    type: gp2
  disk_size: 2048
  name: small

Comparing the respective parts of an AWS cloud config and a vSphere cloud config, we can see that the type property is used for both infrastructures: for vSphere we set the value `thin`, while for AWS we used `gp2`. This raises a few questions:

So how do we tell the CPI which type should be used?
Do we need to create separate disk_types for every infrastructure?
And if yes, how do we use them in the manifests? 

This issue can actually be solved via the CPI config. Looking at the CPI configuration options for AWS and vSphere, we can see that on AWS the disk type defaults to `gp2`, so we do not need to configure it explicitly. For vSphere, there is a global property named `default_disk_type`, so we can set the default disk type via the CPI config; we will have a closer look at that right after this section. By removing the now-unneeded values, we get the following result:

Code Example

disk_types: # (vsphere|aws)
- disk_size: 2048
  name: small
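
The vSphere half of the default, the `default_disk_type`, then lives in the CPI config (this is an excerpt of the config shown in full in the next section):

Code Example

cpis:
- name: a9s-vsphere
  type: vsphere
  properties:
    default_disk_type: thin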

The last step is to check the vm_type definition:

Code Example

vm_types: # (vsphere)
- cloud_properties:
    cpu: 1
    ram: 1024
  name: xsmall

vm_types: # (aws)
- cloud_properties:
    instance_type: t2.micro
  name: xsmall

The cloud_properties needed by the AWS and vSphere CPIs differ, so they do not overwrite each other and we can simply merge them into one vm_type. This way, each CPI picks up only the information it needs to create the VM.

Code Example

vm_types: # (vsphere|aws)
- cloud_properties:
    cpu: 1
    ram: 1024
    instance_type: t2.micro
  name: xsmall

The last thing missing in the cloud config is the network for the fourth availability zone:

Code Example

  - az: z4
    cloud_properties:
      subnet: subnet-
    dns:
    - 10.0.4.2
    - 8.8.8.8
    - 8.8.4.4
    gateway: 10.0.4.1
    range: 10.0.4.0/24
    reserved:
    - 10.0.4.1 - 10.0.4.16
    - 10.0.4.18 - 10.0.4.254
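
With these adjustments made, the updated cloud config can be uploaded to the director as usual (the file name is an example):

Code Example

bosh update-cloud-config cloud-config.yml
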
CPI Config

The centerpiece that enables the multi-CPI feature is the CPI config, which includes all the information necessary to configure the CPIs in use. For a general overview, have a look at the official BOSH documentation.

In our case, the CPI config includes not only the needed credentials and configuration information but also `default_disk_type: thin` for our vSphere VMs, which solves the disk_type issue we discussed earlier:

Code Example

cpis:
- name: a9s-vsphere
  type: vsphere
  properties:
    host: ((vcenter_ip))
    user: ((vcenter_user))
    password: ((vcenter_password))
    default_disk_type: thin
    datacenters:
    - clusters: ((vcenter_clusters))
      datastore_pattern: ((vcenter_ds))
      disk_path: ((vcenter_disks))
      name: ((vcenter_dc))
      persistent_datastore_pattern: ((vcenter_ds))
      template_folder: ((vcenter_templates))
      vm_folder: ((vcenter_vm_folder))
- name: aws-a9s
  type: aws
  properties:
    access_key_id: ((access_key_id))
    secret_access_key: ((secret_access_key))
    default_key_name: ((default_key_name))
    default_security_groups:
    - ((default_security_groups))
    region: ((region))

One important step to mention here: after you have uploaded the CPI config, you need to re-upload the stemcells so that they are available to each CPI.
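
A minimal command sequence for this step could look like the following; the stemcell file names are placeholders, and the --fix flag forces re-uploading a stemcell the director already knows about:

Code Example

bosh update-cpi-config cpi-config.yml
# re-upload the stemcells so they are registered with every CPI
bosh upload-stemcell bosh-stemcell-vsphere-esxi-ubuntu-xenial-go_agent.tgz --fix
bosh upload-stemcell light-bosh-stemcell-aws-xen-hvm-ubuntu-xenial-go_agent.tgz --fix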

Afterwards, the output of bosh stemcells distinguishes the stemcells by the CPI they are used with (in our case a9s-vsphere and aws-a9s).

Now that everything is in place, I used the following manifest to deploy a Prometheus Alertmanager to both infrastructures, AWS and vSphere:

Code Example

---
name: prometheus

instance_groups:
  - name: alertmanager
    azs:
      - z1
      - z4
    instances: 2
    vm_type: small
    persistent_disk: 1_024
    stemcell: default
    networks:
      - name: net
    jobs:
      - name: alertmanager
        release: prometheus
        properties:
          alertmanager:
            route:
              receiver: default
            receivers:
              - name: default
            test_alert:
              daily: true

update:
  canaries: 1
  max_in_flight: 32
  canary_watch_time: 1000-100000
  update_watch_time: 1000-100000
  serial: false

stemcells:
  - alias: default
    os: ubuntu-xenial
    version: latest

releases:
- name: prometheus
  version: 25.0.0
  url: https://github.com/bosh-prometheus/prometheus-boshrelease/releases/download/v25.0.0/prometheus-25.0.0.tgz
  sha1: 71cf36bf03edfeefd94746d7f559cbf92b62374c
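
The deployment is then kicked off as usual (the manifest file name is an example):

Code Example

bosh -d prometheus deploy prometheus.yml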

Checking the resulting VMs with bosh vms shows where each instance ended up: if you are familiar with the style of VM CIDs, you can see that the instance in z4 has an AWS-style VM CID (an EC2 instance ID), while the one in z1 has a vSphere-style CID.

So let’s wrap up what needs to be done to use the BOSH multi-CPI feature to deploy to different infrastructures:

  • Add the AZs for the new infrastructure
  • Add the vm_type information needed for the new infrastructure
  • Remove properties that can only be used by one CPI and move them to the CPI config (e.g. disk_type in the cloud config becomes default_disk_type in the CPI config)
  • Upload a CPI config
  • Add the new AZs to your manifest
  • Deploy

I hope this small blog post helps you to easily spread your deployments across different infrastructures.
