Cloudforms provider creation via API

In my new job, I’ve been working with other cloud technologies apart from OpenStack. Ansible is used heavily and now some version of this technology runs through a large percentage of Red Hat’s products. Cloudforms positions itself as a single pane of glass through which to control not just traditional infrastructure providers like RHEV and VMware but also OpenStack, AWS, Satellite 6, Ansible Tower and a multitude of other tools.

So I have only a small amount of experience with the above, OpenStack aside. Documentation is generally pretty good, but I have spent some time reading the API runes to determine how to automatically create providers within Cloudforms (note that this should work fine for ManageIQ as well). Ansible does have a manageiq provider module, but it’s far from complete.

NB: The following is appropriate for MY usage on one environment; you WILL need to set and adjust parameters to suit. This should just be used to understand what parameters you need, not how to set them. This was done using the Ansible uri module in Ansible 2.4 against Cloudforms 4.5.

RHEV providers are pretty simple:

- name: Create RHEV Provider
  uri:
    url: "https://{{ inventory_hostname }}/api/providers"
    method: POST
    user: "{{ vault_cfme_user }}"
    password: "{{ vault_cfme_password }}"
    body:
      type: "ManageIQ::Providers::Redhat::InfraManager"
      name: "{{ cloudforms.rhev_name }}"
      hostname: "{{ inventory_hostname }}"
      credentials:
        userid: "{{ vault_rhev_user }}"
        password: "{{ vault_rhev_password }}"
    status_code: 200
    body_format: json
    validate_certs: no

Satellite 6 is much the same (but notice the URL is different):

- name: Create Satellite Provider
  uri:
    url: "https://{{ inventory_hostname }}/api/providers?provider_class=provider"
    method: POST
    user: "{{ vault_cfme_user }}"
    password: "{{ vault_cfme_password }}"
    body:
      type: "ManageIQ::Providers::Foreman::Provider"
      name: "{{ cloudforms.satellite_name }}"
      url: "{{ inventory_hostname }}"
      credentials:
        userid: "{{ vault_satellite_user }}"
        password: "{{ vault_satellite_password }}"
    status_code: 200
    body_format: json
    validate_certs: no

OpenStack: note that you have to set BOTH security_protocol and verify_ssl here, at least if you need to set them at all. This would not be appropriate outside of dev/PoC, yada-yada-yada:

- name: Create OpenStack Provider
  uri:
    url: "https://{{ inventory_hostname }}/api/providers"
    method: POST
    user: "{{ vault_cfme_user }}"
    password: "{{ vault_cfme_password }}"
    body:
      type: "ManageIQ::Providers::Openstack::CloudManager"
      verify_ssl: "false"
      security_protocol: "Non-SSL"
      name: "{{ cloudforms.openstack_name }}"
      hostname: "{{ inventory_hostname }}"
      credentials:
        userid: "{{ vault_openstack_user }}"
        password: "{{ vault_openstack_password }}"
    status_code: 200
    body_format: json
    validate_certs: no

Ansible Tower: pretty simple but again, note the specific provider_class query string in the URL:

- name: Create Ansible Tower Provider
  uri:
    url: "https://{{ inventory_hostname }}/api/providers?provider_class=provider"
    method: POST
    user: "{{ vault_cfme_user }}"
    password: "{{ vault_cfme_password }}"
    body:
      type: "ManageIQ::Providers::AnsibleTower::Provider"
      name: "Ansible Tower"
      url: "{{ inventory_hostname }}"
      credentials:
        userid: "{{ vault_tower_user }}"
        password: "{{ vault_tower_password }}"
    status_code: 200
    body_format: json
    validate_certs: no

Finally OpenShift, the most complex, but there’s still not that much to it. You just need to note that here we pass connection_configurations an array containing both the OpenShift and Hawkular endpoints. Plus we are using a token here rather than a username and password. And again, be sure to set both SSL options, otherwise the provider is created but doesn’t work.

- name: Create OCP Provider
  uri:
    url: "https://{{ inventory_hostname }}/api/providers"
    method: POST
    user: "{{ vault_cfme_user }}"
    password: "{{ vault_cfme_password }}"
    body:
      type: "ManageIQ::Providers::Openshift::ContainerManager"
      name: "OpenShift"
      port: "8443"
      connection_configurations:
      - endpoint:
          role: "default"
          hostname: "{{ inventory_hostname }}"
          port: "8443"
          verify_ssl: "false"
          security_protocol: "ssl-without-validation"
        authentication:
          authtype: "bearer"
          auth_key: "{{ vault_ocp_token }}"
      - endpoint:
          role: "hawkular"
          hostname: "{{ cloudforms.hawkular_hostname }}"
          port: "443"
          verify_ssl: "false"
          security_protocol: "ssl-without-validation"
        authentication:
          authtype: "hawkular"
          auth_key: "{{ vault_ocp_token }}"
    status_code: 200
    body_format: json
    validate_certs: no
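
Once the tasks have run, a quick sanity check is a GET against the same API to list what was created. This is just a plain curl sketch: the hostname and credential variables are placeholders, and -k mirrors the validate_certs: no used above, so drop it where certificates are trusted.

curl -k -u "$CFME_USER:$CFME_PASSWORD" \
  "https://cloudforms.example.com/api/providers?expand=resources&attributes=name,type"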

RDO and Clustering-as-a-Service

In a very old life, I maintained some packages for the Fedora project – an audio editor, a map rendering system, a lazy logger, that kind of thing.

Senlin is a clustering service for OpenStack. My employer has a vested interest in HPC clusters and I enjoy side projects which contribute upstream and allow users to consume tooling. Mostly my approach is:

  1. Produce ham-fisted hack which, at best, partially fixes the problem
  2. This annoys the developer enough to complete the patch
  3. Winning/PROFIT

Evidence of this is the Keystone v3 support in Elasticluster.

Anyway, I digress. RDO didn’t have Senlin packaged for easy consumption so I decided to apply my “skills” in order to fix that particular problem.

  1. I started by following the new package documentation
  2. It’s the start of a long journey….
  3. The Senlin service package review alone went through 40 revisions. Yes, 40.
  4. The client was a bit better.

Naming is hard, but some of the terminology and naming in RDO is a bit odd: DLRN, Weirdo. There is a steep on-ramp for a new contributor and it feels a bit like a walled garden at times. But the folks in IRC are helpful and, most of all, patient. Special thanks go to Alfredo Moralej, Haïkel Guémar, Chandan Kumar and Javier Peña for their limitless patience in the face of blundering git commits.

The good news is that both service and client are now available in RDO in time for the Pike release. I’m hoping this will increase usage of Senlin in OpenStack, as I think a lack of packaging for what is often very good code is a barrier to overall adoption. There are plenty of options for spinning up clusters in OpenStack beyond Senlin or Elasticluster, but it’s good to be able to nudge these along where time allows.

What’s next? Knowing very little about writing puppet code, I’m next intending to write the puppet module for Senlin. Apparently it involves something called a cookie cutter.

We have some users who elected to deploy RDO and others using Red Hat’s OpenStack Platform for commercial support. I’m hopeful that by understanding the RDO process better, we can help support both sets of users equally. Most deployments we do necessitate the odd patch or two going upstream and hopefully this will be easier from here on in.

If you are intending to use Senlin please get in touch.

Scaling issues with the TripleO undercloud node

With large deployments, it’s important to make good architectural choices about the nodes in your environment, and during deployment none is more critical than the provisioning node itself.

This is because in TripleO the undercloud node doesn’t just push out images to nodes but also orchestrates configuration of the entire cluster. In order to do this, it needs to run a bunch of OpenStack services: nova, neutron, ironic, heat, keystone, glance and so on. It also runs two databases, a messaging bus and a web server.

All of this means that the provisioning node needs to be quite a powerful piece of kit, as the provisioning process involves lots of disk and network I/O amongst other things. So it’s important to specify a fast disk, plenty of memory, a goodly number of cores and a quick NIC.

But sometimes, even with all of the above, you need to tweak things because with the best will in the world, the undercloud installation and configuration applies “best guess” values when it comes to things like threads, processes, timeouts and retries.

Each service has a tonne of configurable options and it’s important to understand the implications and impact of each one in order to get the best performance out of the node. It’s also important to understand which changes will help in response to any particular bottleneck.

Specifically, we found that tuning the process and thread count for WSGI processes and increasing haproxy maxconn values caused the node to handle load with greater efficiency. A patch has been merged to address this[1]. Red Hat produce a guide on tuning the undercloud (Director in their commercial parlance)[2].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1330980

[2] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/director_installation_and_usage/chap-troubleshooting_director_issues#sect-Tuning_the_Undercloud
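
For illustration, the sort of places these settings end up on our undercloud are shown below. The file paths are from a puppet-deployed, Mitaka-era undercloud and are assumptions for your environment, so treat this purely as a pointer on where to look rather than recommended values.

# current WSGI process/thread counts for keystone (other services have
# similar vhost files under /etc/httpd/conf.d/)
grep WSGIDaemonProcess /etc/httpd/conf.d/10-keystone_wsgi_*.conf

# current connection limits for the undercloud's haproxy
grep -n maxconn /etc/haproxy/haproxy.cfg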

So you need a management network quick-smart?

TripleO clouds can be deployed with an optional Management VLAN. You can use this to run Ansible playbooks, monitoring systems and generally manage your cloud, hence the name.

However, this requires configuration during deployment. So what happens if you have a cloud that doesn’t have a management VLAN? You can use the provisioning network. The problem is that this doesn’t have fixed addresses, only dynamic ones. These rarely change, however, so to perform a quick playbook run or a cluster-wide command with pdsh, for example, you can use the OpenStack CLI to create a hosts file as follows:

openstack server list -f value --column Networks --column Name | sed 's/ ctlplane=/ /g' | awk '{ print $2 " " $1}'

This converts the output for your ironic-provisioned nodes into a format you can cat into a hosts file.
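
As a minimal sketch of how I use it (the sudo tee target and the pdsh host names below are illustrative; node names will be whatever your deployment produced):

# append "IP hostname" pairs for every deployed node to /etc/hosts
openstack server list -f value --column Networks --column Name \
  | sed 's/ ctlplane=/ /g' | awk '{ print $2 " " $1 }' \
  | sudo tee -a /etc/hosts

# now pdsh (or an ansible inventory) can reach the nodes by name, e.g.
pdsh -w overcloud-controller-0,overcloud-compute-0 uptime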

This lets you use an existing network and avoids having to add your management node to yet another one (e.g. storage).

It’s not big, it’s not clever, but it does work.

Exporting Amazon EC2 instances into OpenStack

I had a requirement to get some workloads running on EC2 (which I’m a huge fan of, I just hate the vendor lock-in) imported into OpenStack.

Tools to help you get anything out of AWS are almost non-existent. I did try ec2-create-instance-export-task from AWS API tools but this has so many hurdles to jump through that it became slightly farcical. In the end it wouldn’t let me export the image because it wasn’t an imported image in the first place. Hmmm.

Despite the general consensus online, this turns out to be fairly straightforward. The problem appears to arise if you’ve used Amazon Linux AMIs with their custom kernel. Thankfully, these were Ubuntu 16.04 images.

Step 1. Boot an instance from your AMI. Use SSD and a decent instance size if you’re feeling flush and in a hurry.

Step 2. Snapshot the instance’s volume, create a new volume from that snapshot and attach it to the running instance.

Step 3. From your OpenStack environment, dd the attached disk, gzip it and pipe it over an ssh tunnel because, y’know, Amazon egress charges. E.g.:

ssh -i chris.pem ubuntu@my.amazon.v4.ip "sudo dd if=/dev/xvdf | gzip -1 -" | dd of=image.gz

Step 4. Unzip the image, upload it to OpenStack and boot it.
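
A rough sketch of step 4 follows; the image, flavor and network names are placeholders, and raw is assumed as the disk format since the image is a straight dd of the block device:

# decompress (yields a raw disk image)
gunzip image.gz
mv image image.raw

# upload to glance and boot from it
openstack image create --disk-format raw --container-format bare \
  --file image.raw imported-ec2-image
openstack server create --image imported-ec2-image --flavor m1.large \
  --nic net-id=NETWORK_UUID imported-ec2-instance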

Step 5 (For those with Amazon kernels). Fudge around replacing the Amazon kernel with something close to the same version. YMMV.

OpenStack Release Notes with Reno

I’m currently trying to get a patch submitted to the Puppet Keystone project which implements the ability to turn “chase referrals” on or off for deployments that use Active Directory.

One comment came back from the initial patch:

please add release note

Ok. So of course, this being OpenStack, it turns out to be complicated. You need to use “Reno”, a tool that has been used since Liberty (I think) to document changes to OpenStack. The HUGE irony is that the documentation for OpenStack’s documentation tool is sparse and pretty hopeless. It recommends running:

tox -e venv -- reno new slug-goes-here

which gives the error: ERROR: unknown environment 'venv'

Of course. Thankfully some kind soul in the Manila documentation project has added the missing clue for the clueless:

If reno is not installed globally on your system, you can use it from venv of your manila’s tox. Run:

source .tox/py27/bin/activate

In the puppet-keystone directory, py27 needed replacing with “releasenotes” for some obscure reason, but then it worked and I could finally run:

reno new implement-chase-referrals

and the release note was created.
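
The command drops a stub under releasenotes/notes/ with a random suffix, which you then edit. Something along these lines is roughly what a finished note looks like; the filename suffix and the wording here are mine, not the actual patch:

$ cat releasenotes/notes/implement-chase-referrals-1234567890abcdef.yaml
---
features:
  - |
    Adds the ability to enable or disable chasing of LDAP referrals for
    deployments that authenticate against Active Directory.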

Manually re-setting failed deployments with Ironic

OpenStack commands have some odd naming conventions sometimes – just take a look at the whole evacuate/host-evacuate debacle in nova for example – and ironic is no exception.

I’m currently using TripleO to deploy various environments, which sometimes results in failed deployments. If you take into account all the vagaries of various IPMI implementations, I think it does a pretty good job. Sometimes though, when a stack gets deleted, I’m left with something like the following:

[stack@undercloud ~]$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

[stack@undercloud ~]$ ironic node-list
+--------------------------------------+----------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name     | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+----------+--------------------------------------+-------------+--------------------+-------------+
| 447ffea5-ae3f-4796-bfba-ce44dd8a84b7 | compute4 | 26843ce8-e562-4945-ad32-b60504a5bca3 | power on    | deploy failed      | False       |
+--------------------------------------+----------+--------------------------------------+-------------+--------------------+-------------+

So an instance is still associated with the baremetal node.

In this case, it isn’t obvious but after some digging:

ironic node-set-provision-state compute4 deleted

should result in the node being set back to available. I’m still not clear whether this re-runs the clean steps, but it gives me what I want in order to re-run the deployment.
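
To keep an eye on it while it settles (the node name is taken from the example above; depending on configuration it may pass through cleaning first):

# once Provisioning State shows "available", the node can be deployed again
watch -n 10 "ironic node-list | grep compute4"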

OpenStack Tempest on RDO Mitaka

There are two main tools for testing a deployed cloud, Rally and Tempest.

I have been looking into verifying functionality in a private cloud once it has been created (using TripleO) and the documentation is, as usual, abysmal. It’s the usual rabbit warren of developer docs, stuff relating to releases from 3 years back, blueprints which mention “the upcoming Havana release” and so on.

So for reference (mine mostly), here are the steps to get OpenStack Tempest working on the RDO Mitaka stable release:

  1. Ensure you have a neutron network called “nova”
    $ neutron net-create nova --router:external --provider:network_type flat --provider:physical_network datacentre
    $ neutron subnet-create --name nova --enable_dhcp=False --allocation-pool=start=10.1.1.51,end=10.1.1.250 --gateway=10.1.1.1 nova 10.1.1.0/24
  2. Check that you have a role called “heat_stack_owner”. If not, create one:
    $ openstack role create heat_stack_owner
  3. Create your tempest directory and change into it
    $ mkdir ~/tempest && cd ~/tempest
  4. Initialize the directory by running
    $ /usr/share/openstack-tempest-10.0.0/tools/configure-tempest-directory
  5. Configure tempest
    $ tools/config_tempest.py --deployer-input ~/tempest-deployer-input.conf \
    --create identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD
  6. Run tempest (NOT with tools/run-tests.sh)
    $ ./run_tempest.sh
  7. Answer yes to the prompt to initialise your virtual environment. This will download required libraries etc.

Depending on environment the tests will take about an hour to run. So go make a brew and get ready to debug the failures. 🙂
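
One way to dig into the failures afterwards is via the test repository that run_tempest.sh leaves behind. This assumes the virtualenv it offered to create lives in .venv, which was the case for me:

source .venv/bin/activate
# list only the tests that failed in the last run
testr failing --list
# replay the result of the last run in full
testr last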

Source: https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/director-installation-and-usage/85-validating-the-overcloud

Shellinabox and serial consoles

TripleO is in fairly dire need of something similar to conserver/wcons/rcons in xCAT, just so you can see what the heck the node’s console is doing instead of having to fire up your out-of-band web interface, log in and launch the web console, and that is *if* you have the license for it.

CLI console access in Ironic is currently under development after I filed an RFE:

https://bugs.launchpad.net/ironic/+bug/1536572

but in the meantime I decided to try and get serial console access through shellinabox working.

It’s not too hard and the following is a good start:

http://docs.openstack.org/developer/ironic/deploy/install-guide.html#configure-node-web-console

The key thing to understand is the terminal_port value, which varies according to the IPMI driver.
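
For reference, the sort of commands involved on the undercloud, assuming an ipmitool-based driver; <node> is a placeholder and the port is just an example (pick a unique, unused one per node):

# point shellinabox at a free port for this node
ironic node-update <node> add driver_info/ipmi_terminal_port=8023
# enable the console and fetch its URL
ironic node-set-console-mode <node> true
ironic node-get-console <node>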

Once configured this gives a nice view with a decent amount of scroll-back.

It’s a pity all this is manual. I guess it would be fairly easy to script enabling serial consoles as part of an undercloud install, but perhaps it’s enough of a security risk that not making it too easy is deliberate!

Deleting multiple nova instances

We spawn large numbers of instances for testing and benchmark purposes. This happens a lot in HPC and, as long as it isn’t orchestrated by heat, it’s fine to batch-delete these. But the OpenStack CLI doesn’t appear to provide a way to do this intelligently and carefully. You could log into Horizon, but this takes a while (improvements are coming in that area in Mitaka apparently) and the screen only loads a maximum of 25 or 50 instances at a time.

You don’t want to use this lightly (run just the list-and-grep part first to confirm the match), but here it is, more for my reference than anything else.

nova list --all-tenants | grep -i UNIQUE_COMMON_KEYWORD_HERE | awk '{print $2}' | grep -v ^ID | grep -v ^$ | xargs -n1 nova delete

Then obviously replace UNIQUE_COMMON_KEYWORD_HERE with mytestinstance or testinstance or cbtest and so on.
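
For what it’s worth, a roughly equivalent sketch with the unified openstack client (admin credentials assumed, keyword still a placeholder):

openstack server list --all-projects -f value -c ID -c Name \
  | grep -i UNIQUE_COMMON_KEYWORD_HERE \
  | awk '{print $1}' \
  | xargs -n1 openstack server delete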