So you need a management network quick-smart?

TripleO clouds can be deployed with an optional management VLAN. You can use it to run Ansible playbooks, hook up monitoring systems and generally manage your cloud, hence the name.

However, this has to be configured at deployment time. So what happens if your cloud doesn't have a management VLAN? You can use the provisioning network instead. The catch is that it has no fixed addresses, only dynamic ones. In practice these rarely change, so for a quick playbook run or a cluster-wide command with pdsh, for example, you can use the OpenStack CLI to generate a hosts file as follows:

openstack server list -f value --column Networks --column Name | sed 's/ ctlplane=/ /g' | awk '{ print $2 " " $1}'

This converts the list of your Ironic nodes into a format you can cat into a hosts file.
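
For example (the node name and the heat-admin user here are the TripleO defaults; adjust to your deployment), you can append the output straight to /etc/hosts and then drive the cluster with pdsh:

openstack server list -f value --column Networks --column Name | sed 's/ ctlplane=/ /g' | awk '{ print $2 " " $1}' | sudo tee -a /etc/hosts
PDSH_RCMD_TYPE=ssh pdsh -l heat-admin -w overcloud-controller-[0-2] 'uptime'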

This lets you use an existing network and avoids having to attach your management node to another one (e.g. storage).

It's not big, it's not clever, but it does work.

Manually resetting failed deployments with Ironic

OpenStack commands sometimes have odd naming conventions (just look at the whole evacuate/host-evacuate debacle in Nova, for example), and Ironic is no exception.

I'm currently using TripleO to deploy various environments, which sometimes results in failed deployments. Considering all the vagaries of the various IPMI implementations out there, I think it does a pretty good job. Sometimes, though, when a stack gets deleted, I'm left with something like the following:

[stack@undercloud ~]$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

[stack@undercloud ~]$ ironic node-list
+--------------------------------------+----------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name     | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+----------+--------------------------------------+-------------+--------------------+-------------+
| 447ffea5-ae3f-4796-bfba-ce44dd8a84b7 | compute4 | 26843ce8-e562-4945-ad32-b60504a5bca3 | power on    | deploy failed      | False       |
+--------------------------------------+----------+--------------------------------------+-------------+--------------------+-------------+

So an instance is still associated with the baremetal node.

In this case the fix isn't obvious, but after some digging it turns out that:

ironic node-set-provision-state compute4 deleted

should result in the node being set back to available. I'm still not clear whether this re-runs the cleaning steps, but it gives me what I want: a node I can redeploy.
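
If the node doesn't return to available, a reasonable next step is to check its provisioning state and last error, e.g.:

ironic node-show compute4 | grep -Ei 'provision|last_error'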

Shellinabox and serial consoles

TripleO is in fairly dire need of something similar to conserver/wcons/rcons in xCAT, just so you can see what the heck the node's console is doing instead of having to fire up your out-of-band web interface, log in and launch the web console, and that's *if* you have the license for it.

CLI console access in Ironic is currently under development after I filed an RFE:

https://bugs.launchpad.net/ironic/+bug/1536572

but in the meantime I decided to try and get serial console access through shellinabox working.

It’s not too hard and the following is a good start:

http://docs.openstack.org/developer/ironic/deploy/install-guide.html#configure-node-web-console

The key thing to understand is the terminal_port value, which varies according to the IPMI driver.
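
As a rough sketch (the node name and port here are examples only; the right port depends on your driver, as above), enabling the console on a node looks something like:

ironic node-update compute4 add driver_info/ipmi_terminal_port=8023
ironic node-set-console-mode compute4 true
ironic node-get-console compute4    # should print the shellinabox URL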

Once configured, this gives a nice view with a decent amount of scroll-back.

It's a pity all this is manual. I guess it would be fairly easy to script as part of an undercloud install, but enabling serial consoles everywhere is enough of a security risk that perhaps it's best not to make it too easy!

Red Hat OSP 7 Director custom hostnames

By default, nodes deployed with Director have a standard FQDN in the format:

overcloud-$nodetype-$nodenumber.localdomain

E.g.

overcloud-controller-2.localdomain

Which is fine, but it's nice to customize this a bit, no?

To do so, edit the following undercloud files:

/etc/neutron/dhcp_agent.ini
/etc/nova/nova.conf

and create/change the following parameter:

dhcp_domain = iaas.local (or whatever you prefer)
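
For reference, the option sits under the [DEFAULT] section in both files, so the edit is a one-liner in each (restart the corresponding undercloud services afterwards for it to take effect):

# in both /etc/neutron/dhcp_agent.ini and /etc/nova/nova.conf
[DEFAULT]
dhcp_domain = iaas.local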

You also need to edit the deployment parameter:

CloudDomain:
  default: 'iaas.local'

in overcloud-without-mergepy.yaml

NB: it looks like this only applies to OSP 7; OSP 8 has some changes that make this easier.

OpenStack and Clustered Data ONTAP

NetApp Fabric-Attached Storage (FAS) devices are pretty great. They offer a dual-controller configuration for HA setups; CIFS and NFS, including pNFS; and iSCSI and FCoE; and you can populate them with drives for up to 1PB of storage, with a flash pool for a nice fast cache. They also have the added benefit of being backed by NetApp, who are a big contributor to OpenStack storage architectures, so it's a generally safe assumption that they will play nicely with Cinder, Glance, Swift and now Manila.

Because I mostly blog about problems, it's worth noting this one:

The Clustered Data ONTAP GUI doesn't appear to allow you to set the owner and group on volumes when you create them; you have to do this through the CLI. Normally I expect to do most things at the command line, but the NetApp docs are quite explicit about doing everything in the GUI, as the commands to create things like volumes are complex, e.g.:

volume create -vserver vs0 -volume user_jdoe -aggregate aggr1 -state online -policy default_expolicy -user 165 -group 165 -junction-path /user/jdoe -size 250g -space-guarantee volume -percent-snapshot-space 20 -foreground false

Note the -user and -group parameters. These let you set ownership on the volume, which means that when OpenStack mounts it you can lock it down to either the cinder or glance user.
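
I haven't verified this myself, but volume modify appears to accept the same ownership parameters, so retrofitting an existing volume should look something like this (165 being the cinder UID on a Red Hat host):

volume modify -vserver vs0 -volume user_jdoe -user 165 -group 165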

When is a link actually, y’know, UP?

TL;DR: use device names such as eno1 and enp7s0 rather than nic1 and nic2.

I’ve been chasing an issue with a TripleO-based installation whereby the nodes were provisioning but failing to configure networking correctly.

Debugging TripleO deployments is fiendishly hard, and this was made more complex by being unable to connect to the failed nodes: deployed TripleO nodes only allow key-based SSH authentication. It's great to see security so good that even the sysadmin can't access the node, I guess.

If you want to log in to a node at the console, you basically have to roll your own deployment image. I was on the verge of heading down this route when I considered the following:

TripleO deployments have two methods of specifying usable, active network interfaces. The first (and, unfortunately, the default) is to number them nic1, nic2, nic3 etc. in the config. This relies on some logic in os-net-config to determine which links are actually connected to switches.

This would be fine on most machines, but sadly the hardware I'm working on has a built-in Ethernet-over-USB device for out-of-band access. This reports itself as having a link (for reasons unknown; maybe a link to the USB interface?) and therefore fulfils the following criteria (a quick manual check is sketched after the list):

  1. Not the local loopback
  2. Has an address
  3. Reports carrier signal as active in /sys/class/net/<device>/carrier
  4. Has a subdirectory of device information in /sys/class/net/<device>/device/
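
A quick way to eyeball those criteria by hand (the device name is just an example; substitute the suspect interface on your hardware):

dev=enp0s20u1                        # example ethernet-over-usb device name
ip -o link show "$dev"               # note the link state reported here
cat /sys/class/net/"$dev"/carrier    # 1 means a carrier is being reported
ls /sys/class/net/"$dev"/device/     # populated for real hardware devices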

Despite the device reporting its link state as UNKNOWN in the output of the ip command, os-net-config's logic concluded that this was the management NIC and attempted to configure it as such, obviously to no avail.
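
The workaround, per the TL;DR above, is to use the second method: refer to interfaces by their real device names in your nic-config templates so os-net-config doesn't have to guess. A fragment along these lines (the device name is an example; use your own):

network_config:
  -
    type: interface
    name: enp7s0    # instead of nic1
    use_dhcp: false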

Happily this has resulted in my first OpenStack patch:

https://review.openstack.org/#/c/291243

which may even get accepted.