Building the first brick
The first concrete step in rebuilding my services infrastructure is building a virtual Gluster node, or two. I’m going to start with one to see how easy it is to add nodes. Actually, I may as well install k8s too, and then I’ll have my first brick.
The node(s)
I’m using Ubuntu 20.04 VMs with two processors and 3GB of RAM. I suspect this will be underpowered even for modest needs given there’ll be Kubernetes and Gluster both operating as servers and clients, or workers and control plane. But I’d like to see what that looks like so I’m ready for it if it happens on the laptops when I build this out on them.
Gluster
Basic Gluster is pretty easy. Install the software, set up a partition, create a filesystem on it, create a volume and a brick and you can mount it.
Later on I’ll see what it’s like to add a brick; I had a look and I think I know what the commands will be.
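For the record, my guess (untested at this point, and assuming a second node called brick-1 that has been prepared the same way) is that it’ll look something like this, converting the volume to a two-way replica:
$ sudo gluster peer probe brick-1
# convert gv0 from a single brick to a two-way replica
$ sudo gluster volume add-brick gv0 replica 2 brick-1:/data/brick1/gv0
# kick off a full heal so existing files get copied to the new brick
$ sudo gluster volume heal gv0 full
$ sudo gluster volume info gv0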
Anyway, this is what I did.
- Prepare a partition and a filesystem on it. The filesystem has to support extended attributes. I used ext4.
$ sudo mkfs.ext4 /dev/vdb1
$ sudo mkdir -p /data/brick1
$ sudo mount /dev/vdb1 /data/brick1
- Install Gluster server and start it:
$ sudo apt install glusterfs-server
$ sudo systemctl start glusterd
$ sudo systemctl status glusterd
$ sudo gluster peer status
- Create the first volume:
$ sudo mkdir -p /data/brick1/gv0
$ sudo gluster volume create gv0 brick-0:/data/brick1/gv0
$ sudo gluster volume start gv0
$ sudo gluster volume info
- Now it should be ready to mount:
$ sudo mkdir /data/data
$ sudo mount -t glusterfs localhost:/gv0 /data/data
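To make both mounts come back after a reboot, something like this in /etc/fstab should do the job (untested here; _netdev keeps the Gluster mount from being attempted before the network is up, and it may also need to wait on glusterd itself since the server is local):
# brick filesystem
/dev/vdb1 /data/brick1 ext4 defaults 0 2
# Gluster volume mounted over FUSE
localhost:/gv0 /data/data glusterfs defaults,_netdev 0 0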
I definitely need to rethink my directory layout. I could hide the Gluster volumes by placing them in, for example, /data/.gluster, but that immediately sets off warnings in my head. Maybe /srv/gluster.
I should mention here that it is almost certainly not best practice to combine Gluster bricks and K8s workers on the same nodes, but in this case I am only using Gluster to support the K8s and I don’t have multi-user issues or concerns. I am still interested in exploring this, though, because I really like the idea of k8s bricks with onboard, shared, persistent storage.
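Just to sketch what that idea could look like once the cluster is up (the names here are made up, and this is the blunt hostPath approach rather than a proper Gluster provisioner or CSI driver): because the Gluster volume will be mounted at the same path on every node, a PersistentVolume can simply point at that path, with the sharing coming from Gluster rather than from Kubernetes.
$ microk8s kubectl apply -f - <<EOF
# hypothetical PV backed by the local Gluster mount on each node
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-hostpath-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany   # honest only because Gluster makes the path genuinely shared
  storageClassName: gluster-hostpath
  hostPath:
    path: /data/data
EOF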
So at this point I did a basic test and yee-haw, I can read and write to this filesystem. A basic performance test shows writes take seven times as long and reads take about 1.5 times as long. In a limited, one-time test:
$ cd $HOME
$ time for i in {0..1000}; do echo 'test' > "test${i}.txt"; done
real 0m0.182s
user 0m0.011s
sys 0m0.026s
$ time for i in {0..1000}; do cat "test${i}.txt" > /dev/null; done
real 0m0.914s
user 0m0.676s
sys 0m0.299s
$ cd $GLUSTERDIR
$ time for i in {0..1000}; do echo 'test' > "test${i}.txt"; done
real 0m1.309s
user 0m0.043s
sys 0m0.051s
$ time for i in {0..1000}; do cat "test${i}.txt" > /dev/null; done
real 0m1.462s
user 0m0.719s
sys 0m0.310s
This is a doofy test and I’m not overly concerned about write performance for my purposes, but we’ll see. It’s also without any other Gluster bricks, so the writes aren’t waiting on network traffic to establish synchronization (on the metadata, anyway; I assume writes wait on synchronizing the metadata across all the bricks, but not the data itself).
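Once there is a second brick I can check that assumption instead of guessing: Gluster ships a profiler, and heal info shows what is still pending replication. Roughly (again, untested until the second node exists):
$ sudo gluster volume profile gv0 start
$ sudo gluster volume profile gv0 info
# for a replicated volume, lists entries waiting to be synced
$ sudo gluster volume heal gv0 info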
Anyway, continuing on to Kubernetes.
MicroK8s
Setting up MicroK8s is also easy, but last time around I tripped over a few gotchas. I’m going to trip over some of them again because my notes are incomplete. (On the bright side, this’ll give me an opportunity to note them down this time.)
- Set the hostname. I didn’t do this before Gluster and probably should have, but oops.
$ sudo hostnamectl set-hostname brick0
- Set the domain. Add an entry in /etc/hosts mapping the IP to the FQDN and then the hostname, like so:
10.0.0.16 brick0.example.org brick0
Without this, the nodes may not be able to talk to each other, depending on the DNS setup on the infrastructure. In this particular case the FQDN would be used, which would resolve to the public address of the cluster, and inter-node communication would be broken. For the broken laptops it probably won’t matter, since it’ll all be one flat intranet (at this stage I don’t have a way to segregate the networks as I’d like to).
- Install MicroK8s. There’s no choice other than Snap.
$ sudo snap install microk8s --classic --channel=1.19
- Add self to the microk8s group to have access to some commands without sudo.
$ sudo usermod -a -G microk8s $USER
- Edit /var/snap/microk8s/current/certs/csr.conf.template so the CSR has the values I would like. The DN information does not really matter as this is for internal use (I still like to use something more or less correct), but the alt_names section is important. I set one to the hostname I’ll use from outside the cluster. This template will be copied to the other nodes as well and customized for each; each will need its own certificate.
- Refresh the certs (this takes a while):
$ sudo microk8s refresh-certs
- Install some plugins. DNS and Storage are recommended by the documentation. RBAC is something I will need for enabling users and deployment accounts.
$ microk8s enable dns storage rbac
- The MicroK8s documentation gives the impression that you’ll use microk8s kubectl for everything, but that’s not necessary: this’ll just be another k8s cluster I manage. MicroK8s does get you started with the configuration, though:
$ microk8s config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [ ... ]
    server: https://10.0.0.16:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: [ ... ]
This has all the necessary parts, but I don’t refer to this cluster as “microk8s” elsewhere; I’m going to give it a different name. Of course the cluster and context stanzas have to all relate properly, so ensure any name changes are propagated throughout the configuration. For example, clusters::cluster::name must be matched by contexts::context::cluster to keep the references intact. As well, of course, I’ll need to merge it into the rest of my ~/.kube/config (one way to do the rename and the merge is sketched below). But this does handily wrap up the admin token and cluster certificate for me.
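For the rename and the merge, something like this should work (the context name brick0 is just what I’m leaning towards; merging via the KUBECONFIG list and kubectl config view --flatten is the standard trick):
$ microk8s config > /tmp/brick0-kubeconfig.yaml
# give the context a name I'll actually use; cluster and user names can be edited in the file
$ kubectl --kubeconfig /tmp/brick0-kubeconfig.yaml config rename-context microk8s brick0
# merge into the existing config; --flatten inlines certificate and token data
$ KUBECONFIG=~/.kube/config:/tmp/brick0-kubeconfig.yaml kubectl config view --flatten > /tmp/merged-config
$ mv /tmp/merged-config ~/.kube/config
$ kubectl config get-contexts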
At this stage I have a one-node k8s/Gluster cluster. I won’t be able to reach it from outside yet since I don’t have a reverse proxy set up to get at either services or the API, so that’ll come next.