Hosted Website
So I mentioned that I had registered a new domain recently. I also mentioned that I was going to host a website for this on the virtual private servers I have for my mail servers. I had a couple of prerequisites that needed meeting before setting this up.
New gTLD domain
I have always been sceptical of the new generic top-level domains; I saw them as ICANN shamelessly cashing in on something it had the power to control. Because of this I have, until now, avoided them. However, my current domain name is quite long, and I have long wanted something shorter, but the good ones that might apply to me have all been taken.
But the time has come to admit that the new generic top-level domains are here to stay, so I have swallowed my pride, only to find that most of the good names are already gone anyway. One suitable name was still available, though, so I have registered stewart.zone. I'm going to use it to set up a website that isn't hosted on my home connection. For this I'm going to use the hosted virtual servers I already have for my mail servers, but that means setting up a backup process for them, as they will no longer just be mail servers. Rather than trust my ability to reconfigure a new server from scratch, I'm going to trust in my ability to back up the configurations in a sensible way, and save myself the trouble of manually rebuilding them if anything goes wrong.
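I haven't settled on the details of that backup process yet, but as a rough sketch of the sort of thing I have in mind (the hostname and paths here are placeholders, not decisions I've made): a nightly archive of each server's configuration shipped across to its partner.

```
# Purely illustrative: tar up /etc each night and copy it to the other server.
# "mail2.example.com" and the backup paths are placeholders.
tar -czf /var/backups/config-$(hostname)-$(date +%F).tar.gz /etc
scp /var/backups/config-$(hostname)-$(date +%F).tar.gz backup@mail2.example.com:/var/backups/
```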
This will also give me an opportunity to build a website that isn't quite so ugly, and isn't lumbered with some of the "features" of my current site that I haven't had the heart to do away with but are a bit rubbish. Once this is done successfully I'll look at migrating my current site over to the new hosts and the new design, and then I won't need to open up firewall rules on my router any more.
Mail Server Update
So in a previous blog post I set up Postfix and Dovecot by loosely following an online guide. Well, the author of that guide has updated it for Debian stretch. This doesn't help me much, as I already built my mail servers on Debian stretch by adapting his previous guide, but some of the changes do interest me. I have been meaning to set up DKIM and DMARC, and the new guide includes instructions for doing so. It also covers setting up ClamAV, which wouldn't hurt. However, the ClamAV instructions depend on using a new anti-spam tool, and I am actually getting on well with SpamAssassin; on top of that, the new tool is not in the default Debian repositories, which puts me off somewhat, although they do provide an APT repository for stretch, which eases that concern a little. The new tool also supports some features I may be interested in, including greylisting shared across hosts by using Redis (a piece of software I may be a little familiar with), a possibility that intrigues me. I am going to read the new guide and decide whether there is anything I wish to take from it; if so, I shall almost certainly write a new blog post on the matter, and if not, I probably won't.
Housekeeping on My Mail Servers
So I've had my mail servers set up and working for a month now, and there are a few things I haven't done. My old mail server is still set to send from the craig-james-stewart.co.uk domain by default, and it is no longer listed in the SPF record as a sender for that domain, so I have had to fix that so that I can continue to receive emails from it seamlessly. I've also had to alter the contact form on my website for the same reason. As well as these minor tweaks, I have come to the realisation that I ignored time drift when setting up the mail servers; that was easily corrected by installing ntpd in its default configuration on Debian, apart from having to accommodate my rather strict iptables rules. Having fixed that, the only thing left to do is configure certbot to auto-renew my SSL certificates, which is as simple as adding a couple of cron entries. So now I have two mail servers that will continue to work with little maintenance effort. I still need to look at DKIM and DMARC, but those can wait.
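For the record, the firewall tweak and the cron entries look roughly like this. The times, the NTP rule and the exact services reloaded are my illustrative assumptions rather than a faithful copy of my setup.

```
# Illustrative only: allow outbound NTP through the otherwise strict iptables
# rules so ntpd can reach its pool servers.
iptables -A OUTPUT -p udp --dport 123 -j ACCEPT

# Illustrative /etc/cron.d entries: try a renewal twice a day, and reload the
# services that use the certificates if one is actually renewed.
17 3  * * * root certbot renew --quiet --post-hook "systemctl reload apache2 postfix dovecot"
17 15 * * * root certbot renew --quiet --post-hook "systemctl reload apache2 postfix dovecot"
```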
Getting Postfix and Dovecot working
In my last blog post I set up Apache and certbot and got the SSL certificates I needed for my new mail server. This post was going to be about Postfix, but as I found a handy guide online I followed some of it to get what I wanted. That is to say, I followed the steps that made sense, skipped the ones that conflicted with my requirements, and altered the ones that didn't apply because of changes I had made. This gave me a reasonable setup on two servers that could each act independently, but lacked the mailbox sync that would let me use them as a single mail infrastructure. To be fair, the only things that really needed changing in any great detail were the Dovecot userdb settings, to allow doveadm to enumerate the users and pick up the correct settings; most of the remaining changes were trivial (SSL cert locations, for example). I also skipped all of the optional extras (like Roundcube and phpMyAdmin). After this I had to configure Dovecot mailbox sync as per their guide, and tweak the SSL settings to harden them, and now I have new mail servers. It took longer than I would like, and I have less to say than I did for the previous steps, but all is now working. I have, however, decided to look into DKIM and DMARC settings, as I have already configured SPF and there is a nice guide to follow linked from the comments on the guide I followed to get Postfix installed and working.
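For anyone curious what the mailbox sync side looks like, the Dovecot replication configuration is roughly along these lines. This is a sketch based on the upstream documentation rather than my exact files; the hostname, port, password and the vmail user are placeholders or assumptions.

```
# /etc/dovecot/conf.d/90-replication.conf (sketch; each server points at the other)
mail_plugins = $mail_plugins notify replication

service replicator {
  process_min_avail = 1
}

service aggregator {
  # "vmail" assumed to be the virtual mail user from the guide being followed
  fifo_listener replication-notify-fifo {
    user = vmail
  }
  unix_listener replication-notify {
    user = vmail
  }
}

service doveadm {
  inet_listener {
    port = 12345
  }
}

doveadm_port = 12345
# placeholder shared secret for doveadm connections between the two servers
doveadm_password = a-long-shared-secret

plugin {
  # the other server in the pair; a placeholder hostname
  mail_replica = tcp:mail2.example.com
}
```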
Apache Config SSL and certbot
So after my last blog post I decided that this one should be less rushed, and more practised and tested, which turns out to be a good thing. After my last blog post the hosted servers I have didn't work over IPv6. This was due to the hosting firm's use of SLAAC to configure the external IPv6 address and routing, and my use of iptables to block all traffic that wasn't otherwise allowed. I allowed ICMP echo requests on IPv4, but those commands raised an error when I transposed them to IPv6, so I left them out. This meant that SLAAC, which relies on ICMPv6 to work, did not work. That has been rectified now. So, on to Apache and SSL certs. One of the requirements I had for these servers was the ability to swap between them via DNS, and as I do not know how to configure Postfix to use multiple SSL certs based upon the domain being connected to, I decided the easiest way to do that would be to get, for each server, a cert with a CNAME to that shared domain. With HTTP authentication, Let's Encrypt has you put a file on disk and then requests that file from the domain being validated. This would be a problem for the server that the shared domain is not currently pointed at.
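For completeness, the ICMPv6 fix itself is roughly the following. This is a sketch assuming the default-drop chains described elsewhere, and my guess at why the transposed command errored is that ip6tables uses a different protocol name and match than the IPv4 rule.

```
# IPv4: allow ping, as before
iptables  -A INPUT -p icmp --icmp-type echo-request -j ACCEPT

# IPv6: the protocol is icmpv6 (with --icmpv6-type rather than --icmp-type),
# which is my guess at why the IPv4-style command failed. SLAAC and neighbour
# discovery ride on ICMPv6, so a blanket allow is the simple fix here.
ip6tables -A INPUT -p icmpv6 -j ACCEPT
```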
IPTables config
So in my last blog post I promised that I would talk about iptables, and basically I have been a little lax in getting started with configuring the iptables rules on the new servers I have set up. I mentioned that iptables is quite powerful, and it can be if configured to be so, but I am using it as a basic firewall, so that should I accidentally configure a service to listen on an external port, it won't be reachable. On top of this I am going to set the rules up such that the three default chains drop packets that don't match any rule, meaning I am using it as a first-match-allows firewall with a default drop.
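As a rough illustration of what I mean, rather than my exact ruleset (the allowed ports here are examples only):

```
# Default policies: anything not explicitly allowed below gets dropped.
iptables -P INPUT   DROP
iptables -P FORWARD DROP
iptables -P OUTPUT  DROP

# Loopback and established/related traffic.
iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT  -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# First match allows the flow: only the services this host should expose.
iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # ssh
iptables -A INPUT -p tcp --dport 25 -j ACCEPT   # smtp

# Outbound connections the host itself needs to make.
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT                       # dns
iptables -A OUTPUT -p tcp -m multiport --dports 25,80,443 -j ACCEPT  # smtp, http(s)
```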
A new project, emails
So, when I started this blog I wanted it to be a record of my learning of new skills, particularly around electronics. That hasn't happened, and now that I have a new project to start, it isn't about to: this project is very much within my skill set (or at least it should be). A little background: I have been running my website and email server on my home connection for years. When I moved into my house I got an internet connection with a company that was a good ISP for those who were a little more knowledgeable about networking and computers. Back then I was a novice, but with an ISP more forgiving of advanced use of an internet connection I could host a website, and emails, without paying any extra money for a proper hosting solution. This has led to me being the only person on my street with a wireless internet connection during a power outage, but that is not really the point. Since then there has been a great deal of consolidation in the UK ISP market, and my ISP, PlusNet, was bought, some time ago, by BT. Until recently this wasn't really an issue; nothing much changed, and BT kept PlusNet at arm's length. But for some reason PlusNet have now chosen to add the block of IP addresses containing my connection's static IP to Spamhaus' Policy Block List, which marks my internet connection as not suitable for email hosting. So my new project is to move my emails onto a proper hosting solution.
So I decided to join a professional body.
I have for a long time thought that the IT industry has an issue with how people within it present themselves to the rest of the world. Everyone wants to be an "Engineer"; indeed my current job title is "DevOps Engineer" (a title I am not particularly enamoured with, but that is a matter for another time). We all know that Engineers create clever solutions to otherwise very difficult problems. The issue I have is that in many other fields where you find Engineers there are rules, regulations, and bodies that decide who gets to call themselves "Engineer" and what standards those people must meet. In most of these other fields there are highly defined Engineering Standards against which we can measure the ability and performance of those Engineers. In IT none of this is enforced. Now, I have been very lucky to work with some incredibly talented and intelligent individuals, and I do not wish to deride their contributions in any way, but without standards to measure ourselves against, using the term "Engineer" just cheapens it. Unfortunately I have no idea what the standards should be in IT, and I cannot articulate what it is about the way many of us in IT work that I feel falls short of proper Engineering; after all, I am no more an "Engineer" than anyone else in IT using that title, and claiming otherwise would be a lie. And so I have joined BCS, in the hope that I can get more exposure to the rest of the industry and perhaps learn more about what the standards I feel are missing should be.
I shall probably write more on this in the future, but for now, here's hoping that membership of a professional body will be a positive step towards understanding my industry, and how I can make it better.
Should have seen it coming!
So the SSL certificate that I used to secure my website (and other things) is no longer trusted by Chrome (as of version 57), and so I have been forced to upgrade to a Let's Encrypt SSL certificate. It's almost as if I could have predicted this state of affairs in advance. At least I can now rest assured that my SSL certs will be easy to keep up to date (I have set up what I believe to be the required automated steps to do just that; time will tell).