OSE Server

As of 2019, Open Source Ecology has exactly one production server; this article is specifically about that production server.

For information about OSE's development server, see OSE Development Server.

For information about OSE's staging server, see OSE Staging Server.

Introduction

The OSE Server is a critical piece of the OSE Development Stack; together, the OSE Software Stack and the OSE Server Stack are the two critical components of OSE's development infrastructure.

Uptime & Status Checks

If you think one of the OSE websites or services may be offline, you can verify their status at the following site:

* http://status.opensourceecology.org/

Note that this URL is just a convenient CNAME to uptime.statuscake.com, which is configured to redirect to our Public Reporting Dashboard here:

* https://uptime.statuscake.com/?TestID=itmHX7Pfj2

It may be a good idea to bookmark the above URL in case our site goes down, since a DNS issue could also prevent the CNAME redirect from status.opensourceecology.org from working.
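
If you suspect DNS problems, you can check the CNAME directly from any machine with the dig utility (packaged as bind-utils on CentOS or dnsutils on Debian); a quick sanity check:

# verify that status.opensourceecology.org still points at statuscake
dig +short status.opensourceecology.org CNAME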

Note that Statuscake also tracks uptime over months, and can send monthly uptime reports, as well as immediate email alerts when the site(s) go down. If you'd like to receive such alerts, contact the OSE System Administrator.

Adding Statuscake Checks

To modify our statuscake checks, you should login to the statuscake website using the credentials stored in keepass.

If you want the test to be public (appearing on http://status.opensourceecology.org), you should add it by editing the Public Reporting Dashboard.

OSE Server Management

Assessment of Server Options

In 2016, OSE purchased a dedicated hosting plan from Hetzner. We call this server 'hetzner2'.

Prior to this, from 2011 we used a shared hosting plan from Hetzner (on a dedicated host, but without root access) for about the same price, though it was much less powerful (AMD Athlon 64 X2 5600+ processor, 4 GB RAM, 2x 400 GB hard disks, 1 Gbit/s connection). We call this server 'hetzner1'.

In 2018, we consolidated all our sites onto the under-utilized hetzner2 & canceled hetzner1.

Hetzner2 is a CentOS 7 server with 4 CPU cores, 64G RAM, and 2x 250G SSDs in a software RAID1. When I (Michael Altfield) adopted this system in mid-2017, it was already built, but running only the Open Building Institute website. I heavily modified its config, added varnish as a cache (using much of the 64G RAM!) plus nginx for https, and migrated all our sites onto it.

OSE server specs on Hetzner as of June 2016. The older server had only 4 GB RAM, compared to the 64 GB in the upgrade--16x the RAM at lower cost. Thus, OSE should assess new server plans at Hetzner every few years due to the dropping cost of hardware.

OSE Server and Server Requirements

As of May 2018, the server (hetzner2) is very overprovisioned.

The 4-core system's load average is about 0.14, with sustained spikes to about 2 several times per week. Therefore, we could probably run this site on 2 cores, but we certainly don't need more than 4 cores.

This idle CPU is largely due to running a large varnish cache in memory. The system has 62G of usable RAM. I set varnish to use 40G, but that's more than is needed. The system has 57G of available RAM, suggesting that we're only using 5G. So we could probably run on a system with 8G of RAM and disks large enough for swap (varnish with spill-over to swap is still probably faster than rendering the page over-and-over on the apache backend with mediawiki or plugin-bogged-down wordpress installs). For reference, varnish uses ~1.1G of RAM, mysqld uses ~0.7G, httpd uses ~0.8G, nginx uses ~0.2G, and systemd-journald uses ~0.1G.

The disks are currently 33% full, but we've filled them many times when creating & restoring backups, migrating around the wiki, creating staging sites, etc. 250G feels about right; I don't think we should go smaller than 200G here. For reference, /var/www is 22G, /var/log is 0.2G, /var/tmp is 3.0G, /var/lib/mysql is 2.8G, /var/ossec is 1.9G, and /root/backups is 29G.
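
The figures above can be reproduced with standard tools; a minimal sketch (output formats vary slightly by version):

# 1/5/15-minute load averages
uptime

# total, used, & available memory
free -h

# top memory consumers by resident set size (RSS, in KB)
ps -eo rss,comm --sort=-rss | head

# disk usage of the directories cited above
du -sh /var/www /var/log /var/tmp /var/lib/mysql /var/ossec /root/backups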

In conclusion, we require 2-4 cores, 8-16 G RAM, and ~200G disk.

Looking Forward

Hint: Note that in 2022, we can get 2x the hard drive for the same price - 1TB instead of 500GB - Hetzner 2022 Dedicated Server

First, about Hetzner. Within my (Michael Altfield's) first few months at OSE (in mid 2017), our site went down for what was probably longer than 1 day (1)(2). We got no notification from Hetzner, no explanation why, and no notice when it came back. I emailed them asking for this information, and they weren't helpful. So Hetzner's availability & customer support are awful, but we've been paying 40 EUR per month for a dedicated server with 64G of RAM since 2016. And the datacenter is powered by renewable energy. That's hard to beat. At this time, I think Hetzner is our best option.

That said, after being a customer with Hetzner for 5 years (from 2011), we bought a new plan (in 2016). The 2016 plan cost the same as our 2011 plan, but the RAM on our system, for example, jumped from 4G to 64G! Therefore, it would be wise to change plans on Hetzner every few years so that we get a significant upgrade on resources for the same fee.

Hetzner Cloud

Based on the calculations above, their cloud platform seems tempting.

  • CX51 - For 30 EUR/month, we can get a Hetzner cloud machine with 8 vCPU, 32G RAM, 240G disk space, & 20T traffic. If we migrated to that now, we'd save 120 EUR/year and still be overprovisioned.
  • CX41 - Or for 16 EUR/month, we can get a 4 vCPU, 16G RAM, 160G disk, & 20T traffic node. That provisioning sounds about right, with just enough headroom to be safe. Disk may be a concern, but we can mount up to 10T for an additional fee--though it's not clear how its performance & cost-effectiveness would compare to an actual local disk on a dedicated server.

HetznerCloud2018.png

Dedicated Servers

2019-12: I spy a better server for the same price

Moving to the cloud may not be the best option from a maintenance perspective. Dividing things up into microservices certainly has advantages, but most of them are realized when you're serving millions of customers with cyclic demand, so you can spin up capacity during the day and spin it down at night. At OSE, we wouldn't realize many of these benefits, and therefore it may not be worth it.

Just as an example, our current web server configuration has https terminated by nginx. Traffic to varnish and to the apache backend doesn't have to be encrypted because it all occurs on the same machine over the loopback interface. Once we start moving traffic between many distinct cloud instances, all that traffic would have to be encrypted again, which some items in the stack (varnish) can't even do in their free version (though a workaround could be built with stunnel). This is just one example of the changes that would be necessary and the complexity that would be introduced by moving to the cloud.
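
For illustration only: if we did split the stack across cloud nodes, one option for wrapping varnish's unencrypted traffic would be an stunnel tunnel between the nodes. A minimal sketch (the hostnames, ports, & file paths here are hypothetical, not our actual config):

# /etc/stunnel/varnish.conf on the varnish node (TLS server side)
[varnish-tls]
accept = 6086
connect = 127.0.0.1:6081
cert = /etc/pki/tls/certs/varnish.pem
key = /etc/pki/tls/private/varnish.key

# /etc/stunnel/varnish.conf on the nginx node (TLS client side)
client = yes
[varnish-tls]
accept = 127.0.0.1:6081
connect = varnish.internal.example.com:6086

With that in place, nginx on the client node would still proxy to 127.0.0.1:6081 as it does today, and stunnel would carry the traffic encrypted across the network.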

My (Michael Altfield) advice: don't migrate to the cloud until OSE has at least 1 full-time IT staff to manage that complexity. Otherwise you're setting yourself up for failure.

A simpler solution would be to just stick with dedicated servers. As shown above, our current server as of 2019-12 is super idle. The only thing that's approaching capacity is our default RAID1 250G SSD disks. But we can easily add up to two additional disks of up to 12T to our dedicated server and move everything from /var/ onto them if we hit this limit:

* https://wiki.hetzner.de/index.php/Root_Server_Hardware/en#Drives

And as of 2019-12, we can "upgrade" to a dedicated server for the same price that we're currently paying and get a better processor, the same RAM, and 8x the disk space, albeit HDD instead of SSD (AX41[1]):

* https://www.hetzner.com/dedicated-rootserver
* https://www.hetzner.com/sb

Provisioning

OSE does not use any provisioning software to manage, for example, the users/packages/files on our server. This is intentional.

As of 2018, we have no need to scale beyond 1 server; this makes both the benefits & complexity of a load balancer & stateless web servers that can be spun-up as needed (which is the best use-case for provisioning solutions) irrelevant.

The biggest con of not using a provisioning tool is that rebuilding our server from backups after a catastrophic failure is an annoyingly manual & time-consuming process. However, with our current architecture, the reality is that--if we were to put our configs in a provisioning tool--it would be just as manual & time-consuming (if not worse!). This is because of config rot. Unless nodes are actively being destroyed & launched with the provisioning tool, changes will end up being made to the node directly and never checked into the provisioning tool. Unfortunately, this configuration drift is highly likely in a small nonprofit organization with sysadmins coming & going, managing a single server that is never destroyed & re-provisioned.

I (Michael Altfield) am very familiar with provisioning tools. I've written one from scratch. I've used Puppet, Chef, Ansible, etc. I love them. But the inevitable config rot/drift described above would mean that use of a provisioning tool would make our maintenance *more* complex, not less.

Therefore, the source of truth for our server's users/packages/files is our backups.

If our server experiences a catastrophic failure requiring a rebuild, the restore will necessarily be time-consuming (taking maybe a few days of work), but the data will be in exactly 1 trustworthy place. This is better than trying to restore from provisioning files, finding that things are broken because some files were missing (or different because someone just commented-out the puppet cron to "make it work") from the provisioned configs, trying to diff the backups from the provisioner's files, and then just giving up & going with the backups anyway.

If we get to the point where we actually autoscale stateless servers behind a load balancer, and we can ensure that our stateless servers are being intentionally destroyed & rebuilt at least a few times per week to prevent provisioning config rot/drift, then we *should* use a provisioning tool.

In the meantime, rebuilding our server after catastrophic failure means (a sketch of the restore steps follows this list):

  1. Downloading the most recent full backup of our server (Hopefully nightlies are available. Maybe we have to fall back on our once-a-month backups stored to Glacier)
  2. Installing a fresh server with the OS matching the previous server's OS (ie: CentOS), perhaps using a newer version
  3. Installing the packages needed on the new server
  4. Copying the config files from the backups to the new server
  5. Copying & restoring the db contents to the new server
  6. Copying & restoring the web roots to the new server
  7. Test, fix, reboot, etc., until it can reboot & work as expected.
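
As a hedged sketch, steps 4-6 might look something like this, assuming the downloaded backup has been decrypted & extracted into /root/restore (these paths are illustrative; the real layout is whatever the backup script produced):

# 4. restore os-level & service config files
rsync -av /root/restore/etc/ /etc/

# 5. restore all databases from the mysqldump
mysql -u root -p < /root/restore/mysqldump_all.sql

# 6. restore the web roots & fix ownership (apache is the CentOS httpd user)
rsync -av /root/restore/var/www/ /var/www/
chown -R apache:apache /var/www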

Here are some package hints that you'll want to ensure are installed (probably in this order); a rough install sketch follows the list. Be sure to grep the backups for their config files, and restore the configs. But, again, this doc itself is going to rot; the source of truth is the backups.

  1. sshd
  2. iptables
  3. ip6tables
  4. OSSEC
  5. our backup scripts
  6. crond
  7. mariadb
  8. certbot (Let's Encrypt)
  9. nginx
  10. varnish
  11. php
  12. apcu
  13. httpd (apache)
  14. logrotate
  15. awstats
  16. munin

Don't forget to test & verify backups are working!

SSH

Our server has ssh access. If you require access to ssh, contact the OSE System Administrator with subject "ssh access request," and include the following information in the body of the email:

  1. An explanation as to why you need ssh access
  2. What you need access to
  3. A link to a portfolio of prior work with linux on the command line that demonstrates your competency using the command line safely
  4. A few references for previous work in which you had experience working with linux over the command line

Add new users

The following steps will add a new user to the OSE Server.

First, create the new user. Generate & set a temporary, 100-character, random, alpha-numeric password for the user.

useradd <new_username>
passwd <new_username>
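
One way to generate such a password (a sketch using standard tools):

# print 100 random alpha-numeric characters from the kernel's entropy pool
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 100; echo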

Only if it's necessary, send this password to the user through a confidential/encrypted medium (ie: the Wire app). They would need it if they want to reset their password. Note that they will not be able to authenticate with their password over ssh, and this is intentional. In fact, it is unlikely they will need their password at all, unless perhaps they require sudo access. For this reason, it's best to set this password "just in case," not save it, and not send it to the user--it's more likely to confuse them. If they need their password for some reason in the future, you can reset it to a new random password as the root user, and send it to them over an encrypted medium.

If the user needs ssh access, add them to the 'sshaccess' group.

gpasswd -a <new_username> sshaccess

Have the user generate a strong rsa keypair using the following commands, which should be run on the new user's computer, not the server. Make sure they encrypt the key with a strong passphrase--this effectively gives them 2FA (the key file plus the passphrase). Then have them send you their new public key:

ssh-keygen -t rsa -b 4096 -o -a 100
cat /home/<username>/.ssh/id_rsa.pub

The output from the `cat` command above is their public key. Have them send this to you. They can use an insecure medium such as email, as there is no reason to keep the public key confidential. They should never, ever send their private key (/home/<username>/.ssh/id_rsa) to anyone. Moreover, the private key should not be copied to any other computer, except in an encrypted backup. Note this means that the user should not copy their private key to OSE servers--that's what ssh agents are for.

Now, add the ssh public key provided by the user to their authorized_keys file on the OSE Server, and set the permissions:

cd /home/<new_username>
mkdir /home/<new_username>/.ssh
vim /home/<new_username>/.ssh/authorized_keys
chown -R <new_username>:<new_username> /home/<new_username>/.ssh
chmod 700 /home/<new_username>/.ssh
chmod 644 /home/<new_username>/.ssh/authorized_keys

If the user needs sudo permissions, edit the sudoers file. This should only be done in very, very, very rare cases for users who have >5 years of experience working as a Linux Systems Administrator. Users with sudo access must be able to demonstrate a very high level of trust, experience, and competence working on the command line in a linux environment.
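
On CentOS 7 (which this server runs), a sketch of the two usual approaches, assuming the default '%wheel ALL=(ALL) ALL' rule is enabled in /etc/sudoers:

# always edit the sudoers file with visudo, which syntax-checks before saving
visudo

# or, add the user to the wheel group covered by the %wheel rule
gpasswd -a <username> wheel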

Backups

Hint: Backup milestone reached on Feb 16, 2019. Now the daily, weekly, monthly, and yearly backups appear to be accumulating (and self-deleting) as desired in our Backblaze B2 'ose-server-backups' bucket.

How to obtain & decrypt a backup from Backblaze B2

* https://wiki.opensourceecology.org/wiki/Backblaze

Our bill last month was $0.78, and Backblaze estimates that the upcoming monthly bill will be $0.86. Surely this will continue to rise (we should expect to store 500-1000G on B2; currently we have just 16 backups totaling 193G), but we should be spending far less on B2 than the >$100/year that was estimated for Amazon Glacier, considering Glacier's minimum-archive-lifetime fine print.

I think this is the most important thing that I have achieved at OSE. I still want to add some logic to the backup script that will email us when a nightly backup fails for some reason, but our backup solution (and therefore all of OSE Server's data) has never been as safe & stable as it is today.

We actively back up our server's data on a daily basis.

Logging In

We use a shared Keepass file that lives on the OSE server for server-related logins. The password for Backblaze for server backups is on this shared Keepass file.

Important Files & Directories

The following files/directories are related to the daily backup process:

  1. /root/backups/backup.sh This is the script that performs the backups
  2. /root/backups/backupReport.sh This is the script that performs sanity checks on the remote backups and sends emails with the results
  3. /root/backups/sync/ This is where backup files are stored before they're rsync'd to the storage server. '/root/backups/sync*' is explicitly excluded from backups itself to prevent a recursive nightmare.
  4. /root/backups/sync.old/ This is where the files from the previous backup are stored; they're deleted by the backup script at the beginning of a new backup, and replaced by the files from 'sync'
  5. /root/backups/backup.settings This holds important variables for the backup script. Note that this file should be on heavy lockdown, as it contains critical credentials (passwords).
  6. /etc/cron.d/backup_to_backblaze This file tells the cron daemon to execute the backup script at 07:20 UTC, which is roughly midnight in North America--a time of low traffic for the OSE Server (a sketch of this file follows the list)
  7. /var/log/backups/backup.log The backup script logs to this file
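
For illustration, the cron entry in /etc/cron.d/backup_to_backblaze presumably looks something like this sketch of the standard /etc/cron.d format (the actual arguments & redirection are whatever is in the file itself):

# min hour day-of-month month day-of-week user command
20 7 * * * root /root/backups/backup.sh >> /var/log/backups/backup.log 2>&1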

What's Backed-Up

Here is what is being backed-up (a sketch of the gathering steps follows this list):

  1. mysqldump of all databases - including phpList?
  2. all files in /etc/* - these files include many os-level config files
  3. all files in /home/* (except the '/home/b2user/sync*' dirs) - these files include our users' home directories
  4. all files in /var/log/* - these files include log files
  5. all files in /root/* (except the 'backups/sync*' dirs) - these files include the root user's home directory
  6. all files in /var/www/* - these files include our web server files. The picture & media file storage location is application-specific. For wordpress, they're in wp-content/<year>/<month>/<file>. For mediawiki, they're in images/<dir>/<dir>/. For phplist, they're in uploadimages/. Etc.
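
In outline, gathering that set looks something like the following sketch (not the actual backup.sh; the real exclude list, naming, & encryption live in the script, and $SYNC_DIR stands in for /root/backups/sync/):

# 1. dump every database into a single file
mysqldump --all-databases > "$SYNC_DIR/mysqldump_all.sql"

# 2-6. archive the backed-up directories, skipping the sync dirs to avoid recursion
tar -czf "$SYNC_DIR/files.tar.gz" \
    --exclude='/home/b2user/sync*' \
    --exclude='/root/backups/sync*' \
    /etc /home /var/log /root /var/www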

Backup Server

OSE uses Backblaze B2 to store our encrypted backup archives in the cloud. For documentation on how to restore from a backup stored in Backblaze B2, see Backblaze#Restore_from_backups.

Note that prior to 2019, OSE stored some backups to Amazon Glacier, but the billing fine-print of Glacier (minimum data retention) made Glacier unreasonably expensive for our daily backups.

Also note that, as a nonprofit, we're eligible for an "unlimited" storage account with Dreamhost. But, in fact, this isn't actually unlimited. Indeed, storing backups on Dreamhost is a violation of their policy, and Dreamhost has contacted OSE in the past for violating this policy by storing backups on our account.

https

In 2017 & 2018, Michael Altfield migrated OSE sites to use https with Let's Encrypt certificates.

Nginx's https config was hardened using Mozilla's ssl-config-generator and the Qualys ssllabs.com SSL Server Test.

For more information on our https configuration, see Web server configuration#Nginx.

Keepass

Whenever possible, we should utilize per-user credentials for logins so there is a user-specific audit trail and we have user-specific authorization-revocation abilities. However, where this is not possible, we should store usernames & passwords that our OSE Server infrastructure depends on in a secure & shared location. At OSE, we store such passwords in an encrypted keepass database that lives on the server.

passwords.kdbx file

The passwords.kdbx file is encrypted; if an attacker obtains this file, they will not be able to access any useful information. That said, we keep it in a central location on the OSE Server behind lock & key for a few reasons:

  1. The OSE Server already has nightly backups, so keeping the passwords.kdbx on the server simplifies maintenance by reusing existing backup procedures for the keepass file
  2. By keeping the file in a central location & updating it with sshfs, we can prevent forks & merges of per-person keepass files, which would complicate maintenance. Note that writes to this file are extremely rare, so multi-user access to the same file is greatly simplified.
  3. The keepass file is available on a need-to-have basis to those with ssh authorization that have been added to the 'keepass' group.

The passwords.kdbx file should be owned by the user 'root' and the group 'keepass'. It should have the file permissions of 660 (such that it can be read & written by 'root' and users in the 'keepass' group, but not accessible in any way from anyone else).

The passwords.kdbx file should exist in a directory '/etc/keepass', which is owned by the user 'root' and the group 'keepass'. This directory should have permissions 770 (such that it can be read, written, & executed by 'root' and users in the 'keepass' group, but not accessible in any way from anyone else).
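
Concretely, that ownership & permission scheme can be applied as follows (run as root):

mkdir -p /etc/keepass
chown root:keepass /etc/keepass
chmod 770 /etc/keepass
chown root:keepass /etc/keepass/passwords.kdbx
chmod 660 /etc/keepass/passwords.kdbx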

Users should not store a copy of the passwords.kdbx file on their local machines. This file should only exist on the OSE Server (and therefore also in backups).

Unlocking passwords.kdbx

In order to unlock the passwords.kdbx file, you need:

  1. Keepass software on your personal computer capable of reading Keepass 2.x DB files
  2. sshfs installed on your personal computer
  3. ssh access to the OSE Server with a user account added to the 'keepass' group
  4. the keepass db password
  5. the keepass db key file

Note that the "Transform rounds" has been tuned to '87654321', which makes the unlock process take ~5 seconds. This also significantly decreases the effectiveness of brute-forcing the keys if an attacker obtains the passwords.kdbx file.

KeePassX

OSE Devs are recommended to use a linux personal computer. In this case, we recommend using the KeePassX client, which can be installed using the following command:

sudo apt-get install keepassx

sshfs

OSE Devs are recommended to use a linux personal computer. In this case, sshfs can be installed using the following command:

sudo apt-get install sshfs

You can now create a local directory on your personal computer where you can mount directories from the OSE Server onto your local filesystem. We'll also store your personal keepass file & the OSE passwords key file in '$HOME/keepass', so let's lock down the permissions as well:

mkdir -p $HOME/keepass/mnt/ose
chown -R `whoami`:`whoami` $HOME/keepass
find $HOME/keepass/ -type d -exec chmod 700 {} \;
find $HOME/keepass/ -type f -exec chmod 600 {} \;

ssh access

If you're working on a task that requires access to the passwords.kdbx file, you'll need to present your case to the OSE System Administrator and request ssh access with a user that's been added to the 'keepass' group. Send an email to the OSE System Administrator explaining:

  1. Why you require access to the OSE passwords.kdbx file and
  2. Why you can be trusted with all these credentials.

A System Administrator with root access can execute the following command on the OSE Server to add a user to the 'keepass' group:

gpasswd -a <username> keepass

Once you have an ssh user in the 'keepass' group on the OSE Server, you can mount the passwords.kdbx file to your personal computer's filesystem with the following command:

sshfs -p 32415 <username>@opensourceecology.org:/etc/keepass $HOME/keepass/mnt/ose

keepass db password

OSE Devs are recommended to use a linux personal computer & store their personal OSE-related usernames & passwords in a personal password manager, such as KeePassX.

If you don't already have one, open KeePassX and create your own personal keepass db file. Save it to '$HOME/keepass/keepass.kdbx'. Be sure to use a long, secure passphrase.

After the OSE System Administrator grants you access to the OSE shared keepass file, they will establish a secure channel with you to send you the keepass db password, which is a long, randomly generated string. When you receive this password, you should store it in your personal keepass db.

This password, along with the key file, is a key to unlocking the encrypted passwords.kdbx file. You should use extreme caution to ensure that this string is kept secret & secure. Never give it to anyone through an unencrypted channel, write it down, or save it to an unencrypted file.

keepass db key file

After the OSE System Administrator grants you access to the OSE shared keepass file, they will establish a secure channel with you to send you the keepass db key file, which is a randomly generated 4096-byte file.

This key file is the most important key to unlocking the encrypted passwords.kdbx file. You should use extreme caution to ensure that this file is kept secret & secure. Never give this key file to anyone through an unencrypted channel, save it on an unencrypted storage medium, or keep it on the same disk as the passwords.kdbx file.

This key file should never be stored or backed-up in the same location as the passwords.kdbx file. It would be a good idea to store it on an external USB drive kept in a safe, rather than keeping it stored on your computer.

Unmounting

Note that any action you take on a mounted filesystem on your local computer is applied to the source files on the server. Thus, DO NOT TRASH the mounted keepass file on your computer, as this action will trash the file on the server. Unmount the filesystem from your local computer first:

umount $HOME/keepass/mnt/ose

Shutting down your computer also serves to unmount the file system.
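
If umount fails with a permissions error, note that sshfs mounts are FUSE filesystems, which have their own unmount tool that works without root:

fusermount -u $HOME/keepass/mnt/ose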

Errors

An error that keeps coming up (as seen at 5:30 CST on Aug 6, 2019):

Keepasserror.png

Solution: kill the gnome-keyring daemon:

sudo pkill -f gnome-keyring-daemon

Keepasserror2.png

Solution? Did you create the directory to which it needs to be mounted? Specifically, run the mkdir, chown, and find commands from the sshfs section above: [1]

TODO

Current Tasks

  1. Discourse POC
  2. Email alerts with munin (or nagios)
  3. Ransomware-proof backups (append-only, offsite cold-storage) Maltfield_Log/2019_Q4#Mon_Dec_02.2C_2019
  4. Design & document webapp upgrade procedure using staging server
  5. Fix awstats cron to include $year in static output dir (prevent overwrite)
  6. Optimize load speeds for osemain (www.opensourceecology.org) (eliminate unused content, minify, lazy load, varnish, modpagespeed, cdn, etc)
  7. AskBot POC
  8. Janus POC
  9. Jitsi Videobridge POC
  10. LibreOffice Online (CODE) POC
  11. Upgrade/migrate hetzner dedicated server to new plan of same price

Tasks completed in 2018/2019

  1. Move offsite backup storage destination from Dreamhost to Backblaze B2
  2. Email alerts if nightly backups to backblaze fail
  3. Monthly backups status report email
  4. Phplist
  5. Provision OSE Development Server in Hetzner Cloud
  6. Put Dev server behind OpenVPN intranet
  7. 2FA for VPN
  8. Document guide for authorizing new users to VPN (audience: OSE sysadmin)
  9. Document guide for users to gain VPN access (audience: OSE devs)
  10. Provision OSE Staging Server as lxc container on dev server
  11. Sync prod server services/files/data from prod to staging
  12. Automation script for syncing from prod to staging

Deprecate Hetzner1 (2017/18)

When I first joined OSE, the primary goal was to migrate all services off of Hetzner 1 onto Hetzner 2 and to terminate our Hetzner 1 plan entirely. This project had many, many dependencies (we didn't even have a functioning backup solution!). It started in 2017 Q2 and finished in 2018 Q4.

  1. Backups
  2. Harden SSH
  3. Document how to add ssh users to Hetzner 2
  4. Statuscake
  5. Awstats
  6. OSSEC
  7. Harden Apache
  8. Harden PHP
  9. Harden Mysql
  10. iptables
  11. Let's Encrypt for OBI
  12. Organize & Harden Wordpress for OBI
  13. Qualys SSL labs validation && tweaking
  14. Varnish Cache
  15. Disable Cloudflare
  16. Fine-tune Wiki config
  17. Munin
  18. Keepass solution + documentation
  19. Migrate forum to hetzner2
  20. Migrate oswh to hetzner2
  21. Migrate fef to hetzner2
  22. Migrate wiki to hetzner2
  23. Migrate osemain to hetzner2
  24. Deprecate forum
  25. Harden oswh
  26. Harden fef
  27. Harden osemain
  28. Harden wiki
  29. Backup data on hetzner1 to long-term storage (aws glacier)
  30. Block hetzner1 traffic to all services (though easily revertible)
  31. End Hetzner1 contract
  32. Encrypted Backups

Links

Changes

As of 2018-07, we have no ticket tracking or change control process. For now, everything is on the wiki, as there are higher priorities. Hence, here are some articles used to track server-related changes:

  1. CHG-2018-07-06 hetzner1 deprecation - change to deprecate our hetzner1 server and contract by Michael Altfield

See Also

FAQ

  1. How does Awstats compare to Google Analytics?

References

  1. https://www.hetzner.com/dedicated-rootserver/ax41