====== My HomeLab ======

===== Requirements =====

  - Mixing disk sizes must be possible - Like most of us, I don't have a few hundred € to just drop on HDDs on the go. I purchase disks on a need basis, usually accumulated over months or years, so I must be able to add disks of different sizes. I accept losing the advantages of a classic RAID system; that is the price of mixing HDDs of different sizes. Since this is a HomeLab, a few hours or days until a replacement drive is mounted and the data is restored is not a big deal.
  - 3-2-1 GFS backup - Systems can be rebuilt, the data itself can't. I therefore want at least 3 copies of all data I consider important. At least 1 of those copies needs to be offsite, so that in case of fire, flood or whatever, the data is still there, ready to be cloned back to local hardware. Any data leaving my HomeLab must also be encrypted.
  - Easy / fast to re-deploy - Again, since the data is the important part, the system itself should be easy and fast to re-deploy. No "hacks" that can't be automated or aren't "future proof" allowed. If I were to lose the entire system, I don't want to spend more than, let's say, 1 hour getting the basic infrastructure back up. After that, everything should happen automatically, without input and without me being present.
  - Expandable without hassle - Referring back to requirement 1: I want to be able to add more disks, more RAM, more whatever, without the issues of, say, RAID5, where I would have to move the data to another pool, recreate the array, and move the data back. Also per requirement 1: if that means a few hours or days offline until the "lost" data is restored after a hardware failure, so be it.
  - "Future proof" - Nothing lasts forever. However, I don't want to use proprietary software which might change massively or be removed in the near future.
Everything must be rooted in a FOSS standard so that, if the software running X becomes unavailable for whatever reason, I can still use the underlying protocol to recover my data. As an example: even though I use FreeNAS to sync to Backblaze B2, if FreeNAS were to disappear from existence, I could still use rclone with decryption to recover my files from Backblaze B2.

===== Hardware =====

[TODO] List HomeLab Hardware

===== Software =====

  * [[engineering:computer_science:homelab:proxmox:proxmox|Proxmox]] as my main Hypervisor.
  * [[engineering:computer_science:homelab:freenas:freenas|FreeNAS]] as File, Media and Backup Server.
  * [[engineering:computer_science:docker:docker|Docker]] as container host for (micro)services.

===== How is it working? =====

==== Overview ====

I use [[engineering:computer_science:homelab:proxmox:proxmox|Proxmox]] as my main Hypervisor. This Hypervisor is responsible for virtualizing almost every part of the infrastructure:

  * FreeNAS running as a [[refractor_computer_science:sysadmin:homelab:freenas:freenas_backup_server|Backup Server]] - a.k.a. backup01
  * Windows 10 Pro as a [[refractor_computer_science:sysadmin:homelab:jumpbox:windows10_as_jumpbox|JumpBox]] - a.k.a. jumpbox01
  * Ubuntu running Pi-hole to act as a DNS server / ad blocker - a.k.a. dns01
  * FreeNAS running as a [[refractor_computer_science:sysadmin:homelab:freenas:freenas_file_server|File Server]] & Media Server - a.k.a. fileserver01
  * Ubuntu running [[engineering:computer_science:docker:docker|Docker]] - a.k.a. docker01

Every virtual machine (VM) has its own boot disk - usually around 10-15 GB - plus disks added for data only. The only exception to this rule is backup01, which has its drives mounted directly via passthrough. This is, of course, because it makes little sense to give the entirety of a disk to a VM just to create a qcow2 file inside it.

==== File Sharing and Media Streaming ====

All my files are stored on **fileserver01**.
I want - and should - have separation of data: fileserverX would hold only personal files, fileserverY only media files, fileserverZ the "public" shares. That is not possible at the moment due to a) my lack of resources on my server and b) the resources FreeNAS requires to actually run without feeling like a pre-2000s computer.

Anyhow, **fileserver01** shares my data over the home network via NFS and Samba, all controlled by ACLs. It also runs a few services to share the data with TVs, mobile devices, etc., to allow media / content streaming.

==== Backups ====

The backup infrastructure is as follows:

  * HDDs are passed through to a VM - backupX
  * backupX runs FreeNAS, with the HDDs forming a pool
  * fileserverX uses rclone to sync its data to backupX
    * to DataSetA every X days, and to DataSetB every Y days
  * backupX uses rclone with remote encryption to sync its data to Backblaze B2
    * DataSetA is synced every X days
    * DataSetB is synced every Y days

Concretely, backups are done as follows:

  * **fileserver01** uses rclone to sync its data to **backup01**
  * **backup01** uses rclone to sync (encrypted) files to **Backblaze B2**

"But what about the VMs?" you ask. Good question! At first I thought about just vzdump-ing from Proxmox to an NFS share on backup01, but:

  * Syncing 200-300 GB files to the cloud would be a P.I.T.A. if you ask me - never mind downloading them, keeping them updated, the overhead, etc.
  * Seriously, how long would it take to sync over 2 TB of data to the cloud, starting fresh every time? The wear and tear on the disks would be massive, since they would have to spin for days on end just to finish, only to start all over again because a new backup just dropped.

So, to take care of the VM configuration, I back up the configuration of each VM manually - yes, I know! - after the first deployment and turn it into a template - which is also synced to backup01, but not to Backblaze B2.
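The two-stage sync described above can be sketched with rclone. This is a minimal illustration, not my actual configuration: all remote names, pool paths and the bucket name are hypothetical placeholders.

```shell
# Hypothetical rclone setup - remote names, paths and bucket are placeholders.
#
# ~/.config/rclone/rclone.conf might contain:
#
#   [b2]
#   type = b2
#   account = <keyID>
#   key = <applicationKey>
#
#   [b2-crypt]
#   type = crypt
#   remote = b2:my-backup-bucket
#   password = <obscured via `rclone obscure`>
#   password2 = <obscured via `rclone obscure`>

# Stage 1: fileserver01 -> backup01 (e.g. over an NFS mount of the backup pool)
rclone sync /mnt/tank/data /mnt/backup01/DataSetA \
    --log-file /var/log/rclone-dataseta.log

# Stage 2: backup01 -> Backblaze B2; the crypt remote encrypts transparently,
# so only ciphertext ever leaves the HomeLab
rclone sync /mnt/pool/DataSetA b2-crypt:DataSetA \
    --transfers 8 --log-file /var/log/rclone-b2.log
```

Because the crypt remote wraps the plain b2 remote, recovery without FreeNAS only needs rclone plus the two passwords, which is exactly the "future proof" escape hatch from the requirements.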
In cases where the VM runs a special OS - take FreeNAS, for example - I just download the OS configuration from inside the VM, which can easily be backed up to, and restored from, backup01 / Backblaze B2.

I therefore always have at least 3 copies of my data:

  - The data being shared - on fileserverX
  - The data synced from fileserverX to backupY/DataSetA and backupY/DataSetB - which only happens every X/Y days
  - DataSetA and DataSetB synced to Backblaze B2 - which only happens every X/Y days

So I can usually go back anywhere from yesterday to 30 days, or even further if I were to increase the data retention policy in Backblaze B2.

=== In case of loss: ===

  * Disk lost on fileserverX
    * Restore from backupX
  * Disk lost on backupX
    * Back up from fileserverX to backupX again
  * Entire system lost
    * Restore from Backblaze B2 to backupX - or directly -, then from backupX to fileserverX

===== Remote Connection / Remote Management =====

Besides jumpbox01, I don't have any way of accessing my server from outside my network. That is on purpose and by design, since a) I rarely need anything from my server outside my home network, b) there are security concerns, which are related to c) not being able to port-forward my own VPN server. And while I could use a VPS as a VPN bridge, until that need arises, I will not be considering remote management anyway.

I've considered / I'm considering running a Docker service that watches a mailbox and acts on e-mails it receives from trusted e-mail addresses. That is, at the time of writing, neither implemented nor planned.

====== My VPS ======

[TODO] Add VPS Information

====== Main PC ======

[[engineering:computer_science:homelab:main_build|Main Desktop Builds]]