Using a NAS as a Firewall?

Recently, I have been trying to rejigger my home network to support a bunch of the usual things: a firewall, a VPN, ssh access, perhaps a media streaming server and, last but not least, a new backup solution. Someone I work with pointed me at Synology for the backup portion. As a result, I learned about their software product DiskStation Manager (DSM), which does a lot of cool things like file sharing and remote access to your data. The software also has a bunch of cool plugins for things like offsite backup (via Amazon Glacier) and running a web server. I also found that QNAP has similar software. Both also seem to support at least the letter of the GPL by publishing some (all?) of their code (Synology, QNAP respectively) on SourceForge.

However, and here is the big “but,” it strikes me as incredibly dangerous to run my network on the same hardware where all of my data lives. In other words, if you root my network connectivity, you now have instant access to all my data. I think the concerns go the other way as well. There are a ton of WiFi APs that offer support for publishing data (via USB drive) and printers, which seems equally dangerous. One I think looks cool is this Asus one; however, it doesn’t seem to support the nice “pluggable apps” of, at least, the Synology software. A quick search seems to indicate that my concerns are at least somewhat valid.

And, my final note: I would really appreciate someone producing a nice network management device with all the features of the Synology software (or, failing that, the Asus or QNAP) that just leaves out the bits that are asking for trouble.

Fedora Modularization: OSTree Prototype

Using rpm-ostree to deliver a “regular” system

The first idea, which I think had been banging around in several people’s heads for a while, actually came up more formally in an Environments & Stacks meeting on Apr 16. The idea, in essence, is: “can we use rpm-ostree (an ‘implementation’ of OSTree) to layer components onto an installation of Fedora?”

In some ways, the use case of adding desktop components in layers is really the original use case for OSTree. From talking to Colin a long time ago, I learned he had originally started and used OSTree to support his work on Gnome: basically, as a way to write some code, test it, then roll back to the stable version to write some more. What we want here is similar but, really, a way to run a “production install” with the layering/rollback ability of OSTree for sets of components.

The Prototypes

Prototype-1: Normal Workstation & Server

  1. Use rpm-ostree-toolbox to create a compose tree from the official Fedora 21 Workstation kickstart file. It is likely that Brent Baude’s articles1 2 on RHT Developer Blog will be helpful to this work.
  2. Create a VM and install the ostree-based Fedora Workstation
  3. Ensure that normal operation performs correctly, aside from installing new things
  4. Repeat steps for Fedora Server
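The compose step above might look something like the following. The config path, repo path, and the exact `treecompose` flags are assumptions (check `rpm-ostree-toolbox treecompose --help` and Baude’s articles on a real box); the sketch guards each command so it is harmless on a machine without the tools installed.

```shell
# Run a command if the tool exists, otherwise just print what would run,
# so this sketch is safe to paste on a non-compose machine.
run() { if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "(would run) $*"; fi; }

# Compose a tree from the Workstation config (paths/flags are hypothetical)
run rpm-ostree-toolbox treecompose -c /srv/compose/fedora-workstation.ini

# List the refs the compose produced in the output repo
run ostree --repo=/srv/compose/repo refs
```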

Prototype-2: Normal Workstation & Server update

  1. Using the VM and compose tree from the “Normal Workstation” use case, introduce an updated rpm into the tree
  2. Ensure that “rpm-ostree upgrade” (or “atomic host upgrade”) successfully updates the system with the new rpm
  3. Ensure that the rpm can be reverted by executing a “rpm-ostree rollback” (or “atomic host rollback”)
  4. Repeat steps for Fedora Server
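On the command line, the upgrade/rollback cycle above is just the standard rpm-ostree subcommands (a reboot between steps is implied on a real host); the sketch guards each command so it is a no-op on machines without rpm-ostree.

```shell
# Run a command if the tool exists, otherwise just print what would run.
run() { if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "(would run) $*"; fi; }

run rpm-ostree upgrade    # pull and deploy the updated tree alongside the current one
run rpm-ostree status     # show the deployments; the new one is now the default
run rpm-ostree rollback   # make the previous deployment the default again
```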

Prototype-3: Investigate location of user files

In order for this to “feel” like a normal user system, a user must have the freedom, with some constraints, to add “content” to the places they expect to on the system, as well as have the applications they use recognize those locations as “where things go.” For example, I often symlink my “Downloads” directory (in Gnome) to my mounted “projects” directory so that it can grow with my “projects allocation” (and be reused across installs) rather than with my “home dir allocation.” However, if you do that, you need to ensure that Firefox’s default download directory follows the symlink when downloading, that the Files app keeps “Downloads” in the “pick list,” etc. As a result, if we move home-dirs somewhere else, we need to ensure the user experience is the same, or has easily documented differences. I would expect we want a similar experience for /opt.
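The Downloads symlink described above is easy to demonstrate; here is a runnable sketch in a scratch directory (the paths are illustrative, not a real home dir). On a real system, GNOME applications resolve “Downloads” through the `XDG_DOWNLOAD_DIR` entry in `~/.config/user-dirs.dirs`, which is the setting to check if an application does not follow the symlink.

```shell
# Recreate the "Downloads lives under projects" layout in a temp dir
tmp=$(mktemp -d)
mkdir -p "$tmp/projects/downloads"

# Downloads is now a symlink into the projects allocation
ln -s "$tmp/projects/downloads" "$tmp/Downloads"

# Anything written to Downloads actually lands under projects
target=$(readlink "$tmp/Downloads")
echo "Downloads -> $target"
```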

Prototype-4: Investigate using dnf to switch compose-trees

  1. Create a plugin for dnf that front-ends “rpm-ostree rebase”
  2. Create an alternate compose-tree with a significant component change. For example, tuned or a different version of Gnome
  3. Attempt to rebase to the new compose tree using dnf
  4. Attempt to rebase back to the old compose tree using dnf
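Under the covers, the dnf plugin would front-end the standard `rpm-ostree rebase <remote>:<ref>` invocation; something like the following, where the remote and ref names are purely hypothetical (guarded so it is safe to run anywhere).

```shell
# Run a command if the tool exists, otherwise just print what would run.
run() { if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "(would run) $*"; fi; }

# Switch to the alternate compose-tree (ref names are placeholders)
run rpm-ostree rebase fedora:fedora/21/x86_64/workstation-alt

# ...and rebase back to the original tree
run rpm-ostree rebase fedora:fedora/21/x86_64/workstation
```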

Prototype-5: Investigate using dnf to create a new compose tree

In order to execute this prototype in a reasonable way, we will need to declare a couple of tenets which, arguably, invalidate the test, but still make for a good prototype while we devise one that tests the tenets themselves.

First off, we are just going to write the new compose-tree to disk with some mechanism to verify its quality. In a later prototype we can worry about moving the compose-tree to “someplace” which could host a rebase to that tree.

Second, the ability of the existing compose-tree to meet the dependency graph of the new rpm may prove problematic. While the compose-tree installed on the local system should have an rpm database that can be used for the dependency walk, the rpm coming from an external repository may have new dependencies, or, perhaps more likely, new versions of existing dependencies. For this prototype, it is recommended that we just carefully select the rpms to avoid this problem.

  1. Write a dnf plugin to front-end “rpm-ostree-toolbox.” However, the input should be the existing compose-tree from the user’s box and an rpm from a normal repo
  2. The plugin should generate a new compose-tree including the existing components, the rpm selected, and dependencies walking the new rpm’s dependency tree
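One way to see the dependency-walk problem the plugin faces is to compare the raw and resolved requirements of a candidate package with standard dnf repoquery flags (`tuned` here is just the example component mentioned earlier; guarded so the sketch runs anywhere).

```shell
# Run a command if the tool exists, otherwise just print what would run.
run() { if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "(would run) $*"; fi; }

run dnf repoquery --requires tuned            # the raw Requires: entries of the rpm
run dnf repoquery --requires --resolve tuned  # resolved to the packages that provide them
```

For this prototype, a package whose resolved list is already fully present in the installed compose-tree is the kind of “carefully selected” rpm the second tenet calls for.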

Prototype-6: Use dnf to “host” compose-trees

  1. Leveraging a dnf plugin, likely the same one as from Prototype-5, create a managed location on disk for hosting OSTree repositories.
  2. Using the dnf plugin and the compose-tree from Prototype-5, set up the compose-tree to be a target for rebasing of the local system
  3. Use the new compose-tree to rebase
  4. Rebase back to the old compose-tree
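The “hosting” in steps 1–2 amounts to keeping an ostree repository on disk and registering it as a remote the system can rebase to; a sketch follows, in which the repo path and ref are hypothetical and the subcommands are the standard ostree/rpm-ostree ones (guarded for safety).

```shell
# Run a command if the tool exists, otherwise just print what would run.
run() { if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "(would run) $*"; fi; }

# Create the on-disk repo the plugin would manage (path is a placeholder)
run ostree --repo=/var/lib/local-compose init --mode=archive-z2

# Register it as a remote of the system repo, then rebase to the hosted tree
run ostree remote add --no-gpg-verify local file:///var/lib/local-compose
run rpm-ostree rebase local:fedora/21/x86_64/workstation-custom
```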

Prototype-7: Update existing compose-tree and add new rpm

The need for this prototype is to address the second tenet in Prototype-5. We may discover in the work on Prototype-5 that composing rpms into the new compose-tree can just as easily use the upstream repository directly as the locally installed tree. If so, then this prototype is unnecessary, or can be “marked complete” based on those results.

  1. Leveraging the work from Prototypes 5 & 6, identify an rpm that has changed or updated dependencies in the upstream repository
  2. As part of the update to the compose, layer in the changed rpm dependencies and the new rpm and its dependencies
  3. Host the compose-tree per Prototype-6
  4. Rebase to the new compose-tree
  5. Rebase back to the old compose-tree
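Identifying an rpm with changed dependencies (step 1) can be done by diffing what the local rpm database recorded against what the upstream repo currently publishes; a guarded sketch (the package name is a placeholder):

```shell
# Run a command if the tool exists, otherwise just print what would run.
run() { if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "(would run) $*"; fi; }

run rpm -q --requires tuned         # dependencies as recorded in the installed tree's rpmdb
run dnf repoquery --requires tuned  # dependencies as published in the upstream repo
# any difference between the two lists is exactly the case Prototype-7 exercises
```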


In order to make this more workable, I have created a github repo with the prototypes identified above as sub-directories. Each sub-directory will contain a markdown file describing the prototype, as well as Behave features and steps to test the efficacy of the prototype. Please file issues there for comments or changes to the prototypes; that way this will be a better “living document.” Also, depending on when you read this, there might be lots more prototypes and/or results!

Vagrant-Kubernetes: A proposal for a Kubernetes Provider for Vagrant

Many people use Vagrant to quickly and consistently deploy the infrastructure upon which they want to do their development. Vagrant is also used by people to work on the infrastructure components themselves, but we will concentrate on the first case.

Recently, containerization of infrastructure applications has allowed for lighter-weight deployment of that infrastructure1; commonly, people have been using Docker to provide the containerization. Vagrant has a provider for Docker called, intuitively enough, the Docker Provider2, which allows one to use containerized infrastructure applications in a similar way to traditional VM-hosted infrastructure. Personally, I found this confusing at first, because I was expecting to use the Vagrant Docker Provider to develop docker containers. However, once you see it in action and recognize the traditional goals of Vagrant Providers, I think it makes perfect sense.
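To make the “infrastructure as a container” use concrete, here is a minimal Docker-provider Vagrantfile, written to a scratch directory so the example is self-contained; the image name and port mapping are placeholders, and the actual `vagrant up` is left as a comment since it needs Vagrant and Docker installed.

```shell
# Write a minimal Docker-provider Vagrantfile into a temp dir
dir=$(mktemp -d)
cat > "$dir/Vagrantfile" <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "nginx"        # run an existing image as the "infrastructure" piece
    d.ports = ["8080:80"]    # host:container port mapping
  end
end
EOF

# On a machine with Vagrant and Docker installed, you would then run:
#   (cd "$dir" && vagrant up --provider=docker)
echo "wrote $dir/Vagrantfile"
```

Note that nothing here builds a container image; the provider consumes an existing image, which is exactly the "deploy infrastructure, don't develop containers" distinction the paragraph above describes.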