Why modularization matters to Sys Admins — Fedora Community Blog

As a systems administrator, you generally worry about two things. First, the security of the systems you support. Second, that the applications you run work as designed. You would like to do those two things with as little effort as possible, however, you want to be aware of and balance the risk inherent in meeting…

via Why modularization matters to Sys Admins — Fedora Community Blog

Modularity Use Case: Application Independence

We will be writing a series of blog posts regarding the project to help the Modularity effort move forward. Some of the posts will be about “Why?” and some will be about “How?” As the first post in the series, this article is about “Why?” The Rings Proposal and the Modularity Objective are both about […]

via Modularity Use Case: Application Independence — Fedora Community Blog

Using a NAS as a Firewall?

Recently, I have been trying to rejigger my home network to support a bunch of the usual things: a firewall, a VPN, ssh access, perhaps a media streaming server and, last but not least, a new backup solution. Someone I work with pointed me at Synology for the backup portion. As a result, I learned about their software product DiskStation Manager (DSM), which does a lot of cool things like file sharing and remote access to your data. The software also has a bunch of cool plugins for things like offsite backup (via Amazon Glacier) and running a web server. I also found that QNAP has similar software. They both also seem to be supporting at least the letter of the GPL by publishing some (all?) of their code (Synology and QNAP, respectively) on sourceforge.

However, and here is the big “but,” it strikes me as incredibly dangerous to run my network on the same hardware where all of my data lives. In other words, if you root my network connectivity, you now have instant access to all my data. I think the concerns go the other way as well. There are a ton of WiFi APs that offer support for publishing data (via USB drive) and printers, which seems equally dangerous. One I think looks cool is this Asus one; however, it doesn’t seem to support the nice “pluggable apps” of, at least, the Synology software. A quick search seems to indicate that my concerns are at least somewhat valid.

And, my final note: I would really appreciate someone producing a nice network management device with all the features of the Synology software (or, failing that, the Asus or QNAP) that leaves out the bits that are asking for trouble.

Fedora Modularization: OSTree Prototype

Using rpm-ostree to deliver a “regular” system

The first idea, which I think had been banging around in several people’s heads for a while, actually came up more formally in an Environments & Stacks meeting on Apr 16. The idea, in essence, is “can we use rpm-ostree (an “implementation” of OSTree) to layer components on to an installation of Fedora.”

In some ways, the use case of adding desktop components in layers is really the original use case for OSTree. From talking to Colin a long time ago, he had originally started and used OSTree to allow him to work on Gnome, basically using it as a way to write some code, test it, then roll back to the stable version to write some more code. What we want here is similar but, really, a way to run a “production install” with the layering/rollback ability of OSTree for sets of components.

The Prototypes

Prototype-1: Normal Workstation & Server

  1. Use rpm-ostree-toolbox to create a compose tree from the official Fedora 21 Workstation kickstart file. It is likely that Brent Baude’s articles on the RHT Developer Blog will be helpful to this work.
  2. Create a VM and install the ostree-based Fedora Workstation
  3. Ensure that normal operation performs correctly, aside from installing new things
  4. Repeat steps for Fedora Server
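
The compose in step 1 can be sketched roughly as follows. The exact rpm-ostree-toolbox invocation varies by version, so the underlying “rpm-ostree compose tree” form is shown instead; the repo path and treefile name (fedora-workstation.json) are illustrative placeholders, not the official Fedora 21 inputs.

```shell
# Initialize an OSTree repo, then compose a tree into it from a treefile.
# Paths and the treefile name are placeholders, not the official inputs.
mkdir -p repo
ostree --repo=repo init --mode=archive-z2
rpm-ostree compose tree --repo=repo fedora-workstation.json
```

The resulting repo can then be served over HTTP as the install/upgrade source for the VM in step 2.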

Prototype-2: Normal Workstation & Server update

  1. Using the VM and compose tree from the “Normal Workstation” use case, introduce an updated rpm in to the tree
  2. Ensure that “rpm-ostree upgrade” (or “atomic host upgrade”) successfully updates the system with the new rpm.
  3. Ensure that the rpm can be reverted by executing a “rpm-ostree rollback” (or “atomic host rollback”)
  4. Repeat steps for Fedora Server
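
Steps 2–3 boil down to the following cycle on the ostree-based VM; the package name is a placeholder.

```shell
# Pull and deploy the updated tree; the change takes effect on reboot.
rpm-ostree upgrade
# After rebooting, confirm the updated rpm landed (placeholder name):
rpm -q some-package
# Point the bootloader back at the previous deployment, then reboot again.
rpm-ostree rollback
```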

Prototype-3: Investigate location of user files

In order for this to “feel” like a normal user system, a user must have the freedom, with some constraints, to add “content” to the places they expect on the system, as well as have the applications they use recognize those locations as “where things go.” For example, I often symlink my “Downloads” directory (in Gnome) to my mounted “projects” directory so that it can grow with my “projects allocation” (and be reused across installs) rather than with my “home dir allocation.” However, if you do that, you need to ensure that Firefox’s default download directory follows the symlink when downloading, that the Files app keeps “Downloads” in the “pick list,” etc. As a result, if we move home dirs somewhere else, we need to ensure the user experience is the same, or has easily documented differences. I would expect we want a similar experience for /opt.
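
The symlink arrangement described above can be illustrated with temporary directories standing in for a real home dir and projects mount:

```shell
# Temporary stand-ins for the home dir and the mounted projects dir.
home=$(mktemp -d)
projects=$(mktemp -d)
mkdir "$projects/Downloads"
ln -s "$projects/Downloads" "$home/Downloads"
# A write through the symlink lands on the "projects allocation":
echo data > "$home/Downloads/file.txt"
ls "$projects/Downloads"     # prints: file.txt
```

The prototype question is whether the applications keep honoring such a redirection when home dirs themselves move.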

Prototype-4: Investigate using dnf to switch compose-trees

  1. Create a plugin for dnf that front-ends “rpm-ostree rebase”
  2. Create an alternate compose-tree with a significant component change. For example, tuned or a different version of Gnome
  3. Attempt to rebase to the new compose tree using dnf
  4. Attempt to rebase back to the old compose tree using dnf
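
The user experience being prototyped in steps 3–4 might look like the following. Note that the “dnf rebase” subcommand and the ref names are hypothetical; they would exist only if the plugin from step 1 provides them.

```shell
# Hypothetical plugin UX: "dnf rebase" delegating to "rpm-ostree rebase".
dnf rebase fedora/21/x86_64/workstation-tuned   # step 3: rebase to the new tree
dnf rebase fedora/21/x86_64/workstation         # step 4: rebase back
```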

Prototype-5: Investigate using dnf to create a new compose tree

In order to execute on this prototype in a reasonable way, we will need to declare a couple of tenets which, arguably, invalidate the test, but which still make for a good prototype while we devise one that will test the tenets.

First off, we are just going to be writing the new compose-tree to disk with some mechanism to verify its quality. In a later prototype we can worry about moving the compose-tree to “someplace” which could host a rebase to that tree.

Second, the ability of the existing compose-tree to meet the dependency graph of the new rpm may prove problematic. While the compose-tree installed on the local system should have an rpm database that can be used for the dependency walk, the rpm coming from an external repository may have new dependencies, or, perhaps more likely, new versions of existing dependencies. For this prototype, it is recommended that we just carefully select the rpms to avoid this problem.

  1. Write a dnf plugin to front-end “rpm-ostree-toolbox.” However, the input should be the existing compose-tree from the user’s box and an rpm from a normal repo
  2. The plugin should generate a new compose-tree including the existing components, the rpm selected, and dependencies walking the new rpm’s dependency tree

Prototype-6: Use dnf to “host” compose-trees

  1. Leveraging a dnf plugin, likely the same one as from Prototype-5, create and manage a location on disk for hosting OSTrees.
  2. Using the dnf plugin and the compose-tree from Prototype-5, set up the compose-tree to be a target for rebasing of the local system
  3. Use the new compose-tree to rebase
  4. Rebase back to the old compose-tree
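
Under the hood, steps 2–4 would likely translate to something like the following; the repo location, remote name, and ref are illustrative.

```shell
# Register the locally hosted repo as an OSTree remote, then rebase to it.
ostree remote add --no-gpg-verify local file:///srv/local-repo
rpm-ostree rebase local:fedora/21/x86_64/custom
# Step 4: return to the previous deployment.
rpm-ostree rollback
```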

Prototype-7: Update existing compose-tree and add new rpm

The need for this prototype is to address the second tenet in Prototype-5. We may discover, in the work on Prototype-5, that composing rpms into the new compose-tree can use the upstream repository directly just as easily as the locally installed tree. If so, then this prototype is unnecessary, or can be “marked complete” based on those results.

  1. Leveraging the work from Prototypes 5 & 6, identify an rpm that has changed or updated dependencies in the upstream repository
  2. As part of the update to the compose, layer in the changed rpm dependencies and the new rpm and its dependencies
  3. Host the compose-tree per Prototype-6
  4. Rebase to the new compose-tree
  5. Rebase back to the old compose-tree

Conclusion

In order to make this more workable, I have created a github repo with the prototypes identified above as sub-directories. Each sub-directory will contain a markdown file describing the prototype, as well as Behave features and steps to test the efficacy of the prototype. Please file issues there with comments or changes to the prototypes; that will make this a better “living document.” Also, depending on when you read this, there might be lots more prototypes and/or results!

Vagrant-Kubernetes: A proposal for a Kubernetes Provider for Vagrant

Many people use Vagrant to quickly and consistently deploy the infrastructure upon which they want to do their development. Vagrant is also used by people to work on the infrastructure components themselves, but we will concentrate on the first case.

Recently, containerization of infrastructure applications has allowed for lighter-weight deployment of that infrastructure; commonly, people have been using Docker to provide the containerization. Vagrant has a provider for Docker called, intuitively enough, the Docker Provider, which allows one to use containerized infrastructure applications in a similar way to traditional VM-hosted infrastructure. Personally, I found this confusing at first, because I was expecting to use the Vagrant Docker Provider to develop Docker containers. However, once you see it in action and recognize the traditional goals of Vagrant Providers, I think it makes perfect sense.

Fedora, Modularization, & Prototypes

Fedora has adopted the earliest stage of the Fedora.Next proposal by releasing the Fedora Editions with Fedora 21. As part of that proposal, a concept of “rings” for software was also identified. Roughly, the idea of the rings was to allow for various “levels” of software: software “closer in” was expected to be of higher quality and not allowed to conflict, while software “further out” could abide by less strict rules. However, as with the editions work, the technical detail of “how” to implement the rings was not laid out in the original plan. As a result, we have a new Fedora Objective to identify requirements and propose an implementation plan over the next few months.

However, in the meantime, we can expect that the requirements will likely result in a need for new methods of packaging and application deployment. Just to be clear, I am not talking about binary blobs or closed vs. open source software, just how binary code lands on an end system. This is not so much about the concept of repos or mirrors or the like, but rather about the nature of dependencies, the addition and removal of software, and the configuration of that software.

I would like to propose a couple of prototypes that we could implement to provide some “food for thought” once we have a better understanding of the requirements. In no way do I think these prototypes should be taken as solutions but, rather, just a way of gathering technical information regarding what is possible.

The first idea, which I think had been banging around in several people’s heads for a while, actually came up more formally in an Environments & Stacks meeting on Apr 16. The idea, in essence, is “can we use rpm-ostree (an “implementation” of OSTree) to layer components on to an installation of Fedora.”

The next idea is to expand the feature set (but, perhaps, not the goals) of RoleKit to include reconfiguration and removal of a role through the use of a unioning filesystem, likely OverlayFS. While this may be similar to the prototype of using OSTree for layered installs, it may have different tradeoffs, particularly concerning the user experience of application installation.
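
As a flavor of the OverlayFS idea: the role’s files land in an “upper” layer over a read-only base, so removing the role is just a matter of unmounting and deleting the upper directory. A minimal sketch (paths are illustrative, and the mount requires root):

```shell
# Build the four directories OverlayFS needs, then mount the union view.
mkdir -p /tmp/role/{lower,upper,work,merged}
mount -t overlay overlay \
  -o lowerdir=/tmp/role/lower,upperdir=/tmp/role/upper,workdir=/tmp/role/work \
  /tmp/role/merged
# ... install the role's files into /tmp/role/merged; they land in upper/ ...
umount /tmp/role/merged    # "removal" leaves the lower layer untouched
```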

I will elaborate on specifics for these prototypes in a follow up post or two.

Reliable Messaging (in the cloud era)

At Flock today, someone mentioned to me that they have been getting requests to support “persistence” in fedmsg. I spent many years working in Financial Services which, as you might imagine, has some pretty strong requirements around “only once” and “definitely once” messages, particularly in trading applications. As a result, I instantly went to “reliable messaging” as the problem to be solved (which isn’t necessarily correct, but definitely part of the story). However, it has been a while since I was really deeply involved in FS, so I did a little Googling to try to discover the “current state of reliable messaging.” I found some interesting, but rather dated, articles. Specifically, check out this and this. Googling for anything in the last year just gave me “ratified” standards around WS-ReliableMessaging which, I am sure, is good stuff, but I was more interested in the “why,” not the “how,” and, unfortunately, didn’t see much (but my searching may not be awesome 😉 ).

OK, on to the point. After reading the two articles above, I was fairly convinced that in the average “trading application” (read: any single application that uses messages to communicate), “reliable messaging,” in the sense described by the standards, the articles, and the general world, probably doesn’t require a protocol-level solution. However, and this is why I wrote this post, fedmsg, like many other “environments,” is in a somewhat different position. Basically, the application sending the message has no interest in guaranteeing that the message sent was received by fedmsg, because the application has no “dependency” on the processing done by fedmsg (this is probably not strictly true in all cases, but illustrative for my point). All of the methods described above, and “reliable messaging” in general, have a preconceived notion that the client of the messaging infrastructure actually cares that the server gets the message. By extension, as fedmsg is a broker, when it acts as a client to the servers who signed up to receive messages, those servers have no “interest” in communicating to fedmsg that they got the message, because the business logic is within their own applications.

So, dilemma. Fedmsg wants to ensure that it does its job but no other application in the environment has any way to know, or “care,” that fedmsg is doing its job :). Now, do we need reliable messaging? I am not sure. One nice aspect (semi-irrelevant to the distinct implementation) is that it forces the applications on both sides of the broker to “care,” because they have to do extra work now to send and receive messages at all. However, the tradeoff is that it is “harder” for the applications using the broker, which may drive down participation, thereby decreasing the set of interesting things that can happen in the “environment” by essentially removing applications from the environment. Unfortunately, I am not sure I know the answer. However, I can point to a few things that may have similar problems and may be insightful to the answer. Specifically, SMTP is reliable (as in guaranteed) with the characteristic of no party really having any interest in ensuring the reliability. TCP/IP is also semi-reliable (I don’t recall if it is actually guaranteed) in that it normally “just works,” with lots of interesting mechanisms to ensure that it works.

Now, let’s also deal with another potential meaning of the term “persistence”: specifically, fedmsg also wants to be able to provide audit and metric information about the transactions it is brokering. Some of that audit/metric information is about performance (quality, including, but not limited to, speed), but it can also generate other useful information about the environment itself versus the activities of the end points. For example, part of the genesis of this conversation was a discussion about how fedmsg messages trigger badging in the openbadges implemented recently by Fedora. Now, perhaps obviously, the badging system should really register for the messages it cares about (which it does). However, applications have bugs, and something like badging has an inherent need for audit-ability. Even so, I still think that fedmsg shouldn’t actually implement this kind of persistence. I think that fedmsg should treat the gathering of metrics and audit-ability as just another application that is registering for events. The “audit and metrics consumer” should then be responsible for the persistence of the data and the toolchain to feed consumers of the data. Does this require reliable messaging? Well, arguably, I think this makes fedmsg actually fall into the same “application type” that the authors above were referencing. In other words, fedmsg and the “magical/mystical audit and metrics application” have a shared interest in the reliability of the messages between the systems. As a result, I think, based on the arguments above, they don’t need reliable messaging at the protocol level.

All in all, this was a very interesting subject for me because, when I was in FS, the be-all-end-all problem was how to guarantee transactions got delivered through a multitude of systems exactly once. And, as with so many things in the new era of stateless software development, maybe we never needed to jump through all those hoops. 🙂

SSH Completion

I use a number of different machines during my average day, many of those via ssh. I have a hard time remembering what the creds are for each of the machines, so a long time ago I learned to use an .ssh/config file to keep track of the info. I also used to enjoy tab completion with ssh to find all the servers depending on context (e.g. home-www, home-fw, work-test1, etc.) but, on RHEL 6 Workstation, I didn’t have tab completion with ssh. I finally got around to looking at why and discovered a handy package in EPEL: bash-completion. Wow, lots and lots that I was missing (and didn’t have to build for myself).
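
A minimal .ssh/config along those lines might look like this (host aliases, addresses, and options are all illustrative):

```
Host home-www
    HostName 192.168.1.10
    User admin
    IdentityFile ~/.ssh/id_home

Host work-test1
    HostName test1.example.com
    Port 2222
    User builder
```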

Check out EPEL. Once you have EPEL set up, then: sudo yum install bash-completion
Or for details see the package page (technically a noarch, this just happens to be the x86_64 link).

If you want to get in to writing your own, check out this article. The article is written about Debian but bash (or, potentially zsh) is what does the heavy lifting so it should be pretty x-distro. Please leave anything cool you make in the comments.
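
To give a flavor of what such a completion looks like, here is a minimal, hypothetical function that offers ssh host aliases parsed out of a config file. The real bash-completion package’s ssh completion is far more thorough; the CONFIG override here exists only so the sketch is easy to exercise outside a real home directory.

```shell
# Hypothetical sketch: complete ssh host aliases from a config file.
# CONFIG defaults to ~/.ssh/config; wildcard Host patterns (* or ?) are skipped.
_ssh_hosts() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    local config="${CONFIG:-$HOME/.ssh/config}"
    local hosts
    hosts=$(awk '/^Host / { for (i = 2; i <= NF; i++)
                              if ($i !~ /[*?]/) print $i }' "$config" 2>/dev/null)
    COMPREPLY=( $(compgen -W "$hosts" -- "$cur") )
}
complete -F _ssh_hosts ssh
```

Source that in your shell and `ssh ho<TAB>` offers the matching aliases.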

My sound broke on RHEL 6

Not sure what I did (I suspect it had something to do with rebooting while docked), but my laptop sound (and mic, I think) stopped working again. Unfortunately, this is one of those things that happens rarely enough that I can’t ever remember what to do about it. I usually go through the obvious on the little GUI sound prefs panel (twiddle output devices, test speakers, etc) which, in many cases, is sufficient to kick it back to working. However, that didn’t work for this one, so I did some googling and found a bunch of handy things. However, the one that really worked was http://fedoraproject.org/wiki/How_to_troubleshoot_sound_problems. In particular, going to the command line and running the alsamixer (alsamixer -c 0) which, for some reason, always shows me the actual output device that has magically gotten muted.
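
For reference, the same check and fix can be done non-interactively with amixer; control names vary by card, so “Master” here is just the common case.

```shell
# List the card's mixer controls, then clear a surprise mute on Master.
amixer -c 0 scontrols
amixer -c 0 sset Master unmute
amixer -c 0 sset Master 80%
```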