Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

October 31, 2014

Google Blog

Through the Google lens: search trends October 23-30

Grab some candy corn and a caramel apple and settle in for a look back at another week in search trends:

Time for trick or treating
With today’s Halloween holiday, people are turning to the web to look for
last-minute costumes and pumpkin-carving tips. Top costume searches include Elsa from Frozen, Anna from Frozen, Olaf from Frozen (people can’t just let it go, can they?) and Maleficent. Whether you’re trick-or-treating or not, get the most out of the twilight hours tonight—Daylight Saving Time comes to an end on Sunday, which means it will be getting darker earlier. At least you get some extra sleep out of the deal.

Sports endings and beginnings
The World Series came to a thrilling conclusion on Wednesday night with Game 7 in Kansas City’s Kauffman Stadium, as the San Francisco Giants took home their third championship in just five (even-numbered) years. The star of the night—and the series—was undoubtedly Madison Bumgarner, the Giants’ 25-year-old ace starting pitcher who came into the game in relief in the fifth inning and more than earned both the save and his MVP trophy, capping off a postseason performance for the history books. He was the top topic in search Wednesday, with more than 1 million searches. Teammates Buster Posey and “Panda” Pablo Sandoval were also on the list.
As baseball fans put their caps and gloves in storage and look longingly at the calendar for March (pitchers and catchers report in 114 days!), fans of the NBA are just getting going. Basketball season started this week and the web was full of searches for the Cleveland Cavaliers (who are welcoming hometown hero LeBron James back to the fold), Miami Heat (the team LeBron left behind) and Chicago Bulls.

Trouble in the skies
There was a spike in searches around NASA when an unmanned rocket erupted into flames seconds after liftoff on Tuesday. The spacecraft and its cargo were lost, and the launch pad suffered heavy damage. Also this week, there was a breakthrough in the mystery of Amelia Earhart’s final flight. A piece of debris located on a tiny island has been identified as a piece of her lost plane.

Movie marvels
Marvel this week revealed a lineup of nine new movies to be released over the coming years, along with some casting details. Alongside familiar faces like Captain America and Iron Man, we’ll soon see a film about the Black Panther, who will be played by Chadwick Boseman. Marvel also revealed that Sherlock star (and Internet fave) Benedict Cumberbatch will play Doctor Strange in the 2016 movie.

Tip of the week
Don’t get caught off-guard by the changing of the clocks. With the Google app, you can set a reminder to reset the clocks on your microwave, in your car and on your wall as soon as Daylight Saving Time comes to an end on Sunday. Just open the app and say “Ok Google, remind me to change the clocks on Sunday.” Now relax and enjoy that extra hour of sleep!

by Emily Wood at October 31, 2014 01:30 PM

Chris Siebenmann

A drawback to handling errors via exceptions

Recently I discovered an interesting and long-standing bug in DWiki. DWiki is essentially a mature program, so this one was uncovered through the common mechanism of someone using invalid input, in this case a specific sort of invalid URL. DWiki creates time-based views of this blog through synthetic parts of the URLs that end in things like, for example, '.../2014/10/' for entries from October 2014. Someone came along and requested a URL that looked like '.../2014/99/', and DWiki promptly hit an uncaught Python exception (well, technically it was caught and logged by my general error code).

(A mature program usually doesn't have bugs handling valid input, even uncommon valid input. But the many forms of invalid input are often much less well tested.)

To be specific, it promptly coughed up:

calendar.IllegalMonthError: bad month number 99; must be 1-12

Down in the depths of the code that handled a per-month view I was calling calendar.monthrange() to determine how many days a given month has, which was throwing an exception because '99' is of course not a valid month of the year. The exception escaped because I wasn't doing anything in my code to either catch it or not let invalid months get that far in the code.
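The fix pattern is straightforward once you know the exception exists. Here is a minimal sketch (not DWiki's actual code) of guarding the monthrange() call:

```python
import calendar

def days_in_month(year, month):
    """Return the number of days in a month, or None if the month
    is invalid (e.g. the '99' from a URL like .../2014/99/)."""
    try:
        # monthrange() returns (weekday of day 1, days in month) and
        # raises calendar.IllegalMonthError for months outside 1-12.
        return calendar.monthrange(year, month)[1]
    except calendar.IllegalMonthError:
        return None

print(days_in_month(2014, 10))  # 31
print(days_in_month(2014, 99))  # None
```

Validating the month with an explicit range check up front works just as well; the point is that neither happens unless you remember the error is possible in the first place.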

The standard advantage of handling errors via exceptions definitely applied here. Even though I had totally overlooked this error possibility, the error did not get quietly ignored and go on to corrupt further program state; instead I got smacked over the nose with the existence of this bug so I could find it and fix it. But it also exposes a drawback of handling errors with exceptions, which is that it makes it easier to overlook the possibility of errors because that possibility isn't explicit.

The calendar module doesn't document what exceptions it raises, either in general or especially in the documentation for monthrange() in specific (where it would be easy to spot while reading about the function). Because an exception is effectively an implicit extra return 'value' from functions, it's easy to overlook the possibility that you'll actually get an exception; in Python, there's nothing there to rub your nose in it and make you think about it. And so I never even thought about what happened if monthrange() was handed invalid input, in part because of the usual silent assumption that the code would only be called with valid input because of course DWiki doesn't generate date range URLs with bad months in them.

Explicit error returns may require a bunch of inconvenient work to handle them individually instead of letting you aggregate exception handling together, but the mere presence of an explicit error return in a method's or function's signature serves as a reminder that yes, the function can fail and so you need to handle it. Exceptions for errors are more convenient and more safe for at least casual programming, but they do mean you need to ask yourself what-if questions on a regular basis (here, 'what if the month is out of range?').
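As a hypothetical illustration of the difference (Go-style explicit error returns transplanted into Python; this is nobody's real API):

```python
import calendar

def monthrange_checked(year, month):
    """Go-style (value, error) return: the error slot in the
    signature is a standing reminder that the call can fail."""
    if not 1 <= month <= 12:
        return None, "bad month number %d; must be 1-12" % month
    return calendar.monthrange(year, month), None

result, err = monthrange_checked(2014, 99)
if err is not None:
    # The caller has to unpack both slots, so the possibility of
    # failure is visible right at the call site.
    print("error:", err)  # error: bad month number 99; must be 1-12
```

The cost is that every caller has to write this unpack-and-check dance; the benefit is that it is hard to forget the failure case exists.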

(It turns out I've run into this general issue before, although that time the documentation had a prominent notice that I just ignored. The general issue of error handling with exceptions versus explicit returns is on my mind these days because I've been doing a bunch of coding in Go, which has explicit error returns.)

by cks at October 31, 2014 05:01 AM

Ubuntu Geek

FreeCAD – Extensible Open Source CAx program

FreeCAD is a general purpose feature-based, parametric 3D modeler for CAD, MCAD, CAx, CAE and PLM, aimed directly at mechanical engineering and product design but also fits a wider range of uses in engineering, such as architecture or other engineering specialties. It is 100% Open Source and extremely modular, allowing for very advanced extension and customization.


by ruchi at October 31, 2014 12:06 AM

Olimex OlinuXino a10 Lime Minimal Debian 7 Image

The Olimex OlinuXino A10 LIME is an amazing, powerful and cheap open source ARM development board. It costs EUR 30, and has 160 GPIO pins. The default Debian image from Olimex is quite huge and bloated, over 2 GB, with X and all. I do not want a huge image, so I stripped it down to a 200 MB image with only dhcp and ssh and a few basic tools. It uses about 20 MB of RAM. This image allows you to start with almost nothing and build up only what you need.

October 31, 2014 12:00 AM

October 30, 2014


ZendCon - Profiling with XHProf

My slides from ZendCon 2014 on "Profiling with XHProf" are now available for download.

by Ilia Alshanetsky at October 30, 2014 02:54 PM

Standalone Sysadmin

Repping for SysAdmins at the Space Coast! #NASASocial

Like most children, I was a huge fan of spaceships when I was little. I remember watching Space Shuttle launches on TV, and watching interviews with astronauts in school. I always wanted to go see a launch in person, but that was hard to do when you were a kid in West Virginia. As I got older, I might have found other interests, but I never lost my love of space, technology, sci-fi, and the merging of all of those things. When I took a year away from system administration, the first hobby I picked up was model rocketry. I didn't really see any other option; it was just natural.

Well, a while back, I saw a post from one of the NASA Social accounts about how they were inviting social media people to come to the initial test launch of the Orion spacecraft in December. I thought... "Hey, I'm a social media people...I should try to get into this!". I don't spend a TON of time talking about space-related activities here, since this is a system administration blog, but I do merge my interests as often as possible, like with my post on instrumentation, or Kerbal Space System Administration, and I understand that I'm not alone in having these two interests. I suspected that, if I were accepted to this program, it would be of interest to my readers (meaning: you).

Well, this morning, I got the email. I'm accepted. How awesome is that?
(Hint: Very Awesome.)

So, at the beginning of December, I will be heading to Kennedy Space Center to attend a two-day event, where I'll get tours and talk to engineers and administrators, and get very up close and cozy with the space program, and see the Orion launch in person. Literally a lifelong dream. I'm so excited, you've got no idea. Really, you haven't. I'm not even sure it's hit me yet.

The code for this mission is EFT-1, for Exploration Flight Test 1. This is the crew module that will take humanity to deep space. The flight profile sends the capsule 5,800 km (3,600 miles) into space (the International Space Station orbits at around 330 km), then has it re-enter the atmosphere at 32,000 km/h (20,000 mph), slowed by friction on its heat shield and by 11 parachutes. The entire mission takes 4 hours.

If you follow me on any of my social media accounts, you can prepare to see a lot of space stuff soonish. If you're not interested, I'm sorry about that, and I won't take it personally if you re-follow later in December or next year. But you should stick around, because it's going to be a really fun trip. And I'm going to be blogging here as well, of course.

If you don't already, follow me on Twitter, Facebook, Instagram, Flickr, and Google+. You can follow the conversation about this event by using the #NASASocial and #Orion hashtags.

So thank you all for reading my blog, for following me on social media, and for making it possible for me to do awesome things and share them with you. It's because of you that I can do stuff like this, and I'm eternally grateful. If you have any special requests on aspects to cover of this mission, or of my experiences, please comment below and let me know. I can't promise anything, but I can try to make it happen. Thanks again.

by Matt Simmons at October 30, 2014 01:20 PM

Chris Siebenmann

Quick notes on the Linux iptables 'ipset' extension

For a long time Linux's iptables firewall had an annoying lack in that it had no way to do efficient matching against a set of IP addresses. If you had a lot of IP addresses to match things against (for example if you were firewalling hundreds or thousands of IP addresses and IP address ranges off from your SMTP port), you needed one iptables rule for each entry and then they were all checked sequentially. This didn't make your life happy, to put it one way. In modern Linuxes, ipsets are finally the answer to this; they give you support for efficient sets of various things, including random CIDR netblocks.

(This entry suggests that ipsets only appeared in mainline Linux kernels as of 2.6.39. Ubuntu 12.04, 14.04, Fedora 20, and RHEL/CentOS 7 all have them while RHEL 5 appears to be too old.)

To work with ipsets, the first thing you need is the user level tool for creating and manipulating them. For no particularly sensible reason your Linux distribution probably doesn't install this when you install the standard iptables stuff; instead you'll need to install an additional package, usually called ipset. Iptables itself contains the code to use ipsets, but without ipset to create the sets you can't actually install any rules that use them.

(I wish I was kidding about this but I'm not.)

The basic use of ipsets is to make a set, populate it, and match against it. Let's take an example:

ipset create smtpblocks hash:net counters
ipset add smtpblocks
ipset add smtpblocks
iptables -A INPUT -p tcp --dport 25 -m set --match-set smtpblocks src -j DROP

(Both entries are currently on the Spamhaus EDROP list.)

Note that the set must exist before you can add iptables rules that refer to it. The ipset manpage has a long discussion of the various types of sets that you can use and the iptables-extensions manpage has a discussion of --match-set and the SET target for adding entries to sets from iptables rules. The hash:net I'm using here holds random CIDR netblocks (including /32s, ie single hosts) and is set to have counters.

It would be nice if there was a simple command to get just a listing of the members of an ipset. Unfortunately there isn't, as plain 'ipset list' insists on outputting a few lines of summary information before it lists the members. Since I don't know if these are constant I'm using 'ipset list -t save | grep "^add "', which seems ugly but seems likely to keep working forever.

Unfortunately I don't think there's an officially supported and documented ipset command for adding multiple entries into a set at once in a single command invocation; instead you're apparently expected to run 'ipset add ...' repeatedly. You can abuse the 'ipset restore' command for this if you want to by creating appropriately formatted input; check the output of 'ipset save' to see what it needs to look like. This may even be considered a stable interface by the ipset authors.
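As a sketch of the restore-format approach (the set name and netblocks below are placeholders, and the exact line format should be double-checked against your own 'ipset save' output, since it isn't a documented stable interface):

```python
def ipset_restore_input(setname, settype, netblocks, options=""):
    """Build input for 'ipset restore': a create line followed by
    one add line per entry, mirroring 'ipset save' output."""
    create = ("create %s %s %s" % (setname, settype, options)).rstrip()
    adds = ["add %s %s" % (setname, nb) for nb in netblocks]
    return "\n".join([create] + adds) + "\n"

# Hypothetical usage with RFC 5737 documentation netblocks; pipe the
# resulting text to 'ipset restore' on stdin.
text = ipset_restore_input("smtpblocks", "hash:net",
                           ["192.0.2.0/24", "198.51.100.0/24"],
                           "counters")
```

This keeps you to a single ipset invocation for an arbitrarily large set, at the cost of depending on the save/restore format staying stable.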

Ipset syntax and usage appears to have changed over time, so old discussions of it that you find online may not work quite as written (and someday these notes may be out of date that way as well).

PS: I can sort of see a lot of clever uses for ipsets, but I've only started exploring them right now and my iptables usage is fairly basic in general. I encourage you to read the ipset manpage and go wild.

Sidebar: how I think you're supposed to use list sets

As an illustrated example:

ipset create spamhaus-drop hash:net counters
ipset create spamhaus-edrop hash:net counters
[... populate both from spamhaus ...]

ipset create spamhaus list:set
ipset add spamhaus spamhaus-drop
ipset add spamhaus spamhaus-edrop

iptables -A INPUT -p tcp --dport 25 -m set --match-set spamhaus src -j DROP

This way your iptables rules can be indifferent about exactly what goes into the 'spamhaus' ipset, although of course this will be slightly less efficient than checking a single merged set.

by cks at October 30, 2014 03:31 AM

October 29, 2014

Rands in Repose

Brief Thoughts on Marvel

Marvel announced their Phase 3 plans for the Marvel Cinematic Universe yesterday and the best way to describe my reaction is they succeeded in diffusing my excitement about Avengers – Age of Ultron and I’m pretty excited about that sequel.

Some brief thoughts on the state of Marvel:

  • In 2009, Disney paid four billion dollars for Marvel. It turns out this was a tremendous deal. Check it out…
  • The first Avengers had a production budget of $220 million and worldwide total lifetime gross of $1.5 billion.
  • The last Iron Man (released last summer) had a production budget of $200 million and a worldwide total lifetime gross of $1.2 billion.
  • Guardians of the Galaxy (released this year) had a production budget of $170 million and, so far, a worldwide total lifetime gross of $752 million.
  • You can check out the rest of the portfolio’s performance on Box Office Mojo, but the point is: it appears they’ve already made their money back in five years and then some.

Four billion dollars still felt aggressive in 2009, but think about what they were buying. They weren’t just buying a catalog of heroes, they were buying a massive collection of stories about these heroes, their origins, their adventures, and often, their deaths.

These stories have been tested. Marvel (and DC) have no issue mucking with continuity to improve the quality of the universe. It’s called retroactive continuity (or retcon) and it allows writers to resolve errors in chronology and reintroduce popular characters.

I think of retcons as bug fixes. It’s the writers not only making sure the stories all fit together, but also that the stories are relevant and entertaining. When you add the fact that comics are a visual medium, you understand that Marvel wasn’t just buying a catalog of heroes, they were buying a whole universe of compelling and tested scripts and story boards.

The cherry on top is that the Marvel Cinematic Universe is a retcon unto itself. The script writers of the movies are picking and choosing their facts and stories and knitting together what look like decade-long plot lines designed specifically for the big screen.

Can’t wait.


by rands at October 29, 2014 03:25 PM

Everything Sysadmin

2015 speaking gigs: Boston, Pennsylvania, Baltimore

Three new speaking gigs have been announced. January (BBLISA in Cambridge, MA), February (Bucks County, PA), and March (Baltimore area). The full list is on the site; subscribe to the RSS feed to learn about any new speaking engagements.

The next three speaking gigs are always listed in the "see us live" box at the top of the site.

October 29, 2014 02:28 PM

Standalone Sysadmin

Appearance on @GeekWhisperers Podcast!

I was very happy to visit Austin, TX not long ago to speak at the Spiceworld Austin conference, held by Spiceworks. Conferences like that are a great place to meet awesome people you only talk to on the internet and to see old friends.

Part of the reason I was so excited to go was because John Mark Troyer had asked me if I wanted to take part in the Geek Whisperers Podcast. Who could say no?

With over 60 episodes to their name, Geek Whisperers fills an amazing niche of enterprise solutions, technical expertise, and vendor luminaries that spans every market that technology touches. Hosted by John, Amy "CommsNinja" Lewis, and fellow Bostonite Matt Brender (well, he lives in Cambridge, but that’s close, right?), they have been telling great tales and having a good time doing it for years. I’ve always respected their work, and I was absolutely touched that they wanted to have me on the show.

We met on the Tuesday of the conference and sat around for an hour talking about technology, tribes, and the progression of people and infrastructure. I had a really good time, and I hope they did, too.

You can listen to the full podcast on their site, or through iTunes or Stitcher.

Please comment below if you have any questions about things we discussed. Thanks for listening!

by Matt Simmons at October 29, 2014 02:23 PM


Chris Siebenmann

Unnoticed nonportability in Bourne shell code (and elsewhere)

In response to my entry on how Bashisms in #!/bin/sh scripts aren't necessarily bugs, FiL wrote:

If you gonna use bashism in your script why don't you make it clear in the header specifying #!/bin/bash instead [of] #!/bin/sh? [...]

One of the historical hard problems for Unix portability is people writing non-portable code without realizing it, and Bourne shell code is no exception. This is true for even well intentioned people writing code that they want to be portable.

One problem, perhaps the root problem, is that very little you do on Unix will come with explicit (non-)portability warnings and you almost never have to go out of your way to use non-portable features. This makes it very hard to know whether or not you're actually writing portable code without trying to run it on multiple environments. The other problem is that it's often both hard to remember and hard to discover what is non-portable versus what is portable. Bourne shell programming is an especially good example of both issues (partly because Bourne shell scripts often use a lot of external commands), but there have been plenty of others in Unix's past (including 'all the world's a VAX' and all sorts of 64-bit portability issues in C code).

So one answer to FiL's question is that a lot of people are using bashisms in their scripts without realizing it, just as a lot of people have historically written non-portable Unix C code without intending to. They think they're writing portable Bourne shell scripts, but because their /bin/sh is Bash and nothing in Bash warns about these things, the issues sail right by. Then one day you wind up changing /bin/sh to be Dash and all sorts of bits of the world explode, sometimes in really obscure ways.

All of this sounds abstract, so let me give you two examples of accidental Bashisms I've committed. The first, and probably quite common, one is using '==' instead of '=' in '[ ... ]' conditions. Many other languages use == as their string equality check, so at some point I slipped and started using it in 'Bourne' shell scripts. Nothing complained, everything worked, and I thought my shell scripts were fine.

The second I just discovered today. Bourne shell pattern matching allows character classes, using the usual '[...]' notation, and it even has negated character classes. This means that you can write something like the following to see if an argument has any non-number characters in it:

case "$arg" in
   *[^0-9]*) echo contains non-number; exit 1;;
esac

Actually I lied in that code. Official POSIX Bourne shell doesn't negate character classes with the usual '^' character that Unix regular expressions use; instead it uses '!'. But Bash accepts '^' as well. So I wrote code that used '^', tested it, had it work, and again didn't realize that I was non-portable.

(Since having a '^' in your character class is not an error in a POSIX Bourne shell, the failure mode for this one is not a straightforward error.)

This is also a good example of how hard it is to test for non-portability, because even when you use 'set -o posix' Bash still accepts and matches this character class in its way (with '^' interpreted as class negation). The only way to test or find this non-portability is to run the script under a different shell entirely. In fact, the more theoretically POSIX compatible shells you test on the better.

(In theory you could try to have a perfect memory for what is POSIX compliant and not need any testing at all, or cross-check absolutely everything against POSIX and never make a mistake. In practice humans can't do that any more than they can write or check perfect code all the time.)

by cks at October 29, 2014 04:43 AM

Google Webmasters

Tracking mobile usability in Webmaster Tools

Webmaster Level: intermediate

Mobile is growing at a fantastic pace - in usage, not just in screen size. To keep you informed of issues mobile users might be seeing across your website, we've added the Mobile Usability feature to Webmaster Tools.

The new feature shows mobile usability issues we’ve identified across your website, complete with graphs over time so that you see the progress that you've made.

A mobile-friendly site is one that you can easily read & use on a smartphone, by only having to scroll up or down. Swiping left/right to search for content, zooming to read text and use UI elements, or not being able to see the content at all make a site harder to use for users on mobile phones. To help, the Mobile Usability reports show the following issues: Flash content, missing viewport (a critical meta-tag for mobile pages), tiny fonts, fixed-width viewports, content not sized to viewport, and clickable links/buttons too close to each other.

We strongly recommend you take a look at these issues in Webmaster Tools, and think about how they might be resolved; sometimes it's just a matter of tweaking your site's template! More information on how to make a great mobile-friendly website can be found on our Web Fundamentals website (with more information to come soon).

If you have any questions, feel free to join us in our webmaster help forums (on your phone too)!

by Google Webmaster Central at October 29, 2014 04:22 AM



ZendCon - Deep Dive into Browser Performance

My slides from ZendCon 2014 about "Deep Dive into Browser Performance" are now available for download.

by Ilia Alshanetsky at October 29, 2014 12:11 AM

October 28, 2014

Everything Sysadmin

Apple Pay and CurrentC

I predict that one year from today CurrentC won't be up and running and, in fact, history will show it was just another attempt to stall and prevent any kind of mobile payment system in the U.S. from being a success. I'm not saying that there won't be NFC payment systems, just that they'll be marginalized and virtually useless as a result.

October 28, 2014 08:28 PM


Oracle Open World 2014

This was my first time attending OOW and I must say I did like it. What a big marketing extravaganza...! It wasn't all marketing though, there were many interesting technical sessions too, but most importantly it was a big opportunity to meet with Oracle engineers and management, both the people I met in the past and some new faces I only exchanged emails with. It was also interesting to talk to other customers and see what they've been up to, and some of them are doing very interesting things. On Sunday there was also a Solaris Customer Advisory Board meeting which was very interesting.

One of the things that surprised me was how many other vendors were present there, having their stands - virtually everyone, from startups to large tier one vendors. I guess it is a good opportunity for everyone to meet with their customers (and potentially new customers).

I also presented there on how we use Solaris and took part in a Solaris Customer Panel - both were good experiences.

For more details see Markus Flierl's post.

by milek at October 28, 2014 12:27 PM

Chris Siebenmann

My current somewhat tangled feelings on operator.attrgetter

In a comment on my recent entry on sort comparison functions, Peter Donis asked a good question:

Is there a reason you're not using operator.attrgetter for the key functions? It's faster than a lambda.

One answer is that until now I hadn't heard of operator.attrgetter. Now that I have it's something I'll probably consider in the future.

But another answer is embedded in the reason Peter Donis gave for using it. Using operator.attrgetter is clearly a speed optimization, but speed isn't always the important thing. Sometimes, even often, the most important thing to optimize is clarity. Right now, for me attrgetter is less clear than the lambda approach because I've just learned about it; switching to it would probably be a premature optimization for speed at the cost of clarity.

In general, well, 'attrgetter' is a clear enough thing that I suspect I'll never be confused about what 'lst.sort(key=operator.attrgetter("field"))' does, even if I forget about it and then reread some code that uses it; it's just pretty obvious from context and the name itself. There's a visceral bit of me that doesn't like it as much as the lambda approach because I don't think it reads as well, though. It's also more black magic than lambda, since lambda is a general language construct and attrgetter is a magic module function.
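Concretely, the two spellings being compared look like this (a generic illustration with a made-up class; DWiki's actual field names will differ):

```python
import operator

class Entry(object):
    def __init__(self, timestamp):
        self.timestamp = timestamp

entries = [Entry(3), Entry(1), Entry(2)]

# The lambda form: a general language construct, reads as plain code.
entries.sort(key=lambda e: e.timestamp)

# The attrgetter form: same result, typically faster in CPython.
entries.sort(key=operator.attrgetter("timestamp"))

print([e.timestamp for e in entries])  # [1, 2, 3]
```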

(And as a petty thing it has less natural white space. I like white space since it makes things more readable.)

On the whole this doesn't leave me inclined to switch to using attrgetter for anything except performance sensitive code (which these sort()s aren't so far). Maybe this is the wrong decision, and if the Python community as a whole adopts attrgetter as the standard and usual way to do .sort() key access it certainly will become a wrong decision. At that point I hope I'll notice and switch myself.

(This is in a sense an uncomfortable legacy of CPython's historical performance issues with Python code. Attrgetter is clearly a performance hack in general; if lambda were just as fast, I'd argue that you should clearly use lambda because it's a general language feature instead of a narrowly specialized one.)

by cks at October 28, 2014 04:12 AM


October 27, 2014

Aaron Johnson

What I did this weekend: 10/26/2014

  • Littlest dude had his birthday this weekend so we went to Legoland, which was packed but we managed to have a good time, drove home, made my sausage / sweet potato / egg hash for dinner (adults only). Good day.
  • Mommy got sick and stayed in bed so we kept super busy today. Birthday presents kept the vikings satisfied for hours while I made breakfast, mopped, did dishes, bathrooms, mowed the lawns, pulled weeds and then we did a trip to the tire store (exciting!), dropped off the car to get 2 new tires, had lunch at The Griffin, got 2 geocaches in / around Caversham Court and then picked up the car and headed back home. Sort of the opposite of a weekend driving through Iceland.

by ajohnson at October 27, 2014 09:34 PM

Everything Sysadmin

Wait, did you mean Wed the 15th or Thu the 16th?

How many times have you seen this happen?

Email goes out that mentions a date like "Wed, Oct 16". Since Oct 16 is a Thursday, not a Wednesday (this year), there is a flurry of email asking, "Did you mean Wed the 15th or Thu the 16th?" A correction goes out, but the damage is done. Someone invariably "misses the update" and shows up a day early or late, or is otherwise inconvenienced. Either way, cognitive processing is wasted for everyone involved.

The obvious solution is "people should proofread better" but it is a mistake that everyone makes. I see the mistake at least once a month, and sometimes I'm the guilty party.

If someone could solve this problem it would be a big win.

Google's Gmail will warn you if you use the word "attachment" and don't attach a file. Text editing boxes in all modern web browsers and operating systems have some kind of live spell-check that puts a red mark under a word that is misspelled. Some do real-time grammar checking too.

How hard would it be to add a check for "Wed, Oct 16" and similar errors? Yes, there are many date formats, and in some cases one would have to guess the year.
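The core of the check is not hard, at least for one fixed format. A sketch (assuming "Wed, Oct 16"-style dates and a known year; real email text would need far more forgiving parsing):

```python
import datetime

def check_weekday(claimed_day, month, day, year):
    """Return None if the claimed weekday matches the date, else the
    actual weekday name, as a suggested correction."""
    date = datetime.datetime.strptime("%s %d %d" % (month, day, year),
                                      "%b %d %Y")
    actual = date.strftime("%a")  # e.g. 'Thu'
    return None if actual == claimed_day else actual

print(check_weekday("Wed", "Oct", 16, 2014))  # Thu
print(check_weekday("Thu", "Oct", 16, 2014))  # None
```

The hard part, as noted, is recognizing the many date formats in free-form text and guessing the intended year, not the day-of-week arithmetic itself.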

It would also be nice if we could write "FILL, Oct 16" and the editor would fill in the day of the week. Or a context-sensitive menu (i.e. the left click menu) would offer to add the day of the week for you. If the time is included, it should offer to link to

Ok Gmail, Chrome, Apple and Microsoft: Who's going to be the first to implement this?

October 27, 2014 02:28 PM

Yellow Bricks

(Inter-VM) TPS Disabled by default, what should you do?

We’ve all probably seen the announcement around inter-VM(!!) TPS (transparent page sharing) being disabled by default in future releases of vSphere, and the recommendation to disable it in current versions. The reason for this is the fact there was a research paper published which demonstrates how it is possible to get access to data under certain highly controlled conditions. As the KB article describes:

Published academic papers have demonstrated that by forcing a flush and reload of cache memory, it is possible to measure memory timings to determine an AES encryption key in use on another virtual machine running on the same physical processor of the host server if Transparent Page Sharing is enabled. This technique works only in a highly controlled environment using a non-standard configuration.

There were many people who blogged about what the potential impact is on your environment or designs. Typically in the past people would take 20 to 30% memory sharing into account when sizing their environment. With inter-VM TPS disabled, of course, this goes out of the window. Frank described this nicely in this post. However, as Frank also described and I mentioned in previous articles, when large pages are being used (usually the case) TPS is not used by default and only comes into play under pressure…

The under-pressure part is important, if you ask me, as TPS is the first memory reclaiming technique used when a host is under pressure. If TPS cannot sufficiently reduce the memory pressure then ballooning is leveraged, followed by compression and, ultimately, swapping. Personally I would like to avoid swapping at all costs, and preferably compression as well. Ballooning typically doesn’t result in a huge performance degradation so it could be acceptable, but TPS is something I prefer as it just breaks up large pages into small pages and collapses those when possible. Performance loss is hardly measurable in that case. Of course TPS would be way more effective when pages can be shared between VMs rather than just within a VM.

Anyway, the question remains: should you have (inter-VM) TPS disabled or not? When you assess the risk you need to ask yourself first who has access to your virtual machines, as the technique requires you to log in to a virtual machine. Before we look at the scenarios, note that I have mentioned “inter-VM” a couple of times now: TPS is not completely disabled in future versions. It will be disabled for inter-VM sharing by default, but can be enabled. More on that can be found in this article on the vSphere blog.
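For reference, the mechanism VMware documented for re-enabling inter-VM sharing is a host advanced setting. The setting name and values below are from my reading of the KB and vSphere blog post, so verify them against the KB for your exact patch level before relying on them:

```
# 2 (the new default) salts pages per-VM, i.e. no inter-VM sharing;
# 0 restores the old behaviour of sharing identical pages across VMs.
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0
esxcli system settings advanced list -o /Mem/ShareForceSalting   # confirm
```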

Let's explore three scenarios:

  1. Server virtualisation (private)
  2. Public cloud
  3. Virtual Desktops

In the case of “Server virtualisation”, in most scenarios I would expect that only the system administrators and/or application owners have access to the virtual machines. The question then is, why would they go to these lengths when they have access to the virtual machines anyway? So in the scenario where server virtualisation is your use case, and access to your virtual machines is restricted to a limited number of people, I would definitely consider enabling inter-VM TPS.

In a public cloud environment this is of course different. You can imagine that a hacker could buy a virtual machine and try to retrieve the AES encryption key. What the hacker would do with it next is still an open question. Hopefully the cloud provider ensures that the tenants are isolated from each other from a security/networking point of view. If that is the case there shouldn’t be much they could do with it. Then again, it could be just one of the many steps they have to take to break in to a system, so I would probably not want to take the risk, although the risk is low. This is one of the scenarios where I would leave inter-VM TPS disabled.

Third and last scenario is Virtual Desktops. In the case of a virtual desktop many different users have access to virtual machines… The question though is whether you are running, or accessing, applications that leverage AES encryption. I cannot answer that for you, so I will leave that up in the air… you will need to assess that risk.

I guess the answer to whether you should or should not disable (inter-VM) TPS is as always: it depends. I understand why inter-VM TPS was disabled, but if the risk is low I would definitely consider enabling it.

"(Inter-VM) TPS Disabled by default, what should you do?" originally appeared on Yellow Bricks. Follow me on twitter - @DuncanYB.

Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at October 27, 2014 05:53 AM

Chris Siebenmann

Practical security and automatic updates

One of the most important contributors to practical, real world security is automatically applied updates. This is because most people will not take action to apply security fixes; in fact most people will probably not do so even if asked directly and just required to click 'yes, go ahead'. The more work people have to go through to apply security fixes, the fewer people will do so. Ergo you maximize security fixes when people are required to take no action at all.

(Please note that sysadmins and developers are highly atypical users.)

But this relies on users being willing to automatically apply updates, and that in turn requires that updates must be harmless. The ideal update either changes nothing besides fixing security issues and other bugs or improves the user's life. Updates that complicate the user's life at the same time that they deliver security fixes, like Firefox updates, are relatively bad. Updates that actually harm the user's system are terrible.

Every update that does harm to someone's system is another impetus for people to disable automatic updates. It doesn't matter that most updates are harmless and it doesn't matter that most people aren't affected by even the harmful updates, because bad news is much more powerful than good news. We hear loudly about every update that has problems; we very rarely hear about updates that prevented problems, partly because it's hard to notice when it happens.

(The other really important thing to understand is that mythology is extremely powerful and extremely hard to dislodge. Once mythology has set in that leaving automatic updates on is a good way to get screwed, you have basically lost; you can expect to spend huge amounts of time and effort persuading people otherwise.)

If accidentally harmful updates are bad, actively malicious updates are worse. An automatic update system that allows malicious updates (whether the maliciousness is the removal of features or something worse) is one that destroys trust in it and therefore destroys practical security. As a result, malicious updates demand an extremely strong and immediate response. Sadly they often don't receive one, and especially when the 'update' removes features it's often even defended as a perfectly okay thing. It's not.

PS: corollaries for, say, Firefox and Chrome updates are left as an exercise to the reader. Bear in mind that for many people their web browser is one of the most crucial parts of their computer.

(This issue is why people are so angry about FTDI's malicious driver appearing in Windows Update (and FTDI has not retracted their actions; they promise future driver updates that are almost as malicious as this one). It's also part of why I get so angry when Unix vendors fumble updates.)

by cks at October 27, 2014 05:42 AM

Google Webmasters

Updating our technical Webmaster Guidelines

Webmaster level: All

We recently announced that our indexing system has been rendering web pages more like a typical modern browser, with CSS and JavaScript turned on. Today, we're updating one of our technical Webmaster Guidelines in light of this announcement.

For optimal rendering and indexing, our new guideline specifies that you should allow Googlebot access to the JavaScript, CSS, and image files that your pages use. Disallowing crawling of JavaScript or CSS files in your site’s robots.txt directly harms how well our algorithms render and index your content, and can result in suboptimal rankings.

Updated advice for optimal indexing

Historically, Google indexing systems resembled old text-only browsers, such as Lynx, and that’s what our Webmaster Guidelines said. Now, with indexing based on page rendering, it's no longer accurate to see our indexing systems as a text-only browser. Instead, a more accurate approximation is a modern web browser. With that new perspective, keep the following in mind:

  • Just like modern browsers, our rendering engine might not support all of the technologies a page uses. Make sure your web design adheres to the principles of progressive enhancement as this helps our systems (and a wider range of browsers) see usable content and basic functionality when certain web design features are not yet supported.
  • Pages that render quickly not only help users get to your content easier, but make indexing of those pages more efficient too. We advise you to follow the best practices for page performance optimization; specifically, make sure your server can handle the additional load of serving JavaScript and CSS files to Googlebot.
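To illustrate the guideline with a hypothetical robots.txt (the paths are made-up examples, not anything from the announcement): the first, commented-out pattern is the kind that hides rendering assets from Googlebot, and the rules below it keep those assets crawlable:

```
# Harmful pattern to avoid:
#   Disallow: /js/
#   Disallow: /css/

User-agent: *
Allow: /js/
Allow: /css/
Allow: /images/
Disallow: /private/
```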

Testing and troubleshooting

In conjunction with the launch of our rendering-based indexing, we also updated the Fetch and Render as Google feature in Webmaster Tools so webmasters could see how our systems render the page. With it, you'll be able to identify a number of indexing issues: improper robots.txt restrictions, redirects that Googlebot cannot follow, and more.

And, as always, if you have any comments or questions, please ask in our Webmaster Help forum.

by Google Webmaster Central at October 27, 2014 03:56 AM


October 26, 2014

Rands in Repose

FriendDA v2?

If you don’t care about the FriendDA you should stop reading this now.

The FriendDA was written in late 2008 as an experiment to place a smidge of formality on the discussion of perceived precious ideas. Happily, the FriendDA has legs. Since the original publication there has been a small but steady flow of traffic to the site, a group of folks have taken up the task of translating the document into various languages (FriendDA in Hebrew), and I’ve made small changes to the original text, none of which I believe have changed the original intent.

I was recently approached with the idea that, with small modifications, the FriendDA could offer some legal protection. While this contradicts one of the core tenets of the FriendDA, the idea of increasing its usefulness is appealing to me.

The FriendDA really isn’t for me. It’s intended to be a useful social tool for others who want to stop the moment before they share their important idea and say, “FriendDA?”, with the response being either “What’s that?” or “Of course.” In the “What’s that?” case, the point is to talk about the intent of the FriendDA, which briefly is:

  • I’m disclosing a bright idea to you

  • Don’t screw me

  • Or else

None of the consequences are intended to be legal – they’re intended to be social. The point of the FriendDA is for us humans to learn to deal with our trust issues sans legal intervention. Still, the idea of giving the FriendDA more teeth is interesting to me and I want your opinion.

There are three changes (in bold) being proposed to the .7 version of the FriendDA. The first two are:

  • Line 9:  Adapting some or all of The Idea for your own purposes unless I say you can.
  • Line 15: The term of this agreement shall continue until I or someone I authorize makes The Idea public.

I have no issue with either of these changes. The first prevents the Advisor from adapting The Idea for someone else’s purposes, which would be nefarious and screw-ish. The second change makes it clear that the term lasts until the Keeper of the Idea takes the Idea public.

The last change is the big one:

Line 18: 

  • Was: This agreement has absolutely no legal binding. However, upon breach or violation of the agreement, I will feel free to do any of the following:
  • Proposed: This agreement may possibly have some amount of legal binding. However, it is likely that upon breach or violation of the agreement, I will do no more than any of the following:

The paragraph from the introductory article that this change directly contradicts is:

The FriendDA is a non-binding, warm blanket agreement that offers absolutely no legal protection. I’d suggest if the idea of legal protection is even crossing your mind that the FriendDA is totally inappropriate for your current needs.

A legal advisor suggests, 

Therefore, while the doc at does say that “This agreement has absolutely no legal binding….” – it actually might. It has all the parts required for a binding contract, namely mutual promises (which can be seen as consideration) and an intent for the parties to be in agreement about what can happen if the agreement is breached, even if they’re mostly psychological. And the agreement also has definite terms and obligations on both parties. Therefore, noting that it might be legally binding makes it a positive to state what will happen if the Advisor breaches. 

Again, the proposed change gives the FriendDA slightly more teeth and if you care about the FriendDA, I’d like your opinion on this last change. Once I’ve gathered enough signal, I’ll update the site along with a new subsection which tracks changes from version to version.

Thank you,

by rands at October 26, 2014 07:00 PM


Chris Siebenmann

Things that can happen when (and as) your ZFS pool fills up

There's a shortage of authoritative information on what actually happens if you fill up a ZFS pool, so here is what I've both gathered about it from other people's information and experienced.

The most often cited problem is bad performance, with the usual cause being ZFS needing to do an increasing amount of searching through ZFS metaslab space maps to find free space. If not all of these are in memory, a write may require pulling some or all of them into memory, searching through them, and perhaps finding not enough space. People cite various fullness thresholds for this starting to happen, eg anywhere from 70% full to 90% full. I haven't seen any discussion about how severe this performance impact is supposed to be (and on what sort of vdevs; raidz vdevs may behave differently than mirror vdevs here).

(How many metaslabs you have turns out to depend on how your pool was created and grown.)

A nearly full pool can also have (and lead to) fragmentation, where the free space is in small scattered chunks instead of large contiguous runs. This can lead to ZFS having to write 'gang blocks', which are a mechanism where ZFS fragments one large logical block into smaller chunks (see eg the mention of them in this entry and this discussion which corrects some bits). Gang blocks are apparently less efficient than regular writes, especially if there's a churn of creation and deletion of them, and they add extra space overhead (which can thus eat your remaining space faster than expected).

If a pool gets sufficiently full, you stop being able to change most filesystem properties; for example, to set or modify the mountpoint or change NFS exporting. In theory it's not supposed to be possible for user writes to fill up a pool that far. In practice all of our full pools here have resulted in being unable to make such property changes (which can be a real problem under some circumstances).

You are supposed to be able to remove files from a full pool (possibly barring snapshots), but we've also had reports from users that they couldn't do so and their deletion attempt failed with 'No space left on device' errors. I have not been able to reproduce this and the problem has always gone away on its own.

(This may be due to a known and recently fixed issue, Illumos bug #4950.)

I've never read reports of catastrophic NFS performance problems for all pools or total system lockup resulting from a full pool on an NFS fileserver. However both of these have happened to us. The terrible performance issue only happened on our old Solaris 10 update 8 fileservers; the total NFS stalls and then system lockups have now happened on both our old fileservers and our new OmniOS based fileservers.

(Actually let me correct that; I've seen one report of a full pool killing a modern system. In general, see all of the replies to my tweeted question.)

By the way: if you know of other issues with full or nearly full ZFS pools (or if you have additional information here in general), I'd love to know more. Please feel free to leave a comment or otherwise get in touch.

by cks at October 26, 2014 05:36 AM

Keepalived notify script, execute action on failover

Keepalived supports running scripts on VRRP state change. This can come in handy when you need to execute an action when a failover occurs. In my case, I have a VPN running on a Virtual IP and want to make sure the VPN only runs on the node with the Virtual IP.
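The shape of such a setup is roughly the following (the interface, VIP, and script path are made-up examples, not the post's actual config). keepalived invokes the notify script on every state change with three arguments: the type (INSTANCE or GROUP), the instance name, and the new state (MASTER, BACKUP or FAULT):

```
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.10/24
    }
    # Called as: <script> INSTANCE VI_1 <MASTER|BACKUP|FAULT>
    notify /usr/local/bin/vpn-failover.sh
}
```

The script then starts the VPN when the third argument is MASTER and stops it otherwise, so the VPN only ever runs on the node holding the virtual IP.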

October 26, 2014 12:00 AM

October 25, 2014


Trouble with tribbles

Tribblix progress

I recently put out a Milestone 12 image for Tribblix.

It updates illumos, built natively on Tribblix. There's been a bit of discussion recently about whether illumos needs actual releases, as opposed to being continuously updated. It doesn't have releases, so when I come to make a Tribblix release I simply check out the current gate, build, and package it. After all, it's supposed to be ready to ship at any time.

Note that I don't maintain a fork of illumos-gate, I build it essentially as-is. This is the same for all the components I build for Tribblix - I keep true to unmodified upstream as much as possible.

The one change I have made is to SVR4 packaging. I've removed the dependency on openssl and wanboot (bug #5188), which is a good thing. It means that you can't use signed SVR4 packages, but I've never encountered one. Nor can pkgadd now directly retrieve a package via http, but the implementation via wanboot was spectacularly dire, and you're much better off using curl or wget, which allows proper repository management (as zap does). Packaging is a little quicker now, but this change also makes it much easier to update openssl in future (it's difficult to update something your packaging system is linked against).

Tribblix is now firmly committed to gcc4 (as opposed to the old gcc3 in OpenSolaris). I've rebuilt gcc to fix visibility support. If you've ever seen 'warning: visibility attribute not supported in this configuration' then you'll have stumbled across this. Basically, you need to ensure objdump is found during the gcc build - either by making sure it's in the path or by setting OBJDUMP to point to it.

I've added a new style of zones - alternate root zones. These are sparse root zones, but instead of inheriting from the global zone you can use an alternate installed image. More on that later.

There's the usual slew of updates to various packages, including the obviously sensitive bash and openssl.

There's an interesting fix to python. I put software that might come in multiple versions underneath /usr/versions and use symlinks so that applications can be found in the normal locations. Originally, /usr/bin/python was a symlink to ../versions/python-x.y.z/bin/python. This works fine most of the time. However, if you called it as /bin/python it couldn't find its modules, so the symlink has to be ../../usr/versions/python-x.y.z/bin/python, which makes things work as desired.
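The difference is pure textual path arithmetic, which is easy to check (the version number below is illustrative; on these systems /bin is itself a symlink to /usr/bin, so the interpreter can be reached from either base):

```python
# Textual resolution of the two symlink targets from both base
# directories. Via /bin, the old target escapes /usr entirely.
import os.path

old = "../versions/python-2.7.8/bin/python"
new = "../../usr/versions/python-2.7.8/bin/python"

for base in ("/usr/bin", "/bin"):
    print(base, "old ->", os.path.normpath(os.path.join(base, old)))
    print(base, "new ->", os.path.normpath(os.path.join(base, new)))
```

The old target resolves to /versions/... when reached via /bin, while the new ../../usr/versions form lands on /usr/versions from both bases.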

The package catalogs now contain package sizes and checksums, allowing verification of downloaded packages. I need to update zap to actually use this data, and to retry or resume failed or incomplete downloads. (It's a shame that curl doesn't automatically resume incomplete downloads the way that wget does.)

At a future milestone, upgrades will be supported (regular package updates have worked for a while, I'm talking about a whole distro upgrade here). It's possible to upgrade by hand already, but it requires a few extra workarounds (such as forcing postremove scripts to always exit 0) to make it work properly. I've got most of the preparatory work in place now. Upgrading zones looks a whole lot more complicated, though (and I haven't really seen it done well elsewhere).

Now, off to work on the next update.

by Peter Tribble at October 25, 2014 11:27 PM


Carl's Whine Rack

CentOS 7, openssh/openssl

Yesterday I finally gave CentOS 7 a try as a Virtualbox VM. (In the following, when I talk about a guest or a host, it's in the virtualization vernacular.)

I did what I usually do with VB guests: I gave it two network interfaces. The first is configured as NAT, so that the guest can reach the internet without the host needing a second IP for a bridged interface (bridging would be fine at home, but might cause me some trouble at work). The second is configured as host-only with a static IP, so that the host (and other guests) can initiate to the guest. (There’s probably a much easier way of doing this, but it’s worked so far.)

My CentOS experience is primarily with CentOS 5, and several things were really different in C7. (CentOS and Red Hat documentation is typically pretty good and will no doubt help me through some of the following. These are just some of the things I’m stumbling on at the moment.)

There’s no /etc/cron.daily/rpm, which creates a list of packages in /var/log/rpmpkgs. I use that a lot, so I copied that over from a C5 box.

I had a pretty hard time with networking. Neither interface seemed to come up on its own at first. I had to set ONBOOT=yes in the corresponding /etc/sysconfig/network-scripts files, and then the second interface mangled the first interface’s NAT connection. I ended up setting ONBOOT to yes for the first interface (the NAT connection) and to no for the second (host-only) interface. I put an ifconfig statement in rc.local to bring up the second interface, and that (eventually) worked.

ifconfig, netstat, and probably a bunch of other useful stuff is in the net-tools package, which isn’t included in a minimal install. And although there’s an rc.local, it’s not executable, and won’t run at boot until you “chmod +x” the thing.

And the interface names are now really weird. Instead of something memorable, traditional, predictable, and sensible like eth0 and eth1, now they are called enp0s3 and enp0s8. (I just had to look those up, because I couldn’t remember them.)

The new C7 guest has a very long list of iptables rules, but /etc/sysconfig/iptables doesn’t exist, so I don’t know where those rules are coming from. Thankfully port 22 is open by default, but I don’t like to run openssh on the default port, so at some point I’ll need to figure out how to fiddle with iptables rules.
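A likely explanation, offered as a guess: C7's default firewall is firewalld, which generates its iptables rules at runtime rather than reading /etc/sysconfig/iptables. If that's the case, moving sshd to an alternate port would look something like this (2222 is just an example port, and the semanage step assumes SELinux is enforcing):

```
firewall-cmd --permanent --add-port=2222/tcp
firewall-cmd --reload
firewall-cmd --list-all                 # inspect the active zone's rules
# With SELinux enforcing, sshd also needs the new port labelled:
# semanage port -a -t ssh_port_t -p tcp 2222
```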

I use GNU screen all the time. (I know the cool kids like tmux, but, frankly, screw them.) I typically have a screen session in which I’m logged in to several different hosts, and each window is named for the remote host. C7 rewrites the window name in screen, so “Ctrl-A ‘, hostname” no longer works. I don’t know if I need to (somehow) tell screen (on my Ubuntu host) not to allow the window process to rewrite the window title, or if I need to (somehow) tell bash in the C7 guest to be less assertive.
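One guess at the culprit: C7's /etc/bashrc sets PROMPT_COMMAND to emit a screen window-title escape when TERM looks like screen*. If that's what's happening, overriding it on the guest is the least invasive fix:

```
# In ~/.bashrc on the C7 guest (assumption: /etc/bashrc is setting this)
PROMPT_COMMAND=
```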

I’m also having some trouble building openssh from source in C7. The version of openssh that comes with C5 lacks some desirable features in the newer versions, so we tend to build it from source. In just the last version or two of openssh, something has changed such that it won’t build against the version of openssl that comes with C5. So the other day (before messing with C7) I built the newest version of openssl on a C5 box and built openssh against that. That worked, but I see now that by default openssl doesn’t create shared libraries, so the openssh I built linked to openssl statically (which made sshd nearly three times bigger than a dynamically-linked sshd).

So far I’ve been unable to build openssh against a source-built openssl on C7. I get one error if I try to link statically, and another error if I try to link dynamically. The version of openssl that comes with C7 is pretty current, so I could just build against that and probably have no problem. Likewise I could just use C7’s version of openssh. But although I’ve enjoyed the stability of C5, everything about it is pretty old at this point. I think that in the future I’d like to build many or all network services from source. Since openssh, apache, stunnel, and several others need openssl, I’d like to keep that current, too.

So I have some work ahead of me. I think C5 hits end-of-life some time in 2017, so I’ve got some time, but the C5 EOL will probably sneak up on me if I let it.

by mbrisby at October 25, 2014 12:34 PM

Geeking with Greg

At what point is an over-the-air TV antenna too long to be legal?

You can get over-the-air HDTV signals using an antenna. This antenna gets a better, stronger signal with less interference if it is direct line-of-sight and as near as possible to the broadcast towers. So, you might want an antenna that is up high or even some distance away to get the best signal.

But if you try to do this, you immediately run into a question: At what point does that antenna become too long to be legal or the signal from the antenna is transmitted in a way where it is no longer legal?

Let's say I put an antenna behind my TV hooked up with a wire. That's obviously legal and what many people currently do.

Let's say I put an antenna outside on top of a tree or my garage and run a wire inside. Still seems obviously legal.

Let's say I put an antenna on top of my roof. Still clearly fine.

Let's say I put it on my neighbor's roof and run a wire to my TV. Still ok?

Let's say I put the antenna on my neighbor's roof, but have the antenna connect to my WiFi network and transmit the signal using my local area network instead of using a direct wired cable connection. Still ok?

Let's say I put the antenna on my neighbor's roof, but have the antenna connect to my neighbor's WiFi network and transmit the signal over their WiFi, over the internet, then to my WiFi, instead of using a direct wired cable connection. Still ok?

Let's say I put my antenna on my neighbor's roof, but my neighbor won't do this for free. I have to pay a small amount of rent to my neighbor for the space on his roof used by my antenna. I also have the antenna connect to my neighbor's WiFi network and transmit its signal over their WiFi, over the internet, then to my WiFi, instead of using a direct wired cable connection. Still ok?

Let's say, like before, I put my antenna on my neighbor's roof, pay the neighbor rent for the space on his roof, use the internet to transmit the antenna's signal. But, this time, I buy the antenna from my neighbor at the beginning (and, like before, I own it now). Is that okay?

Let's say I put my antenna on my neighbor's roof, pay the neighbor rent for the space on his roof, use the internet to transmit the antenna's signal, but now I rent or lease the antenna from my neighbor. Still ok? If this is not ok, which part is not ok? Is it suddenly ok if I replace the internet connection with a direct microwave relay or hardwired connection?

Let's say I do all of the last one, but use a neighbor's roof three houses away. Still ok?

Let's say I do all of the last one, but use a roof on a building five blocks away. Still ok?

Let's say I rent an antenna on top of a skyscraper in downtown Seattle and have the signal sent to me over the internet. Not ok?

The Supreme Court recently ruled Aereo is illegal. Aereo put small antennas in a building and rented them to people. The only thing they did beyond the last scenario above is time-shifting: they would not necessarily send the signal from the antenna immediately, but instead store it and transmit it only when demanded.

You might think it's the time shifting that's the problem, but that didn't seem to be what the Supreme Court said. Rather, they said the intent of the 1976 amendments to US copyright law prohibit community antennas (which is one antenna that sends its signal to multiple homes), labelling those a "public performance". They said Aereo's system was similar in function to a community antenna, despite actually having multiple antennas, and violated the intent of the 1976 law.

So, the question is, where is the line? Where does my antenna become too distant, transmit using the wrong methods, or involve too many payments to third parties in the operation of the antenna that it becomes illegal? Can it not be longer than X meters? Not transmit its signal in particular ways? Not require rent for the equipment or space on which the antenna sits? Not store the signal at the antenna and transmit it only on demand? What is the line?

I think this question is interesting for two reasons. First, as an individual, I would love to have a personal-use over-the-air HDTV antenna that gets a much better reception than the obstructed and inefficient placement behind my TV, but I don't know at what point it becomes illegal for me to place an antenna far away from the TV. Second, I suspect many others would like a better signal from their HDTV antenna too, and I'd love to see a startup (or any group) that helped people set up these antennas, but it is very unclear what it might be legal for a startup to do.


by Greg Linden at October 25, 2014 09:22 AM

Chris Siebenmann

The difference in available pool space between zfs list and zpool list

For a while I've noticed that 'zpool list' would report that our pools had more available space than 'zfs list' did and I've vaguely wondered about why. We recently had a very serious issue due to a pool filling up, so suddenly I became very interested in the whole issue and did some digging. It turns out that there are two sources of the difference depending on how your vdevs are set up.

For raidz vdevs, the simple version is that 'zpool list' reports more or less the raw disk space before the raidz overhead while 'zfs list' applies the standard estimate that you expect (ie that N disks worth of space will vanish for a raidz level of N). Given that raidz overhead is variable in ZFS, it's easy to see why the two commands are behaving this way.

In addition, in general ZFS reserves a certain amount of pool space for various reasons, for example so that you can remove files even when the pool is 'full' (since ZFS is a copy on write system, removing files requires some new space to record the changes). This space is sometimes called 'slop space'. According to the code this reservation is 1/32nd of the pool's size. In my actual experimentation on our OmniOS fileservers this appears to be roughly 1/64th of the pool and definitely not 1/32nd of it, and I don't know why we're seeing this difference.
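To put concrete numbers on those two fractions (the pool size here is a made-up example; the fractions are the ones discussed above):

```python
# Quick arithmetic on the two reservation figures for an example
# 10 TiB pool: 1/32 per the code's comments, ~1/64 as observed.
TIB = 2 ** 40
pool_size = 10 * TIB

slop_per_code = pool_size / 32   # what the code says is reserved
slop_observed = pool_size / 64   # roughly what we see on our fileservers

print(slop_per_code / 2 ** 30, "GiB")  # prints 320.0 GiB
print(slop_observed / 2 ** 30, "GiB")  # prints 160.0 GiB
```

The factor-of-two gap between those numbers is exactly the unexplained difference.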

(I found out all of this from a Ben Rockwood blog entry and then found the code in the current Illumos codebase to see what the current state was (or is).)

The actual situation with what operations can (or should) use what space is complicated. Roughly speaking, user level writes and ZFS operations like 'zfs create' and 'zfs snapshot' that make things should use the 1/32nd reserved space figure, file removes and 'neutral' ZFS operations should be allowed to use half of the slop space (running the pool down to 1/64th of its size), and some operations (like 'zfs destroy') have no limit whatever and can theoretically run your pool permanently and unrecoverably out of space.

The final authority is the Illumos kernel code and its comments. These days it's on Github so I can just link to the two most relevant bits: spa_misc.c's discussion of spa_slop_shift and dsl_synctask.h's discussion of zfs_space_check_t.

(What I'm seeing with our pools would make sense if everything was actually being classified as an 'allowed to use half of the slop space' operation. I haven't traced the Illumos kernel code at this level so I have no idea how this could be happening; the comments certainly suggest that it isn't supposed to be.)

(This is the kind of thing that I write down so I can find it later, even though it's theoretically out there on the Internet already. Re-finding things on the Internet can be a hard problem.)

by cks at October 25, 2014 06:06 AM

October 24, 2014

Ubuntu Geek

New Features in Ubuntu 14.10 Desktop and Server

Codenamed "Utopic Unicorn", 14.10 continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.
Read the rest of New Features in Ubuntu 14.10 Desktop and Server (673 words)

© ruchi for Ubuntu Geek, 2014.

by ruchi at October 24, 2014 11:16 PM

Aaron Johnson

Day 9 and 10: Borgarnes, Mosfellsbær, Bláa Lónið and Reykjavík: Iceland

Woke up not too early at Sundabakki Guesthouse, had a nice breakfast and enjoyed homemade “mama” cakes from the owner of the house who has 5 of her own children and 14 grandchildren, 13 of which are boys so we got extra special nice treatment. Hit the road at 8:30ish so that we could get back to Reykjavík with a chance to see some of the sights in the city.

First stop on the way back was at a museum in Borgarnes called the Settlement Center, which had two very nice walk-through exhibits with audio guides that our band of vikings didn’t make it completely through. We did have a very nice snack break in their coffee shop while the vikings played on the floor.

Next, Karen really wanted to get an Icelandic sweater so she found THE place (called Álafoss) in a little town called Mosfellsbær, which turned out to be a really neat stop. She shopped while I walked the vikings around, and then we (the vikings) discovered a shop where a guy (Palli Kristjánsson) made and sold all kinds of custom knives, which was really interesting for me and the oldest, not so much the little ones, who just wanted to put sheep horns on their heads:

I ended up buying a really beautiful Santoku knife with a handle made from the horn of a reindeer, the hoof of an Icelandic horse, ebony and marbled padauk. I’m keeping it in a box until we get back home.

We finally found Mommy, who got her sweater, and then piled back into the car to drive the rest of the way to Reykjavík. The Hvalfjörður tunnel was closed for re-paving, which added an hour or so to the drive, but we got back to Reykjavík in the afternoon, checked back into our hotel, and drove over to the Perlan to look out over all of Reykjavík:

and also had ice cream. :)

Finally, on our last drive we headed out to jump into the tourist trap that is Bláa Lónið (The Blue Lagoon) which is about 40 minutes from downtown but was well worth it for the weary travelers:

and then ended up back in the city for dinner at Íslenski barinn, which was fantastic.

On our last day (Sunday), we had breakfast, returned the rental car and then walked around downtown, checking out Sólfar (Sun Voyager):

visiting the Maritime Museum (free for adults with kids; not highly recommended, but nice if you need to waste an hour before you have to get on a plane), and getting hot dogs (which are apparently some kind of Icelandic specialty but weren’t any better than what you’d get in Chicago):

and then spending our last bits of cash at a crepe shop which just happened to arrive with a giant pile of ice cream:

All told, a great trip, highly recommended, even in October, although I think we got really lucky with the weather. I’d love to go back in July or August and hike around some glaciers and spend some time in the highlands, maybe in a couple of years.


  • Museums : 2
  • Hot dogs : 5
  • Giant piles of ice cream: 2
  • Geocaches: 0!

by ajohnson at October 24, 2014 07:33 PM

Byron Miller

#DOES14 Conference Notes

DevOps is a thing in the Enterprise, and DevOps Enterprise Summit #DOES14 certainly made the case, showing organizations such as Disney, GE, Target and groups within the US Government working on DevOps-styled initiatives. I got home super late and I’m super drained from too much allergy/sinus meds, but I wanted to share some thoughts here and […]

by byronm at October 24, 2014 04:36 PM

Administered by Joe. Content copyright by their respective authors.