Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

April 24, 2014

Aaron Johnson

Google Blog

Explore new careers with the first virtual Take Your Classroom to Work Day

For 21 years, Take Your Child To Work Day has helped kids understand what moms and dads do all day after they leave the house. And even if kids don't realize it at the time, it also serves an important role in helping youngsters learn about what kinds of jobs they could do when they grow up. Unfortunately, not all kids are lucky enough to get these opportunities.

Today, we’re giving kids everywhere a chance to “visit” some of the world’s most exciting workplaces. Working with Forbes, Connected Classrooms is hosting 18 virtual field trips to places like the Georgia Aquarium, the Metropolitan Museum of Art, the Stanford National Lab and the Chicago Bulls locker room, using Google Hangouts. Professionals from all walks of life will discuss their day-to-day roles and how they got there, so students—regardless of budget or geography—can be exposed to a wide range of careers and get excited about their future.

The full list of events is available on Forbes’ site, but here’s a preview of what you can expect:


We hope you’ll tune in at 6am PT for the first career hangout, and check out Connected Classrooms throughout the day for new, live field trips.

by Emily Wood (noreply@blogger.com) at April 24, 2014 06:30 AM

Chris Siebenmann

How Yahoo's and AOL's DMARC 'reject' policies affect us

My whole interest in understanding DMARC started with the simple question of how Yahoo's and AOL's change to a DMARC 'reject' policy would affect us and our users, and how much of an effect it would have. The answer turns out to be that it will have some effects but nothing major.

The most important thing is that this change doesn't significantly affect either our users forwarding their email to places that pay attention to DMARC or our simple mailing lists because neither of them normally modify email on the way through (which means the DKIM signatures stay intact, which means that email really from Yahoo or AOL will still pass DMARC at the eventual destination). Of course it's possible that some people are forwarding email in ways that modify the message and thus may have problems, but if so they're doing something out of the ordinary; our simple mail forwarding doesn't do this.

(We allow users to run programs from their .forward files, so people can do almost arbitrarily complex things if they want to.)

There is one exception to this. Email that our commercial anti-spam system detects as being either spam or a virus has its Subject: header modified, which will invalidate any previously valid DKIM signature, which means that it will fail to forward through us to DMARC-respecting places (such as GMail). This would only affect people who forward all email (not just non-spam email), and then only if the email was legitimately from Yahoo or AOL in the first place (and got scored or mis-scored as spam). I think that this is a sufficiently small thing that I'm not worried about it, partly because places like GMail now seem to be even stricter than our anti-spam system is, so some percentage of potentially dodgy email is already not being forwarded successfully.

People who forward their email to DMARC-respecting places will be affected in one additional way. The simple way to put it is that our forwarding is now imperfect, in that we'll accept some legitimate messages but can't forward them successfully. These would be emails from legitimate Yahoo or AOL users that were either sent from outside those places or that got modified in transit by, eg, mailing lists. A user who forwards their email to GMail is now losing these emails more or less silently (to the user). In extreme cases it's possible that they'll get unsubscribed from a mailing list due to these bounces.

This also affects any local user who was sending email out through our local mail gateway using their AOL or Yahoo From: address. To put it one way, I don't think we have very many people in this situation and I don't think that they'll have many problems fixing their configurations to work again.

(I'd like to monitor the amount of forwarding rejections but I can't think of a good way to dig the information out of our Exim logs, since mailing lists generally change the envelope sender address. This makes it tempting to have our inbound SMTP gateway do DMARC checks purely so I can see how many incoming messages fail them.)

PS: writing this entry has been a useful exercise in thinking through the full implications of our setup, as I initially forgot that our anti-spam filtering would invalidate DKIM signatures under some circumstances.

by cks at April 24, 2014 03:12 AM

April 23, 2014

Yellow Bricks

PernixData feature announcements during Storage Field Day


During Storage Field Day today, PernixData announced a whole bunch of features that they are working on and that will be released in the near future. In my opinion there were four major features announced:

  • Support for NFS
  • Network Compression
  • Distributed Fault Tolerant Memory
  • Topology Awareness

Let's go over these one by one:

Support for NFS is something I can be brief about, I guess, as it is what it says it is. It is something that has come up multiple times in conversations on Twitter around Pernix, and it looks like they have managed to solve the problem and will support NFS in the near future. One thing I want to point out: PernixData does not introduce a virtual appliance in order to support NFS, nor does it create an NFS server and proxy the IOs. Sounds like magic, right… Nice work guys!

It gets way more interesting with Network Compression. What is it, and what does it do? Network Compression is an adaptive mechanism that looks at the size of an IO and analyzes whether it makes sense to compress the data before replicating it to a remote host. As you can imagine, especially with larger block sizes (64K and up), this could significantly reduce the amount of data transferred over the network. When talking to PernixData, one of the questions I had was: what about the performance and overhead? Give me some details. This is what they came back with as an example:

  • Write back with local copy only = 2700 IOps
  • Write back + 1 replica = 1770 IOps
  • Write back + 1 replica + network compression = 2700 IOps

As you can see, the number of IOps went down when a remote replica was added. However, it went back up to “normal” values when network compression was enabled; of course, this test was conducted using large block sizes. When it came to CPU overhead, it was mentioned that the overhead has so far been demonstrated to be negligible. You may ask yourself why, and it is fairly simple: the cost of compression is offset by the lower network transfer requirements, resulting in equal performance. What also helps here is that the mechanism is adaptive and does a cost/benefit analysis before compressing. So if you are doing 512 byte or 4KB IOs then network compression will not kick in, keeping the overhead low and the benefits high!
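
To illustrate the kind of adaptive cost/benefit decision described above — purely a conceptual sketch, not PernixData's actual implementation, and the block-size threshold and savings cutoff are made-up numbers — it could look something like this:

    # Conceptual sketch only: NOT PernixData's implementation, just an
    # illustration of an adaptive "compress only when it pays off" decision
    # before replicating an IO. Threshold and 10% cutoff are assumptions.
    import zlib

    LARGE_BLOCK_THRESHOLD = 32 * 1024   # assumed cutoff; small IOs skip compression

    def replicate(block, send):
        """Send one IO block to a remote replica, compressing it only when the
        block is large and a quick compression attempt actually saves space."""
        if len(block) < LARGE_BLOCK_THRESHOLD:
            send(block, compressed=False)          # 512B/4KB IOs: not worth the CPU
            return
        candidate = zlib.compress(block, 1)        # cheap, fast compression level
        if len(candidate) < len(block) * 0.9:      # only ship it if we save >10%
            send(candidate, compressed=True)
        else:
            send(block, compressed=False)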

I personally got really excited about this feature: DFTM = Distributed Fault Tolerant Memory. Say what? Yes, distributed fault tolerant memory! FVP, besides virtualizing flash, can now also virtualize memory and create an aggregated pool of resources out of it for caching purposes. Or, to put it more simply: what they allow you to do is reserve a chunk of host memory as virtual machine cache. Once again this happens at the hypervisor level, so there is no requirement to run a virtual appliance; just enable and go! I do want to point out that there is no “cache tiering” at the moment, but I guess Satyam can consider that as a feature request. Also, when you create an FVP cluster, hosts within that cluster will either provide “flash caching” capabilities or “memory caching” capabilities. This means that technically virtual machines can use “local flash” resources while the remote resources are “memory” based (or the other way around). Personally I would avoid this at all cost, though, as it will give some strange, unpredictable performance results.

So what does this add? Well, crazy performance for instance… We are talking 80K IOps easily, with a nice low latency of 50-200 microseconds. Unlike other solutions, FVP doesn’t restrict the size of your cache either. By default it will recommend using 50% of unreserved capacity per host. Personally I think this is a bit high; as most people do not reserve memory, this will typically result in 50% of your memory being recommended… but fortunately FVP allows you to customize this as required. So if you have 128GB of memory and feel 16GB is sufficient for memory caching, then that is what you assign to FVP.

Another feature that will be added is Topology Awareness. Basically what this allows you to do is group hosts in a cluster and create failure domains. An example may make this a bit easier to grasp: let’s assume you have 2 blade chassis, each with 8 hosts. When you enable “write back caching” you probably want to ensure that your replica is stored on a blade in the other chassis… and that is exactly what this feature allows you to do. Specify replica groups, add hosts to the replica groups, easy as that!

And then specify for your virtual machine where the replica needs to reside. Yes you can even specify that the replica needs to reside within its failure domain if there are requirements to do so, but in the example below the other “failure domain” is chosen.

Is that awesome or what? I think it is, and I am very impressed by what PernixData has announced. For those interested, the SFD video should be online soon, and those who are visiting the Milan VMUG are lucky, as Frank mentioned that he will be presenting on these new features at the event. All in all, an impressive presentation again by PernixData if you ask me… an awesome set of features to be added soon!

<Will add video when released>

"PernixData feature announcements during Storage Field Day" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.

by Duncan Epping at April 23, 2014 09:48 PM

Mark Shuttleworth

U talking to me?

This upstirring undertaking Ubuntu is, as my colleague MPT explains, performance art. Not only must it be art, it must also perform, and that on a deadline. So many thanks and much credit to the teams and individuals who made our most recent release, the Trusty Tahr, into the gem of 14.04 LTS. And after the uproarious ululation and post-release respite, it’s time to open the floodgates to umpteen pent-up changes and begin shaping our next show.

The discipline of an LTS constrains our creativity – our users appreciate the results of a focused effort on performance and stability and maintainability, and we appreciate the spring cleaning that comes with a focus on technical debt. But the point of spring cleaning is to make room for fresh ideas and new art, and our next release has to raise the roof in that regard. And what a spectacular time to be unleashing creativity in Ubuntu. We have the foundations of convergence so beautifully demonstrated by our core apps teams – with examples that shine on phone and tablet and PC. And we have equally interesting innovation landed in the foundational LXC 1.0, the fastest, lightest virtual machines on the planet, born and raised on Ubuntu. With an LTS hot off the press, now is the time to refresh the foundations of the next generation of Linux: faster, smaller, better scaled and better maintained. We’re in a unique position to bring useful change to the ubiquitary Ubuntu developer, that hardy and precise pioneer of frontiers new and potent.

That future Ubuntu developer wants to deliver app updates instantly to users everywhere; we can make that possible. They want to deploy distributed brilliance instantly on all the clouds and all the hardware. We’ll make that possible. They want PAAS and SAAS and an Internet of Things that Don’t Bite; let’s make that possible. If free software is to fulfil its true promise it needs to be useful for people putting precious parts into production, and we’ll stand by our commitment that Ubuntu be the most useful platform for free software developers who carry the responsibilities of Dev and Ops.

It’s a good time to shine a light on umbrageous if understandably imminent undulations in the landscape we love – time to bring systemd to the centre of Ubuntu, time to untwist ourselves from Python 2.x and time to walk a little uphill and, thereby, upstream. Time to purge the ugsome and prune the unusable. We’ve all got our ucky code, and now’s a good time to stand united in favour of the useful over the uncolike and the utile over the uncous. It’s not a time to become unhinged or ultrafidian, just a time for careful review and consideration of business as usual.

So bring your upstanding best to the table – or the forum – or the mailing list – and let’s make something amazing. Something unified and upright, something about which we can be universally proud. And since we’re getting that once-every-two-years chance to make fresh starts and dream unconstrained dreams about what the future should look like, we may as well go all out and give it a dreamlike name. Let’s get going on the utopic unicorn. Give it stick. See you at vUDS.

by mark at April 23, 2014 05:16 PM

Everything Sysadmin

Reddit AMA about LOPSA-East

Ask me and the entire planning committee anything.

Thanks to everyone that participated. You can read the results at the link above.

April 23, 2014 04:28 PM

Standalone Sysadmin

No updates lately - super busy!

I've been writing less lately than normal, and given my habits of not posting, that's saying something!

Lately, I've been feeling less like a sysadmin and more like a community manager, honestly. On top of the normal LOPSA Board Member duties I've had, I'm serving as co-chair of the LISA'14 Tutorials committee AND the Invited Talks committee. PLUS I've been doing a lot of work with PICC, the company that manages the Cascadia IT Conference and LOPSA-East (which is next week, so if you haven't yet, register now. Prices are going up starting on Monday!).

All of this leaves not much time at all for doing actual sysadmin work, and even less for writing about it.

As an overview of the stuff I've been dealing with at work, let me just implore you, if you're using Cisco Nexus switches, Do Not Use Switch Profiles. I've written about them before, but it would be impossible for me to tell you not to use them emphatically enough. They're terrible. I'll talk about how terrible some time later, but trust me on this.

Also, I've been doing a whole lot of network migration that I'll also write about at some time in the future, but I'll just say that it's really demoralizing to perform the same migration three times, and I'm awfully glad that I had a plan to roll back. At the moment, I'm working on writing some python scripts to make per-port changes simpler so that I can offload it to students. I'm glad that Cisco has the native Nexus Python API, but their configuration support is severely lacking - basically equivalent to cli(). Also, students migrating hosts to the new core...what could possibly go wrong? ;-)
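
To give a rough (and purely illustrative) idea of the kind of per-port helper I mean — this assumes the on-box NX-OS Python environment's cli() call mentioned above, and the module import, command syntax, interface name and VLAN number are all assumptions that vary by platform and NX-OS release:

    # Rough sketch, not a tested tool: assumes the NX-OS on-switch Python API's
    # cli() helper mentioned above. Module name, command syntax and the
    # interface/VLAN values are assumptions and vary by platform and release.
    from cli import cli   # provided by the NX-OS on-box Python environment

    def set_access_vlan(interface, vlan):
        """Put a single switchport into the given access VLAN."""
        cli('configure terminal ; '
            'interface {0} ; '
            'switchport ; '
            'switchport mode access ; '
            'switchport access vlan {1}'.format(interface, vlan))

    # Hypothetical usage: move one host-facing port to a new-core VLAN.
    set_access_vlan('Ethernet1/10', 100)
    print(cli('show interface Ethernet1/10 switchport'))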

Alright, no time to write more. I will work on writing more frequently, anyway!

by Matt Simmons at April 23, 2014 11:18 AM

Google Blog

Go back in time with Street View

If you’ve ever dreamt of being a time traveler like Doc Brown, now’s your chance. Starting today, you can travel to the past to see how a place has changed over the years by exploring Street View imagery in Google Maps for desktop. We've gathered historical imagery from past Street View collections dating back to 2007 to create this digital time capsule of the world.

If you see a clock icon in the upper left-hand portion of a Street View image, click on it and move the slider through time and select a thumbnail to see that same place in previous years or seasons.

Now with Street View, you can see a landmark's growth from the ground up, like the Freedom Tower in New York City or the 2014 World Cup Stadium in Fortaleza, Brazil. This new feature can also serve as a digital timeline of recent history, like the reconstruction after the devastating 2011 earthquake and tsunami in Onagawa, Japan. You can even experience different seasons and see what it would be like to cruise Italian roadways in both summer and winter.
Construction of the Freedom Tower, New York City
Destruction in Onagawa, Japan after the 2011 earthquake

Forget going 88 mph in a DeLorean—you can stay where you are and use Google Maps to virtually explore the world as it is—and as it was. Happy (time) traveling!

by Emily Wood (noreply@blogger.com) at April 23, 2014 07:00 AM

Going solar with SunPower

Just because Earth Day is over doesn’t mean we’re done doing good things for the planet. Yesterday we announced our biggest renewable energy purchase yet: an agreement with our Iowa utility partners to supply our data center facilities there with up to 407 megawatts of wind energy.

Today, we’re taking another step towards a clean energy future with a major new investment. Together with SunPower Corporation we’re creating a new $250 million fund to help finance the purchase of residential rooftop solar systems—making it easier for thousands of households across the U.S. to go solar. Essentially, this is how it works: Using the fund ($100 million from Google and $150 million from SunPower), we buy the solar panel systems. Then we lease them to homeowners at a cost that’s typically lower than their normal electricity bill. So by participating in this program, you don’t just help the environment—you can also save money.
A home sporting SunPower solar panels

SunPower delivers solar to residential, utility and commercial customers and also manufactures its own solar cells and panels. They’re known for having high-quality, high-reliability panels which can generate up to 50 percent more power per unit area, with guaranteed performance and lower degradation over time. That means that you can install fewer solar panels to get the same amount of energy. And SunPower both makes the panels and manages the installation, so the process is seamless.

This is our 16th renewable energy investment and our third residential rooftop solar investment (the others being with Solar City and Clean Power Finance). Overall we’ve invested more than $1 billion in 16 renewable energy projects around the world, and we’re always on the hunt for new opportunities to make more renewable energy available to more people—Earth Day and every day.

by Emily Wood (noreply@blogger.com) at April 23, 2014 06:30 AM

Chris Siebenmann

At least partially understanding DMARC

DMARC is suddenly on my mind because of the news that AOL changed its DMARC policy to 'reject', following the lead of Yahoo which did this a couple of weeks ago. The short version is that a DMARC 'reject' policy is what I originally thought DKIM was doing: it locks down email with a From: header of your domain so that only you can send it. More specifically, all such email must not merely have a valid DKIM signature but a signature that is for the same domain as the From: domain; in DMARC terminology this is called being 'aligned'. Note that the domain used to determine the DMARC policy is the From: domain, not the DKIM signature domain.

(I think that DMARC can also be used to say 'yes, really, pay attention to my strict SPF settings' if you're sufficiently crazy to break all email forwarding.)
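
To make the 'aligned' idea a bit more concrete, here is a deliberately simplified sketch in Python — not a real implementation; genuine relaxed alignment compares organizational domains, which requires the Public Suffix List, while this just compares the last two labels:

    # Simplified illustration of DMARC identifier alignment, not a complete
    # implementation: real relaxed alignment compares *organizational* domains
    # (which needs the Public Suffix List); this compares the last two labels.
    def org_domain(domain):
        return '.'.join(domain.lower().rstrip('.').split('.')[-2:])

    def dkim_aligned(from_domain, dkim_d, strict=False):
        """Is the DKIM signing domain (d=) aligned with the From: domain?"""
        if strict:
            return from_domain.lower() == dkim_d.lower()
        return org_domain(from_domain) == org_domain(dkim_d)

    # Mail with a From: of user@yahoo.com signed with d=yahoo.com is aligned;
    # the same mail signed with d=lists.example.org is not.
    assert dkim_aligned('yahoo.com', 'yahoo.com')
    assert not dkim_aligned('yahoo.com', 'lists.example.org')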

This directly affects anyone who wants to send email with a From: of their Yahoo or AOL address but not do it through Yahoo/AOL's SMTP servers. Yahoo and AOL have now seized control of that and said 'no you can't, we forbid it by policy'. Any mail system that respects DMARC policies will automatically enforce this for AOL and Yahoo.

(Of course this power grab is not the primary goal of the exercise; the primary goal is to cut off all of the spammers and other bad actors that are attaching Yahoo and AOL From: addresses to their email.)

This indirectly affects anyone who has, for example, a mailing list (or a mail forwarding setup) that modifies the message Subject: or adds a footer to the message as it goes through the list. Such modifications will invalidate the original DKIM signature of legitimate email from a Yahoo or AOL user and then this bad DKIM signature will cause the message to be rejected by downstream mailers that respect DMARC. The only way to get such modified emails past DMARC is to change the From: header away from Yahoo or AOL, at which point their DMARC 'reject' policies don't apply.

DMARC by itself does not break simple mail relaying and forwarding (including for simple mailing lists), ie all things where the message and its headers are unmodified. An unmodified message's DKIM signature is still valid even if it doesn't come directly from Yahoo or AOL (or whoever) so everything is good as far as DMARC is concerned (assuming SPF sanity).

Note that Yahoo and AOL are not the only people with a DMARC 'reject' policy. Twitter has one, for example. You can check a domain's DMARC policy (if any) by looking at the TXT record on _dmarc.<domain>, eg _dmarc.twitter.com. I believe the 'p=' bit is the important part.
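
If you want to do that lookup programmatically, here's a minimal sketch assuming the third-party dnspython package (older dnspython versions use dns.resolver.query() instead of resolve()):

    # Minimal DMARC policy lookup, assuming the third-party dnspython package.
    # The 'p=' tag in the result is the policy: none, quarantine or reject.
    import dns.resolver

    def dmarc_policy(domain):
        try:
            answers = dns.resolver.resolve('_dmarc.' + domain, 'TXT')
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None
        for rdata in answers:
            txt = b''.join(rdata.strings).decode('ascii', 'replace')
            if txt.lower().startswith('v=dmarc1'):
                return txt
        return None

    print(dmarc_policy('twitter.com'))   # e.g. 'v=DMARC1; p=reject; ...'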

PS: I suspect that more big free email providers are going to move to publishing DMARC 'reject' policies, assuming that things don't blow up spectacularly for Yahoo and AOL. Which I doubt they will.

by cks at April 23, 2014 05:13 AM

April 22, 2014

Ubuntu Geek

What is new in ubuntu 14.04 (Trusty Tahr) desktop

Ubuntu 14.04 (Trusty Tahr) was released on 17th April 2014. Ubuntu 14.04 LTS will be supported for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Core, Kubuntu, Edubuntu, and Ubuntu Kylin. All other flavours will be supported for 3 years.
(...)
Read the rest of What is new in ubuntu 14.04 (Trusty Tahr) desktop (361 words)


© ruchi for Ubuntu Geek, 2014.

by ruchi at April 22, 2014 11:38 PM

Server Density

Android Server Monitoring App released

After the release of our monitoring app for iPhone 2 months ago, the same app is now available on Android!

It allows you to see your notification center on the move, so you can quickly check open and closed alerts assigned to you and across your whole account. It includes push notification alerts and custom sounds so you can always be woken up!

Android Server Monitoring App Use Case

The app is free for all v2 customers (and users on the trial). Find out more on the product page of our website.

The post Android Server Monitoring App released appeared first on Server Density Blog.

by David Mytton at April 22, 2014 02:56 PM

Google Webmasters

Introducing our global Google+ page for webmasters

Webmaster Level: All

We’ve recently launched our global Google Webmasters Google+ page. Have you checked it out yet? Our page covers a plethora of topics:
Follow us at google.com/+GoogleWebmasters and let us know in the comments what else you’d like to see on our page! If you speak Italian, Japanese, Russian or Spanish, be sure to also join one of our webmaster communities to stay up-to-date on language and region-specific news.

Google Webmasters from around the world
Hello from around the world!

by Mary (noreply@blogger.com) at April 22, 2014 01:55 PM

The Nubby Admin

Definitive List of Web-Based Server Control Panels

(Updated April 21, 2014)

As someone who has worked in web hosting, I’ve had my eye on just about every web-based control panel ever created. Most people will likely think of cPanel when they hear the phrase “server control panel” and have visions of web hosts dancing in their heads. Server control panels can be used for much more than web hosting, however. Control panels can allow people to administer systems with the click of a button, having little interaction with the gorier details. Some might think that kind of scenario is categorically wrong, but I disagree in some circumstances. There are some *NIX-oriented colleagues that I’d tackle before they got too close to a Windows server. For them, WebsitePanel might be a better option. There are also some folks that have need of their own server(s) and are happy to perform their own button mashing to restart services and such. I’m reminded of Jordan Sissel’s SysAdvent post “Share Skills and Permissions with Code.” In those scenarios, server control panels are excellent.

The nature of server control panels makes them most desirable by web hosting companies. As such, most of the web-based server control panels that I have found are slanted in that direction and might take some creativity to warp to your needs. Others appear to be more easily used as a general “E-Z Mode” SysAdmin front-end (Open Panel comes to mind). Don’t discard a control panel simply because it is slanted to web hosting. Some of them are much fuller than that.

Nevertheless, here is the latest version of my ever-growing list of web-based server control panels:

FOSS Control Panels

  • CentOS Web Panel (AKA CWP. CentOS Linux only [duh]. Unknown license, but it’s “Free”)
  • DTC (Linux, FreeBSD and Mac OS X Server. GPL license. Stands for “Domain Technologie Control.” Looks like a great feature set. I don’t know why it’s not more popular.)
  • EHCP (Linux only. GPL license. Stands for “Easy Hosting Control Panel”)
  • Froxlor (Linux and BSD. GPL License. A fork of SysCP. )
  • GNU Panel (Linux only. BSD license. Just kidding! It’s GPL.)
  • ISPConfig (Linux only. BSD license. Made by the HowToForge folks. HTTP, SMTP, FTP, DNS and OpenVZ virtualization are supported among many other features)
  • IspCP Omega (Linux only. Fork of VHCS. Old VHCS code is MPL, new code is GPL2. The goal is to port everything and make it GPL2.)
  • Open Panel (Linux only. GPL license. Their pre-made OpenApps looks cool. I don’t know why this hasn’t made more waves than it has!)
  • RavenCore (Linux and BSD. GPLv2.) RavenCore’s only home on the internet is apparently SourceForge. The domain listed for the project, www.ravencore.com, doesn’t respond. Take that for what it’s worth.
  • SysCP (Linux only. GPL license.)
  • VestaCP (Linux only, GPLv3 License) Has paid support options, but the control panel itself is free.
  • VHCS (URL Removed! Google says that the domain has been harboring viruses and other evil things) (Linux only. MPL license. Stands for “Virtual Hosting Control System”)
  • WebController (Windows only. GPL. SourceForge project with an appalling website. Looks like it’s abandoned but I’m not sure.)
  • Web-CP (Linux only. Not sure what license, but I assume GPL since it was a fork of the older web://cp product that itself was GPL. Web-CP looks abandoned. The last update on the site was 2005 and the latest bug closed in Mantis is 2006. The wiki is full of spam [I've never seen spam for breast enlargement and pistachios on the same page before - Thanks Web-CP!])
  • zpanel (Windows and POSIX-based OSs – that supposedly includes Mac OS X, but a commenter below disputes that.)

Control Panels with a Free and Paid Edition

  • Ajenti (LGPLv3 with special clauses. Linux and BSD.) Annoying licensing model that’s free for your own servers at home or internal work servers; however, as soon as you attempt to do any kind of hosting on it you have to cough up money. Seems like a decent product though.
  • ApPHP Admin Panel (Free, Advanced, and Pro version. Linux. )
  • ServerPilot (Ubuntu Linux only) This isn’t so much a server control panel, as it is a management pane for developers who deploy PHP applications on Ubuntu. It is not self hosted. There’s a free edition that has basic management features for your server, and paid editions with more features.
  • Webmin (Primarily POSIX-based OSs, however a limited Windows version exists)
    • Usermin Module (POSIX only. Simple webmail interface and user account modification for non-root users)
    • Virtualmin Module (POSIX only. Allows for multi-tenant use of a server much like a shared web host)
    • Cloudmin Module (POSIX only. Creates VPSs using Xen, KVM and OpenVZ among others)

Commercial Control Panels

  • Core Admin: Commercial control panel, but has a free web edition. Manage many servers from one portal and delegate permissions to different users.
  • cPanel / WHM (Linux and FreeBSD. The granddaddy of control panels started back in 1996 as an in-house app that eventually got licensed. WHM controls the entire server. cPanel is user-oriented.)
    • WHMXtra (Not a control panel on its own, but it’s a significant third-party add-on to WHM)
  • DirectAdmin (Linux and BSD.)
  • Ensim (Control panel that handles the management of cloud services Microsoft Hyper-V, Active Directory, Lync, Mozy, Anti Virus / Anti Spam Solutions like F-Secure, MessageLabs, Barracuda and a ton of other things. It’s really for $n aaS providers to build a business around.)
  • Enkompass (Windows only. cPanel’s Windows product.)
  • H-Sphere (Windows, Linux and BSD. Originally made by Positive Software before being bought by Parallels. I’m not sure how this software compares / competes with Parallels’s Plesk. This is an all-in-one provisioning, billing and control panel tool. Obviously focused solely on web hosts.)
  • HMS Panel (Linux only.)
  • Hosting Controller (Windows and Linux. Also supports managing Microsoft Exchange, BlackBerry Enterprise Server, SharePoint, Office Communication Server, Microsoft Dynamics and more.)
  • HyperVM (Linux only. Virtualization management platform. Uses Xen and OpenVZ. Sister product to Kloxo.)
  • InterWorx (Linux only. Can manage Ruby on Rails.)
  • Kloxo (Linux only. More than just a server management platform, this is a large web hosting platform that is geared very much for a client / provider relationship.)
  • Layered Panel (Control panel geared towards free hosts that inject ads into their customers’ sites. Linux.)
  • Live Config (Linux)
  • Machsol (Unusual in this list because it’s a control panel to manage the hosting of major enterprise server applications like Exchange, Sharepoint and BES.)
  • Parallels Helm (Windows. One of the many acquisitions that Parallels has made.)
  • Parallels Plesk (Linux and Windows. Probably the biggest competitor to cPanel.)
  • SolusVM (Linux only. Manages VPSs and VPS clusters using OpenVZ, Xen and KVM.)
  • vePortal VPS Control Panel
  • vePortal veCommander
  • WebsitePanel (Windows only. The former dotnetpanel after it was revised by SMB SAAS Systems Inc. and released as a SourceForge project.)
  • xopanel (Windows, Linux, BSD. Unsure about license.) Actually, I’m unsure about a lot concerning this product. The product and website are all in Turkish and don’t seem to have an English counterpart. That’s a shame because the product looks good.
  • xpanel (Rather emaciated looking control panel with very low price. Only advertised to run on Fedora.)

Billing / Automation Tools for Control Panels

These are billing and automation tools that tightly integrate with control panels.

Misc. Inclusions

  • Aventurin{e} (Linux only. This is actually a pre-made image that you drop onto a server. It allows you to provision VPSs.)
  • BlueOnyx (Linux only. The successor to BlueQuartz below. This isn’t a control panel itself, but a full-fledged Linux distribution. However, since it’s geared to web hosting companies, it has a web interface for you to manage most of the server’s functions. I debated if I should include it, but decided in the affirmative for the sake of being thorough.)
  • BlueQuartz (Linux appliance. Based on the EOL CentOS 4.)
  • Cast-Control (Streaming media control panel. Does ShoutCast, Icecast and more.)
  • CentovaCast (Internet Radio streaming control panel. Based on ShoutCast.)
  • Fantastico (Automated application installation tool)
  • Installatron (Automated application installation tool)
  • SCPanel (ShoutCast internet radio hosting panel)
  • Softaculous (Automated application installation tool)
  • WHMXtra (Additional features for WHM)

Gaming Control Panels

Included because, hey, they’re control panels too!

Defunct Control Panels

  • CP+ (Linux only. Ancient control panel that has since been abandoned. The developer, psoft, is yet another Parallels acquisition. Only included for thoroughness.)

I’d like for this to become a definitive list of web-based control panels (regardless of their focus: server management or web hosting). Basically, if it can manage a server or services and has a web front-end, I’d like to know about it. I’d appreciate any social shares. Likes, +1s, Tweets, Stumbles, Diggs, etc. are awesome. If you know of any control panels that I’ve missed (active or defunct, since I love history), or if you spot a control panel that I mis-categorized, please let me know in the comments below.

by WesleyDavid at April 22, 2014 11:35 AM

Google Blog

Ok Glass… Let’s celebrate Earth Day

Part of honoring Earth Day is celebrating the people who dedicate their lives to protecting our planet’s most vulnerable species. You’ll find one of those people in the tall grasslands of Nepal’s Chitwan National Park, where Sabita Malla, a senior research officer at World Wildlife Fund (WWF), is hard at work protecting rhinos and Bengal tigers from poaching. She spends her days collecting data about wildlife in order to track the animals, assess threats, and provide support where needed. Now, she’s getting help from something a bit unexpected: Google Glass.

Last year, WWF started exploring how smart eyewear could help further its conservation mission in the Arctic and the Amazon as part of the Giving through Glass Explorer program. Now they’ve brought it to Nepal to see how it could help monitor wild rhinos. Take a peek:

Rhino monitoring can be a slow process, especially in habitats with tricky terrain, but data collection is crucial for making the right conservation decisions. Most parts of Chitwan National Park are inaccessible to vehicles, so Sabita and her team ride in on elephants, and have been collecting health and habitat data using pencil and paper.

Now custom-built Glassware (the Glass version of apps) called Field Notes can help Sabita do her work hands-free instead of gathering data in a notebook. That’s helpful for both accuracy and safety when you’re on an elephant. Using voice commands, Sabita and other researchers can take photos and videos, and map a rhino’s location, size, weight, and other notable characteristics. The notes collected can also be automatically uploaded to a shared doc back at the office, making it easier to collaborate with other researchers, and potentially a lot faster than typing up handwritten notes.

This is just one example of a nonprofit exploring how Glass can make their critical work easier. Today, we’re looking for more ideas from you.

If you work at a nonprofit and have an idea for how to make more of a difference with Glass, share your ideas at g.co/givingthroughglass by 11:59 PDT on May 20, 2014. Five U.S.-based nonprofits will get a Glass device, a trip to a Google office for training, a $25,000 grant, and help from Google developers to make your Glass project a reality.

To learn more about Google.org's ongoing collaboration with World Wildlife Fund, visit this site.

by Emily Wood (noreply@blogger.com) at April 22, 2014 06:00 AM

Chris Siebenmann

The question of language longevity for new languages

Every so often I feel a temptation to rewrite DWiki (the engine behind this blog) in Go. While there are all sorts of reasons not to (so many that it's at best a passing whimsy), one concern that immediately surfaces is the question of Go's likely longevity. I'd like the blog to still be here in, say, ten years, and if the engine is written in Go that needs Go to be a viable language in ten years (and on whatever platform I want to host the blog on).

Of course this isn't just a concern for Go; it's a concern for any new language and there's a number of aspects to it. To start with there's the issue of the base language. There are lots of languages that have come and gone, or come and not really caught on very much so that they're still around but not really more than a relatively niche language (even though people often love them very much and are very passionate about them). Even when a language is still reasonably popular there's the question of whether it's popular enough to be well supported on anything besides the leading few OS platforms.

(Of course the leading few OS platforms are exactly the ones that I'm most likely to be using. But that's not always the case; this blog is currently hosted on FreeBSD, for example, not Linux, and until recently it was on a relatively old FreeBSD.)

But you'd really like more than just the base language to still be around, because these days the base language is an increasingly small part of the big picture of packages and libraries and modules that you can use. We also want a healthy ecology of addons for the language, so that if you need support for, say, a new protocol or a new database binding or whatever you probably don't have to write it yourself. The less you have to do to evolve your program the more likely it is to evolve.

Finally there's a personal question: will the language catch on with you so that you'll still be working with it in ten years? Speaking from my own experience I can say that it's no fun to be stuck with a program in a language that you've basically walked away from, even if the language and its ecology is perfectly healthy.

Of course, all of this is much easier if you're writing things that you know will be superseded and replaced before they get anywhere near ten years old. Alternately you could be writing an implementation of a standard so that you could easily swap it out for something written in another language. In this sense a dynamically rendered blog with a custom wikitext dialect is kind of a worst case.

(For Go specifically I think it's pretty likely to be around and fully viable in ten years, although I have less of a sense of my own interest in programming in it. Of course ten years can be a long time in computing and some other language could take over from it. I suspect that Rust would like to, for example.)

by cks at April 22, 2014 03:46 AM

April 21, 2014

Chris Siebenmann

Thinking about how to split logging up in multiple categories et al

I've used programs that do logging (both well and badly) and I've also written programs that did logging (also both reasonably well and badly) and the whole experience has given me some views on how I like logging split up to make it more controllable.

It's tempting to say that controlling logging is only for exceptional cases, like debugging programs. This is not quite true. Certainly this is the dominant case, but there are times when people have different interests about what to log even in routine circumstances. For example, on this blog I log detailed information about conditional GETs for syndication feeds because I like tracking down why (or why not) feed fetchers succeed at this. However this information isn't necessarily of interest to someone else running a DWiki instance so it shouldn't be part of the always-on mandatory logging; you should be able to control it.

The basic breakdown of full featured logging in a large system is to give all messages both a category and a level. The category is generally going to be the subsystem that they involve, while the level is the general sort of information that they have (informational, warnings, progress information, debugging details, whatever). You should be able to control the two together and separately, to say that you want only progress reports from all systems or almost everything from only one system and all the way through.

My personal view is that this breakdown is not quite sufficient by itself and there are a bunch of cases where you'll also want a verbosity level. Even if verbosity could in theory be represented by adding more categories and levels, in practice it's much easier for people to crank up the verbosity (or crank it down) rather than try to do more complex manipulations of categories and levels. As part of making life easier on people, I'd also have a single option that means 'turn on all logging options and log all debugging information (and possibly everything)'; this gives people a simple big stick to hit a problem with when they're desperate.

If your language and programming environment doesn't already have a log system that makes at least the category plus level breakdown easy to do, I wouldn't worry about this for relatively small programs. It's only fairly large and complex programs with a lot of subsystems where you start to really want this sort of control.
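
As a minimal sketch of the category-plus-level idea using Python's standard logging module — the logger name stands in for the category (subsystem), the subsystem names here are made up, and the verbosity-to-level mapping is just one possible choice:

    # Minimal sketch: category = logger (subsystem) name, level = the usual
    # DEBUG/INFO/WARNING, plus a simple verbosity knob and an 'everything'
    # big-stick option. The subsystem names below are hypothetical.
    import logging

    def configure_logging(verbosity=0, per_category=None, everything=False):
        """verbosity: 0=warnings, 1=info, 2=debug; per_category overrides
        individual subsystems; everything turns absolutely all of it on."""
        base = logging.DEBUG if everything else \
            {0: logging.WARNING, 1: logging.INFO}.get(verbosity, logging.DEBUG)
        logging.basicConfig(level=base,
                            format='%(name)s %(levelname)s: %(message)s')
        for category, level in (per_category or {}).items():
            logging.getLogger(category).setLevel(level)

    # Routine logging plus full detail from just one subsystem:
    configure_logging(verbosity=0, per_category={'feeds.condget': logging.DEBUG})
    logging.getLogger('feeds.condget').debug('conditional GET matched etag')
    logging.getLogger('requests').info('suppressed at verbosity 0')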

Sidebar: the two purposes of specific control

There are two reasons to offer people specific control over logging. The first is what I mentioned: sometimes not all information is interesting to a particular setup. I may want information on syndication feed conditional GETs while you may want 'time taken' information for all requests. Control over logging allows the program to support both of us (and the person who doesn't care about either) without cluttering up logs with stuff that we don't want. This is log control for routine logs, stuff that you're going to use during normal program operation.

The second reason is that a big system can produce too much information at full logging flow when you're trying to troubleshoot it, so much that useful problem indicators are lost in the overall noise. Here categorization and levels are a way of cutting down on the log volume so that people can see the important things. This is log control for debugging messages.

(There is an overlap between these two categories. You might log all SQL queries that a system does and the time they take for routine metrics, even though this was originally designed for debugging purposes.)

by cks at April 21, 2014 06:40 AM

April 20, 2014

Rands in Repose

The Diving Save

Angela quit. She walked in on a Monday morning, went straight into Alex’s office, resignation letter in hand, and said, “I have a great offer from another company that I’ve accepted. My last day is a week from Friday.”

Alex reacted. After listening to Angela’s resignation, he told her simply and clearly, “I know you don’t want to hear this. I know you’re not asking for a counter offer, but let me see what I can do.” Alex got on the phone with his boss. He called HR, and he called Legal. During those calls, he got approval to give Angela a substantive raise, a stock grant, a bonus, and a promotion.

He did all of this before lunch.

After lunch, he took this counter offer to Angela and he did the most important thing. He showed her all the components of the counter offer and he told her a story. It was a story of why she should stay at the company as an engineer, her role in the company’s success, and all of the opportunity that was ahead of her. He told this story amazingly well.

A week later, Angela walked into Alex’s office and told him that she was staying. She stayed for years.

This is a well-performed Diving Save.

Diving Save Disclaimers

Diving Saves are usually a sign of poor leadership. People rarely just up and leave. There are a slew of obvious warning signs I’ve documented elsewhere, but the real first question you have to ask yourself once you get over the shock of an unexpected resignation is: “Did you really not see it coming? Really?”

There are unavoidable Diving Saves. There are lots of companies that are foaming at the mouth crazy about the idea of recruiting your bright people. Sometimes these companies successfully sneak in and recruit a perfectly happy person who really wants to stay, but the offer is just so… bright and shiny. They have to accept.

Whether you screwed up or they were enticed by something bright and shiny, you start the Diving Save answering a lot of questions in a very short amount of time. The big question you have to answer is:

Do You Really Want to Do This?

A Diving Save requires a lot of urgent work that needs to be done immediately, and there’s a good chance this person still might leave. It’s going to be a shock when they open their mouth and the unexpected words “I resign” come out. You’re likely going to have a strong emotional reaction, but before you act you need to quickly answer a lot of different questions, like…

  • What value is this person creating? What is the unique work this person is doing that would be hard to replace? List three things this person has built or accomplished in the last year. Do you need more of these things?

  • What would the obvious and non-obvious impact to the team be if this person left? Move away from the work – how does this person fit into the team? What role do they fill that doesn’t have a title? Are they serving as essential connective tissue? Who do they balance out? Who can they translate for? Who would storm into your office absolutely furious if this person left?

  • What would the impact to the company be if this person left? Now that you’ve considered team impact, what about company impact? What is the story that is going to be told by people outside of the team regarding this person’s departure? Do you care about this story? How is attrition in your company? How many people have left in the last six months? Could this person’s departure trigger an exodus?

  • What are the crazy, unpredictable side effects of this person leaving? Get paranoid now. What are the least likely things that might occur when this person leaves? These theories can be goofball, but now is a good time to be paranoid – someone just walked in and quit and you weren’t expecting it. What else might be up? How might this person’s departure accelerate these hopefully unlikely scenarios?

  • What is the impact of performing a Diving Save? Ideal professional protocol involves a departing person who legitimately cares about the health of the team and does as much as possible to prevent disrupting it with their potential departure. However, if this was the way that people worked, the fact that a successful Diving Save had been performed would never be known to anyone except a small handful of people who needed to know.

People don’t work this way. You have to assume that much of the narrative and compensation that you’re about to offer will become known by the team. I understand why people share this confidential information, but I wish they wouldn’t. You need to first understand that some set of people are going to know you performed gymnastics to give this person a reason to stay (whether they stay or not), and also consider the impact of the team knowing this. Think about it like this: if you were to explain to a random person – in great detail – the potential intricacies of your Diving Save, would they agree that you are doing the right thing?

A Compelling, Unwavering Story and A Show of Force

Once you’ve chosen to perform a Diving Save, you need to adopt a specific mindset: This person is not leaving. It’s a bit of a delusional perspective, since there is a very good chance they might leave, but for now your opinion is a resolute no way.

It took quite a bit of courage for this person to resign. They had to completely decide to leave, and, by doing so, they have mentally prepared themselves. Your job is to stand in clear mental opposition to that decision. Your job is to elegantly and professionally disassemble their plan, and you’ll do so with three interrelated tasks: Story, Compensation, and Curveballs.

Story

Your first and most important task is to thoroughly answer the question: “Why should they stay?” You can start thinking about that answer by asking the question: “How did that other company sneak their way into this person’s head?” What was attractive to them about leaving? Was it the team? The opportunity? The compensation?

You had a unique opportunity to gather a lot of this data when they were resigning, but you were likely slightly in shock. Now is a good time to reflect on their resignation. What reasons, however small, did they give for leaving? You may or may not integrate this data into your Story, but as a starting point it is essential that you understand the mental conditions that led to their attempted resignation.

I start by framing the Story around opportunity. What is the narrative that I can tell about how this person can contribute to the company and to their team? What are the obvious opportunities ahead of them and what do they have to look forward to? Why have they stopped being able to see these opportunities themselves? What does winning look like? What are they going to build, why does it matter, and how is it going to help them grow?

The opportunity narrative is the cornerstone of your Diving Save. It frames everything that follows, and if you can’t define a compelling story regarding both the short-term and long-term prospects for this person, I’m not sure why you’re frantically working on a Diving Save.

Try a draft of your Story on a trusted, well-informed someone. They’ve got to have enough context about the person to know whether your story is credible and compelling. If, after a few practice rounds, your trusted someone isn’t sold, you don’t have a Story, yet.

Compensation

Salary, stock, bonus, and promotions. These are easy things to understand. They are a simple way to measure relative value, but completing this task starts with a warning: Is this situation completely about Compensation?

Here’s the situation I’d like you to avoid. If the person’s current compensation is fair, and simply raising it convinces them to stay, is this someone you want to stay? While Compensation is easy to compare, the reason I wanted you to start with developing the Story is because you want them to stay because of the compelling narrative, not simply because of compelling Compensation.

If it’s only about money, it will always be about money. You might be able to build a compelling Compensation package, and they might accept it, but if the only reason they are staying is money, not opportunity and growth, you’re delaying the inevitable. There is nothing preventing another enterprising company from building an even brighter and shinier offer and – guess what – they’ll be leaving… again.

If it’s not entirely about Compensation, my advice is, oddly: go big on Compensation. Let’s remember all the components:

  • Base salary
  • Bonuses
  • Title
  • Role
  • Stock

For each of these aspects, you need to first ask yourself: “Is this fair? Is what I’ve compensated this person fair relative to their work?” If it’s not, what do you need to do to get it there? It’d also be good to understand why it has not been fair all this time. This is your baseline and it might clear up some friction between you and the employee, but you’re not done yet.

The next question is: “If this person kicks ass for the next two years, what would their compensation look like?” That’s probably a lot of money in salary, bonuses, and perhaps stock. That’s likely a promotion, too. I’m not saying this is your Diving Save package, but I want you to think about each aspect of compensation after two amazing years and then pick and choose the aspects that you believe will resonate.

As you’re planning the aspects of Compensation, you also want to understand your wiggle room relative to all the components. As we’ll see during the Pitch, you may need to adapt on the fly.

Two non-monetary aspects to consider: title and role. Title is the name that they can put on their business card. I believe in high tech this is becoming an increasingly irrelevant perk, but some folks grin when it says “Senior” in their title. So, go for it. The more important part of the promotion is the role: what is the work they are going to be doing? This is better defined and explained as part of your Story, but since title/role and compensation are often intertwined, you might be explaining it again here in Compensation.

The intent of the compensation package is a show of force. When you present the details of this offer, they should feel two things: you’ve considered everything, and holy shit.

Curveballs

With Story and Compensation built, your last task is Curveballs. This is a grab bag of hard-to-predict situations and questions that arise during a Diving Save, which might sound like, “Hey, I already gave my new employer a start date,” or “I already told a lot of people I was leaving. We had a party. What are they going to think if I stay?”

The core issue that you need to address is that this person likely fully decided to leave. This resignation (hopefully) wasn’t a ploy to get a raise; they were legit leaving and had altered their mindset appropriately. They were gone. You can start by working out in your head what you can do to make it socially ok and comfortable for this person to stay. Build a compelling reason and a one-liner answer for when the person inevitably gets asked why they chose not to leave.

If you’ve built a solid Story and have compelling Compensation, a lot of other Curveballs conveniently go away. If you understand your narrative about their career and reasoning behind how to compensate them, a lot of questions regarding “What are people going to think?” will be answered automatically. They are going to think you went with the best job available.

Before you pitch them on why they should stay, it’s worth replaying in your head every single word spoken when they resigned. You can’t predict all the Curveballs until you get to the Pitch, but from the time they resigned to the time it took you to build and present a response, small thoughts may have turned into scary Curveballs. The more you can predict and address before you pitch, the better your Story.

The Pitch

In the Angela Diving Save, one of the more impressive aspects was the sense of urgency. Alex moved on the Diving Save swiftly and Angela noticed. He didn’t have to say a thing for her to intuit that he believed keeping her was his #1 priority. This sense of urgency is a great way to start a pitch – an unspoken but clear understanding that you, the hiring manager, are taking this situation seriously.

How you need to pitch the details of the Story and the Compensation, how you need to handle Curveballs, is entirely dependent on what this unique person – who is not resigning – needs to hear. Your Pitch needs to:

Be tailored to the person. What does this particular beautiful and unique snowflake need to hear? Have you addressed every single one of their concerns? Are they actually mad about something completely unrelated to their role and need to vent? I don’t know what specific advice to give you because I don’t know the specific person, but if you fail to hear them, you’re not going to save them.

Be delivered and then be adapted. When you get to the particulars of the Story and the Compensation, I like to tell the whole story. This is what I’m thinking… I deliver everything as one complete narrative, and then I shut the fuck up. The first few words out of their mouth, their immediate reaction, is going to tell you a lot. If they say the following, this is my initial reaction:

  • Wow. We’re in good shape.
  • Well. We’re in bad shape.
  • Hmmmm. We’re in particularly bad shape.

Of course, you can’t base your reaction on a single word, but every word matters. You need to listen hard to what they’re saying, and if you’ve done your homework, you can adapt your Save on the fly. They’re interested in more stock and less base? Great, what about X?

You’re going to want an answer at the end of the Pitch, but in my experience, this rarely occurs. They’ve had weeks to think about and deliver a resignation and you’ve had 36 hours filled with email, phone calls, and meetings to work out a response. Once you’ve Pitched, give them time to consider. Make sure you’ve answered every single one of their questions and then pick a time to meet again.

Saved

They stayed. All that work paid off. Congratulations. Don’t pat yourself on the back too hard. A Diving Save is not a professional move at which you want to learn to excel. If you’re the guy at the company who is great at Diving Saves, you’re also the guy who is working at a company where folks apparently need to resign in order to have a real conversation about their roles.

Last worry: you need to remember that this person who recently decided to stay also recently decided to leave. They completely imagined and started acting in a world where they were leaving the company. We humans are fond of structure, habit, and familiarity, and this recently-saved person picked a new company full of strangers and opaque opportunity. This human had to make a brave leap to make this choice, and just because they chose to stay for now doesn’t mean they’ll remain.

by rands at April 20, 2014 03:59 PM

Chris Siebenmann

A heresy about memorable passwords

In the wake of Heartbleed, we've been writing some password guidelines at work. A large part of the discussion in them is about how to create memorable passwords. In the process of all of this, I realized that I have a heresy about memorable passwords. I'll put it this way:

Memorability is unimportant for any password you use all the time, because you're going to memorize it no matter what it is.

I will tell you a secret: I don't know what my Unix passwords are. Oh, I can type them and I do so often, but I don't know exactly what they are any more. If for some reason I had to recover what one of them was in order to write it out, the fastest way to do so would be to sit down in front of a computer and type it in. Give me just a pen and paper and I'm not sure I could actually do it. My fingers and reflexes know them far better than my conscious mind.

If you pick a new password purely at random with absolutely no scheme involved, you'll probably have to write it down on a piece of paper and keep referring to that piece of paper for a while, perhaps a week or so. After that week I'm pretty confident that you'll be able to shred the piece of paper without any risk at all, except perhaps if you go on vacation for a month and have it fall out of your mind. Even then I wouldn't be surprised if you could type it by reflex when you come back. The truth is that people are very good at pushing repetitive things down into reflex actions, things that we do automatically without much conscious thought. My guess is that short, simple things can remain in conscious memory (this is at least my experience with some things I deal with); longer and more complex things, like a ten-character password that involves your hands flying all over the keyboard, those go down into reflexes.
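
For what it's worth, generating such a no-scheme password programmatically is easy; here is a minimal Python sketch (the length and character set are arbitrary choices for illustration, not a recommendation from any particular guidelines):

    import secrets
    import string

    def random_password(length=10):
        """A password with no scheme at all: characters drawn uniformly at
        random from letters, digits and punctuation."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # Write the result down for the first week or so, then shred the note.
    print(random_password())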

Thus, where memorable passwords really matter is not passwords you use frequently but passwords you use infrequently (and which you're not so worried about that you've seared them into your mind anyways).

(Of course, in the real world people may not type their important passwords very often. I try not to think about that very often.)

PS: This neglects threat models entirely, which is a giant morass. But for what it's worth I think we still need to worry about password guessing attacks and so reasonably complex passwords are worth it.

by cks at April 20, 2014 06:12 AM

April 19, 2014

Yellow Bricks

Heartbleed Security Bug fixes for VMware


It seems to be patch Saturday, as a whole bunch of product updates were released today. All of these updates relate to the Heartbleed security bug fix. There is no point in listing every single product, as I assume you all know the VMware download page by now, but I do want to link the most commonly used ones for your convenience:

Time to update, but before you do… if you are using NFS based storage make sure to read this first before jumping straight to vSphere 5.5 U1a!

"Heartbleed Security Bug fixes for VMware" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.

by Duncan Epping at April 19, 2014 06:45 PM

Yellow Bricks

Alert: vSphere 5.5 U1 and NFS issue!


Some had already reported on this on Twitter and in various blog posts, but I had to wait until I received the green light from our KB/GSS team. An issue has been discovered with vSphere 5.5 Update 1 that is related to loss of connection to NFS-based datastores. (NFS volumes include VSA datastores.)

This is a serious issue, as it results in an APD (All Paths Down) condition on the datastore, meaning that the virtual machines will not be able to do any I/O to the datastore for the duration of the APD. This by itself can result in BSODs for Windows guests and filesystems becoming read-only for Linux guests.

Witnessed log entries can include:

2014-04-01T14:35:08.074Z: [APDCorrelator] 9413898746us: [vob.storage.apd.start] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down state.
2014-04-01T14:35:08.075Z: [APDCorrelator] 9414268686us: [esx.problem.storage.apd.start] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down state.
2014-04-01T14:36:55.274Z: No correlator for vob.vmfs.nfs.server.disconnect
2014-04-01T14:36:55.274Z: [vmfsCorrelator] 9521467867us: [esx.problem.vmfs.nfs.server.disconnect] 192.168.1.1/NFS-DS1 12345678-abcdefg0-0000-000000000000 NFS-DS1
2014-04-01T14:37:28.081Z: [APDCorrelator] 9553899639us: [vob.storage.apd.timeout] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.
2014-04-01T14:37:28.081Z: [APDCorrelator] 9554275221us: [esx.problem.storage.apd.timeout] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.

If you are hitting these issues, VMware recommends reverting to vSphere 5.5. Please monitor the following KB closely for more details and, hopefully, a fix in the near future: http://kb.vmware.com/kb/2076392

 

"Alert: vSphere 5.5 U1 and NFS issue!" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.

by Duncan Epping at April 19, 2014 08:29 AM

Chris Siebenmann

Cross-system NFS locking and unlocking is not necessarily fast

If you're faced with a problem of coordinating reads and writes on an NFS filesystem between several machines, you may be tempted to use NFS locking to communicate between process A (on machine 1) and process B (on machine 2). The attraction of this is that all they have to do is contend for a write lock on a particular file; you don't have to write network communication code and then configure A and B to find each other.

The good news is that this works, in that cross-system NFS locking and unlocking actually works right (at least most of the time). The bad news is that it doesn't necessarily work fast. In practice, it can take a fairly significant amount of time for process B on machine 2 to find out that process A on machine 1 has unlocked the coordination file, time that can be measured in tens of seconds. In short, NFS locking works, but it can require patience, and this makes it not necessarily the best option in cases like this.

(The corollary of this is that when you're testing this part of NFS locking to see if it actually works you need to wait for quite a while before declaring things a failure. Based on my experiences I'd wait at least a minute before declaring an NFS lock to be 'stuck'. Implications for impatient programs with lock timeouts are left as an exercise for the reader.)
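
To make the pattern concrete, here is a minimal Python sketch of the 'contend for a write lock on a particular file' approach; the lock file path, the poll interval and the generous timeout are assumptions chosen to reflect the tens-of-seconds delays described above, not details of any real setup:

    import fcntl
    import time

    LOCK_PATH = "/nfs/shared/coordination.lock"   # hypothetical file on the shared NFS filesystem

    def acquire_write_lock(path=LOCK_PATH, patience=120, poll=5):
        """Take an exclusive POSIX (fcntl) lock on 'path', retrying for up to
        'patience' seconds. Cross-client NFS lock release can take tens of
        seconds to become visible, so the timeout is deliberately generous."""
        fh = open(path, "a+")
        deadline = time.time() + patience
        while True:
            try:
                fcntl.lockf(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
                return fh   # we now hold the lock; closing fh releases it
            except (IOError, OSError):
                if time.time() >= deadline:
                    fh.close()
                    raise RuntimeError("could not lock %s within %d seconds" % (path, patience))
                time.sleep(poll)

    # Process A on machine 1 and process B on machine 2 both call
    # acquire_write_lock(); whoever gets the lock does its work and then
    # closes the returned file to release it.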

I don't know if acquiring an NFS lock on a file after a delay normally causes your machine's kernel to flush cached information about the file. In an ideal world it would, but NFS implementations are often not ideal worlds and the NFS locking protocol is a sidecar thing that's not necessarily closely integrated with the NFS client. Certainly I wouldn't count on NFS locking to flush cached information on, say, the directory that the locked file is in.

In short: you want to test this stuff if you need it.

PS: Possibly this is obvious but when I started testing NFS locking to make sure it worked in our environment I was a little bit surprised by how slow it could be in cross-client cases.

by cks at April 19, 2014 04:38 AM

April 18, 2014

Google Blog

Through the Google lens: this week’s search trends

What did you search for this week? What about everyone else? Starting today, we’ll be sharing a regular look back at some of the top trending items on Google Search. Let’s dive in.

From afikomen to 1040EZ
People were looking for information on Palm Sunday and Good Friday ahead of Easter; searches for both days were even higher than searches for the Pope himself. Turning to another religious tradition, with Passover beginning on Monday we saw searches rise over 100 percent for Seder staples like [charoset recipe], [brisket passover] and of course [matzo balls]. Alongside these celebrations, U.S. citizens observed another annual rite of spring: taxes were due on April 15, leading to a rise in searches for [turbotax free], [irs] and (whoops) [turbotax extension].
But what made this year different from all other years? A rare lunar eclipse known as the “blood moon,” in which the Earth's shadow covers the moon and makes it look red, occurred on Tuesday. There were more than 5 million searches on the topic, as people were eager to learn more. (Hint: if you missed seeing the blood moon this time around, keep your eyes on the sky in October. This is the first lunar eclipse in a “lunar tetrad,” a series of four total lunar eclipses each taking place six lunar months apart.)
Say goodbye and say hello
This week marked the first anniversary of last year’s Boston Marathon bombing, and commemorations led searches for the term [boston strong] to rise once again. And just yesterday, we were saddened by the passing of Gabriel Garcia Marquez, the Colombian writer best known for his masterpiece “100 Years of Solitude”—not to mention responsible for high schoolers across the U.S. knowing the term “magical realism.” On a happier note, former First Daughter Chelsea Clinton announced she’s expecting.

Entertainment that makes you go ZOMG
“Game of Thrones” fans—at least those who hadn’t read the books—were treated to a bombshell in this past Sunday’s episode when (spoiler alert) yet another wedding turned murderous. Searches for [who killed joffrey] skyrocketed as people struggled to process the loss of the boy king we love to hate. On the more sedate end of the Sunday TV spectrum, we welcomed back AMC’s “Mad Men,” which continues to provide viewers with plenty of innuendo, allusion and fashion to chew on—and search for—in between episodes.

The trailer for the highly anticipated film version of “Gone Girl” dropped this week—vaulting searches for [gone girl trailer] nearly 1,000 percent—as did a clip from another book-to-movie remake, “The Fault in Our Stars.” Between these two films we expect no dry eyes in June and no intact fingernails come October. At least we’ve got something funny to look forward to: as news broke this week that Fox 2000 is developing a sequel to the 1993 comedy classic "Mrs. Doubtfire," searches on the subject have since spiked.
And that’s it for this week in search. If you’re interested in exploring trending topics on your own, check out Google Trends. And starting today, you can also sign up to receive emails on your favorite terms, topics, or Top Charts for any of 47 countries.

by Emily Wood (noreply@blogger.com) at April 18, 2014 04:23 PM

bc-log

Using sysdig to Troubleshoot like a boss

If you haven't seen it yet, there is a new troubleshooting tool out called sysdig. It's been touted as strace meets tcpdump and, well, it seems like it is living up to the hype. I would actually rather compare sysdig to SystemTap meets tcpdump, as it has the command-line syntax of tcpdump but the power of SystemTap.

In this article I am going to cover some basic and cool examples for sysdig; for a more complete list you can look over the sysdig wiki. However, it seems that even the official sysdig documentation only scratches the surface of what can be done with sysdig.

Installation

In this article we will be installing sysdig on Ubuntu using apt-get. If you are running an RPM-based distribution, you can find details on installing via yum on sysdig's wiki.

Setting up the apt repository

To install sysdig via apt we will need to set up the apt repository maintained by Draios, the company behind sysdig. We can do this by running the following curl commands.

# curl -s https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public | apt-key add -  
# curl -s -o /etc/apt/sources.list.d/draios.list http://download.draios.com/stable/deb/draios.list

The first command above will download the Draios GPG key and add it to apt's keyring. The second will download an apt sources file from Draios and place it into the /etc/apt/sources.list.d/ directory.

Update apt's indexes

Once the sources list and GPG key are installed, we will need to re-sync apt's package indexes. This can be done by running apt-get update.

# apt-get update

Kernel headers package

The sysdig utility requires the kernel headers package; before installing sysdig, we will need to validate that the kernel headers are installed.

Check if kernel headers is installed

The system that I am using for this example already had the kernel headers package installed. To validate whether they are installed on your system, you can use the dpkg command.

    # dpkg --list | grep header
    ii  linux-generic                       3.11.0.12.13                     amd64        Complete Generic Linux kernel and headers
    ii  linux-headers-3.11.0-12             3.11.0-12.19                     all          Header files related to Linux kernel version 3.11.0
    ii  linux-headers-3.11.0-12-generic     3.11.0-12.19                     amd64        Linux kernel headers for version 3.11.0 on 64 bit x86 SMP
    ii  linux-headers-generic               3.11.0.12.13                     amd64        Generic Linux kernel headers

It is important to note that the kernel headers package must be for the specific kernel version your system is running. In the output above you can see the linux-generic package is version 3.11.0.12 and the headers packages are for the matching 3.11.0-12 kernel. If you have multiple kernels installed you can validate which version your system is running with the uname command.

# uname -r
3.11.0-12-generic

Installing the kernel headers package

To install the headers package for this specific kernel you can use apt-get. Keep in mind, you must specify the kernel version reported by uname -r.

# apt-get install linux-headers-<kernel version>

Example:

# apt-get install linux-headers-3.11.0-12-generic

Installing sysdig

Now that the apt repository is setup and we have the required dependencies, we can install the sysdig command.

# apt-get install sysdig

Using sysdig

Basic Usage

The syntax for sysdig is similar to tcpdump, in particular the saving and reading of trace files. All of sysdig's output can be saved to a file and read later, just like tcpdump. This is useful if you are running a process or experiencing an issue and want to dig through the information later.

Writing trace files

To write a file we can use the -w flag with sysdig and specify the file name.

Syntax:

# sysdig -w <output file>

Example:

# sysdig -w tracefile.dump

Like tcpdump the sysdig command can be stopped with CTRL+C.

Reading trace files

Once you have written the trace file you will need to use sysdig to read it; this can be accomplished with the -r flag.

Syntax:

# sysdig -r <output file>

Example:

    # sysdig -r tracefile.dump
    1 23:44:57.964150879 0 <NA> (7) > switch next=6200(sysdig) 
    2 23:44:57.966700100 0 rsyslogd (358) < read res=414 data=<6>[ 3785.473354] sysdig_probe: starting capture.<6>[ 3785.473523] sysdig_probe: 
    3 23:44:57.966707800 0 rsyslogd (358) > gettimeofday 
    4 23:44:57.966708216 0 rsyslogd (358) < gettimeofday 
    5 23:44:57.966717424 0 rsyslogd (358) > futex addr=13892708 op=133(FUTEX_PRIVATE_FLAG|FUTEX_WAKE_OP) val=1 
    6 23:44:57.966721656 0 rsyslogd (358) < futex res=1 
    7 23:44:57.966724081 0 rsyslogd (358) > gettimeofday 
    8 23:44:57.966724305 0 rsyslogd (358) < gettimeofday 
    9 23:44:57.966726254 0 rsyslogd (358) > gettimeofday 
    10 23:44:57.966726456 0 rsyslogd (358) < gettimeofday

Output in ASCII

By default sysdig saves the files in binary; however, you can use the -A flag to have sysdig output in ASCII.

Syntax:

# sysdig -A

Example:

# sysdig -A > /var/tmp/out.txt
# cat /var/tmp/out.txt
1 22:26:15.076829633 0 <NA> (7) > switch next=11920(sysdig)

The above example will redirect the output to a file in plain text. This can be helpful if you want to save and review the data on a system that doesn't have sysdig installed.

sysdig filters

Much like tcpdump, sysdig has filters that allow you to narrow the output to specific information. You can find a list of available filters by running sysdig with the -l flag.

Example:

    # sysdig -l

    ----------------------
    Field Class: fd

    fd.num            the unique number identifying the file descriptor.
    fd.type           type of FD. Can be 'file', 'ipv4', 'ipv6', 'unix', 'pipe', 'e
                      vent', 'signalfd', 'eventpoll', 'inotify' or 'signalfd'.
    fd.typechar       type of FD as a single character. Can be 'f' for file, 4 for 
                      IPv4 socket, 6 for IPv6 socket, 'u' for unix socket, p for pi
                      pe, 'e' for eventfd, 's' for signalfd, 'l' for eventpoll, 'i'
                       for inotify, 'o' for uknown.
    fd.name           FD full name. If the fd is a file, this field contains the fu
                      ll path. If the FD is a socket, this field contain the connec
                      tion tuple.
<truncated output>

Filter examples

Capturing a specific process

You can use the "proc.name" filter to capture all of the sysdig events for a specific process. In the example below I am filtering on any process named sshd.

Example:

    # sysdig -r tracefile.dump proc.name=sshd
    530 23:45:02.804469114 0 sshd (917) < select res=1 
    531 23:45:02.804476093 0 sshd (917) > rt_sigprocmask 
    532 23:45:02.804478942 0 sshd (917) < rt_sigprocmask 
    533 23:45:02.804479542 0 sshd (917) > rt_sigprocmask 
    534 23:45:02.804479767 0 sshd (917) < rt_sigprocmask 
    535 23:45:02.804487255 0 sshd (917) > read fd=3(<4t>10.0.0.12:55993->162.0.0.80:22) size=16384

Capturing all processes that open a specific file

The fd.name filter is used to filter events for a specific file name. This can be useful to see what processes are reading or writing a specific file or socket.

Example:

# sysdig fd.name=/dev/log
14 11:13:30.982445884 0 rsyslogd (357) < read res=414 data=<6>[  582.136312] sysdig_probe: starting capture.<6>[  582.136472] sysdig_probe:

Capturing all processes that open a specific filesystem

You can also use comparison operators with filters such as contains, =, !=, <=, >=, < and >.

Example:

    # sysdig fd.name contains /etc
    8675 11:16:18.424407754 0 apache2 (1287) < open fd=13(<f>/etc/apache2/.htpasswd) name=/etc/apache2/.htpasswd flags=1(O_RDONLY) mode=0 
    8678 11:16:18.424422599 0 apache2 (1287) > fstat fd=13(<f>/etc/apache2/.htpasswd) 
    8679 11:16:18.424423601 0 apache2 (1287) < fstat res=0 
    8680 11:16:18.424427497 0 apache2 (1287) > read fd=13(<f>/etc/apache2/.htpasswd) size=4096 
    8683 11:16:18.424606422 0 apache2 (1287) < read res=44 data=admin:$apr1$OXXed8Rc$rbXNhN/VqLCP.ojKu1aUN1. 
    8684 11:16:18.424623679 0 apache2 (1287) > close fd=13(<f>/etc/apache2/.htpasswd) 
    8685 11:16:18.424625424 0 apache2 (1287) < close res=0 
    9702 11:16:21.285934861 0 apache2 (1287) < open fd=13(<f>/etc/apache2/.htpasswd) name=/etc/apache2/.htpasswd flags=1(O_RDONLY) mode=0 
    9703 11:16:21.285936317 0 apache2 (1287) > fstat fd=13(<f>/etc/apache2/.htpasswd) 
    9704 11:16:21.285937024 0 apache2 (1287) < fstat res=0

As you can see from the above examples, filters can be used both when reading from a trace file and on the live event stream.

Chisels

Earlier I compared sysdig to SystemTap; chisels are why I made that reference. SystemTap has its own scripting language that allows you to extend its functionality. In sysdig these extensions are called chisels, and they can be written in Lua, a common programming language. I personally think the choice of Lua was a good one, as it makes extending sysdig easy for newcomers.

List available chisels

To list the available chisels you can use the -cl flag with sysdig.

Example:

    # sysdig -cl

    Category: CPU Usage
    -------------------
    topprocs_cpu    Top processes by CPU usage

    Category: I/O
    -------------
    echo_fds        Print the data read and written by processes.
    fdbytes_by      I/O bytes, aggregated by an arbitrary filter field
    fdcount_by      FD count, aggregated by an arbitrary filter field
    iobytes         Sum of I/O bytes on any type of FD
    iobytes_file    Sum of file I/O bytes
    stderr          Print stderr of processes
    stdin           Print stdin of processes
    stdout          Print stdout of processes
    <truncated output>

The list is fairly long even though sysdig is still pretty new, and since sysdig is on GitHub you can easily contribute and extend sysdig with your own chisels.

Display chisel information

While the chisel list gives a small description of each chisel, you can display more information using the -i flag with the chisel name.

Example:

    # sysdig -i bottlenecks

    Category: Performance
    ---------------------
    bottlenecks     Slowest system calls

    Use the -i flag to get detailed information about a specific chisel

    Lists the 10 system calls that took the longest to return dur
    ing the capture interval.

    Args:
    (None)

Running a chisel

To run a chisel you can run sysdig with the -c flag and specify the chisel name.

Example:

    # sysdig -c topprocs_net
    Bytes     Process
    ------------------------------
    296B      sshd

Running a chisel with filters

Even with chisels you can still use filters to target specific events.

Capturing all network traffic from a specific process

The example below shows the echo_fds chisel run against processes named apache2.

# sysdig -A -c echo_fds proc.name=apache2
------ Read 444B from 127.0.0.1:57793->162.243.109.80:80

GET /wp-admin/install.php HTTP/1.1
Host: 162.243.109.80
Connection: keep-alive
Cache-Control: max-age=0
Authorization: Basic YWRtaW46ZUNCM3lyZmRRcg==
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8

Capturing network traffic exchanged with a specific IP

We can also use the echo_fds chisel to show all network traffic for a single IP using the fd.cip filter.

# sysdig -A -c echo_fds fd.cip=127.0.0.1
------ Write 1.92KB to 127.0.0.1:58896->162.243.109.80:80

HTTP/1.1 200 OK
Date: Thu, 17 Apr 2014 03:11:33 GMT
Server: Apache
X-Powered-By: PHP/5.5.3-1ubuntu2.3
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 1698
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=utf-8

Originally Posted on BenCane.com: Go To Article

by Benjamin Cane at April 18, 2014 01:30 PM

Chris Siebenmann

What modern filesystems need from volume management

One of the things said about modern filesystems like btrfs and ZFS is that their volume management functionality is a layering violation; this view holds that filesystems should stick to filesystem stuff and volume managers should stick to volume management. For the moment let's not open that can of worms and just talk about what (theoretical) modern filesystems need from an underlying volume management layer.

Arguably the crucial defining aspect of modern filesystems like ZFS and btrfs is a focus on resilience against disk problems. A modern filesystem no longer trusts disks not to have silent errors; instead it checksums everything so that it can at least detect data faults and it often tries to create some internal resilience by duplicating metadata or at least spreading it around (copy on write is also common, partly because it gives resilience a boost).

In order to make checksums useful for healing data instead of simply detecting when it's been corrupted, a modern filesystem needs an additional operation from any underlying volume management layer. Since the filesystem can actually identify the correct block from a number of copies, it needs to be able to get all copies or variations of a set of data blocks from the underlying volume manager (and then be able to tell the volume manager which is the correct copy). In mirroring this is straightforward; in RAID 5 and RAID 6 it gets a little more complex. This 'all variants' operation will be used both during regular reads if a corrupt block is detected and during a full verification check where the filesystem will deliberately read every copy to check that they're all intact.

(I'm not sure what the right primitive operation here should be for RAID 5 and RAID 6. On RAID 5 you basically need the ability to try all possible reconstructions of a stripe in order to see which one generates the correct block checksum. Things get even more convoluted if the filesystem level block that you're checksumming spans multiple stripes.)
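
As a very rough sketch of what such a primitive could look like in the simple case, the following Python tries each possible single-device reconstruction of a RAID-5 stripe and keeps the one whose checksum matches; the stripe layout, the device naming and the use of SHA-256 are assumptions for illustration, not any real volume manager's interface. (The mirrored case is simpler still: read every copy and return the one that checksums correctly.)

    import hashlib
    from functools import reduce
    from operator import xor

    def xor_chunks(chunks):
        """XOR a list of equal-length byte strings together."""
        return bytes(reduce(xor, column) for column in zip(*chunks))

    def recover_raid5_chunk(stripe, expected_sha256):
        """'stripe' maps device name -> chunk for one RAID-5 stripe (the data
        chunks plus the parity chunk). For each device we suspect of holding
        bad data, rebuild its chunk from the XOR of all the other chunks and
        keep the reconstruction whose checksum matches the filesystem's
        record for the block."""
        for suspect in stripe:
            others = [chunk for dev, chunk in stripe.items() if dev != suspect]
            rebuilt = xor_chunks(others)
            if hashlib.sha256(rebuilt).hexdigest() == expected_sha256:
                return suspect, rebuilt   # the bad device and the healed data
        return None, None                 # no single-device reconstruction matches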

Modern filesystems generally also want some way of saying 'put A and B on different devices or redundancy clusters' in situations where they're dealing with stripes of things. This enables them to create multiple copies of (important) metadata on different devices for even more protection against read errors. This is not as crucial if the volume manager is already providing redundancy.

This level of volume manager support is a minimum level, as it still leaves a modern filesystem with the RAID-5+ write hole and a potentially inefficient resynchronization process. But it gets you the really important stuff, namely redundancy that will actually help you against disk corruption.

by cks at April 18, 2014 06:18 AM

Raymii.org

IPSEC/L2TP VPN on Ubuntu 14.04 with OpenSwan, xl2tpd and ppp

This is a guide on setting up an IPSEC/L2TP VPN server on Ubuntu 14.04 using Openswan as the IPsec server, xl2tpd as the L2TP provider and ppp or local users / PAM for authentication. It has a detailed explanation of every step. We chose the IPSEC/L2TP protocol stack because of recent vulnerabilities found in pptpd VPNs and because it is supported on all major operating systems by default. More than ever, your freedom and privacy when online is under threat. Governments and ISPs want to control what you can and can't see while keeping a record of everything you do, and even the shady-looking guy lurking around your coffee shop or the airport gate can grab your bank details more easily than you may think. A self-hosted VPN lets you surf the web the way it was intended: anonymously and without oversight.

April 18, 2014 12:00 AM

April 17, 2014

Rich Bowen

ApacheCon NA 2014 Keynotes

This year at ApacheCon, I had the unenviable task of selecting the keynotes. This is always difficult, because you want to pick people who are inspirational, exciting speakers, but people who haven't already been heard by everyone at the event. You also need to give some of your sponsors the stage for a bit, and hope that they don't take the opportunity to bore the audience with a sales pitch.

I got lucky.

(By the way, videos of all of these talks will be on the Apache YouTube channel very soon - https://www.youtube.com/user/TheApacheFoundation)

We had a great lineup, covering a wide range of topics.

Day One:

We started with Hilary Mason, talking about Big Data. Unlike a lot of droney Big Data talks, she defined Big Data in terms of using huge quantities of data to solve actual human problems, and gave a historical view of Big Data going back to the first US Census. Good stuff.

Next, Samisa Abeysinghe talked about Apache Stratos, and the services and products that WSO2 is building on top of it. Although he had the opportunity to do nothing more than promote his (admittedly awesome) company, Samisa talked more about the Stratos project and the great things that it's doing in the Platform As A Service space. We love WSO2.

And to round out the first day of keynotes, James Watters from Pivotal talked about the CloudFoundry foundation that he's set up, and why he chose to do that rather than going with an existing foundation, among other things. I had talked some with James prior to the conference about his talk, and he came through with a really great talk.

Day Two:

Day Two started with something a little different. Upayavira talked about the tool that geeks seldom mention - their minds - and how to take care of them. He talked about mindfulness - the art of being where you are when you are, and noticing what is going on around you. He then led us through several minutes of quiet contemplation and focusing of our minds. While some people thought this was a little weird, most people I talked with appreciated this calm, centering way to start the morning.

Mark Hinkle, from Citrix, talked about community and code, and made a specific call to the foundation to revise its sponsorship rules to permit companies like Citrix to give us more money in a per-project targeted fashion.

And Jim Zemlin rounded out the day two keynotes by talking about what he does at the Linux Foundation, and how different foundations fill different niches in the Open Source software ecosystem. This is a talk I personally asked him to do, so I was very pleased with how it turned out. Different foundations do things differently, and I wanted him to talk some about why, and why some projects may fit better in one or another.

At the end of day three, we had two closing keynotes. We've done closing keynotes before with mixed results - a lot of people leave before they happen. But we figured that with more content on the days after that, people would stay around. So it was disappointing to see how empty the rooms were. But the talks were great.

Allison Randal, a self-proclaimed Unix Graybeard (no, really!) talked about the cloud, and how it's just the latest incarnation of a steady series of small innovations over the last 50 years or so, and what we can look for in the coming decade. She spoke glowingly about Apache and its leadership role in that space.

Then Jason Hibbets finished up by talking about his work in Open Source Cities, and how Open Source methodologies can work in real-world collaboration to make your home town so much better. I'd heard this presentation before, but it was still great to hear the things that he's been doing in his town, and how they can be done in other places using the same model.

So, check the Apache YouTube channel in a week or so - https://www.youtube.com/user/TheApacheFoundation - and make some time to watch these presentations. I was especially pleased with Hilary's and Upayavira's talks, and recommend you watch those if you are short on time and want to pick just a few.

by rbowen at April 17, 2014 04:05 PM


Administered by Joe. Content copyright by their respective authors.