Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

March 25, 2015

Yellow Bricks

Cloud native inhabitants


Whenever I hear the term “cloud native” I think about my kids. That may sound a bit strange, as many of you will probably think of “apps” first when “cloud native” is dropped. Cloud native to me is not about an application, but about a problem which has been solved and a solution which is offered in a specific way. A week or so ago someone made a comment on twitter about how “Generation X” will adopt cloud faster than the current generation of IT admins…

Some even say that “Generation X” is more tech savvy: just look at how a 3 year old handles an iPad, they are growing up with technology. To be blunt… that has nothing to do with the technical skills of the 3 year old, and everything to do with an intuitive user interface that took years to develop. It comes naturally to them, as it is what they are exposed to from day 1. They see their mom or dad swiping a screen daily; mimicking them doesn’t require a deep technical understanding of how an iPad works, they simply move their finger from right to left… but I digress.

My kids don’t know what a video tape is, and even a CD to play music is so 2008, which for them is a lifetime ago; my kids are cloud native inhabitants. They use Netflix to watch TV, Spotify to listen to music, Facebook to communicate with friends, and YouTube, Gmail and many other services running somewhere in the cloud. They are native inhabitants of the cloud. They won’t “adopt” cloud technology faster; for them it is the natural choice, as it is what they are exposed to day in, day out.

"Cloud native inhabitants" originally appeared on Yellow Bricks. Follow me on twitter - @DuncanYB.

Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at March 25, 2015 03:53 PM

Everything Sysadmin

The State of DevOps Report

Where does it come from?

Have you read the 2014 State of DevOps report? The analysis is done by some of the world's best IT researchers and statisticians.

Be included in the 2015 edition!

A lot of the data used to create the report comes from the annual survey done by Puppet Labs. I encourage everyone to take 15 minutes to complete this survey. It is important that your voice and experience is represented in next year's report. Take the survey

But I'm not important enough!

Yes you are. If you think "I'm not DevOps enough" or "I'm not important enough" then it is even more important that you fill out the survey. The survey needs data from sites that are not "DevOps" (whatever that means!) to create the basis of comparison.

Well, ok, I'll do it then!

Great! Click the link:

Thank you,

March 25, 2015 02:29 PM


Sysadmin Open Source Landscape

Idea

I decided to take up a small side project. That is the reason why I didn’t write any new blog posts in the last month or two. (No, I didn’t run out of topics to talk about.) The idea for the project came out of all those questions on the web, where people ask what […]

by Alen Krmelj at March 25, 2015 09:18 AM

Chris Siebenmann

A significant amount of programming is done by superstition

Ben Cotton wrote in a comment here:

[...] Failure to adhere to a standard while on the surface making use of it is a bug. It's not a SySV init bug, but a bug in the particular init script. Why write the information at all if it's not going to be used, and especially if it could cause unexpected behavior? [...]

The uncomfortable answer to why this happens is that a significant amount of programming in the real world is done partly through what I'll call superstition and mythology.

In practice, very few people study the primary sources (or even authoritative secondary sources) when they're programming and then work forward from first principles; instead they find convenient references, copy and adapt code that they find lying around in various places (including the Internet), and repeat things that they've done before with whatever variations are necessary this time around. If it works, ship it. If it doesn't work, fiddle things until it does. What this creates is a body of superstition and imitation. You don't necessarily write things because they're what's necessary and minimal, or because you fully understand them; instead you write things because they're what people before you have done (including your past self) and the result works when you try it.

(Even if you learned your programming language from primary or high quality secondary sources, this deep knowledge fades over time in most people. It's easy for bits of it to get overwritten by things that are basically folk wisdom, especially because there can be little nuggets of important truth in programming folk wisdom.)

All of this is of course magnified when you're working on secondary artifacts for your program like Makefiles, install scripts, and yes, init scripts. These aren't the important focus of your work (that's the program code itself), they're just a necessary overhead to get everything to go, something you usually bang out more or less at the end of the project and probably without spending the time to do deep research on how to do them exactly right. You grab a starting point from somewhere, cut out the bits that you know don't apply to you, modify the bits you need, test it to see if it works, and then you ship it.

(If you say that you don't take this relatively fast road for Linux init scripts, I'll raise my eyebrows a lot. You've really read the LSB specification for init scripts and your distribution's distro-specific documentation? If so, you're almost certainly a Debian Developer or the equivalent specialist for other distributions.)

So in this case the answer to Ben Cotton's question is that people didn't deliberately write incorrect LSB dependency information. Instead they either copied an existing init script or thought (through superstition aka folk wisdom) that init scripts needed LSB headers that looked like this. When the results worked on a System V init system, people shipped them.
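For concreteness, the LSB header in question is a comment block at the top of an init script; a minimal sketch (the service name, dependencies and runlevels here are placeholder values) looks like this:

```
### BEGIN INIT INFO
# Provides:          myservice
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example daemon
### END INIT INFO
```

It is exactly the kind of block that gets copied from an existing script and tweaked until the result appears to work.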

This isn't something that we like to think about as programmers, because we'd really rather believe that we're always working from scratch and only writing the completely correct stuff that really has to be there; 'cut and paste programming' is a pejorative most of the time. But the reality is that almost no one has the time to check authoritative sources every time; inevitably we wind up depending on our memory, and it's all too easy for our fallible memories to get 'contaminated' with code we've seen, folk wisdom we've heard, and so on.

(And that's the best case, without any looking around for examples that we can crib from when we're dealing with a somewhat complex area that we don't have the time to learn in depth. I don't always take code itself from examples, but I've certainly taken lots of 'this is how to do <X> with this package' structural advice from them. After all, that's what they're there for; good examples are explicitly there so you can see how things are supposed to be done. But that means bad examples or imperfectly understood ones add things that don't actually have to be there or that are subtly wrong (consider, for example, omitted error checks).)

by cks at March 25, 2015 06:17 AM

Google Blog

It's time to put America’s small businesses on the map

If you searched for “Dependable Care near Garland, TX” a few months ago, you would have seen a lot of search results—but not the one that mattered to Marieshia Hicks. Marieshia runs Dependable Care Health Service in Garland, and it was her business that was missing. But that all changed last month when she attended a workshop at the Garland Chamber of Commerce called Let’s Put Garland on the Map.

The workshop, run by our Get Your Business Online team, showed her how to use Google My Business—a tool that allows business owners to control the info listed about their business on Google Search and Maps—to help more people find Dependable. Marieshia added an updated phone number, hours of operation, and a description to her business listing. Within a few months, she had more customers come through the door and referrals from doctors who could reach her. This one simple adjustment made a difference. In Marieshia’s words: “It’s huge.”

Huge might be an understatement. Four out of five people use search engines to find local information, like business hours and addresses, and research shows that businesses with complete listings are twice as likely (PDF) to be considered reputable by customers. Consumers are 38 percent more likely to visit and 29 percent more likely to consider purchasing from businesses with complete listings. Yet only 37 percent of businesses (PDF) have claimed a local business listing on a search engine. That’s a lot of missed opportunities for small businesses.

With this in mind, our Get Your Business Online team set out in 2011 to help businesses like Marieshia’s get found online. We’ve gone to every state in the U.S. and worked with thousands of business owners to create free websites and update their Google Search and Maps listings. But there’s a lot more work to do to help businesses take advantage of the vast opportunities yielded by the web. So today, we’re introducing Let’s Put Our Cities on the Map, a new program to help 30,000 cities get their local businesses online.

If we want to help every business in the U.S., we need to reach businesses where they are. So this tailor-made program provides each city with a custom website where local businesses can find helpful resources, including a new diagnostic tool that shows businesses how they appear on Search and Maps, a step-by-step guide for getting online with Google My Business, and a free website and domain name for one year with our partner, Startlogic.

We’re also forming partnerships with local organizations—like chambers and small business development centers—and equipping them with free trainings and customized city materials to run workshops just like the one Marieshia attended in Garland. These local partners know the challenges for local businesses more than anyone—and they recognize the value of getting businesses online. After all, getting Dependable’s information online not only means the world for Marieshia, it means even more for the city of Garland. Complete business info can help generate economic value up to $300,000 a year for a small city or up to $7 million for a large one (PDF). So when our local businesses are online, our local economies benefit.

If you have a favorite local business—a day care, a dentist, a dry cleaner—show your support by helping them get their info online and on the map. Visit your city’s website at to find out how you can get involved.

Let’s put our cities on the map!

by Google Blogs at March 25, 2015 05:00 AM

Ubuntu Geek

Mydumper – Mysql Database Backup tool

Mydumper is a tool used for backing up MySQL database servers much faster than the mysqldump tool distributed with MySQL. It also has the capability to retrieve the binary logs from the remote server at the same time as the dump itself.
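For a flavor of how it is typically invoked, here is a sketch of a basic run (the database name, credentials and paths are placeholders; the flags shown are mydumper's usual long options):

```
# Dump one database with 4 parallel threads, compressing the output files
mydumper --user backup --password secret --database shop \
         --outputdir /var/backups/shop --threads 4 --compress
```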
Read the rest of Mydumper – Mysql Database Backup tool (467 words)

© ruchi for Ubuntu Geek, 2015.

by ruchi at March 25, 2015 12:28 AM

Sonia Hamilton

Devops and Old Git Branches

A guest blog post I wrote on managing git branches when doing devops.

When doing Devops we all know that using source code control is a “good thing” — indeed it would be hard to imagine doing Devops without it. But if you’re using Puppet and R10K for your configuration management you can end up having hundreds of old branches lying around — branches like XYZ-123, XYZ-123.fixed, XYZ-123.fixed.old and so on. Which branches to clean up, which to keep? How to easily clean up the old branches? This article demonstrates some git configurations and scripts that make working with hundreds of git branches easier…
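As a flavor of the kind of script involved (an illustrative sketch, not taken from the article), a common pattern for pruning branches that have already been merged into master is:

```shell
# List local branches already merged into master, filter out master itself,
# and delete them; `git branch -d` refuses to delete unmerged branches.
git branch --merged master | grep -vE '(^\*|^ *master$)' | xargs -r git branch -d
```

Branches with unmerged work survive, since only `-D` force-deletes.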

Go to Devops and Old Git Branches to read the full article.

by Sonia Hamilton at March 25, 2015 12:10 AM

March 24, 2015

Evaggelos Balaskas

trying ipv6 only web

Although it feels really lonely … there is not a lot of content yet.

All you need is an ISP that gives you an IPv6 address space, pppd and some free time !

You need to make sure that your CPE can work like a modem, so that PPPoE can pass through.

Point-to-Point Protocol Daemon


plugin rp-pppoe.so
# debugging
debug
dump
# authentication
noauth
-chap
hide-password
name "USERNAME@DOMAIN"
# device
eth0
mtu 1492
noip
+ipv6
defaultroute
usepeerdns

The noip option means no IPv4 address will be negotiated.
The +ipv6 option enables IPv6 negotiation.

You should replace USERNAME & DOMAIN according to your credentials.

You need to edit /etc/ppp/pap-secrets to add the password for your account, with an entry of the form: "USERNAME@DOMAIN" * PASSWORD


If your ISP doesn't provide you with IPv6 DNS servers, edit your /etc/resolv.conf to add the OpenDNS servers:




# pon ipv6

Plugin loaded.
RP-PPPoE plugin version 3.8p compiled against pppd 2.4.7
pppd options in effect:
debug # (from /etc/ppp/peers/ipv6)
dump # (from /etc/ppp/peers/ipv6)
plugin # (from /etc/ppp/peers/ipv6)
noauth # (from /etc/ppp/peers/ipv6)
-chap # (from /etc/ppp/peers/ipv6)
name # (from /etc/ppp/peers/ipv6)
eth0 # (from /etc/ppp/peers/ipv6)
eth0 # (from /etc/ppp/peers/ipv6)
asyncmap 0 # (from /etc/ppp/options)
mtu 1492 # (from /etc/ppp/peers/ipv6)
lcp-echo-failure 4 # (from /etc/ppp/options)
lcp-echo-interval 30 # (from /etc/ppp/options)
hide-password # (from /etc/ppp/peers/ipv6)
noip # (from /etc/ppp/peers/ipv6)
defaultroute # (from /etc/ppp/peers/ipv6)
proxyarp # (from /etc/ppp/options)
usepeerdns # (from /etc/ppp/peers/ipv6)
+ipv6 # (from /etc/ppp/peers/ipv6)
noipx # (from /etc/ppp/options)


# clear ; ip -6 a && ip -6 r

the result:

1: lo: mtu 65536
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: ppp0: mtu 1492 qlen 3
inet6 2a02:580:31a:0:744e:f2f1:bc63:dbdd/64 scope global mngtmpaddr dynamic
valid_lft 3465sec preferred_lft 2865sec
inet6 fe80::744e:f2f1:bc63:dbdd/10 scope link
valid_lft forever preferred_lft forever

2a02:580:31a::/64 dev ppp0 proto kernel metric 256 expires 3464sec
fe80::/10 dev ppp0 metric 1
fe80::/10 dev ppp0 proto kernel metric 256
default via fe80::90:1a00:1a0:80be dev ppp0 proto ra metric 1024 expires 1664sec



Tag(s): ipv6

March 24, 2015 10:29 PM


Can Interrogators Teach Digital Security Pros?

Recently Bloomberg published an article titled The Dark Science of Interrogation. I was fascinated by this article because I graduated from the SERE program at the US Air Force Academy in the summer of 1991, after my freshman year there. SERE teaches how to resist the interrogation methods used against prisoners of war. When I attended the school, the content was based on techniques used by Korea and Vietnam against American POWs in the 1950s-1970s.

As I read the article, I realized the subject matter reminded me of another aspect of my professional life.

In intelligence, as in the most mundane office setting, some of the most valuable information still comes from face-to-face conversations across a table. In police work, a successful interrogation can be the difference between a closed case and a cold one. Yet officers today are taught techniques that have never been tested in a scientific setting. For the most part, interrogators rely on nothing more than intuition, experience, and a grab bag of passed-down methods.

“Most police officers can tell you how many feet per second a bullet travels. They know about ballistics and cavity expansion with a hollow-point round,” says Mark Fallon, a former Naval Criminal Investigative Service special agent who led the investigation into the USS Cole attack and was assistant director of the federal government’s main law enforcement training facility. “What as a community we have not yet embraced as effectively is the behavioral sciences...”

Christian Meissner, a psychologist at Iowa State University, coordinates much of HIG’s research. “The goal,” he says, “is to go from theory and science, what we know about human communication and memory, what we know about social influence and developing cooperation and rapport, and to translate that into methods that can be scientifically validated.” Then it’s up to Kleinman, Fallon, and other interested investigators to test the findings in the real world and see what works, what doesn’t, and what might actually backfire.

Does this sound familiar? Security people know how many flags to check in a TCP header, or how many bytes to offset when writing shell code, but we don't seem to "know" (in a "scientific" sense) how to "secure" data, networks, and so on.

One point of bright light is the Security Metrics community. The mailing list is always interesting for those trying to bring counting and "science" to the digital security profession. Another great project is the Index of Cyber Security run by Dan Geer and Mukul Pareek.

I'm not saying there is a "science" of digital security. Others will disagree. I also don't have any specific recommendations based on what I read in the interrogation article. However, I did resonate with the article's message that "street wisdom" needs to be checked to see if it actually works. Scientific methods can help.

I am taking small steps in that direction with my PhD in the war studies department at King's College London.

by Richard Bejtlich at March 24, 2015 05:38 PM

/sys/net Adventures

Firefox flash broken ? How to install Pepper Flash on Fedora

For the past two weeks I haven't been able to play any YouTube videos in Firefox, and other flash sites are either broken or don't play sound...

Adobe has stopped the development of Flash for Linux, which means that only bug and security fixes are applied. For an unknown reason my last "yum update" broke my flash support and my ability to watch YouTube videos.

The alternative is to use Pepper Flash, a wrapper around the Google Chrome flash plugin. Google continues to support Flash for Linux by shipping the plugin inside Chrome.

I haven't found any detailed tutorial for Fedora, so here it goes.


You need to have the official Adobe Flash plugin installed.

# rpm -q flash-plugin


Exit any Firefox processes.

Install Pepper Flash Repositories :

# yum install
# yum install

Install Chromium :

# yum clean all && yum install chromium 

Install Pepper Flash :

# yum install chromium-pepper-flash.x86_64  

Install FreshPlayer :

# yum install freshplayerplugin.x86_64  

Start Firefox and go to the "Add-ons" configuration menu; you should now have two "Shockwave Flash" plugins. One is the official Adobe plugin (old version 11.x), the other is the Pepper Flash plugin (up to date version). Check that both are activated.

Finally go to and check that you are using the up to date version.

Hopefully you can now play YouTube videos and use other flash based sites.

Hope that helps!

by Greg at March 24, 2015 04:28 PM

Everything Sysadmin

2015 DevOps Survey

Have you taken the 2015 DevOps survey? The data from this survey influences many industry executives and helps push them towards better IT processes (and removing the insanity we find in IT today). You definitely want your voice represented. It takes only 15 minutes.

Take the 2015 DevOps Survey Now

March 24, 2015 02:30 PM

Yellow Bricks

Startup intro: Rubrik. Backup and recovery redefined


Some of you may have seen the article by The Register last week about this new startup called Rubrik. Rubrik just announced what they are working on and their funding at the same time:

Rubrik, Inc. today announced that it has received $10 million in Series A funding and launched its Early Access Program for the Rubrik Converged Data Management platform. Rubrik offers live data access for recovery and application development by fusing enterprise data management with web-scale IT, and eliminating backup software. This marks the end of a decade-long innovation drought in backup and recovery, the backbone of IT. Within minutes, businesses can manage the explosion of data across private and public clouds.

The Register made a comment which I want to briefly touch on. They mentioned it was odd that a venture capitalist is now the CEO of a startup, and how it is normally the person with the technical vision who heads up the company. I can’t agree more with The Register. For those who don’t know Rubrik and their CEO, the choice of Bipul Sinha may come as a surprise; it may seem a bit odd. Then there are some who may say that it is a logical choice considering they are funded by Lightspeed… The truth of the matter is that Bipul Sinha is the person with the technical vision. I had the pleasure of seeing his vision evolve from a couple of scribbles on a whiteboard to what Rubrik is right now.

I still recall having a conversation with Bipul about the state of the “backup industry”, and I recall we agreed the different components of a datacenter had evolved over time but that the backup industry was still very much stuck in the old world. (We agreed backup and recovery solutions suck in most cases…) Back when we had this discussion there was nothing yet, no team, no name, just a vision. Knowing what is coming in the near future and knowing their vision, I do think this quote from the press release best captures what Rubrik is working on and what it will do:

Today we are excited to announce the first act in our product journey. We have built a powerful time machine that delivers live data and seamless scale in a hybrid cloud environment. Businesses can now break the shackles of legacy and modernize their data infrastructure, unleashing significant cost savings and management efficiencies.

Of course Rubrik would not be possible without a very strong team of founding members. Arvind Jain, Arvind Nithrakashyap and Soham Mazumdar are probably the strongest co-founders one can wish for. The engineering team has deep experience in building distributed systems, such as Google File System, Google Search, YouTube, Facebook Data Infrastructure, Amazon Infrastructure, and Data Domain File System. Expectations just raised a couple of notches, right?!

I agree that even the statement above is still a bit fluffy, so let's add some more details: what are they working on? Rubrik is working on a solution which combines backup software and a backup storage appliance into a single solution, and initially it will target VMware environments. They are building (and I hate using this word) a hyperconverged backup solution, and it will scale from 3 to 1000s of nodes. Note that this solution will be up and running in 15 minutes and includes the option to age out data to the public cloud. What impressed me most is that Rubrik can discover your datacenter without any agents, it scales out in a fully automated fashion, and it is capable of deduplicating / compressing data while also offering the ability to mount data instantly. All of this through a slick UI, or you can leverage the REST APIs, fully programmable end-to-end.

I just went over “instant mount” quickly, but I want to point out that this is not just for restoring VMs. Considering the REST APIs, you can also imagine that this would be a perfect solution for enabling test/dev environments or running Tier 2/3 workloads. How valuable is it to have instant copies of your production data available, and to test your new code against production data without any interruption to your current environment? To throw a buzzword in there: perfectly fit for a devops world and continuous development.

That is about all I can say for now, unfortunately… For those who agree that backup/recovery has not evolved and are interested in a backup solution for tomorrow, there is an early access program and I urge you to sign up to learn more, but also to help shape the product! The solution is targeting environments of 200 VMs and upwards, so make sure you meet those requirements. Read more here and/or follow them on twitter (or Bipul).

Good luck Rubrik, I am sure this is going to be a great journey!

"Startup intro: Rubrik. Backup and recovery redefined" originally appeared on Yellow Bricks. Follow me on twitter - @DuncanYB.

Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at March 24, 2015 02:16 PM

Server Density

Lessons from using Ansible exclusively for 2 years.

Today we’re really happy to be sharing an awesome guest post written by Corban Raun. Corban has been working with Ansible for ~2 years and is responsible for developing our Ansible playbook!

He’s been trying to automate systems administration since he started learning linux many years ago. If you’d like to learn more, say thanks or ask any questions you can find Corban on Twitter (@corbanraun) and on his website. So without further ado, here’s his great article about Ansible – enjoy!


As a Linux Systems Administrator, I came to a point in my career where I desperately needed a configuration management tool. I started looking at products like Puppet, Chef and SaltStack but I felt overwhelmed by the choice and wasn’t sure which tool to choose.

I needed to find something that worked, worked well, and didn’t take a lot of time to learn. All of the existing tools seemed to have their own unique way of handling configuration management, with many varying pros and cons. During this time a friend and mentor suggested I look into a lesser known product called Ansible.

No looking back

I have now been using Ansible exclusively for ~2 years on a wide range of projects, platforms and application stacks including Rails, Django, and Meteor web applications; MongoDB clustering; user management; CloudStack setup; and monitoring.

I also use Ansible to provision cloud providers like Amazon, Google, and DigitalOcean; and for any task or project that requires repeatable processes and a consistent environment (which is pretty much everything).

DevOps automation ansible

Credit DevOps Reactions – Continuous delivery

Ansible vs Puppet, Chef and Saltstack

One reason I chose Ansible was its ability to maintain a fully immutable server architecture and design. We will get to exactly what I mean later, but it’s important to note that my goal in writing this post is not to compare or contrast Ansible with other products. There are many articles available online regarding that. In fact, some of the things I love about Ansible are available in other configuration management tools.

My hope with this article is actually to be able to give you some Ansible use cases, practical applications, and best practices; with the ulterior motive of persuading you that Ansible is a product worth looking into. That way you may come to your own conclusions about whether or not Ansible is the right tool for your environment.

Immutable Server Architecture

When starting a new project with Ansible, one of the first things to think about is whether or not you want your architecture to support Immutable servers. For the purposes of this article, having an Immutable server architecture means that we have the ability to create, destroy, and replace servers at any time without causing service disruptions.

As an example, let's say that part of your server maintenance window includes updating and patching servers. Instead of updating a currently running server, we should be able to spin up an exact server replica that contains the upgrades and security patches we want to apply. We can then replace and destroy the currently running server. Why or how is this beneficial?

By creating a new server that is exactly the same as our current environment including the new upgrades, we can then proceed with confidence that the updated packages will not break or cause service disruption. If we have all of our server configuration in Ansible using proper source control, we can maintain this idea of Immutable architectures. By doing so we can keep our servers pure and unadulterated by those who might otherwise make undocumented modifications.

Ansible allows us to keep all of our changes centralized. One often unrealized benefit of this is that our Ansible configuration can be looked at as a type of documentation and disaster recovery solution. A great example of this can be found in the Server Density blog post on Puppet.

This idea of Immutable architecture also helps us to become vendor-agnostic, meaning we can write or easily modify an Ansible playbook which can be used across different providers. This includes custom datacenter layouts as well as cloud platforms such as Amazon EC2, Google Cloud Compute, and Rackspace. A really good example of a multi vendor Ansible playbook can be seen in the Streisand project.

Use Cases

Use Case #1: Security Patching

Ansible is an incredibly powerful and robust configuration management system. My favorite feature? Its simplicity. This can be seen by how easy it is to patch vulnerable servers.

Example #1: Shellshock

The following playbook was run against 100+ servers and patched the bash vulnerability in less than 10 minutes. The below example updates both Debian and Red Hat Linux variants. It will first run on half of all the hosts that are defined in an inventory file.

- hosts: all
  gather_facts: yes
  remote_user: craun
  serial: "50%"
  sudo: yes
  tasks:
    - name: Update Shellshock (Debian)
      apt: name=bash state=latest
      when: ansible_os_family == "Debian"

    - name: Update Shellshock (RedHat)
      yum: name=bash state=latest
      when: ansible_os_family == "RedHat"

Example #2: Heartbleed and SSH

The following playbook was run against 100+ servers patching the HeartBleed vulnerability. At the time, I also noticed that the servers needed an updated version of OpenSSH. The below example updates both Debian and RedHat linux variants. It will patch and reboot 25% of the servers at a time until all of the hosts defined in the inventory file are updated.

- hosts: all
  gather_facts: yes
  remote_user: craun
  serial: "25%"
  sudo: yes
  tasks:
    - name: Update OpenSSL and OpenSSH (Debian)
      apt: name={{ item }} state=latest
      with_items:
        - openssl
        - openssh-client
        - openssh-server
      when: ansible_os_family == "Debian"

    - name: Update OpenSSL and OpenSSH (RedHat)
      yum: name={{ item }} state=latest
      with_items:
        - openssl
        - openssh-clients
        - openssh-server
      when: ansible_os_family == "RedHat"

    - name: Reboot servers
      command: reboot

Use Case #2: Monitoring

One of the first projects I used Ansible for was to simultaneously deploy and remove a monitoring solution. The project was simple: remove Zabbix and replace it with Server Density. This was incredibly easy with the help of Ansible. I ended up enjoying the project so much, I open sourced it.

One of the things I love about Ansible is how easy it is to write playbooks, and yet always have room to improve upon them. The Server Density Ansible playbook is the result of many revisions to my original code that I started a little over a year ago. I continually revisit it and make updates using newfound knowledge and additional features that have been released in the latest versions of Ansible.

Everything Else

Ansible has many more use cases than I have mentioned in this article so far, like provisioning cloud infrastructure, deploying application code, managing SSH keys, configuring databases, and setting up web servers. One of my favorite open source projects that uses Ansible is called Streisand. The Streisand project is a great example of how Ansible can be used with multiple cloud platforms and data center infrastructures. It shows how easy it is to take something difficult like setting up VPN services and turning it into a painless and repeatable process.

Already using a product like Puppet or SaltStack? You can still find benefits to using Ansible alongside other configuration management tools. Have an agent that needs to be restarted? Great! Ansible is agentless, so you could run something like:

ansible -i inventories/servers all -m service -a "name=salt-minion state=restarted" -u craun -K --sudo

from the command line to restart your agents. You can even use Ansible to install the agents required by other configuration management tools.
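As a sketch of that last point, a hypothetical ad-hoc run could install an agent across RedHat-family hosts (the inventory path, user, and package name are assumptions for illustration):

```shell
# Install the Puppet agent on every host in the inventory
# (inventory path, remote user and package name are illustrative assumptions)
ansible -i inventories/servers all -m yum \
  -a "name=puppet state=present" -u craun -K --sudo
```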

Best practices

In the last few years of using Ansible I have learned a few things that may be useful should you choose to give it a try.

Use Ansible Modules where you can

When I first started using Ansible, I used the command and shell modules fairly regularly. I was so used to automating things with Bash that it was easy for me to fall into old habits. Ansible has many extremely useful modules. If you find yourself using the `command` and `shell` modules often in a playbook, there is probably a better way to do it. Start off by getting familiar with the modules Ansible has to offer.
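As a small illustration, restarting a service with the `shell` module works, but the `service` module expresses the same intent declaratively and idempotently; a minimal sketch (task names are illustrative):

```yaml
# Old habit: shelling out works, but Ansible cannot track state
- name: Restart nginx (shell habit)
  shell: /etc/init.d/nginx restart

# Idiomatic: the service module is declarative and idempotent
- name: Restart nginx (service module)
  service: name=nginx state=restarted
```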

Make your roles modular (i.e. reusable)

I used to maintain a separate Ansible project folder for every new application stack or project. I found myself copying the exact same roles from one project to another and making minor changes to them (such as Nginx configuration or vhost files). I found this to be inefficient and annoying, as I was essentially repeating steps. It wasn’t until I changed employers that I learned from my teammates that there is a much better way to set up projects. As an example, one thing Ansible lets you do is create templates using Jinja2. Let’s say we have an Nginx role with the following nginx vhost template:

server {
  listen 80;

  location / {
    return 302 https://$host$request_uri;
  }
}

server {
  listen 443 ssl spdy;
  ssl_certificate    /etc/ssl/certs/mysite.crt;
  ssl_certificate_key    /etc/ssl/private/mysite.key;

  location / {
    root   /var/www/public;
    index  index.html index.htm;
  }
}
While the above example is perfectly valid, we can make it modular by adding some variables:

server {
  listen 80;

  location / {
    return 302 https://$host$request_uri;
  }
}

server {
  listen 443 ssl spdy;
  ssl_certificate    {{ ssl_certificate_path }};
  ssl_certificate_key    {{ ssl_key_path }};
  server_name {{ server_name }} {{ ansible_eth0.ipv4.address }};

  location / {
    root   {{ web_root }};
    index  index.html index.htm;
  }
}
We can then alter these variables within many different playbooks while reusing the same Nginx role:

- hosts: website
  gather_facts: yes
  remote_user: craun
  sudo: yes
  vars:
    ssl_certificate_path: "/etc/ssl/certs/mysite.crt"
    ssl_key_path: "/etc/ssl/private/mysite.key"
    server_name: ""
    web_root: "/var/www/public"
  roles:
    - nginx
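The same role can also serve multiple environments by overriding these variables elsewhere, for example in a group_vars file (the file name and paths here are illustrative assumptions):

```yaml
# group_vars/staging.yml -- hypothetical per-environment overrides
ssl_certificate_path: "/etc/ssl/certs/staging.crt"
ssl_key_path: "/etc/ssl/private/staging.key"
web_root: "/var/www/staging"
```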

Test, Rinse, Repeat


Credit DevOps Reactions – Writing Unit Tests

Test your changes, and test them often. The practice of testing changes is not a new one. It can, however, become difficult to test modifications when both sysadmins and developers are making changes to different parts of the same architecture. One of the reasons I chose Ansible is its ability to be used and understood by both traditional systems administrators and developers. It is a true development operations tool.

For example, it’s incredibly simple to integrate Ansible with tools like HashiCorp’s Vagrant. By combining the tools, you and your developers will be more confident that what is in production can be repeated and tested in a local environment. This is crucial when troubleshooting configuration and application changes. Once you have verified and tested your changes with these tools you should have relatively high confidence that your changes should not break anything (remember what immutable means?).
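As a minimal sketch of that integration, a Vagrantfile can hand provisioning off to an Ansible playbook (the box name and playbook path are assumptions for illustration):

```ruby
# Minimal Vagrantfile sketch: provision the local VM with Ansible.
# Box name and playbook path are illustrative assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "site.yml"
  end
end
```

With this in place, `vagrant up` runs the same playbook locally that you would run against production, which is what makes the local environment a trustworthy test bed.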

What now?

As mentioned previously, my goal was not to compare Ansible to other products; after all, you can find uses for it in environments where other configuration management tools are already in place, and some of the features I have talked about are available in other products as well.

Hopefully this article gave you an idea as to why Ansible may be useful in your server architecture. If you only take one thing from this article, let it be this: Ansible can help you maintain and manage any server architecture you can imagine, and it’s a great place to get started in the world of automation.

The post Lessons from using Ansible exclusively for 2 years. appeared first on Server Density Blog.

by Corban Raun at March 24, 2015 10:42 AM

Aaron Johnson

Links: 3-23-2015

by ajohnson at March 24, 2015 06:30 AM

Chris Siebenmann

What is and isn't a bug in software

In response to my entry on how systemd is not fully SysV init compatible because it pays attention to LSB dependency comments when SysV init does not, Ben Cotton wrote in a comment:

I'd argue that "But I was depending on that bug!" is generally a poor justification for not fixing a bug.

I strongly disagree with this view at two levels.

The first level is simple: this is not a bug in the first place. Specifically, it's not an omission or a bug that System V init doesn't pay attention to LSB comments; it's how SysV init behaves and has behaved from the start. SysV init runs things in the order they are in the rcN.d directory and that is it. In a SysV init world you are perfectly entitled to put whatever you want to into your script comments, make symlinks by hand, and expect SysV init to run them in the order of your symlinks. Anything that does not do this is not fully SysV init compatible. As a direct consequence of this, people who put incorrect information into the comments of their init scripts were not 'relying on a bug' (and their init scripts did not have a bug; at most they had a mistake in the form of an inaccurate comment).

(People make lots of mistakes and inaccuracies in comments, because the comments do not matter in SysV init (very little matters in SysV init).)

The second level is both more philosophical and more pragmatic and is about backwards compatibility. In practice, what is and is not a bug is defined by what your program accepts. The more that people do something and your program accepts it, the more that thing is not a bug. It is instead 'how your program works'. This is the imperative of actually using a program, because to use a program people must conform to what the program does and does not do. It does not matter whether or not you ever intended your program to behave that way; that it behaves the way it does creates a hard reality on the ground. That you left it alone over time increases the strength of that reality.

If you go back later and say 'well, this is a bug so I'm fixing it', you must live up to a fundamental fact: you are changing the behavior of your program in a way that will hurt people. It does not matter to people why you are doing this; you can say that you are doing it because the old behavior was a mistake, because the old behavior was a bug, because the new behavior is better, because the new behavior is needed for future improvements, or whatever. People do not care. You have broken backwards compatibility and you are making people do work, possibly pointless work (for them).

To say 'well, the old behavior was a bug and you should not have counted on it and it serves you right' is robot logic, not human logic.

This robot logic is of course extremely attractive to programmers, because we like fixing what are to us bugs. But regardless of how we feel about them, these are not necessarily bugs to the people who use our programs; they are instead how the program works today. When we change that, well, we change how our programs work. We should own up to that and we should make sure that the gain from that change is worth the pain it will cause people, not hide behind the excuse of 'well, we're fixing a bug here'.

(This shows up all over. See, for example, the increasingly aggressive optimizations of C compilers that periodically break code, sometimes in very dangerous ways, and how users of those compilers react to this. 'The standard allows us to do this, your code is a bug' is an excuse loved by compiler writers and basically no one else.)

by cks at March 24, 2015 05:47 AM

March 23, 2015

Le blog de Carl Chenet

Unverified backups are useless. Automatize the controls!

Follow me on Twitter or Diaspora*. Unverified backups are useless; every sysadmin knows that. But manually verifying a backup means wasting time and resources. Moreover, it’s boring. You should automate it! Backup Checker is a command line tool developed in Python 3.4 on GitHub (stars appreciated :) ) allowing users to verify the integrity of…

by Carl Chenet at March 23, 2015 11:00 PM

Rands in Repose

Yellow Bricks

What does support for vMotion with active/active (a)sync mean?

Advertise here with BSA

Having seen so many cool features released over the last 10 years by VMware, you sometimes wonder what more they can do. It is amazing to see what level of integration we’ve seen between the different datacenter components. Many of you have seen the announcements around Long Distance vMotion support by now.

When I saw this slide something stood out to me instantly and that is this part:

  • Replication Support
    • Active/Active only
      • Synchronous
      • Asynchronous

What does this mean? Well, first of all, “active/active” refers to “stretched storage”, aka vSphere Metro Storage Cluster. So when it comes to long distance vMotion, some changes have been introduced for synchronously stretched storage. With stretched storage, writes can come from both sides to a volume at any time and are replicated synchronously. The vMotion process has been optimized to avoid writes during switchover, so that replication traffic does not delay the process.

For active/active asynchronous the story is a bit different. Here again we are talking about “stretched storage”, but in this case the asynchronous flavour. One important aspect which was not mentioned in the deck is that async requires Virtual Volumes. Now, at the time of writing, no vendor has a VVol-capable solution that offers active/active async. But more importantly, is this process any different from the sync process? Yes it is!

During the migration of a virtual machine backed by virtual volumes in an “active/active async” configuration, the array is informed that a migration of the virtual machine is taking place and is requested to switch from asynchronous replication to synchronous. This ensures that the destination is in sync with the source when the VM is switched over from side A to side B. Besides switching from async to sync, the array is also informed when the migration has completed. This allows the array to switch the “bias” of the VM, for instance, which is especially important in a stretched environment to ensure availability.

I can’t wait for the first vendor to announce support for this awesome feature!

"What does support for vMotion with active/active (a)sync mean?" originally appeared on Follow me on twitter - @DuncanYB.

Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at March 23, 2015 03:19 PM


Physical Locations of PCI SSDs

The latest Solaris 11 SRU has a new feature: it can identify the physical locations of F40 and F80 PCI SSD cards, registering them under the Topology Framework.

Here is an example of diskinfo output on an X4-2L server with 24 SSDs in front presented as a JBOD, 2x SSDs in the rear mirrored with a RAID controller (for the OS), and 4x PCI F80 cards (each card presents 4 LUNs):

$ diskinfo
D:devchassis-path c:occupant-compdev
--------------------------------------- ---------------------
/dev/chassis/SYS/HDD00/disk c0t55CD2E404B64A3E9d0
/dev/chassis/SYS/HDD01/disk c0t55CD2E404B64B1ABd0
/dev/chassis/SYS/HDD02/disk c0t55CD2E404B64B1BDd0
/dev/chassis/SYS/HDD03/disk c0t55CD2E404B649E02d0
/dev/chassis/SYS/HDD04/disk c0t55CD2E404B64A33Ed0
/dev/chassis/SYS/HDD05/disk c0t55CD2E404B649DB5d0
/dev/chassis/SYS/HDD06/disk c0t55CD2E404B649DBCd0
/dev/chassis/SYS/HDD07/disk c0t55CD2E404B64AB2Fd0
/dev/chassis/SYS/HDD08/disk c0t55CD2E404B64AC96d0
/dev/chassis/SYS/HDD09/disk c0t55CD2E404B64A580d0
/dev/chassis/SYS/HDD10/disk c0t55CD2E404B64ACC5d0
/dev/chassis/SYS/HDD11/disk c0t55CD2E404B64B1DAd0
/dev/chassis/SYS/HDD12/disk c0t55CD2E404B64ACF1d0
/dev/chassis/SYS/HDD13/disk c0t55CD2E404B649EE1d0
/dev/chassis/SYS/HDD14/disk c0t55CD2E404B64A581d0
/dev/chassis/SYS/HDD15/disk c0t55CD2E404B64AB9Cd0
/dev/chassis/SYS/HDD16/disk c0t55CD2E404B649DCAd0
/dev/chassis/SYS/HDD17/disk c0t55CD2E404B6499CBd0
/dev/chassis/SYS/HDD18/disk c0t55CD2E404B64AC98d0
/dev/chassis/SYS/HDD19/disk c0t55CD2E404B6499B7d0
/dev/chassis/SYS/HDD20/disk c0t55CD2E404B64AB05d0
/dev/chassis/SYS/HDD21/disk c0t55CD2E404B64A33Fd0
/dev/chassis/SYS/HDD22/disk c0t55CD2E404B64AB1Cd0
/dev/chassis/SYS/HDD23/disk c0t55CD2E404B64A3CFd0
/dev/chassis/SYS/HDD24 -
/dev/chassis/SYS/HDD25 -
/dev/chassis/SYS/MB/PCIE1/F80/LUN0/disk c0t5002361000260451d0
/dev/chassis/SYS/MB/PCIE1/F80/LUN1/disk c0t5002361000258611d0
/dev/chassis/SYS/MB/PCIE1/F80/LUN2/disk c0t5002361000259912d0
/dev/chassis/SYS/MB/PCIE1/F80/LUN3/disk c0t5002361000259352d0
/dev/chassis/SYS/MB/PCIE2/F80/LUN0/disk c0t5002361000262937d0
/dev/chassis/SYS/MB/PCIE2/F80/LUN1/disk c0t5002361000262571d0
/dev/chassis/SYS/MB/PCIE2/F80/LUN2/disk c0t5002361000262564d0
/dev/chassis/SYS/MB/PCIE2/F80/LUN3/disk c0t5002361000262071d0
/dev/chassis/SYS/MB/PCIE4/F80/LUN0/disk c0t5002361000125858d0
/dev/chassis/SYS/MB/PCIE4/F80/LUN1/disk c0t5002361000125874d0
/dev/chassis/SYS/MB/PCIE4/F80/LUN2/disk c0t5002361000194066d0
/dev/chassis/SYS/MB/PCIE4/F80/LUN3/disk c0t5002361000142889d0
/dev/chassis/SYS/MB/PCIE5/F80/LUN0/disk c0t5002361000371137d0
/dev/chassis/SYS/MB/PCIE5/F80/LUN1/disk c0t5002361000371435d0
/dev/chassis/SYS/MB/PCIE5/F80/LUN2/disk c0t5002361000371821d0
/dev/chassis/SYS/MB/PCIE5/F80/LUN3/disk c0t5002361000371721d0

Let's create a ZFS pool on top of the F80s and see zpool status output:
(you can use the SYS/MB/... names when creating a pool as well)
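A hypothetical creation command using those topology names might look like this (the pool name and the exact LUN pairing are illustrative assumptions, not the pool shown below):

```shell
# Mirror each pair across two different PCI cards so that losing one
# card does not take down both halves of a mirror (names are assumptions).
zpool create tank \
  mirror /dev/chassis/SYS/MB/PCIE4/F80/LUN0/disk \
         /dev/chassis/SYS/MB/PCIE1/F80/LUN1/disk \
  mirror /dev/chassis/SYS/MB/PCIE4/F80/LUN1/disk \
         /dev/chassis/SYS/MB/PCIE1/F80/LUN3/disk
```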

$ zpool status -l XXXXXXXXXXXXXXXXXXXX-1
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Sat Mar 21 11:31:01 2015

mirror-0 ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE4/F80/LUN0/disk ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE1/F80/LUN1/disk ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE4/F80/LUN1/disk ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE1/F80/LUN3/disk ONLINE 0 0 0
mirror-2 ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE4/F80/LUN3/disk ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE1/F80/LUN2/disk ONLINE 0 0 0
mirror-3 ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE4/F80/LUN2/disk ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE1/F80/LUN0/disk ONLINE 0 0 0
mirror-4 ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE2/F80/LUN3/disk ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE5/F80/LUN0/disk ONLINE 0 0 0
mirror-5 ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE2/F80/LUN2/disk ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE5/F80/LUN1/disk ONLINE 0 0 0
mirror-6 ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE2/F80/LUN1/disk ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE5/F80/LUN3/disk ONLINE 0 0 0
mirror-7 ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE2/F80/LUN0/disk ONLINE 0 0 0
/dev/chassis/SYS/MB/PCIE5/F80/LUN2/disk ONLINE 0 0 0

errors: No known data errors

It also means that all FMA alerts should include the physical path as well, which should make identifying a given F80 LUN much easier if something goes wrong.

by milek at March 23, 2015 02:40 PM

Rands in Repose

Medium as Frozen Pizza

Compelling piece by Matthew Butterick on the business and design of Medium.

On Medium’s use of minimalism:

As a fan of minimalism, however, I think that term is misapplied here. Minimalism doesn’t foreclose either expressive breadth or conceptual depth. On the contrary, the minimalist program—as it initially emerged in fine art of the 20th century—has been about diverting the viewer’s attention from overt signs of authorship to the deeper purity of the ingredients.

He continues:

Still, I wouldn’t say that Medium’s homogeneous design is bad ex ante. Among web-publishing tools, I see Medium as the equivalent of a frozen pizza: not as wholesome as a meal you could make yourself, but for those without the time or motivation to cook, a potentially better option than just eating peanut butter straight from the jar.

The piece is less about typography and more about Medium’s business motivations, but the entire article is worth your time.


by rands at March 23, 2015 02:31 PM

Chris Siebenmann

Systemd is not fully backwards compatible with System V init scripts

One of systemd's selling points is that it's backwards compatible with your existing System V init scripts, so that you can do a gradual transition instead of having to immediately convert all of your existing SysV init scripts to systemd .service files. For the most part this works as advertised. However, there are areas where systemd has chosen to be deliberately incompatible with SysV init scripts.

If you look at some System V init scripts, you will find comment blocks at the start that look something like this:

# Provides:        something
# Required-Start:  $syslog otherthing
# Required-Stop:   $syslog

These comments are an LSB standard for declaring various things about your init scripts, including start and stop dependencies; you can read about them here or here, no doubt among other places.

Real System V init ignores all of these because all it does is run init scripts in strictly sequential ordering based on their numbering (and names, if you have two scripts at the same numerical ordering). By contrast, systemd explicitly uses this declared dependency information to run some SysV init scripts in parallel instead of in sequential order. If your init script has this LSB comment block and declares dependencies at all, at least some versions of systemd will start it immediately once those dependencies are met even if it has not yet come up in numerical order.

(CentOS 7 has such a version of systemd, which it labels as 'systemd 208' (undoubtedly plus patches).)

Based on one of my sysadmin aphorisms, you can probably guess what happened next: some System V init scripts have this LSB comment block but declare incomplete dependencies. Under real System V init this does nothing and thus is easily missed; in fact these scripts may have worked perfectly for a decade or more. On a systemd system such as CentOS 7, systemd will start these init scripts out of order and they will start failing, even if what they depend on is other System V init scripts instead of things now provided directly by systemd .service files.

This is a deliberate and annoying choice on systemd's part, and I maintain that it is the wrong choice. Yes, sure, in an ideal world the LSB dependencies would be completely correct and could be used to parallelize System V init scripts. But this is not an ideal world, it is the real world, and given that there's been something like a decade of the LSB dependencies being essentially irrelevant, it was completely guaranteed that there would be init scripts out there that mis-declared things and thus that would malfunction under systemd's dependency based reordering.

(I'd say that the systemd people should have known better, but I rather suspect that they considered the issue and decided that it was perfectly okay with them if such 'incorrect' scripts broke. 'We don't support that' is a time-honored systemd tradition, per say separate /var filesystems.)

by cks at March 23, 2015 05:05 AM

Ubuntu Geek

March 22, 2015

Rands in Repose

Dear Data


Dear Data is a year-long project between Giorgia Lupi and Stefanie Posavec who are creating weekly analog data visualizations and sending them to each other on post cards.

I like everything about this project.


by rands at March 22, 2015 03:51 PM

Server Density

Chris Siebenmann

I now feel that Red Hat Enterprise 6 is okay (although not great)

Somewhat over a year ago I wrote about why I wasn't enthused about RHEL 6. Well, it's a year later and I've now installed and run a CentOS 6 machine for an important service that requires it, and as a result of that I have to take back some of my bad opinions from that entry. My new view is that overall RHEL 6 makes an okay Linux.

I haven't changed the details of my views from the first entry. The installer is still somewhat awkward and it remains an old-fashioned transitional system (although that has its benefits). But the whole thing is perfectly usable; both installing the machine and running it haven't run into any particular roadblocks and there's a decent amount to like.

I think that part of my shift is that all of our work on our CentOS 7 machines has left me a lot more familiar with both NetworkManager and how to get rid of it (and why you want to do that). These days I know to do things like tick the 'connect automatically' button when configuring the system's network connections during install, for example (even though it should be the default).

Apart from that, well, I don't have much to say. I do think that we made the right decision for our new fileserver backends when we delayed them in order to use CentOS 7, even if this was part of a substantial delay. CentOS 6 is merely okay; CentOS 7 is decently nice. And yes, I prefer systemd to upstart.

(I could write a medium sized rant about all of the annoyances in the installer, but there's no point given that CentOS 7 is out and the CentOS 7 one is much better. The state of the art in Linux installers is moving forward, even if it's moving slowly. And anyways I'm spoiled by our customized Ubuntu install images, which preseed all of the unimportant or constant answers. Probably there is some way to do this with CentOS 6/7, but we don't install enough CentOS machines for me to spend the time to work out the answers and build customized install images and so on.)

by cks at March 22, 2015 06:35 AM

OpenSSL: Manually verify a certificate against a CRL

This article shows you how to manually verify a certificate against a CRL. CRL stands for Certificate Revocation List and is one way to validate a certificate's status. It is an alternative to OCSP, the Online Certificate Status Protocol.

March 22, 2015 12:00 AM

Keep messages secure with GPG

This article shows you how to get started with GPG and Mailvelope. It discusses public/private key crypto and shows you how to use the Mailvelope software to encrypt and decrypt GPG messages on any webmail provider.

March 22, 2015 12:00 AM

March 21, 2015

Chris Siebenmann

Spammers show up fast when you open up port 25 (at least sometimes)

As part of adding authenticated SMTP to our environment, we recently opened up outside access to port 25 (and port 587) to a machine that hadn't had them exposed before. You can probably guess what happened next: it took less than five hours before spammers were trying to rattle the doorknobs to see if they could get in.

(Literally. I changed our firewall to allow outside access around 11:40 am and the first outside attack attempt showed up at 3:35 pm.)

While I don't have SMTP command logs, Exim does log enough information that I'm pretty sure that we got two sorts of spammers visiting. The first sort definitely tried to do either an outright spam run or a relay check, sending MAIL FROMs with various addresses (including things like 'postmaster@<our domain>'); all of these failed since they hadn't authenticated first. The other sort of spammer is a collection of machines that all EHLO as 'ylmf-pc', which is apparently a mass scanning system that attempts to brute force your SMTP authentication. So far there is no sign that they've succeeded on ours (or are even trying), and I don't know if they even manage to start up a TLS session (a necessary prerequisite to even being offered the chance to do SMTP authentication). These people showed up second, but not by much; their first attempt was at 4:04 pm.

(I have some indications that in fact they don't. On a machine that I do have SMTP command logs on, I see ylmf-pc people connect, EHLO, and then immediately disconnect without trying STARTTLS.)

It turns out that Exim has some logging for this (the magic log string is '... authenticator failed for ...') and using that I can see that in fact the only people who have gotten far enough to actually try to authenticate are a few of our own users. Since our authenticated SMTP service is still in testing and hasn't been advertised, I suspect that some people are using MUAs (or other software) that simply try authenticated SMTP against their IMAP server just to see if it works.

There are two factors here that may mean this isn't what you'll see if you stand up just any server on a new IP, which is that this server has existed for some time with IMAP exposed (and under a well known DNS name at that, one that people would naturally try if they were looking for people's IMAP servers). It's possible that existing IMAP servers get poked far more frequently and intently than other random IPs.

(Certainly I don't see anything like this level of activity on other machines where I have exposed SMTP ports.)

by cks at March 21, 2015 05:04 AM


ZFS: Persistent L2ARC

The latest Solaris SRU delivers a persistent L2ARC. What is interesting about it is that it stores raw ZFS blocks, so if you enabled compression then L2ARC will also store compressed blocks (so it can store more data). Similarly with encryption.

by milek ( at March 21, 2015 01:27 AM

How I got a valid SSL certificate for my ISP's main domain,

I got a valid SSL certificate for a domain that is not mine by creating an email alias. In this article I'll explain what happened, why that was possible and how we all can prevent this.

March 21, 2015 12:00 AM

March 20, 2015


Managing Solaris with RAD

Solaris 11 provides "The Remote Administration Daemon, commonly referred to by its acronym and command name, rad, is a standard system service that offers secure, remote administrative access to an Oracle Solaris system."

RAD is essentially an API to programmatically manage and query different Solaris subsystems like networking, zones, kstat, smf, etc.

Let's see an example on how to use it to list all zones configured on a local system.

# cat

import rad.client as radcli
import rad.connect as radcon
import as zbind

with radcon.connect_unix() as rc:
    zones = rc.list_objects(zbind.Zone())
    for i in range(0, len(zones)):
        zone = rc.get_object(zones[i])
        print "zone: %s (%s)" % (, zone.state)
        for prop in zone.getResourceProperties(zbind.Resource('global')):
            if == 'zonename':
                continue
            print "\t%-20s : %s" % (, prop.value)

# ./
zone: kz1 (configured)
zonepath: :
brand : solaris-kz
autoboot : false
autoshutdown : shutdown
bootargs :
file-mac-profile :
pool :
scheduling-class :
ip-type : exclusive
hostid : 0x44497532
tenant :
zone: kz2 (installed)
zonepath: : /system/zones/%{zonename}
brand : solaris-kz
autoboot : false
autoshutdown : shutdown
bootargs :
file-mac-profile :
pool :
scheduling-class :
ip-type : exclusive
hostid : 0x41d45bb
tenant :

Or another example to show how to create a new Kernel Zone with autoboot property set to true:


import sys

import rad.client
import rad.connect
import as zonemgr

class SolarisZoneManager:
    def __init__(self):
        self.description = "Solaris Zone Manager"

    def init_rad(self):
        try:
            self.rad_instance = rad.connect.connect_unix()
        except Exception as reason:
            print "Cannot connect to RAD: %s" % (reason)

    def get_zone_by_name(self, name):
        try:
            pat = rad.client.ADRGlobPattern({'name': name})
            zone = self.rad_instance.get_object(zonemgr.Zone(), pat)
        except rad.client.NotFoundError:
            return None
        except Exception as reason:
            print "%s: %s" % (self.__class__.__name__, reason)
            return None

        return zone

    def zone_get_resource_prop(self, zone, resource, prop, filter=None):
        try:
            val = zone.getResourceProperties(zonemgr.Resource(resource, filter), [prop])
        except rad.client.ObjectError:
            return None
        except Exception as reason:
            print "%s: %s" % (self.__class__.__name__, reason)
            return None

        return val[0].value if val else None

    def zone_set_resource_prop(self, zone, resource, prop, val):
        current_val = self.zone_get_resource_prop(zone, resource, prop)
        if current_val is not None and current_val == val:
            # the val is already set
            return 0

        try:
            if current_val is None:
                zone.addResource(zonemgr.Resource(resource, [zonemgr.Property(prop, val)]))
            else:
                zone.setResourceProperties(zonemgr.Resource(resource), [zonemgr.Property(prop, val)])
        except rad.client.ObjectError as err:
            print "Failed to set %s property on %s resource for zone %s: %s" % (prop, resource,, err)
            return 0

        return 1

    def zone_create(self, name, template):
        try:
            zonemanager = self.rad_instance.get_object(zonemgr.ZoneManager())
            zonemanager.create(name, None, template)
            zone = self.get_zone_by_name(name)

            self.zone_set_resource_prop(zone, 'global', 'autoboot', 'true')
        except Exception as reason:
            print "%s: %s" % (self.__class__.__name__, reason)
            return 0

        return 1

x = SolarisZoneManager()
x.init_rad()
if x.zone_create(str(sys.argv[1]), 'SYSsolaris-kz'):
    print "Zone created successfully."

There are many simple examples in the zonemgr.3rad man page, and what I found very useful is to look at solariszones/ from OpenStack. It is actually very interesting that OpenStack is using RAD on Solaris.

RAD is very powerful, and with more modules constantly being added it is becoming a powerful programmatic API to remotely manage Solaris systems. It is also very useful if you are writing components of a configuration management system for Solaris.

What's the most anticipated RAD module currently missing in stable Solaris? I think it is the ZFS module...

by milek ( at March 20, 2015 10:27 PM

Google Blog

Through the Google lens: Search trends March 13-19

Whether you’re glued to the small screen or you’ve got your eyes on the sky this week, search is there to answer your questions. Here’s a look at this week in search:

TV gold
FOX’s “Empire” has built a kingdom of fans during its first season on the air. This week’s finale not only brought the TV show its highest number of viewers—it also had its largest spike in search interest to date with 200,000+ searches Wednesday night. The two-hour finale delivered a king-sized serving of soap opera-esque surprises, ending in a cliffhanger that had fans eager for more (“When does ‘Empire’ season 2 air?” was a trending question this week). And “Empire” is making waves in the real-life music industry too: its soundtrack debuted at number one on Billboard’s Top 200 list this week. Here’s a look at the top searched songs:
Moving from TV fiction to fact, news broke last Saturday that real estate scion Robert Durst had been arrested in connection to several unsolved murders. Durst was the subject of “The Jinx,” the HBO documentary that aired its final episode Sunday night—in which Durst appears to confess to the crimes. Needless to say, though the police said the arrest was not connected to the show, the timing was great for HBO. Search interest in Robert Durst increased by 1,700 percent in the U.S.

Spring fever
March Madness tipped off this week, with fans across the nation rushing to fill out their brackets and come up with excuses to be out of the office. Ten of the top 20 searches yesterday were related to college basketball, and people are turning to search to ask important questions like “Who can beat Kentucky?” (They’re undefeated this season.) And everyone wants to know who President Obama is rooting for: his is the most searched celebrity bracket so far.
If your bracket is already busted, you’ve got something else to be happy about: today marks the first day of spring, and the vernal equinox. Even though it’s still cold or even snowy in some spots today, the arrival of spring has people very excited. There were more than 2 million searches for [vernal equinox] yesterday—even more than searches for [march madness live].

Still, the sun’s position over the Equator isn’t the only celestial event that’s got people searching. On Friday, we’ll see both a Supermoon as well as the only total solar eclipse of the year—the first since 2013. Searches for [solar eclipse glasses] are up more than 2,000 percent as people figure out how to catch a glimpse. And an intense solar storm brought the aurora borealis south on Tuesday night, making the Northern Lights visible as far south as Oregon and as far out as outer space. The green lights lit up search as well as the skies: search interest went up more than 1,250 percent this week!
Good eats
Who says it needs to be hot out to eat ice cream? Dairy Queen kicked off its 75th anniversary celebrations on Monday by treating everyone to a free cone, and more than 200,000 searches followed. And it turns out that when it comes to comfort food, ice cream was a better choice this week than the good ol’ blue box. Kraft announced a recall of more than 6 million boxes of its classic macaroni and cheese after metal was found in some boxes. There were more than 100,000 searches for [kraft mac and cheese recall] as people tried to determine whether their pantries were affected.

Tip of the week
Keep up with the NCAA tournament with the Google app. Just say “Ok Google, show me the latest on March Madness” to get real-time scores, in-game and recap videos, and live streams for each game.

by Google Blogs at March 20, 2015 04:56 PM

Yellow Bricks

Another way to fix your non compliant host profile

Advertise here with BSA

I found out there is another way to fix your non compliant host profile problems with vSphere 6.0 when you have SAS drives which are detected as shared storage while they are not. This method is a bit more complicated, though, and there is a command line script that you will need to use: /bin/ It works as follows:

  • Run the following to dump all your local details in a folder on your first host
    /bin/ local /folder/youcreated1/
  • Run the following to dump all your local details in a folder for your second host, you can do this on your first host if you have SSH enabled
    /bin/ remote /folder/youcreated2/ <name or ip of remote host>
  • Copy the outcome of the second host to the folder where the outcome of your first host is stored. You will need to copy the file “remote-shared-profile.txt”.
  • Now you can compare the outcomes by running:
    /bin/ compare /folder/youcreated1/
  • After comparing you can run the configuration as follows:
    /bin/ configure /folder/youcreated1/
  • Now the disks which are listed as cluster wide resources but are not shared between the hosts will be configured as non-shared resources. If you want to check what will be changed before running the command you can simply do a “more” of the file the info is stored in:
    more esxcli-sharing-reconfiguration-commands.txt
    esxcli storage core device setconfig -d naa.600508b1001c2ee9a6446e708105054b --shared-clusterwide=false
    esxcli storage core device setconfig -d naa.600508b1001c3ea7838c0436dbe6d7a2 --shared-clusterwide=false

You may wonder by now if there isn’t an easier way; well, yes there is. You can do all of the above by running the following simple command. I preferred to go over the steps first so at least you know what is happening.

/bin/ automatic <name-or-ip-of-remote-host>

After you have done this (first method or second method) you can create your host profile of your first host. Although the other methods I described in yesterday’s post are a bit simpler, I figured I would share this one as well, as you never know when it may come in handy!

"Another way to fix your non compliant host profile" originally appeared on Follow me on twitter - @DuncanYB.

Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at March 20, 2015 01:35 PM

Tech Teapot

A cloudy solar eclipse

Partial Eclipse Otley

The partial solar eclipse taken from Otley, West Yorkshire.

by Jack Hughes at March 20, 2015 01:29 PM

Evaggelos Balaskas

One step closer to IPv6

It was time for me to start using IPv6.

My VPS hosting provider, edis, has already allocated me a


and some extra info


I have two network cards (I run my own AUTH-NS server and some Greek registrars require two different IPs for that).

I have split up the above /112 to two /113 subnets.


My settings are based on CentOS 6.6 as of the time of this article.


Part Zero: kernel


First things first, tell the kernel to support IPv6 by editing /etc/sysctl.conf.

Comment out (if present) the line below:

# net.ipv6.conf.all.disable_ipv6=1

This means that the next time you reboot your machine, IPv6 will be enabled.
If you don’t want to reboot your VPS, there is another way: run as root:

sysctl net.ipv6.conf.all.disable_ipv6=0 


Part One: Network


Edit your ifcfg-eth* files:





Please don’t get confused about eth0; I will circle back to this.
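The exact ifcfg files were omitted above, but a minimal IPv6 interface configuration on CentOS 6 would look roughly like the sketch below. Treat it as illustrative only: the /123 prefix matches the per-service split described later in this post, and the gateway value is a placeholder, not the author’s actual setting.

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- illustrative sketch,
# not the author's exact file; the gateway below is a placeholder.
DEVICE=eth0
ONBOOT=yes
IPV6INIT=yes
IPV6ADDR=2a01:7a0:10:158:255:214:14::/123
IPV6_DEFAULTGW=fe80::1%eth0
```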

Restart your network:

/etc/init.d/network restart 

and verify your network settings:

 ip -6 a
 ip -6 r


Part Two: Firewall


My default policy is to DROP everything and open only the ports on which services are running.
The same rule applies to IPv6 too.

-A INPUT -i lo -j ACCEPT
-A INPUT -p ipv6-icmp -j ACCEPT
-A INPUT -j REJECT --reject-with icmp6-adm-prohibited
-A FORWARD -j REJECT --reject-with icmp6-adm-prohibited
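Pieced together into a complete /etc/sysconfig/ip6tables, the DROP-by-default policy described above would look roughly like this. This is an illustrative sketch, not the author’s exact file; the ESTABLISHED,RELATED rule is an addition you will almost certainly want so that return traffic for outgoing connections is accepted.

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p ipv6-icmp -j ACCEPT
-A INPUT -j REJECT --reject-with icmp6-adm-prohibited
-A FORWARD -j REJECT --reject-with icmp6-adm-prohibited
COMMIT
```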

At this moment, I only accept PING6 to my VPS server.
Test this from another machine (with IPv6 support):

 ping6 -c3 2a01:7a0:10:158:255:214:14::

and the result is something like this:

PING 2a01:7a0:10:158:255:214:14::(2a01:7a0:10:158:255:214:14:0) 56 data bytes
64 bytes from 2a01:7a0:10:158:255:214:14:0: icmp_seq=1 ttl=60 time=72.5 ms
64 bytes from 2a01:7a0:10:158:255:214:14:0: icmp_seq=2 ttl=60 time=66.9 ms
64 bytes from 2a01:7a0:10:158:255:214:14:0: icmp_seq=3 ttl=60 time=66.3 ms

--- 2a01:7a0:10:158:255:214:14:: ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2067ms
rtt min/avg/max/mdev = 66.355/68.618/72.573/2.822 ms

At this point we are very happy with ourselves (IPv6-wise)!


Part Three: Web Server


What’s the point of having an IPv6 server without running some services on it?
Let’s start with the Apache web server.

I’ve split up my eth0 into /123 subnets because I want to use a different IP for every service I run.
That’s why my eth0 is configured the way it is.

I chose the 2a01:7a0:10:158:255:214:14:80 as my ipv6 ip for my site.

Our web server needs to listen on IPv6.

This is tricky because Apache uses : as a delimiter, and IPv6 addresses contain colons, so they must be wrapped in square brackets.
So my httpd changes are something like these:

Listen [2a01:7a0:10:158:255:214:14:80]:80

to support virtual hosts:

NameVirtualHost [2a01:7a0:10:158:255:214:14:80]:80

To dual stack my site:

<VirtualHost [2a01:7a0:10:158:255:214:14:80]:80>
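Put together, a dual-stack virtual host would look roughly like the sketch below. The IPv4 address, ServerName, and DocumentRoot are placeholders for illustration, not values from the original post.

```apache
# Illustrative dual-stack virtual host; the IPv4 address, ServerName
# and DocumentRoot are placeholders.
<VirtualHost 203.0.113.10:80 [2a01:7a0:10:158:255:214:14:80]:80>
    ServerName www.example.com
    DocumentRoot /var/www/html
</VirtualHost>
```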

restart your apache:

/etc/init.d/httpd restart

Don’t forget to update your firewall settings:

-A INPUT -m state --state NEW -m tcp -p tcp -d 2a01:7a0:10:158:255:214:14:80/123 --dport 80 -j ACCEPT

restart your firewall:

/etc/init.d/ip6tables restart


Part Four: DNS


The only thing that is left for us to do is to add an AAAA resource record in our DNS zone.

in my bind-file format zone:

@ IN AAAA 2a01:7a0:10:158:255:214:14:80

You have to increment the SERIAL number in your zone and then reload it.
I use PowerDNS, so it’s:

# pdns_control reload
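For context: the serial lives in the zone’s SOA record, and after bumping it the apex records of a dual-stack host would look roughly like this (the IPv4 address below is a placeholder for illustration):

```
@   IN  A     203.0.113.10   ; placeholder IPv4 address
@   IN  AAAA  2a01:7a0:10:158:255:214:14:80
```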


Part Five: Validate


To validate your dual-stack web site, you can go through:




UPDATE: 2015 03 23



Part Six: Mail Server

Imap Server

I use dovecot as my IMAP server. Enabling IPv6 in dovecot is really easy: you just uncomment or edit the listen parameter:

listen = *, ::

Restart the dovecot service and check the dovecot conf:

# doveconf | grep ^listen
listen = *, ::

I use STARTTLS, so my firewall settings should be like these:

-A INPUT -m state --state NEW -m tcp -p tcp -d 2a01:7a0:10:158:255:214:14::/112 --dport 143 -j ACCEPT

Just don’t forget to restart and verify your ip6tables!

SMTP Server

It’s really easy for postfix (my SMTP server) too. You just have to remember that you need to use brackets for IPv6 addresses.

## mynetworks =
mynetworks = [2a01:7a0:10:158:255:214:14::]/112

## inet_protocols = ipv4
inet_protocols = all

restart your smtp service and you are OK.

Firewall settings: /etc/sysconfig/ip6tables

-A INPUT -m state --state NEW -m tcp -p tcp -d 2a01:7a0:10:158:255:214:14::/112 --dport 25 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -d 2a01:7a0:10:158:255:214:14::/112 --dport 587 -j ACCEPT

Tag(s): ipv6

March 20, 2015 01:28 PM

Chris Siebenmann

Unix's mistake with rm and directories

Welcome to Unix, land of:

; rm thing
rm: cannot remove 'thing': Is a directory
; rmdir thing

(And also rm may only be telling you half the story, because you can have your rmdir fail with 'rmdir: failed to remove 'thing': Directory not empty'. Gee thanks both of you.)

Let me be blunt here: this is Unix exercising robot logic. Unix knows perfectly well what you want to do, it's perfectly safe to do so, and yet Unix refuses to do it (or tell you the full problem) because you didn't use the right command. Rm will even remove directories if you just tell it 'rm -r thing', although this is more dangerous than rmdir.

Once upon a time rm had almost no choice but to do this because removing directories took special magic and special permissions (as '.' and '..' and the directory tree were maintained in user space). Those days are long over, and with them all of the logic that would have justified keeping this rm (mis)feature. It lingers on only as another piece of Unix fossilization.

(This restriction is not even truly Unixy; per Norman Wilson, Research Unix's 8th edition removed the restriction, so the very heart of Unix fixed this. Sadly very little from Research Unix V8, V9, and V10 ever made it out into the world.)

PS: Some people will now say that the Single Unix Specification (and POSIX) does not permit rm to behave this way. My view is 'nuts to the SUS on this'. Many parts of real Unixes are already not strictly POSIX compliant, so if you really have to have this you can add code to rm to behave in a strictly POSIX compliant mode if some environment variable is set. (This leads into another rant.)

(I will reluctantly concede that having unlink(2) still fail on directories instead of turning into rmdir(2) is probably safest, even if I don't entirely like it either. Some program is probably counting on the behavior and there's not too much reason to change it. Rm is different in part because it is used by people; unlink(2) is not directly. Yes, I'm waving my hands a bit.)

by cks at March 20, 2015 04:42 AM

Ubuntu Geek

March 19, 2015

Yellow Bricks

Host Profile noncompliant when using local SAS drives with vSphere 6?

Advertise here with BSA

A couple of years ago I wrote an article titled “Host Profile noncompliant when using local SAS drives with vSphere 5?” I was informed by one of our developers that we actually solved this problem in vSphere 6. It is not something I had seen yet, so I figured I would look at what we did to prevent this from happening, and it appears there are two ways to solve it. In 5.x we would solve it by disabling the whole tree, which is kind of a nasty workaround if you ask me. In 6.0 we fixed it in a far better way.

When you create a new host profile and edit it, you now have some extra options. One of those options is the ability to specify whether a disk is a shared cluster resource or not. By disabling this for your local SAS drives you avoid the scenario where your host profile shows up as noncompliant on each of your hosts.

There is another way of solving this. You can use “esxcli” to mark your devices correctly and then create the host profile. (SSH in to the host.)

First, list all devices using the following command. I took a screenshot of my outcome, but yours will look slightly different of course.

esxcli storage core device list

Now that you know the naa identifier for the device, you can make the change by issuing the following command, setting “Is Shared Clusterwide” to false:

esxcli storage core device setconfig -d naa.1234 --shared-clusterwide=false

Now you can create the host profile. Hopefully you will find this cool little enhancement in esxcli and host profiles useful; I certainly do!

"Host Profile noncompliant when using local SAS drives with vSphere 6?" originally appeared on Follow me on twitter - @DuncanYB.

Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at March 19, 2015 03:25 PM

Chris Siebenmann

A brief history of fiddling with Unix directories

In the beginning (say V7 Unix), Unix directories were remarkably non-special. They were basically files that the kernel knew a bit about. In particular, there was no mkdir(2) system call and the . and .. entries in each directory were real directory entries (and real hardlinks), created by hand by the mkdir program. Similarly there was no rmdir() system call and rmdir directly called unlink() on dir/.., dir/., and dir itself. To avoid the possibility of users accidentally damaging the directory tree in various ways, calling link(2) and unlink(2) on directories was restricted to the superuser.

(In part to save the superuser from themselves, commands like ln and rm then generally refused to operate on directories at all, explicitly checking for 'is this a directory' and erroring out if it was. V7 rm would remove directories with 'rm -r', but it deferred to rmdir to do the actual work. Only V7 mv has special handling for directories; it knew how to actually rename them by manipulating hardlinks to them, although this only worked when mv was run by the superuser.)

It took until 4.1 BSD or so for the kernel to take over the work of creating and deleting directories, with real mkdir() and rmdir() system calls. The kernel also picked up a rename() system call at the same time, instead of requiring mv to do the work with link(2) and unlink(2) calls; this rename() also worked on directories. This was the point, not coincidentally, where BSD directories themselves became more complicated. Interestingly, even in 4.2 BSD link(2) and unlink(2) would work on directories if you were root and mknod(2) could still be used to create them (again, if you were root), although I suspect no user level programs made use of this (and certainly rm still rejected directories as before).

(As a surprising bit of trivia, it appears that the 4.2 BSD ln lacked a specific 'is the source a directory' guard and so a superuser probably could accidentally use it to make extra hardlinks to a directory, thereby doing bad things to directory tree integrity.)

To my further surprise, raw link(2) and unlink(2) continued to work on directories as late as 4.4 BSD; it was left for other Unixes to reject this outright. Since the early Linux kernel source is relatively simple to read, I can say that Linux did from very early on. Other Unixes, I have no idea about. (I assume but don't know for sure that modern *BSD derived Unixes do reject this at the kernel level.)

(I've written other entries on aspects of Unix directories and their history: 1, 2, 3, 4.)

PS: Yes, this does mean that V7 mkdir and rmdir were setuid root, as far as I know. They did do their own permission checking in a perfectly V7-appropriate way, but in general, well, you really don't want to think too hard about V7, directory creation and deletion, and concurrency races.

In general and despite what I say about it sometimes, V7 made decisions that were appropriate for its time and its job of being a minimal system on a relatively small machine that was being operated in what was ultimately a friendly environment. Delegating proper maintenance of a core filesystem property like directory tree integrity to user code may sound very wrong to us now but I'm sure it made sense at the time (and it did things like reduce the kernel size a bit).

by cks at March 19, 2015 04:29 AM

Rands in Repose

The Psychology of ‘No’

The sad truth is, we can be absolutely awful at making decisions that affect our long-term happiness. Recent work by psychologists has charted a set of predictable cognitive errors that lead us to mistakes like eating too much junk food, or saving too little for retirement. These quirks lead us to make similarly predictable errors when deciding where to live, how to live, how to move, and even how to build our cities.

(By Charles Montgomery via National Post)


by rands at March 19, 2015 01:26 AM

Administered by Joe. Content copyright by their respective authors.