Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

November 26, 2015

Trouble with tribbles

Buggy basename

Every so often you marvel at the lengths people go to to break things.

Take the basename command in illumos, for example. This comes in two incarnations - /usr/bin/basename, and /usr/xpg4/bin/basename.

Try this:

# /usr/xpg4/bin/basename -- /a/b/c.txt

Which is correct, and:

# /usr/bin/basename -- /a/b/c.txt

Which isn't.

Wait, it gets worse:

# /usr/xpg4/bin/basename /a/b/c.txt .t.t

Correct. But:

# /usr/bin/basename /a/b/c.txt .t.t

Err, what?

Perusal of the source code reveals the answer to the "--" handling - it's only caught in XPG4 mode. Which is plain stupid: there's no good reason to deliberately restrict correct behaviour to XPG4.

Then there's the somewhat bizarre handling of the ".t.t" suffix. It turns out that the default basename command is doing pattern matching rather than the expected string matching, so the "." will match any character rather than being interpreted literally. Given how commonly "." is used to separate a filename from its suffix, and how commonly basename is used to strip that suffix, this is a recipe for failure and confusion. For example:

# /usr/bin/basename /a/b/cdtxt .txt
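
You can replay the two behaviours with expr(1) and plain shell suffix stripping - a sketch that runs on any system, emulating the difference rather than invoking the illumos binaries themselves:

```shell
# Literal suffix handling - what the xpg4 basename (and everyone else) does.
# ".t.t" is not a literal suffix of "c.txt", so nothing is stripped:
f=c.txt
echo "${f%.t.t}"                 # -> c.txt

# Pattern matching - what /usr/bin/basename appears to do. In an
# expr-style regex "." matches any character, so ".t.t" matches ".txt":
expr "$f" : '\(.*\).t.t$'        # -> c

# The same trap with the cdtxt example: the pattern ".txt" matches "dtxt":
expr cdtxt : '\(.*\).txt$'       # -> c
```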

The fact that there's a difference here is actually documented in the man page, although not very well - it points you to expr(1), which doesn't tell you anything relevant.

So, does anybody rely on the buggy broken behaviour here?

It's worth noting that the ksh basename builtin and everybody else's basename implementation seems to do the right thing.

Fixing this would also get rid of a third of the lines of code and we could just ship 1 binary instead of 2.

by Peter Tribble at November 26, 2015 04:28 PM

Server Density

What’s in your Backpack? Modular vs. Monolithic Development

While building version 2.0 of our Server Monitoring agent, we reached a point where we had to make a choice.

We could either ship the new agent together with every supported plugin, in one single file. Or we could deploy just the core logic of the agent and let users install any further integrations, as they need them.

This turned out to be a pivotal decision for us. And more than just technical considerations informed it.

Let’s start with some numbers.

How Much Does Your File Weigh?

Simple is better than complex.

The Zen of Python

The latest version of our agent allows Server Density to integrate with many applications and plugins. We made substantial improvements in the core logic and laid the groundwork for regular plugin iterations, new releases and updates.

All that extra oomph comes with a relatively small price in terms of file size. Version 2.0 has a 10MB on-disk footprint.

If we were to take the next step and push every compatible plugin into a single package, our agent would become ten times “heavier”. And it would only keep growing every time we support something new.

Moving is Living

Question: But agent footprint is not a real showstopper, is it? Why worry about file sizes when I can get everything I need in one go?


There is something to be said about the convenience of the monolithic approach. You get everything you need in one serving.

And yet, it is the nature of this “component multiplicity” that makes iterations of monolithic applications slower.

For example, when a particular item (say, the Python interpreter or Postgres library) is updated by the vendor, our users would have to wait for us to update our agent before they get those patches. Troubleshooting and responding to new threats would therefore—by definition—take longer. This delay creates potential attack vectors and vulnerabilities.

Even if we were on-the-button with every possible plugin update (an increasingly impossible feat as we continue to broaden our plugin portfolio), the majority of our users would then be exposed to more updates than they actually need.

Either way, it’s a lousy user experience.

To support all those new integrations—without introducing security risks or needless headaches for our users—was not easy. It took a significant amount of development time to come up with an elegant, modular solution that is simple—and yet functional—for our customers.

The result is a file that includes the bare minimum: agent code plus some specific Python modules.

Flexibility and Ease of Use

To take advantage of all the new supported integrations, users may choose to install additional plugins as needed.

Question: Doesn’t that present challenges in larger / diverse server environments?

Probably not.

Sysadmins continue to embrace Puppet manifests, Chef configuration deployment and Ansible automation—tools designed to keep track of server roles and requirements. It’s easier than ever to stay on top of what plugin goes to what server. Automation and configuration utilities can remove much of that headache. Since we tie into standard OS package managers (deb or RPM packages), we simply work with the existing tools everyone is already used to.
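
For instance, pinning a plugin to a server role is a one-liner in those tools. A hypothetical Puppet snippet (the package names here are illustrative guesses, not documented Server Density packages):

```puppet
# Database servers get the core agent plus just the MySQL plugin
package { ['sd-agent', 'sd-agent-mysql']:
  ensure => installed,
}
```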

By packaging the plugins separately we get to focus on what we control: the logic inside our agent. Users only ever download what they need, and enjoy greater control of what’s sitting on their servers. The end-result is a flexible monitoring solution that adapts to our users (rather than the other way around).

The 1.x to 2.0 agent upgrade is not automatic. Existing installations will need to opt-in. We’ve made it easy to upgrade with a simple bash script. Fresh installs will default to version 2.0. The 1.x agent will still be available (but deprecated). All version 1.x custom plugins will continue to work with the new agent too.


Truth is ever to be found in simplicity, and not in the multiplicity and confusion of things.

Isaac Newton

The modular vs. monolithic debate has been going on for decades. We don’t have easy answers and it’s not our intention to dismiss the monolithic approach. There are plenty of examples of closed monolithic systems that work really well for well-defined target users.

Knowing our own target users (professional sysadmins), we know we can serve them better by following a modular approach. We think it pays to keep things small and simple for them, even if it takes significantly more development effort.

As we continue improving our back-end, our server monitoring agent will support more and more integrations. Employing modular development means prompt updates with fewer security risks. That's what our customers expect, and that's what drives our decisions.

What about you? What approach do you follow?

What's in Your Backpack?

The post What’s in your Backpack? Modular vs. Monolithic Development appeared first on Server Density Blog.

by David Mytton at November 26, 2015 10:56 AM


November 25, 2015

Evaggelos Balaskas

Sender Policy Framework

UPDATE Thu Nov 26 11:28:05 EET 2015

Does SPF break forwarding?
(like in mailing lists)

- Yes, it does break forwarding.

So learn from my mistake and think this through.

Wednesday, 25 November 2015

There is a very simple way to add SPF [check] support to your postfix setup.
Below are my notes on CentOS 6.7.

Step One: install python policy daemon for spf

# yum -y install pypolicyd-spf

Step Two: Create a new postfix service, called spfcheck

# vim + /etc/postfix/

spfcheck     unix  -       n       n       -       -       spawn
	user=nobody argv=/usr/libexec/postfix/policyd-spf

Step Three: Add a new smtpd recipient restriction

# vim +/^smtpd_recipient_restrictions /etc/postfix/
smtpd_recipient_restrictions =
	check_policy_service unix:private/spfcheck
policy_time_limit = 3600
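
Pieced together - and assuming the truncated filenames above are postfix's usual master.cf and main.cf, which is my reading of the post rather than something it states - the additions amount to this sketch, with any pre-existing restrictions elided:

```
# master.cf - the new policy service
spfcheck     unix  -       n       n       -       -       spawn
	user=nobody argv=/usr/libexec/postfix/policyd-spf

# main.cf - hook the policy check into the existing restrictions
smtpd_recipient_restrictions =
	...existing restrictions...
	check_policy_service unix:private/spfcheck
policy_time_limit = 3600
```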

And that’s what we see in the end in the source view of a received email:

Received-SPF: Pass (sender SPF authorized) identity=mailfrom;;
helo=server.mydomain.tld; envelope-from=user@mydomain.tld;

where is the IP of the sender mail server
server.mydomain.tld is the name of the sender mail server
user@mydomain.tld is the sender’s email address
and of-course is the receiver’s mail address

You can take a better look at the postfix Python SPF policy daemon by clicking here: python-postfix-policyd-spf

Tag(s): postfix, spf

November 25, 2015 08:29 PM

Everything Sysadmin

Why I don't care that Dell installs Rogue Certificates On Laptops

In recent weeks Dell has been found to have installed rogue certificates on laptops they sell. Not once, but twice. The security ramifications of this are grim. Such a laptop can have its SSL-encrypted connections sniffed quite easily. Dell has responded by providing uninstall instructions and an application that will remove the cert. They've apologized and that's fine... everyone makes mistakes, don't let it happen again. You can read about the initial problem in "Dell Accused of Installing 'Superfish-Like' Rogue Certificates On Laptops" and the recurrence in "Second Root Cert-Private Key Pair Found On Dell Computer".

And here is why I don't care.

November 25, 2015 06:00 PM


Trying the Jenkins View Builder

As you add more jobs to Jenkins, you'll often want to start breaking them out into smaller, more logically grouped, views. While the UI itself makes this simple, it's a manual task, and as automation-loving admins we can do better than clicking around. In this post we'll take a brief look at the jenkins-view-builder and see if it can make our lives any easier.

My test case will be a simple Jenkins view that should include any jobs whose names match the test-puppet-.*-function pattern. These will be grouped together under the ‘Puppet Functions (auto)’ view. We’ll start by installing jenkins-view-builder inside a python virtualenv using the commands below -

# create the virtualenv
$ virtualenv jenkins-views
New python executable in jenkins-views/bin/python2

$ cd jenkins-views/

# set paths based on the virtualenv
$ source bin/activate

# install the python code
$ pip install jenkins-view-builder

# and check it worked
$ jenkins-view-builder --version
jenkins-view-builder 0.1

We’ll now write our example using YAML. There are two useful starting points when writing your own views. First are the jenkins-view-builder examples/. These present a few different types of jenkins views and can be a useful starting point to crib from. A second approach is to use existing Jenkins views for inspiration. You can read through the Jenkins config file, /var/lib/jenkins/config.xml on CentOS, and extract the XML, which can then be easily converted to YAML.

Using a combination of these sources we end up with a YAML view config that looks like this -

# cat puppet-view.yaml
  - view:
      type: list
      name: Puppet Functions (auto)
      description: Puppet Functions via jenkins-view-builder
      includeRegex: "test-puppet-.*-function"
      columns:
        - status
        - weather
        - job
        - last_success
        - last_failure
        - last_duration
        - build_button
      recurse: False

The trickiest part was figuring out the column name formats. We’ll now run this through jenkins-view-builder and take a look at the generated (and xmllint re-formatted) XML.

$ jenkins-view-builder test puppet-view.yaml
$ cat out/Puppet\ Functions\ \(auto\).xml | xmllint --format -

<?xml version="1.0"?>
  <owner class="hudson" reference="../../.."/>
  <name>Puppet Functions (auto)</name>
  <properties class="hudson.model.View$PropertyList"/>
    <comparator class="hudson.util.CaseInsensitiveComparator"/>
  <description>Puppet Functions View (via jenkins-view-builder)</description>

This looks both faithful to our YAML source config and very similar to the UI-created views we already have in Jenkins. Happy with our example, we now need to configure credentials that jenkins-view-builder can use to access Jenkins and create our new view.

    $ cat jenkins.conf
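
The contents of jenkins.conf aren't shown above. jenkins-view-builder appears to use the same INI style as jenkins-job-builder, so a minimal config presumably looks something like this (all values are placeholders):

```ini
[jenkins]
user=jenkins
password=s3cr3t
url=http://jenkins.example.com:8080/
```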

Then we run the command to create the view -

    $ jenkins-view-builder update --conf jenkins.conf puppet-view.yaml
    Updating view data in Jenkins
    Starting new HTTP connection (1): jenkins
    Creating view Puppet Functions (auto)
    Starting new HTTP connection (1): jenkins

When this finishes you should be able to refresh your web browser tab and see the newly added view and all the jobs its regex matches.

by Dean Wilson at November 25, 2015 04:18 PM

Everything Sysadmin

We forget how big "big" is

Talk with any data-scientist and they'll rant about how they hate the phrase "big data". Odds are they'll mention a story like the following:

My employer came to me and said we want to do some 'big data' work, so we're hiring a consultant to build a Hadoop cluster. So I asked, "How much data do you have?" and he replied, "Oh, we never really measured. But it's big. Really big! BIIIIG!!"

Of course I did some back of the envelope calculations and replied, "You don't need Hadoop. We can fit that in RAM if you buy a big enough Dell." He didn't believe me. So I went to and showed him a server that could support twice that amount, for less than the cost of the consultant.

We also don't seem to appreciate just how fast computers have gotten.

November 25, 2015 03:00 PM

Geeking with Greg

Quick links

What has caught my attention lately:
  • Tog (of the famous Tog on Interface) says Apple has lost its way on design: "Apple is destroying design. Worse, it is revitalizing the old belief that design is only about making things look pretty. No, not so! Design is a way of thinking, of determining people’s true, underlying needs, and then delivering products and services that help them." ([1] [2])

  • Good advice on adding features to a product: "'Great or Dead', as in, if we can't make a feature great, it should be killed off." ([1])

  • Great data on smartphone and tablet ownership. Sometimes it's hard to remember that only five years ago most people didn't have smartphones. ([1])

  • Advice for anyone thinking of doing a startup. Here's the conclusion: "So all you need is a great idea, a great team, a great product, and great execution. So easy! ;)" ([1])

  • Related, a Dilbert comic on the value of a startup idea ([1])

  • "People might think that human-level AI is close because they think AI is more magical than it actually is" ([1])

  • "VCs hate technical risk. They’re comfortable with market risk, but technical risk is really difficult for them to reconcile." ([1])

  • Google finds eliminating bad advertisements increases long-term revenue, concluding: "A focus on user satisfaction could help to reduce the ad load on the internet at large with long-term neutral, or even positive, business impact." ([1] [2])

  • "Crappy ad experiences are behind the uptick in ad-blocking tools" ([1])

  • On filter bubbles, a new study finds algorithms yield more diversity of content than people choosing news themselves ([1] [2] [3])

  • Facebook data center fun: "The inclusion of 480 4 TB drives drove the weight to over 1,100 kg, effectively crushing the rubber wheels." ([1])

  • Great data on who uses which social networks ([1])

  • "One of the great mysteries of the tech industry in recent years has been the seeming disinterest of Google, which is now called Alphabet, in competing with Amazon Web Services for corporate customers." ([1])

  • "Maybe part of AWS value prop is the outsourcing of outages: when half the net is offline, any individual down site doesn't look as bad." ([1])

  • "87% of Android devices are vulnerable to attack by malicious apps ... because manufacturers have not provided regular security updates" ([1])

  • Fun maps showing where tourists take photos compared to locals ([1] [2] [3])

  • Multiple camera lenses, an idea soon coming to mobile phones too? ([1])

  • Another interesting camera technology: "17 different wavelengths ... software analyzes the images and finds ones that are most different from what the naked eye sees, essentially zeroing in on ones that the user is likely to find most revealing" ([1])

  • And another: "Take a short image sequence while slightly moving the camera ... to recover the desired background scene as if the visual obstructions were not there" ([1])

  • Useful to know: "Survey results are mostly unaffected when the non-Web respondents are left out." ([1])

  • Surprising finding, meal worms can thrive just eating styrofoam: "the larvae lived as well as those fed with a normal diet (bran) over a period of 1 month" ([1])

  • Autonomous drone for better-than-GoPro filming? ([1] [2])

  • "We see people turning onto, and then driving on, the wrong side of the road a lot ... Drivers do very silly things when they realize they’re about to miss their turn ... Routinely see people weaving in and out of their lanes; we’ve spotted people reading books, and even one [driver] playing a trumpet." ([1])

  • A fun and cool collection of messed up images out of Apple maps. It's almost art. ([1])

  • SMBC comic, also applies to AI ([1])

by Greg Linden at November 25, 2015 07:38 AM


{git, hg} Custom Log Output

The standard log output for both Git and Mercurial is a bit verbose for my liking. I keep my terminal at ~50 lines, which results in only getting about 8 to 10 log entries depending on how verbose the commit was. This isn't a big deal if you are just i...
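
The post is cut off above, but a compact one-line-per-commit format of the sort being described can be had with a git alias - a sketch, not necessarily the author's actual settings:

```ini
# ~/.gitconfig - "git lg" prints one short line per commit
[alias]
    lg = log --pretty=format:'%h %ad %s' --date=short
```

With a ~50-line terminal, something like this fits forty-odd entries per screen instead of eight to ten.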

by Scott Hebert at November 25, 2015 06:00 AM

November 24, 2015

Trouble with tribbles

Replacing SunSSH with OpenSSH in Tribblix

I recently did some work to replace the old SSH implementation used by Tribblix, which was the old SunSSH from illumos, with OpenSSH.

This was always on the list - our SunSSH implementation was decrepit and unmaintained, and there seemed little point in general in maintaining our own version.

The need to replace has become more urgent recently, as the mainstream SSH implementations have drifted to the point that we're no longer compatible - our implementation will not interoperate at all with that on modern Linux distributions with the default settings.

As I've been doing a bit of work with some of those modern Linux distributions, being unable to connect to them was a bit of a pain in the neck.

Other illumos distributions such as OmniOS and SmartOS have also recently been making the switch.

Then there was a proposal to work on the SunSSH implementation so that it was mediated - allowing you to install both SunSSH and OpenSSH and dynamically switch between them to ease the transition. Personally, I couldn't see the point - it seemed to me much easier to simply nuke SunSSH, especially as some distros had already made or were in the process of making the transition. But I digress.

If you look at OmniOS, SmartOS, or OpenIndiana, they have a number of patches - in some cases a lot of patches - to bring OpenSSH more in line with the old SunSSH.

I studied these at some length, and largely rejected them. There are a couple of reasons for this:

  • In Tribblix, I have a philosophy of making minimal modifications to upstream projects. I might apply patches to make software build, or when replacing older components so that I don't break binary compatibility, but in general what I ship is as close to what you would get if you did './configure --prefix=/usr; make ; make install' as I can make it.
  • Some of the fixes were for functionality that I don't use, probably won't use, and have no way of testing. So blindly applying patches and hoping that what I produce still works, and doesn't arbitrarily break something else, isn't appealing. Unfortunately all the gssapi stuff falls into this bracket.

One thing that might change this in the future, and something we've discussed a little, is to have something like Joyent's illumos-extra brought up to a state where it can be used as a common baseline across all illumos distributions. It's a bit too specific to SmartOS right now, so won't work for me out of the box, and it's a little unfortunate that I've just about reimplemented all the same things for Tribblix myself.

So what I ship is almost vanilla OpenSSH. The modifications I have made are fairly few:

It's split into the same packages (3 of them) along just about the same boundaries as before. This is so that you don't accidentally mix bits of SunSSH with the new OpenSSH build.

The server has
KexAlgorithms +diffie-hellman-group1-sha1
added to /etc/ssh/sshd_config to allow connections from older SunSSH clients.

The client has
PubkeyAcceptedKeyTypes +ssh-dss
added to /etc/ssh/ssh_config so that it will allow you to send DSA keys, for users who still have just DSA keys.

Now, I'm not 100% happy about the fact that I might have broken something that SunSSH might have done, but having a working SSH that will interoperate with all the machines I need to talk to outweighs any minor disadvantages.

by Peter Tribble at November 24, 2015 09:32 PM

Le blog de Carl Chenet

db2twitter: Twitter out of the browser

You have a database and a tweet pattern, and want to tweet automatically on a regular basis? No need for RSS, fancy tricks, 3rd-party websites to translate RSS to Twitter, or whatever. Just use db2twitter.

  • db2twitter on Github (star it on Github if you like it :) )
  • Official documentation of db2twitter on readthedocs

by Carl Chenet at November 24, 2015 06:00 PM


November 23, 2015

Everything Sysadmin

What JJ Abrams just revealed about Star Wars

Last night (Saturday, Nov 21) I attended a fundraiser for the Montclair Film Festival where (I kid you not) for 90 minutes we watched Stephen Colbert interview J.J. Abrams.

What I learned:

  • He finished mixing The Force Awakens earlier that day. 2:30am California time. He then spent all day traveling to Newark, New Jersey for the event.
  • After working on it for so long, he's sooooo ready to get it in the theater. "The truth is working on this movie for nearly three years, it has been like living with the greatest roommate in history for too long. It's time for him to get his own place. It's been great and I can't tell you how much I want him to get out into the world and meet other people because we know each other really well. But really, 'Star Wars' is bigger than all of us. So I'm thrilled beyond words (to be involved) and terrified more than I can say."
  • When they played the Force Awakens trailer, J.J. said he had seen it before, but this was the first time he saw it with a live audience.
  • J.J. was influenced at an early age by "The Force" as being a non-denominational power for good.
  • Stephen Colbert saw the original Star Wars 3 weeks early thanks to a contest. He gloated that he's been excited about Star Wars for 3 weeks longer than anyone here.
  • Jennifer Garner worked for Colbert as a nanny when she was starting out in acting and needed the money.
  • Stephen Colbert auditioned for J.J.'s first film but didn't get the part. The script was called Filofax but was called "Taking Care of Business" when it was released. Colbert said he remembered auditioning for Filofax and then seeing TCoB in the theater and thinking, "Gosh this seems a lot like Filofax!"
  • J.J. acted in a film. He had a cameo in "Six Degrees of Separation". They showed a clip off YouTube.
  • While filming the pilot for "Lost" the network changed presidents. The new one wasn't very confident in the new series and asked them to film an ending for the pilot that would permit them to show it as a made-for-TV movie instead. He pretended to "forget" the request and the president never brought it up again.

The fundraiser was a total win. 2800 people were there (JJ said "about 2700 more than I expected"). If you are in the NY/NJ area, I highly recommend you follow them on Facebook and check out the next film festival on April 29 to May 8, 2016.

November 23, 2015 01:15 AM

Ubuntu Geek

Step By Step Ubuntu 15.10 (Wily Werewolf) LAMP Server Setup

In around 15 minutes, the time it takes to install Ubuntu Server Edition, you can have a LAMP (Linux, Apache, MySQL and PHP) server up and ready to go. This feature, exclusive to Ubuntu Server Edition, is available at the time of installation.

The LAMP option means you don’t have to install and integrate each of the four separate LAMP components, a process which can take hours and requires someone who is skilled in the installation and configuration of the individual applications. Instead, you get increased security, reduced time-to-install, and reduced risk of misconfiguration, all of which results in a lower cost of ownership.

Currently the installer provides PostgreSQL database, Mail Server, Open SSH Server, Samba File Server, Print Server, Tomcat Java Server, Virtual Machine Host, Manual Package selection, LAMP and DNS options for pre-configured installations, easing the deployment of common server configurations.

Read the rest of Step By Step Ubuntu 15.10 (Wily Werewolf) LAMP Server Setup (652 words)

© ruchi for Ubuntu Geek, 2015. | Permalink | No comment

by ruchi at November 23, 2015 12:31 AM

November 22, 2015

Trouble with tribbles

On Keeping Your Stuff to Yourself

One of the fundamental principles of OmniOS - and indeed probably its defining characteristic - is KYSTY, or Keep Your Stuff* To Yourself.

(*um, whatever.)

This isn't anything new. I've expressed similar opinions in the past. To reiterate - any software that is critical for the success of your business/project/infrastructure/whatever should be directly under your control, rather than being completely at the whim of some external entity (in this case, your OS supplier).

We can flesh this out a bit. The software on a system will fall, generally, into 3 categories:

  1. The operating system, the stuff required for the system to boot and run reliably
  2. Your application, and its dependencies
  3. General utilities

As an aside, there are more modern takes on the above problem: with Docker, you bundle the operating system with your application; with unikernels you just link whatever you need from classes 1 and 2 into your application. Problem solved - or swept under the carpet, rather.

Looking at the above, OmniOS will only ship software in class 1, leaving the rest to the end user. SmartOS is a bit of a hybrid - it likes to hide everything in class 1 from you and relies on pkgsrc to supply classes 2 and 3, and the bits of class 1 that you might need.

Most (of the major) Linux distributions ship classes 1, 2, and 3, often in some crazily interdependent mess that you have to spend ages unpicking. The problem being that you need to work extra hard to ensure your own build doesn't accidentally acquire a dependency on some system component (or that your build somehow reads a system configuration file).

Generally missing from discussions is class 3 - the general utilities. Stuff that you could really do with an instance of to make your life easier, but where you don't really care about the specifics.

For example, it helps to have a copy of the GNU userland around. Way too much source out there needs GNU tar to unpack, or GNU make to build, or assumes various things about the userland that are only true of the GNU tools. (Sometimes the GNU tools aren't just a randomly incompatible implementation; they occasionally have capabilities that are missing from the standard tools - like in-place editing in gsed.)
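
That in-place editing, for the record - the -i flag is the GNU extension that standard sed lacks (GNU sed is what ships as gsed on illumos):

```shell
# Create a file, then edit it in place with GNU sed
printf 'colour\n' > /tmp/demo.txt
sed -i 's/colour/color/' /tmp/demo.txt
cat /tmp/demo.txt                # -> color
```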

Or a reasonably complete suite of compression utilities. More accurately, uncompression, so that you have a pretty good chance of being able to unpack some arbitrary format that people have decided to use.

Then there are generic runtimes. There's an awful lot of python or perl out there, and sometimes the most convenient way to get a job done is to put together a small script or even a one-liner. So while you don't really care about the precise details, having copies of the appropriate runtimes (and you might add java, erlang, ruby, node, or whatever to that list) really helps for the occasions when you just want to put together a quick throwaway component. Again, if your business-critical application stack requires that runtime, you maintain your own, with whatever modules you need.

There might also be a need for basic graphics. You might not want or need a desktop, but something is linked against X11 anyway. (For example, java was mistakenly linked against X11 for font handling, even in headless mode - a bug recently fixed.) Even if it's not X11, applications might use common code such as cairo or pango for drawing. Or they might need to read or write image formats for web display.

So the chances are that you might pull in a very large code surface, just for convenience. Certainly I've spent a lot of time building 3rd-party libraries and applications on OmniOS that were included as standard pretty much everywhere else.

In Tribblix, I've attempted to build and package software cognizant of the above limitations. So I supply as wide a range of software in class 3 as I can - this is driven by my own needs and interests, as a rule, but over time it's increasingly complete. I do supply application stacks, but these are built to be in a separate location, and are kept at arm's length from the rest of the system. This is then integrated with Zones in a standardized zone architecture in a way that can be managed by zap. My intention here is not necessarily to supply the building blocks that can be used by users, but to provide the whole application, fully configured and ready to go.

by Peter Tribble at November 22, 2015 02:20 PM


November 20, 2015


Implementing 'git show' in Mercurial

One of my frequently used git commands is 'git show <rev>'. As in, "show me what the heck this guy did here." Unfortunately, Mercurial doesn't have the same command, but it's easy enough to implement it using an alias in your .hgrc. The command y...
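
The alias itself is truncated above; a reconstruction along these lines (my flags, not necessarily the author's exact ones) does the job:

```ini
# ~/.hgrc - "hg show REV" = verbose log plus patch for one revision
[alias]
show = log --verbose --patch --rev
```

Running `hg show tip` then expands to `hg log --verbose --patch --rev tip`, since Mercurial appends the alias's arguments.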

by Scott Hebert at November 20, 2015 02:00 PM

Yellow Bricks

Virtual SAN: Generic storage platform of the future


Over the last couple of weeks I have been presenting at various events on the topic of Virtual SAN. One of the sections in my deck is a bit about the future of Virtual SAN and where it is heading towards. Someone tweeted one of the diagrams in my slides recently which got picked up by Christian Mohn who provided his thoughts on the diagram and what it may mean for the future. I figured I would share my story behind this slide, which is actually a new version of a slide that was originally presented by Christos and also discussed in one of his blog posts. First, lets start with the diagram:

If you ask people today what VSAN is, most will answer: a “virtual machine” storage system. But VSAN to me is much more than that. VSAN is a generic object storage platform, which today is primarily used to store virtual machines. But these objects can be anything if you ask me, and on top of that they can be presented as anything.

So what is VMware working towards - what is our vision? VSAN was designed to serve as a generic object storage platform from the start, and is being extended to serve different types of data by providing an abstraction layer. In the diagram you see “REST” and “FILE” and things like Mesos and Docker; it isn't difficult to imagine what types of workloads we envision running on top of VSAN, and what types of access you have to resources managed by VSAN. This could be through a native REST API that is part of the platform, which developers can use directly to store their objects, or through the use of a specific driver for direct “block” access, for instance.

Combine that with the prototype of the distributed filesystem which was demonstrated at VMworld and I think it is fair to say that the possibilities are endless. VSAN isn't just a storage system for virtual machines; it is a generic object-based storage platform which leverages local resources and will be able to share those in a clustered fashion in any shape or form in the future. Christian definitely had a point; in which shape or form all of this will be delivered remains to be seen, though - this is not something I can (or want to) speculate on. Whether that is through Photon Platform, or something else, is in my opinion beside the point. Even today VSAN has no dependencies on vCenter Server and can be fully configured, managed and monitored using the APIs and/or the different command-line interface options we offer. Agility and choice have always been the key design principles for the platform.

Where things will go exactly and when this will happen is still to be seen. But if you ask me, exciting times are ahead for sure, and I can’t wait to see how everything plays out.


"Virtual SAN: Generic storage platform of the future" originally appeared on Follow me on twitter - @DuncanYB.

by Duncan Epping at November 20, 2015 01:49 PM

Aaron Johnson

Links: 11-19-2015

by ajohnson at November 20, 2015 06:30 AM

November 19, 2015


How to Delete or “Forget” a WiFi Network in Windows 10

This brief guide will take you step by step through the process of removing (also known as “forgetting”) a Wireless Network in Windows 10.

Two of the most common reasons for deleting the connection settings of a wireless network that you’ve previously connected to are:

  1. Troubleshooting a connection that no longer works.
  2. To stop your device from automatically connecting to networks you don’t use regularly (particularly helpful for laptop and tablet users).

Here are the quick steps to take in order to remove a wireless network from Windows 10:

  1. Click the “Start” button and select Settings
  2. Select Network & Internet
  3. Select Wi-Fi from the column on the left side of the window. From the Wi-Fi panel now displayed on the right side of the window, scroll down to and select/click Manage Wi-Fi settings
  4. Scroll down to the Manage known networks section. From the list of all of your “known” (saved) Wi-Fi networks, select the one you want to delete by clicking on it once. Once it’s been selected, a Forget button will appear. Click that button to remove the associated Wireless Network.
  5. Select each of the other networks you want to delete (if any) and repeat the “click Forget” process. That’s it!
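
The same result is also available from the command line via netsh, which can be handy for scripting - a sketch to run in an elevated Command Prompt, where "HomeNetwork" is a placeholder for whatever name appears in your profile list:

```
rem List the saved Wi-Fi profiles
netsh wlan show profiles
rem Delete ("forget") one profile by name
netsh wlan delete profile name="HomeNetwork"
```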

by Ross McKillop at November 19, 2015 08:37 PM

Administered by Joe. Content copyright by their respective authors.