Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

July 30, 2014

Chris Siebenmann

My view on FreeBSD versus Linux, primarily on the desktop

Today I wound up saying on Twitter:

@thatcks: @devbeard I have a pile of reasons I'm not enthused about FreeBSD, especially as a desktop with X and so on. So that's not really an option.

I got asked about this on Twitter and since my views do not in any way fit into 140 characters, it's time for an entry.

I can split my views up into three broad categories: pragmatic, technical, and broadly cultural and social. The pragmatic reasons are the simplest ones and boil down to the fact that Linux is the dominant open source Unix. People develop software for Linux first and everything else second, if at all. This is extremely visible for an X desktop (the X server and all modern desktops are developed and available first on Linux) but extends far beyond that; Go, for example, was first available on Linux and later ported to FreeBSD. Frankly I like having a wide selection of software that works without hassles and often comes pre-packaged, and generally not having to worry if something will run on my OS if it runs on Unix at all. FreeBSD may be more pure and minimal here but as I've put it before, I'm not a Unix purist. In short, running FreeBSD for general usage means taking on a certain amount of extra pain and doing without a certain number of things.

On the technical side I feel that Linux and Linux distributions have made genuinely better choices in many areas, although I'm somewhat hampered by a lack of deep exposure to FreeBSD. For example, I would argue that modern .deb and RPM Linux package management is almost certainly significantly more advanced than FreeBSD ports. As another one, I happen to think that systemd is the best Unix init system currently available with a lot of things it really gets right, although it is not perfect. There are also a horde of packaging decisions like /etc/cron.d that matter to system administrators because they make our lives easier.
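
(As a small, hypothetical illustration of the /etc/cron.d convenience: a package, or an administrator, can drop in a self-contained fragment with its own schedule and user, without editing anyone's crontab.)

# /etc/cron.d/nightly-backup -- hypothetical packaged cron job
# min hour dom month dow  user  command
15 3 * * * root /usr/local/sbin/nightly-backup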

(And yes, FreeBSD has sometimes made better technical choices than Linux. I just think that there have been fewer of them.)

On the social and cultural side, well, I cannot put it nicely so I will put it bluntly: I have wound up feeling that FreeBSD is part of the conservative Unix axis that worships at the altar of UCB BSD, System V, and V7. This is not required by its niche as the non-Linux Unix but that situation certainly doesn't hurt; a non-Linux Unix is naturally attractive to people who don't like Linux's reinvention, ahistoricality, and brash cultural attitudes. I am not fond of this conservatism because I strongly believe that Unix needs to grow and change and that this necessarily requires experimentation, a willingness to have failed experiments, and above all a willingness to change.

This is a somewhat complex thing because I don't object to a Unix being slow moving. There is certainly a useful ecological niche for a cautious Unix that lets other people play pioneer and then adopts the ideas that have proven to be good ones (and Linux's willingness to adopt new things creates churn; just ask all of the people who ported their init scripts to Upstart and will now be re-porting them to systemd). If I was confident that FreeBSD was just waiting to adopt the good bits, that would be one thing. But as an outsider I haven't been left with that feeling; instead my brushing contacts have left me with more the view that FreeBSD has an aspect of dogmatic, 'this is how UCB BSD does it' conservatism to it. Part of this is based on FreeBSD still not adopting good ideas that are by now solidly proven (such as, well, /etc/cron.d as one small example).

This is also the area where my cultural bad blood with FreeBSD comes into play. Among other more direct things, I'm probably somewhat biased towards seeing FreeBSD as more conservative than it actually is and I likely don't give FreeBSD the benefit of the doubt when it does something (or doesn't do something) that I think of as hidebound.

None of this makes FreeBSD a bad Unix. Let me say it plainly: FreeBSD is a perfectly acceptable Unix in general. It is just not a Unix that I feel any particular enthusiasm for and thus not something I'm inclined to use without a compelling reason. My default Unix today is Linux.

(It would take a compelling reason to move me to FreeBSD instead of merely somewhere where FreeBSD is a bit better because of the costs of inevitable differences.)

by cks at July 30, 2014 05:32 AM

July 29, 2014

Ubuntu Geek

How to configure NFS Server and Client Configuration on Ubuntu 14.04

NFS was developed at a time when sharing drives between machines was not as easy as it is today in, say, the Windows environment. It offers the ability to share the hard disk space of a big server with many smaller clients. Again, this is a client/server environment. While this seems like a standard service to offer, it was not always like this; in the past, clients and servers were unable to share their disk space.
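
To make the idea concrete, here is a minimal sketch of an NFS export and mount on Ubuntu (hypothetical paths, host name and subnet; the full article walks through the real setup):

On the server:
$ sudo apt-get install nfs-kernel-server
$ echo "/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
$ sudo exportfs -ra

On the client:
$ sudo apt-get install nfs-common
$ sudo mount -t nfs server.example.com:/srv/share /mnt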

(...)
Read the rest of How to configure NFS Server and Client Configuration on Ubuntu 14.04 (626 words)



by ruchi at July 29, 2014 11:42 PM

Everything Sysadmin

Honest Life Hacks

I usually don't blog about "funny web pages" I found but this is relevant to the blog. People often forward me these "amazing life hacks that will blow your mind" articles because of the Time Management Book.

First of all, I shouldn't have to tell you that these are linkbait (warning: autoplay).

Secondly, here's a great response to all of these: Honest Life Hacks.

July 29, 2014 03:28 PM

The Nubby Admin

I Don’t Always Mistype My Password

It never fails. At the last two or three characters of a password, I’ll strike two keys at the same time. Maybe I pressed both, maybe I only pressed one. Whatever happened, certainly it doesn’t make any sense to press return and roll the dice. I’ll either get in or fail and have to type it all over again, and sysadmins can’t admit failure nor can we entertain the possibility of failing!!

So what’s the most logical maneuver? Backspace 400 times to make sure you got it all and try over. Make sure you fumble the password a few more times in a row so you can get a nice smooth spot worn on your backspace key.


by WesleyDavid at July 29, 2014 11:12 AM

Chris Siebenmann

FreeBSD, cultural bad blood, and me

I set out to write a reasoned, rational elaboration of a tweet of mine, but in the course of writing it I've realized that I have some of those sticky human emotions involved too, much like my situation with Python 3. What it amounts to is that in addition to my rational reasons I have some cultural bad blood with FreeBSD.

It certainly used to be the case that a vocal segment of *BSD people, FreeBSD people among them, were elitists who looked down their noses at Linux (and sometimes other Unixes too, although that was usually quieter). They would say that Linux was not a Unix. They would say that Linux was clearly used by people who didn't know any better or who didn't have any taste. There was a fashion for denigrating Linux developers (especially kernel developers) as incompetents who didn't know anything. And so on; if you were around at the right time you probably can think of other things. In general these people seemed to venerate the holy way of UCB BSD and find little or no fault in it. Often these people believed (and propagated) other Unix mythology as well.

(This sense of offended superiority is in no way unique to *BSD people, of course. Variants have happened all through computing's history, generally from the losing side of whatever shift is going on at the time. The *BSD attitude of 'I can't believe so many people use this stupid Linux crud' echoes the Lisp reaction to Unix and the Unix reaction to Windows and Macs, at least (and the reaction of some fans of various commercial Unixes to today's free Unixes).)

This whole attitude irritated me for various reasons and made me roll my eyes extensively; to put it one way, it struck me as more religious than well informed and balanced. To this day I cannot completely detach my reaction to FreeBSD from my association of it with a league of virtual greybeards who are overly and often ignorantly attached to romantic visions of the perfection of UCB BSD et al. FreeBSD is a perfectly fine operating system and there is nothing wrong with it, but I wish it had kept different company in the late 1990s and early to mid 00s. Even today there is a part of me that doesn't want to use 'their' operating system because some of the company I'd be keeping would irritate me.

(FreeBSD keeping different company was probably impossible, though, because of where the Unix community went.)

(I date this *BSD elitist attitude only through the mid 00s because of my perception that it's mostly died down since then. Hopefully this is an accurate perception and not due to selective 'news' sources.)

by cks at July 29, 2014 03:33 AM

July 28, 2014

Standalone Sysadmin

Kerbal Space System Administration

I came to an interesting cross-pollination of ideas yesterday while talking to my wife about what I'd been doing lately, and I thought you might find it interesting too.

I've been spending some time lately playing video games. In particular, I'm especially fond of Kerbal Space Program, a space simulation game where you play the role of spaceflight director of Kerbin, a planet populated by small, green, mostly dumb (but courageous) people known as Kerbals.

Initially the game was a pure sandbox, as in, "You're in a planetary system. Here are some parts. Go knock yourself out", but recent additions to the game include a career mode in which you explore the star system and collect "science" points for doing sensor scans, taking surface samples, and so on. It adds a nice "reason" to go do things, and I've been working on building out more efficient ways to collect science and get it back to Kerbin.

Part of the problem is that when you use your sensors, whether they detect gravity, temperature, or materials science, you often lose a large percentage of the data when you transmit it back, rather than deliver it in ships - and delivering things in ships is expensive.

There is an advanced science lab called the MPL-LG-2 which allows greater fidelity in transmitted data, so my recent work in the game has been to build science ships which consist of a "mothership" with a lab, and a smaller lightweight lander craft which can go around whatever body I'm orbiting and collect data to bring to the mothership. It's working pretty well.

At the same time, I'm working on building out a collectd infrastructure that can talk to my graphite installation. It's not as easy as I'd like because we're standardized on Ubuntu Precise, which only has collectd 4.x, and the write_graphite plugin began with collectd 5.1.

To give you background, collectd is a daemon that runs and collects information, usually from the local machine, but there are an array of plugins to collect data from any number of local or remote sources. You configure collectd to collect data, and you use a write_* plugin to get that data to somewhere that can do something with it.
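
(As a minimal sketch of what that looks like in collectd.conf, and not the full configuration used here, you mostly pick the plugins you want loaded and then configure the write_* one:)

# Collect some local metrics...
LoadPlugin cpu
LoadPlugin memory
LoadPlugin df
# ...and ship them somewhere that can do something with them.
LoadPlugin write_graphite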

It was in the middle of explaining these two things - KSP's science missions and collectd - that I saw the amusing parallel between them. In essence, I'm deploying science ships around my infrastructure to make it easier to get science back to my central repository so that I advance my own technology. I really like how analogous they are.

I talked about doing the collectd work on Twitter, and Martijn Heemels expressed interest in what I was doing since he would also like write_graphite on Precise. I figured other people might want to get in on the action too, so to speak. I could give you the package I made, or I could show you how I made it. That sounds more fun.

Like all good things, this project involves software from Jordan Sissel - namely fpm, effing package management. Ever had to make packages and deal with spec files, control files, or esoteric rulesets that made you go into therapy? Not anymore!

So first we need to install it, which is easy, because it's a gem:


$ sudo gem install fpm

Now, let's make a place to stage files before they get packaged:

$ mkdir ~/collectd-package

And grab the source tarball and untar it:

$ wget https://collectd.org/files/collectd-5.4.1.tar.gz
$ tar zxvf collectd-5.4.1.tar.gz
$ cd collectd-5.4.1/

(if you're reading this, make sure to go to collectd.org and get the new one, not the version I have listed here.)

Configure the Makefile, just like you did when you were a kid:

$ ./configure --enable-debug --enable-static --with-perl-bindings="PREFIX=/usr"

Hat tip to Mike Julian who let me know that you can't actually enable debugging in the collectd tool unless you actually use the flag here, so save yourself some heartbreak by turning that on. Also, if I'm going to be shipping this around, I want to make sure that it's compiled statically, and for whatever reason, I found that the perl bindings were sad unless I added that flag.

Now we compile:

$ make

Now we "install":

$ make DESTDIR="/home/YOURUSERNAME/collectd-package" install

I've found that the install script is very grumpy about relative directory names, so I appeased it by giving it the full path to where the things would be dropped (the directory we created earlier).

We're going to be using a slightly customized init script. I took this from the version that comes with the precise 4.x collectd installation and added a prefix variable that can be changed. We didn't change the installation directories above, so by default, everything is going to eventually wind up in /opt/collectd/ and the init script needs to know about that:

$ cd ~
$ mkdir -p collectd-package/etc/init.d/
$ wget --no-check-certificate -O collectd-package/etc/init.d/collectd http://bit.ly/1mUaB7G
$ chmod +x collectd-package/etc/init.d/collectd

This is pulling in the file from this gist.

Now, we're finally ready to create the package:

fakeroot fpm -t deb -C collectd-package/ --name collectd \
--version 5.4.1 --iteration 1 --depends libltdl7 -s dir opt/ usr/ etc/

Since you may not be familiar with fpm, some of the options are obvious, but for the ones that aren't: -C changes directory to the given argument, --version is the version of the software, while --iteration is the version of the package. If you package this, deploy it, then find a bug in the packaging, when you package it again after fixing the problem, you increment the iteration flag, and your package management can treat it as an upgrade. The --depends is a library that collectd needs on the end systems. -s sets the source type to "directory", and then we give it a list of directories to include (remembering that we've changed directories with the -C flag).

Also, this was my first foray into the world of fakeroot, which you should probably read about if you run Debian-based systems.

At this point, in the current directory, there should be "collectd_5.4.1-1.deb", a package file that works for installing using 'dpkg -i' or in a PPA or in a repo, if you have one of those.
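
For example, installing and starting it by hand might look something like this (a sketch, not an exact transcript):

$ sudo dpkg -i collectd_5.4.1-1.deb
$ sudo update-rc.d collectd defaults
$ sudo service collectd start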

Once collectd is installed, you'll probably want to configure it to talk to your graphite host. Just edit the config in /opt/collectd/etc/collectd.conf. Make sure to uncomment the write_graphite plugin line, and change the write_graphite section. Here's mine (the node name "graphite" is just a label):

<Plugin write_graphite>
  <Node "graphite">
    Host "YOURGRAPHITESERVER"
    Port "2003"
    Protocol "tcp"
    LogSendErrors true
    # remember the trailing period in prefix
    #    otherwise you get CCIS.systems.linuxTHISHOSTNAME
    #    You'll probably want to change it anyway, because
    #    this one is mine. ;-)
    Prefix "CCIS.systems.linux."
    StoreRates true
    AlwaysAppendDS false
    EscapeCharacter "_"
  </Node>
</Plugin>

Anyway, hopefully this helped you in some way. Building a puppet module is left as an exercise to the reader. I think I could do a simplistic one in about 5 minutes, but as soon as you want to intelligently decide which modules to enable and configure, then it gets significantly harder. Hey, knock yourself out! (and let me know if you come up with anything cool!)

by Matt Simmons at July 28, 2014 10:22 AM

Chris Siebenmann

Go is still a young language

Once upon a time, young languages showed their youth by having core incapabilities (important features not implemented, important platforms not supported, or the like). This is no longer really the case today; now languages generally show their youth through limitations in their standard library. The reality is that a standard library that deals with the world of the modern Internet is both a lot of work and the expression of a lot of (painful) experience with corner cases, how specifications work out in practice, and so on. This means that such a library takes time, time to write everything and then time to find all of the corner cases. When (and while) the language is young, its standard library will inevitably have omissions, partial implementations, and rough corners.

Go is a young language. Go 1.0 was only released two years ago, which is not really much time as these things go. It's unsurprising that even today portions of the standard library are under active development (I mostly notice the net packages because that's what I primarily use) and keep gaining additional important features in successive Go releases.

Because I've come around to this view, I now mostly don't get irritated when I run across deficiencies in the corners of Go's standard packages. Such deficiencies are the inevitable consequence of using a young language, and while they're obvious to me that's because I'm immersed in the particular area that exposes them. I can't expect authors of standard libraries to know everything or to put their package to the same use that I am. And time will cure most injuries here.

(Sometimes the omissions are deliberate and done for good reason, or so I've read. I'm not going to cite my primary example yet until I've done some more research about its state.)

This does mean that development in Go can sometimes require a certain sort of self-sufficiency and willingness to either go diving into the source of standard packages or deliberately find packages that duplicate the functionality you need but without the limitations you're running into. Sometimes this may mean duplicating some amount of functionality yourself, even if it seems annoying to have to do it at the time.

(Not mentioning specific issues in, say, the net packages is entirely deliberate. This entry is a general thought, not a gripe session. In fact I've deliberately written this entry as a note to myself instead of writing another irritated grump, because the world does not particularly need another irritated grump about an obscure corner of any standard Go package.)

by cks at July 28, 2014 03:07 AM

July 27, 2014

Ubuntu Geek

VirtualBox 4.3.14 released and ubuntu installation instructions included

VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as well as home use. Not only is VirtualBox an extremely feature rich, high performance product for enterprise customers, it is also the only professional solution that is freely available as Open Source Software under the terms of the GNU General Public License (GPL) version 2.
(...)
Read the rest of VirtualBox 4.3.14 released and ubuntu installation instructions included (577 words)



by ruchi at July 27, 2014 11:32 PM

Rands in Repose

The internet is still at the beginning of its beginning

From Kevin Kelly on Medium:

But, but…here is the thing. In terms of the internet, nothing has happened yet. The internet is still at the beginning of its beginning. If we could climb into a time machine and journey 30 years into the future, and from that vantage look back to today, we’d realize that most of the greatest products running the lives of citizens in 2044 were not invented until after 2014. People in the future will look at their holodecks, and wearable virtual reality contact lenses, and downloadable avatars, and AI interfaces, and say, oh, you didn’t really have the internet (or whatever they’ll call it) back then.

Yesterday was a study in contrasts. I was cleaning up part of the garage in the morning and found a box full of CD cases containing classic video games: Quake, Baldur’s Gate II, Riven, and Alice. Later in the day, I also had the opportunity to play the open Destiny Beta for a few hours.

Anyone who believes that we’re not a part of a wide open frontier only need look at what was considered state of the art just a few years ago.


by rands at July 27, 2014 04:28 PM

Chris Siebenmann

Save your test scripts and other test materials

Back in 2009 I tested ssh cipher speeds (although it later turned out to be somewhat incomplete). Recently I redid those tests on OmniOS, with some interesting results. I was able to do this (and do it easily) because I originally did something that I don't do often enough: I saved the script I used to run the tests for my original entry. I didn't save full information, though; I didn't save information on exactly how I ran it (and there are several options). I can guess a bit but I can't be completely sure.

I should do this more often. Saving test scripts and test material has two uses. First, you can go back later and repeat the tests in new environments and so on. This is not just an issue of getting comparison data, it's also an issue of getting interesting data. If the tests were interesting enough to run once in one environment they're probably going to be interesting in another environment later. Making it easy or trivial to test the new environment makes it more likely that you will. Would I have bothered to do these SSH speed tests on OmniOS and CentOS 7 if I hadn't had my test script sitting around? Probably not, and that means I'd have missed learning several things.

The second use is that saving all of this test information means that you can go back to your old test results with a lot more understanding of what they mean. It's one thing to know that I got network speeds of X Mbytes/sec between two systems, but there are a lot of potential variables in that simple number. Recording the details will give me (and other people) as many of those variables as possible later on, which means we'll understand a lot more about what the simple raw number means. One obvious aspect of this understanding is being able to fully compare a number today with a number from the past.

(This is an aspect of scripts capturing knowledge, of course. But note that test scripts by themselves don't necessarily tell you all the details unless you put a lot of 'how we ran this' documentation into their comments. This is probably a good idea, since it captures all of this stuff in one place.)
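
(For instance, the header comment of a test script is a good place to record the 'how we ran this' part; a hypothetical sketch:)

#!/bin/sh
# ssh-speed-test: measure bulk ssh throughput to a target host.
# How we ran it: from the new fileserver hardware, over the 1G test network,
# once per cipher/MAC combination, e.g. './ssh-speed-test somehost arcfour hmac-md5'.
host=$1; cipher=$2; mac=$3
dd if=/dev/zero bs=1024k count=1024 2>/dev/null |
    ssh -c "$cipher" -m "$mac" "$host" 'dd of=/dev/null bs=1024k'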

by cks at July 27, 2014 04:50 AM

July 26, 2014

Ferry Boender

Scripting a Cisco switch with Python and Expect

In the spirit of "Automate Everything" I was tasked with scripting some oft needed tasks on Cisco switches. It's been a while since I've had to do anything even remotely related to switches, so I thought I'd start by googling for some ways to automate tasks on switches. What I found were two candidates: sw_script and Trigger.

Both seemed to be able to get the job done quite well. Unfortunately it turns out that the source for sw_script is actually nowhere to be found and Trigger wouldn't even install properly, giving me a whole plethora of compiler errors. Since I was rather time constrained, I decided to fall back to good old Expect.

Expect

Expect is a framework to automate interactive applications. Basically, it lets you insert text into the input of a program and then watches the program's output for specific occurrences of text, hence the name "Expect". For example, consider a program that requires the user to enter a username and password. It lets the user know this by giving us prompts:

$ ftp host.local
Username: 
Password:

We can use Expect to scan the output of the program and respond with the username and password when appropriate:

spawn ftp host.local
expect "Username:"
send "fboender\r"
expect "password:"
send "sUp3rs3creT\r"

It's a wonderful tool, but error handling can be somewhat tricky, as you'll see further in this article.

Scripting a Cisco switch

There is an excellent Expect library for Python called Pexpect. Installation on Debian-derived systems is as easy as "aptitude install python-pexpect".

Here's an example session on a Cisco switch we'll automate with Expect in a bit:

$ ssh user@10.0.0.1
Password:
Switch>enable
Password:
Switch#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Switch(config)#interface Gi2/0/2 
Switch(config-if)#switchport access vlan 300
Switch(config-if)#no shutdown
Switch(config-if)#end
Switch#wr mem
Building configuration...
[OK]
Switch#quit

This is a simple manual session that changes the Vlan of switch port "Gi2/0/2" to Vlan 300. So how do we go about automating this with PExpect?

Logging in

The first step is to log in. This is fairly easy:

import sys

import pexpect

switch_ip = "10.0.0.1"
switch_un = "user"
switch_pw = "s3cr3t"
switch_port = "Gi2/0/2"
switch_vlan = 300

child = pexpect.spawn('ssh %s@%s' % (switch_un, switch_ip))
child.logfile = sys.stdout
child.timeout = 4
child.expect('Password:')
child.sendline(switch_pw)
child.expect('>')

First we import the sys and pexpect modules. Then we spawn a new process "ssh user@10.0.0.1". We set the process' logfile to sys.stdout. This is merely for debugging purposes; it tells PExpect to show all the output it's receiving on our terminal. The default timeout is set to 4 seconds.

Then comes the first juicy bit. We let Expect know that we expect to see a 'Password:' prompt. If something goes wrong, for instance if the switch at 10.0.0.1 is down, Expect will wait for 4 seconds, looking for the text 'Password:' in SSH's output. Of course, it won't get that prompt since the switch is down, so it will raise a pexpect.TIMEOUT exception after 4 seconds. If it does detect the 'Password:' prompt, the script then sends the switch password and waits until it detects the '>' prompt.

Catching errors

If we want to catch errors and show the user somewhat helpful error messages, we can use try/except clauses:

try:
  child.expect('Password:')
except pexpect.TIMEOUT:
  raise OurException("Login prompt not received")

After the password prompt, we send the password. If all goes well, we'll receive the '>' prompt. Otherwise the switch will ask for the password again. We don't "expect" this, so PExpect will time out once again while waiting for the ">" prompt.

try:
  child.sendline(switch_pw)
  child.expect('>')
except pexpect.TIMEOUT:
  raise OurException("Login failed")

Let's jump ahead a bit and look at the next interesting problem. What if we supply the wrong port? The switch will respond like so:

Switch(config)#interface Gi2/0/2 
                         ^
% Invalid input detected at '^' marker.

If, on the other hand, our port is correct, we'll simply get a prompt:

Switch(config-if)#

So here we have two possible scenarios. Something goes wrong, or it goes right. How do we detect this? We can tell Expect that we expect two different scenarios:

o = child.expect(['\(config-if\)#', '% Invalid'])
if o != 0:
  raise OurException("Unknown switch port '%s'" % (port))

The first scenario '\(config-if\)#' is our successful one. The second is when an error occurred. We then simply check that we got the successful one, and otherwise raise an error.

The rest of the script is just straightforward expects and sendlines.

The full script

Here's the full script:

import sys

import pexpect

# Minimal versions of the helpers the rest of the script expects:
verbose = True          # set to False to silence the session log


class OurException(Exception):
    pass


def error(msg):
    sys.stderr.write(msg + "\n")


switch_ip = "10.0.0.1"
switch_un = "user"
switch_pw = "s3cr3t"
switch_enable_pw = "m0r3s3cr3t"
port = "Gi2/0/2"
vlan = 300

try:
    try:
        child = pexpect.spawn('ssh %s@%s' % (switch_un, switch_ip))
        if verbose:
            child.logfile = sys.stdout
        child.timeout = 4
        child.expect('Password:')
    except pexpect.TIMEOUT:
        raise OurException("Couldn't log on to the switch")

    child.sendline(switch_pw)
    child.expect('>')
    child.sendline('terminal length 0')
    child.expect('>')
    child.sendline('enable')
    child.expect('Password:')
    child.sendline(switch_enable_pw)
    child.expect('#')
    child.sendline('conf t')
    child.expect('\(config\)#')
    child.sendline('interface %s' % (port))
    o = child.expect(['\(config-if\)#', '% Invalid'])
    if o != 0:
        raise OurException("Unknown switch port '%s'" % (port))
    child.sendline('switchport access vlan %s' % (vlan))
    child.expect('\(config-if\)#')
    child.sendline('no shutdown')
    child.expect('#')
    child.sendline('end')
    child.expect('#')
    child.sendline('wr mem')
    child.expect('\[OK\]')
    child.expect('#')
    child.sendline('quit')
except (pexpect.EOF, pexpect.TIMEOUT):
    error("Error while trying to move the vlan on the switch.")
    raise

Conclusion

It's too bad that I couldn't use any of the existing frameworks. I could have tried getting Trigger to compile, but I was time constrained so I didn't bother. There are other ways of configuring Switches too. SNMP is one way, but it is complex and prone to errors. I believe it's also possible to retrieve the entire configuration from a switch, modify it and put it back. This is partly what Rancid does. However that would require even more time.

Expect was a good fit in this case. Although it too is rather error prone, it's fairly easy to catch errors as long as you're expecting (no pun intended) them. I strongly suggest you give Trigger a try before falling back to Expect. It seems like a very decent tool.

by admin at July 26, 2014 06:24 AM

Chris Siebenmann

An interesting picky difference between Bourne shells

Today we ran into an interesting bug in one of our internal shell scripts. The script had worked for years on our Solaris 10 machines, but on a new OmniOS fileserver it suddenly reported an error:

script[77]: [: 232G: arithmetic syntax error

Cognoscenti of ksh error messages have probably already recognized this one and can tell me the exact problem. To show it to everyone else, here is line 77:

if [ "$qsize" -eq "none" ]; then
   ....

In a strict POSIX shell, this is an error because test's -eq operator is specifically for comparing numbers, not strings. What we wanted is the = operator.
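
What line 77 should have been using is the string comparison operator:

if [ "$qsize" = "none" ]; then
   ....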

What makes this error more interesting is that the script had been running for some time on the OmniOS fileserver without this error. However, until now the $qsize variable had always had the value 'none'. So why hadn't it failed earlier? After all, 'none' (on either side of the expression) is just as much of a not-a-number as '232G' is.

The answer is that this is a picky difference between shells in terms of how they actually behave. Bash, for example, always complains about such misuse of -eq; if either side is not a number you get an error saying 'integer expression expected' (as does Dash, with a slightly different error). But on our OmniOS, /bin/sh is actually ksh93 and ksh93 has a slightly different behavior. Here:

$ [ "none" -eq "none" ] && echo yes
yes
$ [ "bogus" -eq "none" ] && echo yes
yes
$ [ "none" -eq 0 ] && echo yes
yes
$ [ "none" -eq "232G" ] && echo yes
/bin/sh: [: 232G: arithmetic syntax error

The OmniOS version of ksh93 clearly has some sort of heuristic about number conversions such that strings with no numbers are silently interpreted as '0'. Only invalid numbers (as opposed to things that aren't numbers at all) produce the 'arithmetic syntax error' message. Bash and dash are both more straightforward about things (as is the FreeBSD /bin/sh, which is derived from ash).

Update: my description isn't actually what ksh93 is doing here; per opk's comment, it's actually interpreting the none and bogus as variable names and giving them a value of 0 when unset.

Interestingly, the old Solaris 10 /bin/sh seems to basically be calling atoi() on the arguments for -eq; the first three examples work the same, the fourth is silently false, and '[ 232 -eq 232G ]' is true. This matches the 'let's just do it' simple philosophy of the original Bourne shell and test program and may be authentic original V7 behavior.

(Technically this is a difference in test behavior, but test is a builtin in basically all Bourne shells these days. Sometimes the standalone test program in /bin or /usr/bin is actually a shell script to invoke the builtin.)

by cks at July 26, 2014 03:35 AM

July 25, 2014

Yellow Bricks

Must attend VMworld sessions 2014


Every year I do this post on the must attend VMworld sessions, and I just realized I had not done this for 2014 yet. So here it is, the list of sessions I feel are most definitely worth attending. I tend to focus on sessions which I know will have solid technical info and great presenters, many of whom I have seen present myself over the years and respect very much. I tried to limit the list to 20 this year (edit: 21, 22), so of course it could be that your session (or your fav session) is missing; unfortunately I cannot list them all as that would defeat the purpose.

Here we go:

  1. STO3008-SPO - Decoupled Storage: Practical Examples of Leveraging Server Flash in a Virtualized Datacenter, by Satyam Vaghani and Frank Denneman. What more do I need to say? Both rock stars!
  2. STO1279 - Virtual SAN Architecture Deep Dive. Christian and Christos were the leads on VSAN; who can tell you better than they can?
  3. SDDC1176 - Ask the Expert vBloggers, featuring Chad Sakac, Scott Lowe, William Lam and myself, moderated by Rick Scherer. This session has been a hit for the last few years and will be one you cannot miss!
  4. STO2996-SPO - The vExpert Storage Game Show, featuring Vaughn Steward, Cormac Hogan, Rawlinson Rivera and many others… It will be educational and entertaining for sure! Not the standard “death by powerpoint” session. If you do want “DBP”, this is not for you!
  5. STP3266 - Web-Scale Converged Infrastructure for Enterprise. Josh Odgers talking web scale for enterprise organizations. Are you still using legacy apps? Then this is a must attend.
  6. SDDC2492 - How the New Software-defined Paradigms Will Impact Your vSphere Design. Forbes Guthrie and Scott Lowe talking vSphere design; you bet you will learn something here!
  7. HBC2068 - vCloud Hybrid Service Networking Technical Deep Dive. Want to know more about vCHS networking? I am sure David Hill is going to dive deep!
  8. NET2747 - VMware NSX: Software Defined Networking in the Real World. Chris Wahl and Jason Nash talking networking; what is there not to like?
  9. BCO1893 - Site Recovery Manager and vCloud Automation Center: Self-service DR Protection for the Software-Defined Data Center. Lee Dilworth, my co-presenter for the previous 2 VMworlds, knows what he is talking about! He is co-hosting this DR session with one of the BC/DR PMs, Ben Meadowcroft. This will be good.
  10. NET1674 - Advanced Topics & Future Directions in Network Virtualization with NSX. I have seen Bruce Davie present multiple times; always a pleasure and educational!
  11. STO2496 - vSphere Storage Best Practices: Next-Gen Storage Technologies. Chad and Vaughn in one session… this will be good!
  12. BCO2629 - Site Recovery Manager and vSphere Replication: What’s New Technical Deep Dive. Jeff Hunter and Ken Werneburg are the DR experts at VMware Tech Marketing, so worth attending for sure!
  13. HBC2638 - Ten Vital Best Practices for Effective Hybrid Cloud Security, by Russel Callen and Matthew Probst… These guys are the vCHS architects; you can bet this will be useful!
  14. STO3162 - Software Defined Storage: Satisfy the Requirements of Your Application at the Granularity of a Virtual Disk with Virtual Volumes (VVols). Cormac Hogan talking VVOLs with Rochna from Nimble; this is one I would like to see!
  15. STO2480 - Software Defined Storage – The VCDX Way Part II: The Empire Strikes Back. The title by itself is enough to attend this one… (Wade Holmes and Rolo Rivera)
  16. SDDC3281 - A DevOps Story: Unlocking the Power of Docker with the VMware platform and its ecosystem. You may not know these guys, but I do… Aaron and George are rock stars, and Docker seems to be the new buzzword. Find out what it is about!
  17. VAPP2979 - Advanced SQL Server on vSphere Techniques and Best Practices. Scott and Jeff are the experts when it comes to virtualizing SQL; what more can I say?!
  18. STO2197 - Storage DRS: Deep Dive and Best Practices. Mustafa Uysal is the lead on SDRS/SIOC; I am sure this session will contain some gems!
  19. HBC1534 - Recovery as a Service (RaaS) with vCloud Hybrid Service. David Hill and Chris Colotti talking; always a pleasure to attend!
  20. MGT1876 - Troubleshooting With vCenter Operations Manager (Live Demo). Wondering why your VM is slow? Sam McBride and Praveen Kannan will show you live…
  21. INF1601 - Taking Reporting and Command Line Automation to the Next Level with PowerCLI, with Alan Renouf and Luc Dekens. All I would like to know is if PowerCLI-man is going to be there or not?
  22. MGT1923 - vCloud Automation Center 6 and Storage Policy-Based Management Framework Integration, with Rawlinson Rivera and Chen Wei… They are doing things with VCAC and SPBM which have never been seen before!

As stated, some of your fav sessions may be missing… feel free to leave a suggestion so that others know which sessions they should attend.

"Must attend VMworld sessions 2014" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at July 25, 2014 07:14 PM

Everything Sysadmin

System Administrator Appreciation Day Spotlight: SuperUser.com

This being System Administrator Appreciation Day, I'd like to give a shout out to all the people of the superuser.com community.

If you aren't a system administrator, but have technical questions that you want to bring to your system administrator, this is the place for you. Do you feel bad that you keep interrupting your office IT person with questions? Maybe you should try SuperUser first. You might get your answers faster.

Whether it is a question about your home cable modem, or the mysteries of having to reboot after uninstalling software (this discussion will surprise you), this community will probably reduce the number of times each week that you interrupt your IT person.

If you are a system administrator, and like to help people, consider poking around the unanswered questions page!

Happy Sysadmin Appreciation Day!

Tom

P.S. "The Practice of System and Network Administration" is InformIT's "eBook deal of the day" and can be purchased at an extreme discount for 24 hours.

July 25, 2014 04:28 PM

System Administrator Appreciation Day Spotlight: ServerFault.com

This being System Administrator Appreciation Day, I'd like to give a shout out to all the people on ServerFault.com who help system administrators find the answers to their questions. If you are patient, understanding, and looking to help fellow system administrators, this is a worthy community to join.

I like to click on the "Unanswered" button and see what questions are most in need of a little love.

Sometimes I click on the "hot this week" tab and see what has been answered recently. I always learn a lot. Today I learned:

ServerFault also has a chatroom, which is a useful place to hang out and meet the other members.

Happy Sysadmin Appreciation Day!

Tom

P.S. "The Practice of System and Network Administration" is InformIT's "eBook deal of the day" and can be purchased at an extreme discount for 24 hours.

July 25, 2014 03:28 PM

Google Blog

Through the Google lens: search trends July 18-24

Based on search, it seems like a lot of you spent the last seven days slurping ice cream cones, jamming to pop parodies and starting the countdown to a certain February flick. Could be worse. Here’s a look at what people were searching for last week:

Fifty shades of search
Searchers were “Crazy in Love” with the new trailer for Fifty Shades of Grey, set to a special Beyonce recording of her 2003 hit. There were more than a million searches this week for the ….ahem… hotly anticipated movie, which comes out next Valentine’s Day. In addition to the trailer, people were also looking for information on stars [jamie dornan] and [dakota johnson]. Beyonce was in the spotlight for other reasons too, following rumors that her marriage to Jay-Z was on the rocks.
“Mandatory” and musical marriages
After three decades in the biz, Weird Al has finally made his way into the Billboard No. 1 spot with his latest album, “Mandatory Fun.” Though his shtick hasn’t changed, when it comes to promoting his parodies, the artist has adapted to the Internet era, releasing eight new videos in as many days to generate buzz—and more search volume than at any other point in the past five years. As an editor, of course, I’m partial to “Word Crimes” (which has more than 10 million views on YouTube), but it’s just one of the many “breakout” titles searchers are looking for, along with [tacky], [foil] and [first world problems].

In other musical news, Adam Levine’s bride [behati prinsloo] was trending this week after the two got married in Cabo San Lucas. And another Mexico wedding had people searching for information on [ryan dorsey], the new husband (after a surprise ceremony) of Glee star Naya Rivera.

Foodie ups and downs
A national fruit recall at stores like Costco and Whole Foods led people to the web to learn more about [listeria]. For many, the possible contamination may have been an extra incentive to celebrate several less than healthful food holidays: Last Sunday (or should we say sundae?) marked National Ice Cream Day, and people were searching for their favorite flavor. National Hot Dog Day took place just a few days later, though sausage searches paled in comparison. And just in case all that junk food made you thirsty, yesterday’s National Tequila Day had searchers looking for the perfect margarita recipe.
Tip of the week
Overindulged on ice cream last weekend? It’s easy to get back on the healthy eating train with a quick search. Just ask Google “how many calories in hummus?” or “compare coleslaw and potato salad” to get nutrition info on your favorite summer foods.
Posted by Emily Wood, Google Blog Editor, who searched this week for [coming of age in samoa] and [how old is weird al]

by Emily Wood (noreply@blogger.com) at July 25, 2014 03:59 PM

Everything Sysadmin

I'm a system administrator one day a year.

Many years ago I was working at a company and our department's administrative assistant was very strict about not letting people call her a "secretary". She made one exception, however, on "Secretary Appreciation Day".

I'm an SRE at StackExchange.com. That's a Site Reliability Engineer. We try to hold to the ideals of what an SRE is, as set forth in Ben Treynor's keynote at SRECon.

My job title is "SRE". Except one day each year. Since today is System Administrator Appreciation Day, I'm definitely a system administrator... today.

Sysadmins go by many job titles. The company I work for provides Question and Answer communities on over 120 topics. Many of them are technical and appeal to the many kinds of people who are system administrators.

We also have fun sites of interest to sysadmins like Video Games and Skeptics.

All these sites are fun places to poke around and read random answers. Of course, contributing answers is fun too.

Happy System Administrator Appreciation Day!

P.S. "The Practice of System and Network Administration" is InformIT's "eBook deal of the day" and can be purchased at an extreme discount for 24 hours.

July 25, 2014 01:28 PM

50% off Time Management for System Administrators

O'Reilly is running a special deal for Celebrate SysAdmin Day. For one day only, SAVE 50% on a wide range of system administration ebooks and training videos from shop.oreilly.com. If you scroll down to page 18, you'll find Time Management for System Administrators is included.

50% off is pretty astounding. Considering that the book is cheaper than most ($19.99 for the eBook) there's practically no excuse to not have a copy. Finding the time to read it... that may be another situation entirely. I hate to say "you owe it to yourself" but, seriously, if you are stressed out, overworked, and under appreciated this might be a good time to take a break and read the first chapter or two.

July 25, 2014 01:28 PM

Standalone Sysadmin

Happy SysAdmin Appreciation Day 2014!

Well, well, well, look what Friday it is!

In 1999, SysAdmin Ted Kekatos decided that, like administrative professionals, teachers, and cabbage, system administrators needed a day to recognize them, to appreciate what they do, their culture, and their impact on the business, and so he created SysAdmin Appreciation Day. And we all reap the benefit of that! Or at least, we can treat ourselves to something nice and have a really great excuse!

Speaking of, there are several things I know of going on around the world today:

As always, there are a lot of videos released on the topic. Some of the best I've seen are here:

  • A karaoke power-ballad from Spiceworks:
  • A very funny piece on users copping to some bad behavior from Sophos:
  • A heartfelt thanks from ManageEngine:
  • Imagine what life would be like without SysAdmins, by SysAid:

Also, I feel like I should mention that one of my party sponsors, Goverlan, is also giving away their software today to the first thousand people who sign up for it. It's a $900 value, so if you do Active Directory management, you should probably check that out.

Sophos was giving away socks, but it was so popular that they ran out. Not before sending me the whole set, though!

I don't even remember signing up for that, but apparently I did, because they came to my work address. Awesome!

I'm sure there are other things going on, too. Why not comment below if you know of one?

All in all, have a great day and try to find some people to get together with and hang out. Relax, and take it easy. You deserve it.

by Matt Simmons at July 25, 2014 12:52 PM

Chris Siebenmann

The OmniOS version of SSH is kind of slow for bulk transfers

If you look at the manpage and so on, it's sort of obvious that the Illumos and thus OmniOS version of SSH is rather behind the times; Sun branched from OpenSSH years ago to add some features they felt were important and it has not really been resynchronized since then. It (and before it the Solaris version) also has transfer speeds that are kind of slow due to the SSH cipher et al overhead. I tested this years ago (I believe close to the beginning of our ZFS fileservers), but today I wound up retesting it to see if anything had changed from the relatively early days of Solaris 10.

My simple tests today were on essentially identical hardware (our new fileserver hardware) running OmniOS r151010j and CentOS 7. Because I was doing loopback tests with the server itself for simplicity, I had to restrict my OmniOS tests to the ciphers that the OmniOS SSH server is configured to accept by default; at the moment that is aes128-ctr, aes192-ctr, aes256-ctr, arcfour128, arcfour256, and arcfour. Out of this list, the AES ciphers run from 42 MBytes/sec down to 32 MBytes/sec while the arcfour ciphers mostly run around 126 MBytes/sec (with hmac-md5) to 130 Mbytes/sec (with hmac-sha1).
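
(For reference, a loopback test of this sort can be as simple as pushing a fixed amount of data through ssh to localhost and letting dd report the rate; this is a sketch, not necessarily the exact script used:)

$ dd if=/dev/zero bs=1024k count=1024 2>/dev/null |
    ssh -c arcfour -m hmac-sha1 localhost 'dd of=/dev/null bs=1024k'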

(OmniOS unfortunately doesn't have any of the umac-* MACs that I found to be significantly faster.)

This is actually an important result because aes128-ctr is the default cipher for clients on OmniOS. In other words, the default SSH setup on OmniOS is about a third of the speed that it could be. This could be very important if you're planning to do bulk data transfers over SSH (perhaps to migrate ZFS filesystems from old fileservers to new ones).

The good news is that this is faster than 1G Ethernet; the bad news is that this is not very impressive compared to what Linux can get on the same hardware. We can make two comparisons here to show how slow OmniOS is compared to Linux. First, on Linux the best result on the OmniOS ciphers and MACs is aes128-ctr with hmac-sha1 at 180 Mbytes/sec (aes128-ctr with hmac-md5 is around 175 MBytes/sec), and even the arcfour ciphers run about 5 Mbytes/sec faster than on OmniOS. If we open this up to the more extensive set of Linux ciphers and MACs, the champion is aes128-ctr with umac-64-etm at around 335 MBytes/sec and all of the aes GCM variants come in with impressive performances of 250 Mbytes/sec and up (umac-64-etm improves things a bit here but not as much as it does for aes128-ctr).

(I believe that one reason Linux is much faster on the AES ciphers is that the version of OpenSSH that Linux uses has tuned assembly for AES and possibly uses Intel's AES instructions.)

In summary, through a combination of missing optimizations and missing ciphers and MACs, OmniOS's normal version of OpenSSH is leaving more than half the performance it could be getting on the table.

(The 'good' news for us is that we are doing all transfers from our old fileservers over 1G Ethernet, so OmniOS's ssh speeds are not going to be the limiting factor. The bad news is that our old fileservers have significantly slower CPUs and as a result max out at about 55 Mbytes/sec with arcfour (and interestingly, hmac-md5 is better than hmac-sha1 on them).)

PS: If I thought that network performance was more of a limit than disk performance for our ZFS transfers from old fileservers to the new ones, I would investigate shuffling the data across the network without using SSH. I currently haven't seen any sign that this is the case; our 'zfs send | zfs recv' runs have all been slower than this. Still, it's an option that I may experiment with (and who knows, a slow network transfer may have been having knock-on effects).

by cks at July 25, 2014 05:36 AM

Everything Sysadmin

Special deal for Sysadmin Appreciation Day

Today is Sysadmin Appreciation Day! We appreciate all of you! The Practice of System and Network Administration is today's InformIT eBook Deal of the Day. Click on the link to get a special discount.

July 25, 2014 05:28 AM

July 24, 2014

Yellow Bricks

Software Defined Storage, which phase are you in?!


Working within R&D at VMware means you typically work with technology which is 1 – 2 years out, and discuss futures of products which are 2-3 years. Especially in the storage  space a lot has changed. Not just innovations within the hypervisor by VMware like Storage DRS, Storage IO Control, VMFS-5, VM Storage Policies (SPBM), vSphere Flash Read Cache, Virtual SAN etc. But also by partners who do software based solutions like PernixData (FVP), Atlantis (ILIO) and SANDisk FlashSoft. Of course there is the whole Server SAN / Hyper-converged movement with Nutanix, Scale-IO, Pivot3, SimpliVity and others. Then there is the whole slew of new storage systems some which are scale out and all-flash, others which focus more on simplicity, here we are talking about Nimble, Tintri, Pure Storage, Xtreme-IO, Coho Data, Solid Fire and many many more.

Looking at it from my perspective, I would say there are multiple phases when it comes to the SDS journey:

  • Phase 0 – Legacy storage with NFS / VMFS
  • Phase 1 – Legacy storage with NFS / VMFS + Storage IO Control and Storage DRS
  • Phase 2 – Hybrid solutions (Legacy storage + acceleration solutions or hybrid storage)
  • Phase 3 – Object granular policy driven (scale out) storage

<edit>

Maybe I should have abstracted a bit more:

  • Phase 0 – Legacy storage
  • Phase 1 – Legacy storage + basic hypervisor extensions
  • Phase 2 – Hybrid solutions with hypervisor extensions
  • Phase 3 – Fully hypervisor / OS integrated storage stack

</edit>

I have written about Software Defined Storage multiple times in the last couple of years, and have worked with various solutions which are considered to be “Software Defined Storage”. I have a certain view of what the world looks like. However, when I talk to some of our customers, reality is different; some seem very happy with what they have in Phase 0. Although all of the above is the way of the future, and for some may be reality today, I do realise that Phase 1, 2 and 3 may be far away for many. I would like to invite all of you to share:

  1. Which phase you are in, and where you would like to go to?
  2. What you are struggling with most today that is driving you to look at new solutions?

"Software Defined Storage, which phase are you in?!" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at July 24, 2014 03:30 PM

Tech Teapot

Software the old fashioned way

I was clearing out my old bedroom after many years of nagging by my parents when I came across two of my old floppy disk boxes. Contained within is a small snapshot of my personal computing from 1990 through late 1992. Everything before and after those dates doesn’t survive I’m afraid.

(Disk photos: XLISP source code, CLIPS expert system sources, Little Smalltalk interpreter, Micro EMACS.)

The archive contains loads of backups of work I produced, now stored on Github, as well as public domain / shareware software, magazine cover disks and commercial software I purchased. Yes, people used to actually buy software. With real money. A PC game back in the late 1980s cost around £50 in 1980s money. According to this historic inflation calculator, that would be £117 now. Pretty close to a week’s salary for me at the time.

One of my better discoveries from the late 1980s was public domain and shareware software libraries. Back then there were a number of libraries, usually advertised in the small ads at the back of computer magazines.

This is a run down of how you’d use your typical software library:

  1. Find an advert from a suitable library and write them a nice little letter requesting they send you a catalog. Include payment as necessary;
  2. Wait for a week or two;
  3. Receive a small, photocopied catalog with lists of floppies and a brief description of the contents;
  4. Send the order form back to the library with payment, usually by cheque;
  5. Wait for another week or two;
  6. Receive  a small padded envelope through the post with my selection of floppies;
  7. Explore and enjoy!

If you received your order in two weeks you were doing well. After the first order, when you have your catalog to hand, you could get your order in around a week. A week was pretty quick for pretty well anything back then.

The libraries were run as small home businesses. They were the perfect second income. Everything was done by mail, all you had to do was send catalogs when requested and process orders.

One of the really nice things about shareware libraries was that you never really knew what you were going to get. Whilst you’d have an idea of what was on the disk from the description in the catalog, there’d be a lot of programs that were not described. Getting a new delivery was like a mini MS-DOS based text adventure, discovering all of the neat things on the disks.

The libraries contained lots of different things, mostly shareware applications of every kind you can think of. The most interesting to me as an aspiring programmer was the array of public domain software. Public domain software was distributed with the source code. There is no better learning tool when programming than reading other peoples’ code. The best code I’ve ever read was the CLIPS sources for a forward chaining expert system shell written by NASA.

Happy days :)

PS All of the floppies I’ve tried so far still work :) Not bad after 23 years.

PPS I found a letter from October 1990 ordering ten disks from the library.

(Photo: the letter ordering ten disks from the library.)

The post Software the old fashioned way appeared first on Openxtra Tech Teapot.

by Jack Hughes at July 24, 2014 07:00 AM

Aaron Johnson

Links: 7-23-2014

  • Microsoft’s New CEO Needs An Editor | Monday Note
    Loved the hierarchy of ideas graph; quote: "The top layer deals with the Identity or Culture — I use the two terms interchangeably as one determines the other. One level down, we have Goals, where the group is going. Then come the Strategies or the paths to those goals. Finally, we have the Plan, the deployment of troops, time, and money."
    (categories: strategy business communication writing motivation culture microsoft )

  • Hierarchy of ideas | Chunking | NLP
    More on the hierarchy of ideas, this time as it relates to conflict, quote: "In NLP we learn to use this hierarchy of ideas and chunking to assist others in overcoming their problems, we use it to improve our communication skills (so we understand how others are thinking and how we are creating our own problems). We use it to discover the deep structure behind peoples thinking and the words that they are using."
    (categories: chunking ideas communication strategy conflict negotiation )

by ajohnson at July 24, 2014 06:30 AM

Chris Siebenmann

What influences SSH's bulk transfer speeds

A number of years ago I wrote How fast various ssh ciphers are because I was curious about just how fast you could do bulk SSH transfers and how to get them to go fast under various circumstances. Since then I have learned somewhat more about SSH speed, including what controls which options you have available and how fast you can actually go.

To start with, my entry from years ago was naively incomplete, because SSH encryption has two components: a cipher and a cryptographic hash used as the MAC (message authentication code). The choice of both of them can matter, especially if you're willing to deliberately weaken the MAC. As an example of how much of an impact this can make, in my testing on a Linux machine I could almost double SSH bandwidth by switching from the default MAC to 'umac-64-etm@openssh.com'.

(At the same time, no other MAC choice made much of a difference within a particular cipher, although hmac-sha1 was sometimes a bit faster than hmac-md5.)
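
(As a rough sketch of how you might measure this yourself, something along the following lines is enough to compare cipher and MAC combinations; the hostname is a placeholder and the algorithm names are just examples, not a recommendation.)

# push 1 GB of zeros through ssh and let dd report the throughput at the end
$ dd if=/dev/zero bs=1M count=1024 | ssh -c aes128-ctr -m umac-64-etm@openssh.com somehost 'cat > /dev/null'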

Clients set the cipher list with -c and the MAC with -m, or with the Ciphers and MACs options in your SSH configuration file (either a personal one or a global one). However, what the client wants to use has to be both supported by the server and accepted by it; this is set in the server's Ciphers and MACs configuration options. The manpages for ssh_config and sshd_config on your system will hopefully document both what your system supports at all and what it's set to accept by default. Note that this is not necessarily the same thing; I've seen systems where sshd knows about ciphers that it will not accept by default.
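
(As a minimal sketch, assuming a reasonably modern OpenSSH, the command line and configuration file forms look like this; the host and the specific algorithm names are only examples and what's available depends on your version.)

# one-off, on the command line
$ ssh -c aes128-ctr -m umac-64-etm@openssh.com somehost

# or persistently, in ~/.ssh/config (or the global ssh_config)
Host somehost
    Ciphers aes128-ctr,aes256-ctr
    MACs umac-64-etm@openssh.com,hmac-sha1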

(Some modern versions of OpenSSH also report this information through 'ssh -Q <option>'; see the ssh manpage for details. Note that such lists are not necessarily reported in preference order.)
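
(For instance, on an OpenSSH recent enough to support -Q you can ask the client directly, and on the server you can dump the effective configuration with sshd's extended test mode; this is a sketch and paths or privileges may differ on your system.)

# what the client knows about at all
$ ssh -Q cipher
$ ssh -Q mac

# on the server (as root): what sshd is actually configured to accept
$ sudo sshd -T | grep -i -E '^(ciphers|macs)'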

At least some SSH clients will tell you what the server's list of acceptable ciphers (and MACs) is if you tell the client to use options that the server doesn't support. If you wanted to, I suspect that you could write a program in some language with SSH protocol libraries that dumped all of this information for you for an arbitrary server (without the fuss of having to find a cipher and MAC that your client knew about but your server didn't accept).

Running 'ssh -v' will report the negotiated cipher and MAC that are being used for the connection. Technically there are two sets of them, one for the client to server and one for the server back to the client, but I believe that under all but really exceptional situations you'll use the same cipher and MAC in both directions.
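
(A quick way to see this is to run a throwaway command with -v and look at the key exchange debug lines; the exact wording of the output varies between OpenSSH versions, and 'somehost' is a placeholder.)

# the negotiated cipher and MAC appear in the 'kex:' debug lines
$ ssh -v somehost true 2>&1 | grep 'kex:'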

Different Unix OSes may differ significantly in their support for both ciphers and MACs. In particular Solaris effectively forked a relatively old version of OpenSSH and so modern versions of Illumos (and Illumos distributions such as OmniOS) do not offer you anywhere near a modern list of choices here. How recent your distribution is will also matter; our Ubuntu 14.04 machines naturally offer us a lot more choice than our Ubuntu 10.04 ones.

PS: helpfully the latest OpenSSH manpages are online (cf), so the current manpage for ssh_config will tell you the latest set of ciphers and MACs supported by the official OpenSSH and also show the current preference order. To my interest it appears that OpenSSH now defaults to the very fast umac-64-etm MAC.

by cks at July 24, 2014 03:22 AM

July 23, 2014

mikas blog

Book Review: The Docker Book

Docker is an open-source project that automates the deployment of applications inside software containers. I’m responsible for a Docker setup with Jenkins integration and a private docker-registry at a customer, and I pre-ordered James Turnbull’s “The Docker Book” a few months ago.

Recently James – he works for Docker Inc – released the first version of the book, and thanks to being on holiday I already had a few hours to read it AND blog about it. :) (Note: I read the Kindle version 1.0.0, and all the issues I found and reported to James have already been fixed in the current version, yay.)

The book is very well written and covers all the basics to get familiar with Docker and in my opinion it does a better job at that than the official user guide because of the way the book is structured. The book is also a more approachable way for learning some best practices and commonly used command lines than going through the official reference (but reading the reference after reading the book is still worth it).

I like James’ approach of using “ENV REFRESHED_AT $TIMESTAMP” for better controlling the cache behaviour and will definitely consider using it in my own setups as well. What I wasn’t aware of is that you can directly invoke “docker build $git_repos_url”, and I also noted a few command line switches I should get more comfortable with. I also plan to check out the Automated Builds on Docker Hub.
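
(For anyone who hasn’t seen either trick, here’s a minimal sketch; the date and the repository URL are placeholders of my own, not anything taken from the book.)

# Dockerfile fragment: bump the value to invalidate the build cache from this point on
ENV REFRESHED_AT 2014-07-23

# build directly from a git repository instead of a local directory
$ docker build https://github.com/example/some-project.git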

There are some references to further online resources, which is especially relevant for the more advanced use cases, so I’d recommend having network access available while reading the book.

What I’m missing in the book are best practices for running a private docker-registry in a production environment (high availability, scaling options,…). The provided Jenkins use cases are also very basic and nothing I personally would use. I’d also love to see how other folks are using the Docker plugin, the Docker build step plugin or the Docker build publish plugin in production (the plugins aren’t covered in the book at all). But I’m aware that these are fast-moving parts and specialised use cases – upcoming versions of the book are already supposed to cover orchestration with libswarm, developing Docker plugins and more advanced topics, so I’m looking forward to further updates of the book (which you get for free as an existing customer, another plus).

Conclusion: I enjoyed reading the Docker book and can recommend it, especially if you’re either new to Docker or want to get further ideas and inspiration about what folks from Docker Inc consider best practices.

by mika at July 23, 2014 08:16 PM

UnixDaemon

Ansible AWS Lookup Plugins

Once we started linking multiple CloudFormation stacks together with Ansible, we felt the need to query Amazon Web Services for both the output values from existing CloudFormation stacks and certain other values, such as security group IDs and ElastiCache Replication Group endpoints. We found that the quickest and easiest way to gather this information was with a handful of Ansible Lookup Plugins.

I've put the code for the more generic Ansible AWS Lookup Plugins on GitHub, and even if you're an Ansible user who's not using AWS they are worth a look just to see how easy it is to write one.

In order to use these lookup plugins you'll want to configure both your default AWS credentials and, unless you want to keep the plugins alongside your playbooks, your lookup plugins path in your Ansible config.

First we configure the credentials for boto, the underlying AWS library used by Ansible.



cat ~/.aws/credentials
[default]
aws_access_key_id = 
aws_secret_access_key =


Then we can tell Ansible where to find the plugins themselves.



cat ~/.ansible.cfg

[defaults]
...
lookup_plugins = /path/to/git/checkout/cloudformations/ansible-plugins/lookup_plugins


And lastly we can test that everything is working correctly:



$ cat region-test.playbook 
---
- hosts: localhost
  connection: local
  gather_facts: False

  tasks:
  - shell: echo region is =={{ item }}==
    with_items: lookup('aws_regions').split(',')

# and then run the playbook
$ ansible-playbook -i hosts region-test.playbook

Now that you've seen how easy it is, go write your own!

July 23, 2014 06:28 PM

Tech Teapot

Early 1990s Software Development Tools for Microsoft Windows

The early 1990s were an interesting time for software developers. Many of the tools that are taken for granted today made their debut for a mass market audience.

I don’t mean that the tools were not available previously. Both Smalltalk and LISP sported what would today be considered modern development environments all the way back in the 1970s, but hardware requirements put the tools well beyond the means of the regular Joe programmer. Not too many people had workstations at home, or in the office for that matter.

I spent the early 1990s giving most of my money to software development tool companies of one flavour or another.

[Disk photos: Actor v4.0 floppy disks, Whitewater Object Graphics floppy disk, MKS LEX / YACC floppy disks, Turbo Pascal for Windows floppy disks]

Actor was a combination of object oriented language and programming environment for very early versions of Microsoft Windows. There is a review of Actor version 3 in InfoWorld magazine that makes interesting reading. It was somewhat similar to Smalltalk, but rather more practical for building distributable programs. Unlike Smalltalk, it was not cross-platform, but on the plus side programs did look like native Windows programs. It was very much ahead of its time in terms of both the language and the programming environment, and ran on pretty modest hardware.

I gave Borland quite a lot of money too. I bought Turbo Pascal for Windows when it was released, having bought regular old Turbo Pascal v6 for DOS a year or so earlier. The floppy disks don’t have a version number on them, so I have no idea which version it is. Turbo Pascal for Windows eventually morphed into Delphi.

I bought Microsoft C version 6, which introduced a DOS-based IDE, but it was still very much an old school C compiler. If you wanted to create Windows software you needed to buy the Microsoft Windows SDK at considerable extra cost.

Asymetrix Toolbook was marketed in the early 1990s as a generic Microsoft Windows development tool. There are old InfoWorld reviews here and here. Asymetrix later repositioned the product as a learning authoring tool. I rather liked the tool, though it didn’t really have the performance and flexibility I was looking for. Distributing your finished work was also not a strong point.

Microsoft Quick C for Windows version 1.0 was released in late 1991. Quick C bundled a C compiler with the Windows SDK so that you could build 16-bit Windows software. It also sported an integrated C text editor, resource editor and debugger.

The first version of Visual Basic was released in 1991. I am not sure why I didn’t buy it; I imagine there was some programming language snobbery on my part. I know there are plenty of programmers of a certain age who go all glassy eyed at the mere thought of BASIC, but I’m not one of them. Visual Basic also had an integrated editor and debugger.

Both Quick C and Visual Basic are the immediate predecessors of the Visual Studio product of today.

The post Early 1990s Software Development Tools for Microsoft Windows appeared first on Openxtra Tech Teapot.

by Jack Hughes at July 23, 2014 10:59 AM


Administered by Joe. Content copyright by their respective authors.