Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

September 30, 2014

Everything Sysadmin

I'll be at Philly Linux User Group tomorrow (Wednesday)

Hi Philly folks!

I will be speaking at the Philadelphia area Linux Users' Group (PLUG) meeting on Wednesday night (Oct 1st). They meet at the University of the Sciences in Philadelphia (USP). My topic will be "Highlights from The Practice of Cloud System Administration" and I'll have a few copies of the book to give away.

For more info, visit their website: http://www.phillylinux.org/meetings.html

Hope to see you there!

September 30, 2014 04:28 PM

Yellow Bricks

Changes – Joining Office of CTO


Almost 2 years ago I joined Integration Engineering (R&D) within VMware. As part of that role I was very fortunate to work on a very exciting project called “MARVIN”. As most of you know, MARVIN became EVO:RAIL, which has been my primary focus for the last 18 months or so. EVO:RAIL evolved into a team after a successful prototype and came “out of stealth” at VMworld when it was announced by Pat Gelsinger. It was a very exciting project, a great opportunity and an experience I would not have wanted to miss out on. It was truly unique to be one of the three founding members and see it grow from a couple of sketches and ideas to a solution. I want to thank Mornay for providing me the opportunity to be part of the MARVIN rollercoaster ride, and the EVO:RAIL team for the ride / experience / discussions etc!

Over the last months I have been thinking about where I wanted to go next and today I am humbled and proud to announce that I am joining VMware’s Office of CTO (OCTO as they refer to it within VMware) as a Chief Technologist. I’ve been with VMware a little over 6 years, and started out as a Senior Consultant within PSO… I never imagined, not even in my wildest dreams, that one day I would have the opportunity to join a team like this. Very honoured, and looking forward to what is ahead. I am sure I will be placed in many uncomfortable situations, but I know from experience that that is needed in order to grow. I don’t expect much to change on my blog; I will keep writing about products / features / vendors / solutions I am passionate about. That definitely was Virtual SAN in 2014, and could be Virtual Volumes or NSX in 2015… who knows!

Go OCTO!

"Changes – Joining Office of CTO" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at September 30, 2014 03:30 PM

Rands in Repose

Elon on the Future of Humanity

Via Aeon:

‘If you look at our current technology level, something strange has to happen to civilisations, and I mean strange in a bad way,’ he said. ‘And it could be that there are a whole lot of dead, one-planet civilisations.’

#

by rands at September 30, 2014 02:33 PM

Mark Shuttleworth

Fixing the internet for confidentiality and security

“The Internet sees censorship as damage and routes around it” was a very motivating tagline during my early forays into the internet. Having grown up in Apartheid-era South Africa, where government control suppressed the free flow of ideas and information, I was inspired by the idea of connecting with people all over the world to explore the cutting edge of science and technology. Today, people connect with peers and fellow explorers all over the world not just for science but also for arts, culture, friendship, relationships and more. The Internet is the glue that is turning us into a super-organism, for better or worse. And yes, there are dark sides to that easy exchange – internet comments alone will make you cry. But we should remember that the brain is smart even if individual brain cells are dumb, and negative, nasty elements on the Internet are just part of a healthy whole. There’s no Department of Morals I would trust to weed ‘em out or protect me or mine from them.

Today, the pendulum is swinging back to government control of speech, most notably on the net. First, it became clear that total surveillance is the norm even amongst Western democratic governments (the “total information act” reborn).  Now we hear the UK government wants to be able to ban organisations without any evidence of involvement in illegal activities because they might “poison young minds”. Well, nonsense. Frustrated young minds will go off to Syria precisely BECAUSE they feel their avenues for discourse and debate are being shut down by an unfair and unrepresentative government – you couldn’t ask for a more compelling motivation for the next generation of home-grown anti-Western jihadists than to clamp down on discussion without recourse to due process. And yet, at the same time this is happening in the UK, protesters in Hong Kong are moving to peer-to-peer mechanisms to organise their protests precisely because of central control of the flow of information.

One of the reasons I picked the certificate and security business back in the 1990′s was because I wanted to be part of letting people communicate privately and securely, for business and pleasure. I’m saddened now at the extent to which the promise of that security has been undermined by state pressure and bad actors in the business of trust.

So I think it’s time that those of us who invest time, effort and money in the underpinnings of technology focus attention on the defensibility of the core freedoms at the heart of the internet.

There are many efforts to fix this under way. The IETF is slowly becoming more conscious of the ways in which ideals can be undermined and the central role it can play in setting standards which are robust in the face of such inevitable pressure. But we can do more, and I’m writing now to invite applications for Fellowships at the Shuttleworth Foundation by leaders who are focused on these problems. TSF already has Fellows working on privacy in personal communications; we are interested in generalising that to the foundations of all communications. We already have a range of applications in this regard; I would welcome more. And I’d like to call attention to the Edgenet effort (distributing network capabilities, based on zero-mq) which is holding a sprint in Brussels October 30-31.

20 years ago, “Clipper” (a proposed mandatory US government back door, supported by the NSA) died on the vine thanks to a concerted effort by industry to show the risks inherent to such schemes. For two decades we’ve had the tide on the side of those who believe it’s more important for individuals and companies to be able to protect information than it is for security agencies to be able to monitor it. I’m glad that today, you are more likely to get into trouble if you don’t encrypt sensitive information in transit on your laptop than if you do. I believe that’s the right side to fight for and the right side for all of our security in the long term, too. But with mandatory back doors back on the table we can take nothing for granted – regulatory regimes can and do change, as often for the worse as for the better. If you care about these issues, please take action of one form or another.

Law enforcement is important. There are huge dividends to a society in which people can make long term plans, which depends on their confidence in security and safety as much as their confidence in economic fairness and opportunity. But the agencies in whom we place this authority are human and tend over time, like any institution, to be more forceful in defending their own existence and privileges than they are in providing for the needs of others. There has never been an institution in history which has managed to avoid this cycle. For that reason, it’s important to ensure that law enforcement is done by due process; there are no short cuts which will not be abused sooner rather than later. Checks and balances are more important than knee-jerk responses to the last attack. Every society, even today’s modern Western society, is prone to abusive governance. We should fear our own darknesses more than we fear others.

A fair society is one where laws are clear and crimes are punished in a way that is deemed fair. It is not one where thinking about crime is criminal, or one where talking about things that are unpalatable is criminal, or one where everybody is notionally protected from the arbitrary and the capricious. Over the past 20 years life has become safer, not more risky, for people living in an Internet-connected West. That’s no thanks to the listeners; it’s thanks to living in a period when the youth (the source of most trouble in the world) feel they have access to opportunity and ideas on a world-wide basis. We are pretty much certain to have hard challenges ahead in that regard. So for all the scaremongering about Chinese cyber-espionage and Russian cyber-warfare and criminal activity in darknets, we are better off keeping the Internet as a free-flowing and confidential medium than we are entrusting an agency with the job of monitoring us for inappropriate and dangerous ideas. And that’s something we’ll have to work for.

by mark at September 30, 2014 02:24 PM

apt-get

My Debian LTS report for September

Thanks to the sponsorship of multiple companies, I have been paid to work 11 hours on Debian LTS this month.

CVE triaging

I started by doing lots of triage in the security tracker (if you want to help, instructions are here) because I noticed that the dla-needed.txt list (which contains the list of packages that must be taken care of via an LTS security update) was missing quite a few packages that had open vulnerabilities in oldstable.

In the end, I pushed 23 commits to the security tracker. I won’t list the details each time but for once, it’s interesting to let you know the kind of things that this work entailed:

  • I reviewed the patches for CVE-2014-0231, CVE-2014-0226, CVE-2014-0118, CVE-2013-5704 and confirmed that they all affected the version of apache2 that we have in Squeeze. I thus added apache2 to dla-needed.txt.
  • I reviewed CVE-2014-6610 concerning asterisk and marked the version in Squeeze as not affected since the file with the vulnerability doesn’t exist in that version (this entails some checking that the specific feature is not implemented in some other file due to file reorganization or similar internal changes).
  • I reviewed CVE-2014-3596 and corrected the entry that said that it was fixed in unstable. I confirmed that the version in squeeze was affected and added it to dla-needed.txt.
  • Same story for CVE-2012-6153 affecting commons-httpclient.
  • I reviewed CVE-2012-5351 and added a link to the upstream ticket.
  • I reviewed CVE-2014-4946 and CVE-2014-4945 for php-horde-imp/horde3, added links to upstream patches and marked the version in squeeze as unaffected since those concern javascript files that are not in the version in squeeze.
  • I reviewed CVE-2012-3155 affecting glassfish and was really annoyed by the lack of detailed information. I thus started a discussion on debian-lts to see whether this package should not be marked as unsupported security-wise. It looks like we’re going to mark a single binary package as unsupported… the one containing the application server with the vulnerabilities; the rest is still needed to build multiple Java packages.
  • I reviewed many CVE on dbus, drupal6, eglibc, kde4libs, libplack-perl, mysql-5.1, ppp, squid and fckeditor and added those packages to dla-needed.txt.
  • I reviewed CVE-2011-5244 and CVE-2011-0433 concerning evince and came to the conclusion that those had already been fixed in the upload 2.30.3-2+squeeze1. I marked them as fixed.
  • I dropped graphicsmagick from dla-needed.txt because the only CVE affecting it had been marked as no-dsa (meaning that we don’t consider that a security update is needed, usually because the problem is minor and/or fixing it is more likely to introduce a regression than to help).
  • I filed a few bugs when those were missing: #762789 on ppp, #762444 on axis.
  • I marked a bunch of CVE concerning qemu-kvm and xen as end-of-life in Squeeze since those packages are not currently supported in Debian LTS.
  • I reviewed CVE-2012-3541 and since the whole report is not very clear I mailed the upstream author. This discussion led me to mark the bug as no-dsa as the impact seems to be limited to some information disclosure. I invited the upstream author to continue the discussion on RedHat’s bugzilla entry.

And when I say “I reviewed” it’s a simplification for this kind of process:

  • Look for a clear explanation of the security issue, for a list of vulnerable versions, and for patches for the versions we have in Debian in the following places:
    • The Debian security tracker CVE page.
    • The associated Debian bug tracker entry (if any).
    • The description of the CVE on cve.mitre.org and the pages linked from there.
    • RedHat’s bugzilla entry for the CVE (which often implies downloading the source RPM from CentOS to extract the patch they used; see the sketch after this list).
    • The upstream git repository and sometimes the dedicated security pages on the upstream website.
  • When that was not enough to be conclusive for the version we have in Debian (and unfortunately, it’s often the case), download the Debian source package and look at the source code to verify if the problematic code (assuming that we can identify it based on the patch we have for newer versions) is also present in the old version that we are shipping.
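
As an aside, the “download the source RPM from CentOS to extract the patch” step mentioned in the list above usually boils down to something like the following (the package file name is only a placeholder):

# fetch the source RPM from a CentOS mirror or vault, then unpack it in an empty directory;
# the vendor patches end up as *.patch files next to the .spec file
rpm2cpio somepackage-1.2-3.el6.src.rpm | cpio -idmv
ls *.patch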

CVE triaging is often almost half the work in the general process: once you know that you are affected and that you have a patch, the process to release an update is relatively straightforward (sometimes there’s still work to do to backport the patch).

Once I was over that first pass of triaging, I had already spent more than the 11 paid hours, but I still took care of preparing the security update for python-django. Thorsten Alteholz had started the work but got stuck in the process of backporting the patches. Since I’m co-maintainer of the package, I took over and finished the work to release it as DLA-65-1.


by Raphaël Hertzog at September 30, 2014 01:24 PM

Server Density

What’s new in Server Density – Summer 2014

We’ve been a bit quiet over the last few months but have still been working on improvements and new functionality to our server and website monitoring product, Server Density. This post summarises what we added over the summer and what’s coming up soon.

Log search beta

One of the first things you do when responding to an alert or tracking down performance problems is look at the server logs. Current log management products are expensive and complex to set up, so we’re pleased to announce the beta of our log search functionality.

Log search uses the existing Server Density agent to tail your logs and make them searchable from within your account. There’s a new dedicated search view so you can search by device, or you can view the logs from individual device views. Later, logs will automatically be displayed as part of a new, upcoming alert incident view.

If you’re interested in trying this out, please fill out this short form to get into the beta.

Server Density log search

Google Cloud integration

We released our integration into Google Cloud and Google Compute Engine which allows you to manage your instances and get alerts on instance and disk state changes. You can also sign up for $500 in free Google Compute Engine credits.

Google Cloud graphs

Snapshots

Click on any data point on your device graphs and then click the Snapshot link, and it will take you through to a view of what was happening on that server at that exact point in time. You can also click the Snapshot tab to go to the latest snapshot and then navigate backwards and forward through each time point.

Server snapshot

Linux agent 1.13.4

A number of fixes have been released as part of the latest Linux agent release, including better handling of plugin exceptions and more standards compliance for init scripts. MongoDB over SSL is also now supported. See the release notes.

Chef cookbook improvements

There are a range of improvements to the official Chef cookbook which include better support for EC2 and Google auto scaling and support for managing plugins through Chef. This is available on the Chef Supermarket and has had almost 100,000 downloads in the last 10 days.

Puppet module improvements

The official Puppet module has also had improvements to make it work better with Google Cloud. It is also available on the Puppet Forge.

App performance improvements

A lot of work has been done behind the scenes to improve the performance of the product generally. This ranges from optimising requests and connections in the UI and upgrading the hardware powering the service to moving all our assets onto a CDN. We have a few more improvements still to release but this all goes towards our goal of having response times as close to instantaneous as possible.

Onboarding and help popups

We replaced our old app tour with new in-app popup bubbles to help you learn more about functionality. Blank slates have been redesigned and we have more improvements to help show off some of the great functionality coming soon.

How to monitor xyz

We’re running a series of free webinars through Google Hangouts to cover how to monitor a range of different technologies. We started with MongoDB but our next upcoming hangout will be on how to monitor Nginx. Many more hangouts will be scheduled over the next few months and you can watch them back through our Youtube channel.

Redesigned multi factor authentication setup

The flow for setting up a new multi factor token has been redesigned to make it clearer how to proceed. We highly recommend enabling this for extra security – passwords are no longer enough!

Enable MFA

Improved cloud actions menu

Actions taken within Server Density are separated from actions taken on the Cloud Provider level to ensure commands aren’t sent accidentally.

Cloud actions

Delete confirmations

Previously it was too easy to trigger the delete actions, which could lead to accidentally deleting a device. We’ve improved the confirmation requirements for this.

delete

Auto refreshing graphs

All graphs, on the device overview and on the dashboard, now auto refresh so you can keep the window open and see the data show up immediately.

What’s coming next?

We’ll be returning to our monthly post schedule for “What’s new” as we start releasing some of the things we’ve been working on over the last few months. This includes permissions and a range of new alerting functionality, starting with tag based alerts and group recipients. Lots of interesting new functionality to be announced before the end of the year!

The post What’s new in Server Density – Summer 2014 appeared first on Server Density Blog.

by David Mytton at September 30, 2014 11:00 AM

Ferry Boender

Work around insufficient remote permissions when SCPing

Here's a problem I often run into:

  • I need to copy files from a remote system to my local system.
  • I have root access to the remote system via sudo or su, but not directly via SSH.
  • I don't have enough permissions to read the remote files as a normal user; I need to be root.
  • There isn't enough space to copy the files to a temp dir and change their ownership.

One solution is to use sudo tar remotely and output the tar file on stdout:

fboender@local$ ssh fboender@example.com "sudo tar -vczf - /root/foo" > foo.tar.gz

This relies on the remote host allowing X11 forwarding though, and you have to have an SSH askpass program installed. Half of the time, I can't get this to work properly.

An easier solution is to build a reverse remote tunnel:

fboender@local$ ssh -R 19999:localhost:22 fboender@example.com

This maps the remote port 19999 on example.com to my local port 22. That means I can now access the SSH server running locally from the remote server by SSHing to port 19999. For example:

fboender@example.com$ scp -P 19999 -r /root/foo fboender@127.0.0.1:
Password: 

There you go. Easy as pie.
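
One caveat: since the whole premise is that only root can read the files, the scp on the remote side will usually need to go through sudo as well. Something along these lines should do it (an untested sketch, reusing the hostnames and paths from the example above):

fboender@example.com$ sudo scp -P 19999 -r /root/foo fboender@127.0.0.1: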

by admin at September 30, 2014 09:35 AM

Chris Siebenmann

Don't split up error messages in your source code

Every so often, developers come up with really clever ways to frustrate system administrators and other people who want to go look at their code to diagnose problems. The one that I ran into today looks like this:

if (rval != IDM_STATUS_SUCCESS) {
	cmn_err(CE_NOTE, "iscsi connection(%u) unable to "
	    "connect to target %s", icp->conn_oid,
	    icp->conn_sess->sess_name);
	idm_conn_rele(icp->conn_ic);
}

In the name of keeping the source lines under 80 characters wide, the developer here has split the error message into two parts, using modern C's constant string concatenation to have the compiler put them back together.

Perhaps it is not obvious why this is at least really annoying. Suppose that you start with the following error message in your logs:

iscsi connection(60) unable to connect to target <tgtname>

You (the bystander, who is not a developer) would like to find the code that produces this error message, so that you can understand the surrounding context. If this error message was on one line in the code, it would be very easy to search for; even if you need to wild-card some stuff with grep, the core string 'unable to connect to target' ought to be both relatively unique and easy to find. But because the message has been split onto multiple source lines, it's not; your initial search will fail. In fact a lot of substrings will fail to find the correct source of this message (eg 'unable to connect'). You're left to search for various substrings of the message, hoping both that they are unique enough that you are not going to be drowned in hits and that you have correctly guessed how the developer decided to split things up or parameterize their message.

(I don't blame developers for parameterizing their messages, but it does make searching for them in the code much harder. Clearly some parts of this message are generated on the fly, but are 'connect' or 'target' among them instead of being a constant part of the message? You don't know and have to guess. 'Unable to <X> to <Y> <Z>' is not necessarily an irrational message format string, or you equally might guess 'unable to <X> to target <Z>'.)
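
To make the failure concrete, here is roughly what the search session looks like (the source tree path is just a placeholder):

# the message as it appears in the logs never occurs on a single source line,
# so the obvious search comes up empty:
$ grep -rn 'unable to connect to target' src/
$
# only a substring that happens to fall entirely inside one of the string
# fragments will hit, and you have to guess which fragments those are:
$ grep -rn 'unable to ' src/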

The developers doing this are not making life impossible for people, of course. But they are making it harder and I wish they wouldn't. Being able to find things in source code with common tools is worth the long lines.

(Messages aren't the only example of this, of course, just the one that got to me today.)

by cks at September 30, 2014 04:34 AM

September 29, 2014

Mark Shuttleworth

Cloud Foundry for the Ubuntu community?

Quick question – we have Cloud Foundry in private beta now. Is there anyone in the Ubuntu community who would like to use a Cloud Foundry instance if we were to operate that for Ubuntu members?

by mark at September 29, 2014 03:17 PM

Limn This

Velocity Conference talk - DevOps and Intentional Emergence

I'm going to do some more writing on this topic but in the meantime, I've posted the slides from my recent Velocity Conference talk:

 

The main point of this talk is that the information age is having as large an impact on corporate organization as the industrial age did, and that the organizing principles of that era (e.g. bureaucracy) don't translate well into this one. 

The web grew up in the information age though, and therefore the way web companies work and organize might make a useful model for the corporate enterprise in transition. The web has been emergent from the beginning; for the corporate enterprise to become emergent faster, it's useful to make emergence a goal and to pursue it with intention.

by Jim Stogdill at September 29, 2014 01:51 PM

Aaron Johnson

WHAT I DID THIS WEEKEND: 09/28/2014

  • Train into London on Saturday, then tubed it to St. Paul’s Cathedral, found a geocache at the top of the dome, lunch at a burger place next door and then tubed it to the Diana Memorial Playground or as the dudes call it, The Pirate Ship playground.
  • Got tickets for the Reading vs Wolverhampton Wanderers football match for all of us. Got to the pitch early… walked in the gate… went to our seats. Got settled. Went to get food. Cash only. No ATM’s inside the building. Tried to get out… can’t get out until halftime. Went out a bit after halftime to get ice cream for the little dudes (only way to convince them to attend a game, promise ice cream). Vendors inside the stadium have no ice cream (????), walked outside… ice cream that was outside is now gone. I miss a goal. Walk back inside cursing England, watch rest of game. Home team scores in the 88th minute to tie, fun! Takes 1 hour to get out of the stadium, no fun. Pub on the way home. Beer. Find 2 geocaches (1,2) with the boys on a cool woodlands trail right next to the pub. All is good in the world.
  • No long run this weekend because my left calf hurts, not sure what I did to it last week, probably need to incorporate stretching into the workouts.

by ajohnson at September 29, 2014 10:29 AM

Ubuntu Geek

Ubuntu 14.10 (Utopic Unicorn) Final Beta released

The Ubuntu team is pleased to announce the final beta release of Ubuntu 14.10 Desktop, Server, Cloud, and Core products.

Codenamed "Utopic Unicorn", 14.10 continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.
(...)
Read the rest of Ubuntu 14.10 (Utopic Unicorn) Final Beta released (107 words)



by ruchi at September 29, 2014 07:46 AM

Chris Siebenmann

Learning a lesson about spam-related logging (again)

I recently mentioned some stats about how many clients do TLS with my sinkhole SMTP server. Today I was planning to present some broader stats from that server, showing how many clients made it to various points in the SMTP conversation with it. Then, unfortunately, I discovered that I'd shot myself in the foot as far as gathering this sort of stats was concerned.

(If I was running a pure 'accept everything' SMTP server these should be pretty boring stats. But as it happens I'm not; instead I'm doing various things to get rid of uninteresting or overly noisy connections and uninteresting spam.)

My sinkhole SMTP server currently has two logs: a SMTP command log (with additional notations for events like connections, TLS negotiations, and so on) and a message log (which logs one line per message that made it all the way through to fully receiving the DATA payload). What I would like to do is generate stats on things like how many connections there were, how many of them made it as far as an EHLO, how many made it to MAIL FROM, and so on. When I started I thought that I could just grep my SMTP command log and count how many hits I got on various things.

Well, no, not so fast. Take EHLO, for example; a proper client that successfully negotiates TLS will issue two EHLO commands. A related issue happens with, say, RCPT TO commands; because many clients pipeline their input, they may well send a RCPT TO even though their MAIL FROM failed. Bogus clients may equally send a MAIL FROM even if their EHLO failed (or may try MAIL FROM without even EHLOing first).

Given pure SMTP logs there are two ways to fix this. The first would be to have a unique and distinct 'success' reply (or message) for every different SMTP command; then I could search for the successful replies instead of the commands being issued. The other would be to use a process which reconstructs the state of each SMTP conversation so it can tell successful commands from failed ones, suppress post-TLS EHLOs, and so on. You could even have the latter option spit out a data record for each conversation with all of this per-conversation information. Unfortunately I do not have the former (successful SMTP reply messages are both duplicated and varied) and creating something to do the latter is too much work for now.

What this really points out to me is that I should have thought more about what sort of information I might want when designing my server's logging. In theory I knew from prior experience with Exim that raw logs could make it too complicated to generate interesting stats, but in practice it never occurred to me that I might be doing this to myself.

(Applying this to what gets logged and so on by our real mail system is left as something for me to think about, but this time around I do want to think about it and see if I can improve Exim's logging or otherwise do anything interesting.)

by cks at September 29, 2014 02:52 AM

September 28, 2014

Ubuntu Geek

Httpry – HTTP logging and information retrieval tool

httpry is a tool, written in C, designed for displaying and logging HTTP traffic. It parses traffic, online and offline, in an easy to read format. Daemonization is also supported for long-term logging. Before I discovered httpry I was parsing HTTP traffic from pcaps with ngrep and lots of ugly sed and awk to make my results readable. Doing this wasn’t very pretty and tended to consume more time than I would have liked; with httpry, none of that extra work is needed.
(...)
Read the rest of Httpry – HTTP logging and information retrieval tool (64 words)



by ruchi at September 28, 2014 11:25 PM

Rands in Repose

Put the Laptop Away

Clay Shirky on Medium:

People often start multi-tasking because they believe it will help them get more done. Those gains never materialize; instead, efficiency is degraded. However, it provides emotional gratification as a side-effect. (Multi-tasking moves the pleasure of procrastination inside the period of work.) This side-effect is enough to keep people committed to multi-tasking despite worsening the very thing they set out to improve.

#

by rands at September 28, 2014 06:49 PM

Server Density

Chris Siebenmann

Changing major version numbers does not fix compatibility issues

So in the wake of the Bash vulnerability I was reading this Errata Security entry on Bash's code (via an @0xabad1dea retweet) and I came across this:

So now that we know what's wrong, how do we fix it? The answer is to clean up the technical debt, to go through the code and make systematic changes to bring it up to 2014 standards.

This will fix a lot of bugs, but it will break existing shell-scripts that depend upon those bugs. That's not a problem -- that's what upping the major version number is for. [...]

There are several issues here, but let me start with the last statement. This statement is wrong, and let me say it plainly: changing your major version number does not fix compatibility problems. Changing your version number is not some magic get out of problems card that makes your new version compatible with your old version; regardless of its version number it remains just as incompatible with the old version as before. The purpose of changing version numbers is to warn people of potential problems, and by extension to reassure them that there won't be problems for minor version changes (which makes them more likely to adopt those changes).

If you introduce breaking changes in Bash or in any program, things that people have today will break. That is what breaking changes mean. People will still have to either fix them or stay with the old version of the program and yes, the latter really is an option whether or not you like it (it is especially an option if you are proposing changes in something that is not actually your program). When you introduce breaking changes you are asking people to do extra work, which they do not enjoy in the least; generally the more breaking changes you introduce the more work you create. If you want your changes to be accepted, you need to be very sure that what you are offering is worth that work. Otherwise you are going to get an unpleasant surprise.

Changing the major version number does absolutely nothing to change this dynamic and thus it does nothing at all to make breaking changes less of a problem or less of an issue. You cannot hand wave away the cost of breaking changes this way, no matter how much programmers would love to do so and how this 'we'll just change major versions' thing is an ever-popular idea in languages, libraries, systems, commercial software, tools, and so on.

(Note that being left with no real option but to do a bunch of work also drastically lowers the cost of moving to some other alternative; you no longer have the difference between 'do nothing' and 'do the migration', now it is just the difference between 'fix the existing stuff' and 'do the migration'.)

by cks at September 28, 2014 03:40 AM

September 27, 2014

Raymii.org

Firefox History stats with Bash

This is a small script to gather some statistics from your Firefox history. First we use sqlite3 to parse the Firefox history database and get the last three months, then we remove all the IP addresses and port numbers and finally we sort and count it.
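
As a rough idea of the approach described above, a minimal sketch could look something like this (the profile path and the exact filtering are assumptions on my part, not the original script):

# copy places.sqlite elsewhere first if Firefox is running (it locks the database);
# then: last three months of URLs -> host part -> strip ports -> drop bare IPs -> count
sqlite3 ~/.mozilla/firefox/*.default/places.sqlite \
    "SELECT url FROM moz_places
     WHERE last_visit_date > strftime('%s','now','-3 months') * 1000000;" \
    | awk -F/ '{print $3}' \
    | sed 's/:[0-9]*$//' \
    | grep -vE '^[0-9.]+$' \
    | sort | uniq -c | sort -rn | head -25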

September 27, 2014 09:28 PM

Aaron Johnson

Chris Siebenmann

DWiki, Python 3, Python, and me

A while back I tweeted:

Programming in #golang remains fun. I'm not sure if this is true for me for Python any more, but maybe I need the right project.

One of the problems for me with Python programming is that I kind of have a millstone and this millstone intersects badly with Python 3, which I kind of want to be using.

I have a number of Python projects, both work and personal. The stuff for work is not moving to Python 3, in significant part because most of our systems don't have good versions of Python 3 (or sometimes any versions of it). Most of my personal Python projects are inactive (eg), generally because I don't have much use for them any more. The personal project that is the exception is DWiki, the software behind Wandering Thoughts.

Unfortunately DWiki's source code is kind of a mess and as a result DWiki itself is sort of a millstone. DWiki has grown far larger than I initially imagined it ever would be and I didn't do a great job of designing it from the start (partly because I did not really understand many of the problems I was dealing with when I started writing it, which resulted in some core design missteps, and partly because it changed directions during development). The code has wound up being old and tangled and not very well commented. One of the consequences of this is that making any changes at all takes a surprising amount of work, partly just to recover my original understanding of the code, and as a result I need to have a lot of energy and enthusiasm to actually start trying to make any change.

(For instance, I've wanted to add entry tags to DWiki for a long time and I even have a strawman design in my head. What I haven't had so far is enough time and energy to propel me to dive into the code and get it done. And partly this is because the idea of working on the mess of DWiki's code just plain saps my enthusiasm.)

DWiki is currently a Python 2 program. I expect that moving it to Python 3 would take a fair amount of work and a lot of mucking around in the depths of its code (and then a bunch more work to make it use any relevant exciting Python 3 features). In fact the very idea of attempting the conversion is extremely daunting. But at the same time DWiki is the only Python program I'm likely to work on any time soon and the only one that is really important to carry forward to a Python 3 future (because it's the one program I expect to be running for a long time).

(Of course DWiki has no tests as such, especially unit tests. My approach for testing important changes is scripting to render all pages of CSpace in an old and a new code version and then compare the rendered output.)

So there my millstone sits, sapping my enthusiasm for dealing with it and by extension my enthusiasm for Python 3. I would be reasonably happy to magically have a Python 3 version of DWiki and I'm sure it would prompt me to dive into Python 3 in a fairly big way, but I can't see how I actually get to that future. Life would be different if I could see a way that Python 3 would be a really big win for DWiki (such as significantly speeding it up or allowing me to drastically simplify chunks of code), but I don't believe that (and I know that Python 3 will bring complications).

(Life would also be different if DWiki didn't work very well for some reason (or needed maintenance) and I clearly needed to do something to it. But the truth is it works pretty well as it is. It's just missing wishlist items, such as tags and running under Python 3.)

PS: on the other hand, if I keep thinking and talking about DWiki and Python 3, maybe I'll talk myself into trying a conversion just to see how difficult it really is. The idea has a certain perverse attraction.

Sidebar: Why a major rewrite is not the answer

At this point some people will urge me to gut major portions of the current code (or all of it) and rebuild it from scratch, better and cleaner and so on. The simple answer about this is that if I was going to redo DWiki from more or less scratch (which has a number of attractions), I don't see why I'd do it in Python 3 instead of in Go. Programming in Python 3 would likely be at least somewhat faster than in Go but I don't think it would be a massive difference, while the Go version would almost certainly run faster and it would clearly have a much simpler deployment story.

So why not go ahead and rewrite DWiki in Go? Because I don't want to do that much work, especially since DWiki works today and I don't think I'd gain any really major wins from a rewrite (I've pretty much scrubbed away all of DWiki's pain points for day to day usage already).

by cks at September 27, 2014 04:11 AM

September 26, 2014

Google Blog

Through the Google lens: search trends September 19-25

Spoiler alert! Those of you not caught up with Scandal might want to skim this one. -Ed.

This week, searchers learned how to get away with murder—and how not to get away with public criticism of prominent figures with important business relationships with your employer.

Shonda, Shonda, Shonda
TV fans, rejoice! This week brought premiere episodes for old favorite shows as well as hotly anticipated new ones. Top returning shows on search include CBS’s The Big Bang Theory (natch), and NBC’s The Blacklist and Chicago Fire. New shows that shot up the search ratings include Batman prequel Gotham and new family comedy black-ish.

But premieres week really came to a head on Thursday night, which we prefer to call the Night of Shonda. Producer Shonda Rimes has got ABC’s lineup locked up with Scandal, Grey’s Anatomy (in its final season this year) and the new How To Get Away With Murder, starring Academy Award-nominee Viola Davis. All three shows were in the top 10 hot searches the day of their premiere. True to form, Scandal’s season 4 debut left people with more questions than answers. Here's a sampling (spoiler alert!) of what searchers were asking during the show:
The end of an era
Derek Jeter first took the field as a New York Yankee in May 1995. Five World Series, more than 3,000 hits and nearly 20 years later, this weekend he will take to the diamond for a final game at Fenway against his archrivals, the Boston Red Sox. Though neither the Yankees nor the Sox made this season’s playoffs, anticipation for Jeter’s farewell at-bat was already high. But last night, after giving baseball fans so many memorable moments over the years, he gave us one more. In his final game at Yankee Stadium, Jeter’s ninth-inning walk-off single gave the Yankees a win over the Orioles, provided the world another excuse to search for the star shortstop, and was a fitting ending to Jeter’s fairy-tale career.

Over on the political field, Attorney General Eric Holder announced on Thursday that he is stepping down. Holder will leave behind a large and sometimes complicated legacy on issues including same-sex marriage, voting rights, criminal justice, national security and government secrecy. He’ll go down in history as the fourth longest-serving and first black AG.
NFL in the news
The NFL continues to be in the news for more than just its games. First, NFL commissioner Roger Goodell gave a press conference on Friday addressing the league’s issues with domestic violence. Then, on Monday, prominent sportswriter Bill Simmons was suspended for three weeks by ESPN after he called Goodell a liar in his podcast “The B.S. Report.” Simmons is prohibited from tweeting or other public communications until October 15, but Sports Guy supporters took to the web on his behalf, fighting to #FreeSimmons. Finally, this week’s season premiere of South Park featured a malfunctioning “GoodellBot” and a plotline about the controversy over Washington’s team name.

Happy 5775
Shana Tova! That’s what a lot of people were saying (and searching) as people worldwide dipped apples in honey and celebrated Rosh Hashanah, the Jewish New Year. The holiday was the fourth hottest search trend on Wednesday.

Tip of the week
Google can help you get a good deal on your next airplane ticket. When the price drops on a flight you’ve been researching on Flight Search, you may see a Now card letting you know. Just tap the card to quickly and easily book your trip. This works on the latest version of the Google app on Android in the U.S.

by Emily Wood (noreply@blogger.com) at September 26, 2014 03:39 PM

SysAdmin1138

Redundancy in the Cloud

Strange as it might be to contemplate, imagine what would happen if AWS went into receivership and was shut down to liquidate assets. What would that mean for your infrastructure? Project? Or even startup?

It would be pretty bad.

Startups have been deploying preferentially on AWS or other Cloud services for some time now, in part due to a venture-capitalist push to not have physical infrastructure to liquidate should the startup go *pop* and to scale fast should a much desired rocket-launch happen. If AWS shut down fully for, say, a week, the impact to pretty much everything would be tremendous.

Or what if it was Azure? Fully debilitating for those that are on it, but the wide impacts would be less.

Cloud vendors are big things. In the old physical days we used to deal with the all-our-eggs-in-one-basket problem by putting eggs in multiple places. If you're on AWS, Amazon is very big about making sure you deploy across multiple Availability Zones and helping you become multi-region in the process if that's important to you. See? More than one basket for your eggs. I have to presume Azure and the others are similar, since I haven't used them.

Do you put your product on multiple cloud-vendors as your more-than-one-basket approach?

It isn't as easy as it was with datacenters, that's for sure.

This approach can work if you treat the Cloud vendors as nothing but Virtualization and block-storage vendors. The multiple-datacenter approach worked in large part because colos sell only a few things that impact the technology (power, space, network connectivity, physical access controls), though pricing and policies may differ wildly. Cloud vendors are not like that, they differentiate in areas that are technically relevant.

Do you deploy your own MySQL servers, or do you use RDS?
Do you deploy your own MongoDB servers, or do you use DynamoDB?
Do you deploy your own CDN, or do you use CloudFront?
Do you deploy your own Redis group, or do you use SQS?
Do you deploy your own Chef, or do you use OpsWorks?

The deeper down the hole of Managed Services you dive, and Amazon is very invested in pushing people to use them, the harder it is to take your toys and go elsewhere. Or run your toys on multiple Cloud infrastructures. Azure and the other vendors are building up their own managed service offerings because AWS is successfully differentiating from everyone else by having the widest offering. The end-game here is to have enough managed services offerings that virtual private servers don't need to be used at all.

Deploying your product on multiple cloud vendors requires either eschewing managed-services entirely, or accepting greater management overhead due to very significant differences in how certain parts of your stack are managed. Cloud vendors are very much Infrastructure-as-Code, and deploying on both AWS and Azure is like deploying the same application in Java and .NET; it takes a lot of work, the dialect differences can be insurmountable, and the expertise required means different people are going to be working on each environment which creates organizational challenges. Deploying on multiple cloud-vendors is far harder than deploying in multiple physical datacenters, and this is very much intentional.

It can be done, it just takes drive.

  • New features will be deployed on one infrastructure before the others, and the others will follow on as the integration teams figure out how to port it.
  • Some features may only ever live on one infrastructure as they're not deemed important enough to go to all of the effort to port to another infrastructure. Even if policy says everything must be multi-infrastructure, because that's how people work.
  • The extra overhead of running in multiple infrastructures is guaranteed to become a target during cost-cutting drives.

The ChannelRegister article's assertion that AWS is now in "too big to fail" territory, and thus would require governmental support to prevent wide-spread industry collapse, is a reasonable one. It just plain costs too much to plan for that kind of disaster in corporate disaster-response planning.

by SysAdmin1138 at September 26, 2014 12:21 PM

Chris Siebenmann

The practical problems with simple web apps that work as HTTP servers

These days there are a number of languages and environments with relatively simple to learn frameworks for doing web activity (I am deliberately phrasing that broadly). Both node and Go have such things, for example, and often make a big deal of it.

(I know that 'let's do some web stuff' is a popular Go tutorial topic to show how easy it is.)

All of this makes it sound like these should be good alternatives to the CGI problem (especially with their collections of modules and packages and so on). Unfortunately this is not the case in default usage and one significant part of why not is exactly that these systems are pretty much set up to be direct HTTP servers out of the box.

Being a direct HTTP server is a marvelously simple approach for a web app if and only if you're the only thing running on the web server. If you have a single purpose web server that exists merely for your one web application, it's great that you can expose the web app directly (and in simple internal setups you don't particularly need the services of Apache or nginx or lighttpd or the like). But this single purpose web server is very rarely the case for simple CGI-level things. Far more common is a setup where you have a whole collection of both static pages and various simple web applications aggregated together under one web server.

(In general I feel that anything big enough for its own server is too big to be sensible as a 'simple CGI'. Good simple CGI problems are small almost by definition.)

If you try hard you can still make this a single server in Go or node but you're going to wind up with kind of a mess where you have several different projects glued together, all sharing the same environment with each other (and then there are the static files to serve). If the projects are spread across several people, things will get even more fun. Everything in one bucket is simply not a good engineering answer here. So you need to separate things out, and any way you do that makes more and more work.

If you separate things out as separate web servers, you need multiple IPs (even if you embody things on the same host) and multiple names to go with them, which are going to be visible to your users. If you separate things out with a frontend web server and reverse proxying, all of your simple web apps have to be written to deal with this (and with the various issues involved in using HTTP as a transport). Both complicate your life, eroding some of the theoretical simplicity you're supposed to get.

(However Go does have a FastCGI package (as well as a CGI package, but then you're back to CGI), apparently with an API that's a drop in replacement for the native Go HTTP server. It appears that node has at least a FastCGI module that's said to be a relatively drop in replacement for its http module. FastCGI does leave you with the general problems of needing daemons, though.)

PS: I'm handwaving the potentially significant difference in programming models between CGI's 'no persistent state between requests' and the shared context web app model of 'all the persistent state you want (or forget to scrub)'. I will note that the former is much simpler and more forgiving than the latter, even in garbage collected environments such as Go and Node.

Sidebar: the general issues with daemons

Although it is not specific to systems that want to be direct HTTP servers, the other problem with any sort of separate process model for simple web apps is exactly that it involves separate processes for each app. Separate processes mean that you've added more daemons to be configured, started, monitored and eventually restarted. Also, those daemons will be sitting there consuming resources on your host even if their app is completely unused at the moment.

You can make this easy if you try hard. But today it involves crafting a significant amount of automation because pretty much no out of the box Unix system is designed for this sort of operation. Building this automation is a not insignificant setup cost for your 'simple' web apps (well, for your first few).

(If you try really hard and have the right programming model you can get apps to be started on demand and stopped when the demand goes away, but this actively requires extra work and complexity in your systems and so on.)

by cks at September 26, 2014 07:10 AM

September 25, 2014

Ubuntu Geek

Timekpr – Keep control of computer usage

This program will track and control the computer usage of your user accounts. You can limit their daily usage based on a timed access duration and configure periods of day when they can or cannot log in. With this application, administrators can limit account login time duration or account access hours.
(...)
Read the rest of Timekpr – Keep control of computer usage (58 words)



by ruchi at September 25, 2014 11:29 PM

Evaggelos Balaskas

postgres nightmare

Some time in the last week, the iSCSI volume of one of our PostgreSQL servers went up to 98% full and nagios vomited on the standby mobile.

The specific postgres database holds customer’s preferences related to our webmail.

Unfortunately the webmail is a Java web app (Tomcat) - custom written by some company - and the source code is a spaghetti mess. The code also has a gazillion bugs, so we took a decision to migrate to an open-source PHP-based webmail. Hopefully in the near future we will officially migrate to the new webmail platform and all problems known to humanity will cease to exist.

Till that time, we have to maintain the current webmail platform and figure out how a ~500MB database has become a nearly ~50GB nightmare!

My knowledge of databases is not basic, but to be fair I lack experience. As a veteran standby engineer I know that I need to apply a quick & dirty patch and investigate afterwards. Also I am not afraid to ask for help! And so I did.

First thing to do: increase the volume on the storage machine. I’ve said already that we are using an iSCSI partition, so that action sounds pointless. In fact - no it isn’t!!! The storage machine reserves a percentage of storage for snapshots, and the increase gave us a little space to breathe as the snapshots were “eating” space from the actual volume! You are probably thinking that we should resize the partition - but this is a live production machine and we don’t want downtime on the service (umount/resize/mount).

From 98% to 93% with only one command.

Second, and most popular, thing to do was VACUUM. A colleague took that step and tried to VACUUM each table separately so as not to “lock” anything or provoke the daemon into a crash or even worse. That brought us down to 88% used and gave us time to think before we act again.

For all you people that don’t know Postgres: deleting rows removes them from the database but doesn’t actually free the space on storage. So you need to enable autovacuum or run VACUUM by hand from time to time.

Of course before everything else (or even vacuum) we took a pg_dump to another partition.
But pg_dump was taking hours and hours to complete.

After further investigation, we found a table that pg_dump was getting difficult with.

Fired up a new database and tried to restore this table there.
I couldn’t: there was a duplicate-key error and the restoration process kept failing.

I tried to figure out the duplicate entries. 20 entries! The table has only four columns and ~50,000 rows, and only 20 of them were duplicates. The amount of data is ~20MB in size. I looked through the entries, removed the duplicates by hand, re-indexed the specific table, and an hour later over 20GB were free. Down to 44% from 98% by deleting 20 entries.

At that point I was thinking that Postgres is mocking me. How the hell had a ~20MB table gone over 20GB?
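
In a situation like this, a quick way to see where the space actually went is to ask the catalogs for the biggest relations. A hedged sketch (the database name is a placeholder; pg_total_relation_size() and pg_size_pretty() should exist on 8.1, but double-check on a release that old):

psql -d webmail -c "
  SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total
    FROM pg_class
   WHERE relkind = 'r'
   ORDER BY pg_total_relation_size(oid) DESC
   LIMIT 10;"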

Now pg_dump is taking 6.5 minutes - but is still taking a long time to dump this specific table.

Tomorrow is a new day to experiment with PostgreSQL

[edit1]: Just to be fair, postgres version is 8.1
[edit2]: The VACUUM process just finished. Another 20GB free!!! So in total, those 20 duplicate entries accounted for 40GB of freed disk! We are now at 9% used, down from 98%.

PS: We have already discussed a lot of plans (upgrade the Postgres version, restore the dump to a new machine, etc.) in our department, but we have decided not to focus on any of them (yet) as we haven’t found the trigger that fired the database up from 500MB to 50GB. After that, all plans are on the table.

Tag(s): postgres

September 25, 2014 08:28 PM

Matt Brock

Monitoring PERC RAID controllers and storage arrays on Dell PowerEdge servers with Debian and Ubuntu

If you have a Dell PowerEdge server with a RAID array then you'll probably want to be notified when disks are misbehaving, so that you can replace the disks in a timely manner. Hopefully this article will help you to achieve this.

These tools generally rely on being able to send you email alerts otherwise their usefulness can be somewhat limited, so you should make sure you have a functioning MTA installed which can successfully send email to you from the root account. Setting up an MTA is beyond the scope of this article, so hopefully you already know how to do that.

Monitoring with SMART

Firstly, it's probably worth installing smartmontools to perform SMART monitoring on the storage array (though, to be honest, in all my years of sysadmin I've still yet to receive a single alert from smartd before a disk failure... but anyway).

apt-get install smartmontools

Comment out everything in /etc/smartd.conf and add a line akin to the following:

/dev/sda -d megaraid,0 -a -m foo@bar.com

Replace /dev/sda if you're using a different device, and replace foo@bar.com with your email address. Restart smartd:

service smartd restart

If you check /var/log/syslog, you should see that the service has successfully started up and is monitoring your RAID controller. You should now in theory receive emails if smartd suspects impending disk failure (though don't bet on it).

If you want to get information about your controller, try this command:

smartctl -d megaraid,0 -a /dev/sda

That should show you hardware details, current operating temperatures, error information, etc.

Monitoring and querying the RAID controller

So, let's get down to the proper monitoring tools we really need. Dell's "PERC" RAID controllers are apparently supplied by LSI, and there's a utility which LSI produce called megacli for managing these cards, but it seems as if megacli is fairly unfriendly and unintuitive. However, I haven't needed to use megacli because I've had success using a friendlier tool called megactl. So, my instructions are for installing and using megactl, but if that doesn't work for you then you'll probably need to try and figure out how to install and use megacli, in which case I wish you the best of luck.

To install megactl, firstly follow these straightforward instructions to install the required repository on Debian or Ubuntu, then install the tools:

apt-get install megaraid-status

This will automatically start the necessary monitoring daemon and ensure that it gets started on boot. This daemon should send a root email if it detects any problems with the array.

The following command can be used to show the current state of the array and its disks, in detail:

megaraidsas-status

To get more information and instructions on megactl and related tools, this page is a good starting point.

Monitoring with Nagios

In order to get alerted via Nagios, it seems that the check_megasasctl plugin for Nagios will do the trick, so long as you have megactl installed as described above. I haven't actually tried this in Nagios myself yet, so I can't vouch for it.
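
For what it's worth, a Nagios setup for it would presumably look roughly like the following. The plugin path, host name and service template here are my assumptions, the plugin needs enough privileges to talk to the controller (e.g. via sudo), and for a remote host you would wrap it in NRPE or similar:

define command {
    command_name    check_megasasctl
    command_line    /usr/lib/nagios/plugins/check_megasasctl
}

define service {
    use                     generic-service
    host_name               poweredge01
    service_description     RAID array status
    check_command           check_megasasctl
}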

September 25, 2014 05:13 PM

Ferry Boender

How to REALLY test for Bash Shellshock (CVE-2014-6271)

Like always in a crisis, many things go wrong. Everybody starts chattering, and the signal-to-noise ratio deteriorates. I'll keep this brief.

There are a bunch of sites out there that are telling you how to test for the Bash Shellshock vulnerability. Many of the tests are WRONG:

# WROOOOOOOOOOOOOOOOONG
$ env x=’() { ;;}; echo vulnerable’ sh -c “echo this is a test”
syntax error near unexpected token `('

Spot the first problem! First of all, this uses the wrong kind of quotes. That syntax error is NOT an indication that your system isn't vulnerable. It's an indication that the blog you copied the instruction from doesn't understand what ASCII quotes are.

Now, spot the second problem! Which shell is this calling?? Is it bash? No, it's `sh`. So if `sh` isn't linked to bash, you get this:

# WROOOOOOOOOOOOOOOOOOOOOOOOOOOOOOONG
$ env x='() { ;;}; echo vulnerable' sh -c “echo this is a test”
sh: x: line 0: syntax error near unexpected token `;;'
sh: x: line 0: `x () { ;;}; echo vulnerable'
sh: error importing function definition for `x'
this: “echo: command not found

"Oh, great, we're not vulnerable", you think. But it's not executing bash at all, it's executing some other shell. Sloppy work.

Here's a way to actually test your system. BUT don't take my word for it, perhaps it is not right either:

# Perhaps correct:
$ env x='() { :;}; echo vulnerable' bash -c 'echo hello'
vulnerable
hello
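
If you want something you can paste without squinting at the output, a small wrapper along these lines should do it. It just greps for the injected string and assumes bash is on your PATH; don't take it as gospel either:

if env x='() { :;}; echo vulnerable' bash -c 'true' 2>/dev/null | grep -q vulnerable; then
    echo "This bash IS vulnerable to CVE-2014-6271"
else
    echo "This bash does not appear to be vulnerable to CVE-2014-6271"
fi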

 

 

by admin at September 25, 2014 03:37 PM

Yellow Bricks

It is all about choice


Over the last couple of years we’ve seen a major shift in the market towards the software-defined datacenter. This has resulted in many new products, features and solutions being brought to market. What struck me over the last couple of days, though, is that many of the articles I have read in the past 6 months (and written as well) were about hardware, and in many cases about the form factor or how it has changed. Also, there are the posts around hyper-converged vs traditional, or all-flash storage solutions vs server-side caching. Although we are moving towards a software-defined world, it seems that administrators / consultants / architects still very much live in the physical world. In many of these cases there even seems to be a certain prejudice when it comes to the various types of products and the form factor they come in; whether that is 2U vs blade, or software vs hardware, is beside the point.

When I look at discussions about whether a server-side caching solution is preferable to an all-flash array, which is just another form-factor discussion if you ask me, the only right answer that comes to mind is "it depends". It depends on what your business requirements are, what your budget is, whether there are constraints from an environmental perspective, the hardware life cycle, what your staff's expertise and knowledge is, and so on. It is impossible to provide a single answer and solution to all the problems out there. What I realized is that what the software-defined movement actually brought us is choice, and in many of these cases the form factor is just a tiny aspect of the total story. It seems to be important for many people though, maybe still an inheritance from the "server hugger" days when hardware was king? Those times are long gone if you ask me.

In some cases a server-side caching solution will be the perfect fit, for instance when ultra-low latency and use of the existing storage infrastructure are requirements. In other cases bringing in an all-flash array may make more sense, or a hyper-converged appliance could be the perfect fit for that particular use case. What is more important though is how these components will enable you to optimize your operations, how they will enable you to build that software-defined datacenter and help you meet the demands of the business. This is what you need to ask yourself when looking at these various solutions, and if there is no clear answer... there is plenty of choice out there, so stay open minded and go explore.

"It is all about choice" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at September 25, 2014 12:10 PM

Mark Shuttleworth

What Western media and politicians fail to mention about Iraq and Ukraine

Be careful of headlines, they appeal to our sense of the obvious and the familiar, they entrench rather than challenge established stereotypes and memes. What one doesn’t read about every day is usually more interesting than what’s in the headlines. And in the current round of global unease, what’s not being said – what we’ve failed to admit about our Western selves and our local allies – is central to the problems at hand.

Both Iraq and Ukraine, under Western tutelage, failed to create states which welcome diversity. Both Iraq and the Ukraine aggressively marginalised significant communities, with the full knowledge and in some cases support of their Western benefactors. And in both cases, those disenfranchised communities have rallied their cause into wars of aggression.

Reading the Western media one would think it’s clear who the aggressors are in both cases: Islamic State and Russia are “obvious bad actors” whose behaviour needs to be met with stern action. Russia clearly has no business arming rebels with guns they use irresponsibly to tragic effect, and the Islamic State are clearly “a barbaric, evil force”. If those gross simplifications, reinforced in the Western media, define our debate and discussion on the subject then we are destined to pursue some painful paths with little but frustration to show for the effort, and nasty thorns that fester indefinitely. If that sounds familiar it’s because yes, this is the same thing happening all over again. In a prior generation, only a decade ago, anger and frustration at 9/11 crowded out calm deliberation and a focus on the crimes in favour of shock and awe. Today, out of a lack of insight into the root cause of Ukrainian separatism and Islamic State’s attractiveness to a growing number across the Middle East and North Africa, we are about to compound our problems by slugging our way into a fight we should understand before we join.

This is in no way to say that the behaviour of Islamic State or Russia are acceptable in modern society. They are not. But we must take responsibility for our own behaviour first and foremost; time and history are the best judges of the behaviour of others.

In the case of the Ukraine, it’s important to know how miserable it has become for native Russian speakers born and raised in the Ukraine. People who have spent their entire lives as citizens of the Ukraine who happen to speak Russian at home, at work, in church and at social events have found themselves discriminated against by official decree from Kiev. Friends of mine with family in Odessa tell me that there have been systematic attempts to undermine and disenfranchise Russian speakers in the Ukraine. “You may not speak in your home language in this school”. “This market can only be conducted in Ukrainian, not Russian”. It’s important to appreciate that being a Russian speaker in Ukraine doesn’t necessarily mean one is not perfectly happy to be a Ukrainian. It just means that the Ukraine is a culturally diverse nation and has been throughout our lifetimes. This is a classic story of discrimination. Friends of mine who grew up in parts of Greece tell a similar story about the Macedonian culture being suppressed – schools being forced to punish Macedonian language spoken on the playground.

What we need to recognise is that countries – nations – political structures – which adopt ethnic and cultural purity as a central idea, are dangerous breeding grounds for dissent, revolt and violence. It matters not if the government in question is an ally or a foe. Those lines get drawn and redrawn all the time (witness the dance currently under way to recruit Kurdish and Iranian assistance in dealing with IS, who would have thought!) based on marriages of convenience and hot button issues of the day. Turning a blind eye to thuggery and stupidity on the part of your allies is just as bad as making sure you’re hanging with the cool kids on the playground even if it happens that they are thugs and bullies –  stupid and shameful short-sightedness.

In Iraq, the government installed and propped up with US money and materials (and the occasional slap on the back from Britain) took a pointedly sectarian approach to governance. People of particular religious communities were removed from positions of authority, disqualified from leadership, hunted and imprisoned and tortured. The US knew that leading figures in their Iraqi government were behaving in this way, but chose to continue supporting the government which protected these thugs because they were “our people”. That was a terrible mistake, because it is those very communities which have morphed into Islamic State.

The modern nation states we call Iraq and the Ukraine – both with borders drawn in our modern lifetimes – are intrinsically diverse, intrinsically complex, intrinsically multi-cultural parts of the world. We should know that a failure to create governments of that diversity, for that diversity, will result in murderous resentment. And yet, now that the lines for that resentment are drawn, we are quick to choose sides, precisely the wrong position to take.

What makes this so sad is that we know better and demand better for ourselves. The UK and the US are both countries who have diversity as a central tenet of their existence. Freedom of religion, freedom of expression, the right to a career and to leadership on the basis of competence rather than race or creed are major parts of our own identity. And yet we prop up states who take precisely the opposite approach, and wonder why they fail, again and again. We came to these values through blood and pain, we hold on to these values because we know first hand how miserable and how wasteful life becomes if we let human tribalism tear our communities apart. There are doors to universities in the UK on which have hung the bodies of religious dissidents, and we will never allow that to happen again at home, yet we prop up governments for whom that is the norm.

The Irish Troubles was a war nobody could win. It was resolved through dialogue. South African terrorism in the 80′s was a war nobody could win. It was resolved through dialogue and the establishment of a state for everybody. Time and time again, “terrorism” and “barbarism” are words used to describe fractious movements by secure, distant seats of power, and in most of those cases, allowing that language to dominate our thinking leads to wars that nobody can win.

Russia made a very grave error in arming Russian-speaking Ukrainian separatists. But unless the West holds Kiev to account for its governance, unless it demands an open society free of discrimination, the misery there will continue. IS will gain nothing but contempt from its demonstrations of murder – there is no glory in violence on the defenceless and the innocent – but unless the West bends its might to the establishment of societies in Syria and Iraq in which these religious groups are welcome and free to pursue their ambitions, murder will be the only outlet for their frustration. Politicians think they have a new “clean” way to exert force – drones and airstrikes without “boots on the ground”. Believe me, that’s false. Remote control warfare will come home to fester on our streets.

 

by mark at September 25, 2014 08:01 AM

Google Blog

You don’t know what you don’t know: How our unconscious minds undermine the workplace

When YouTube launched their video upload app for iOS, between 5 and 10 percent of videos uploaded by users were upside-down. Were people shooting videos incorrectly? No. Our early design was the problem. It was designed for right-handed users, but phones are usually rotated 180 degrees when held in left hands. Without realizing it, we’d created an app that worked best for our almost exclusively right-handed developer team.

This is just one example of how unconscious biases influence our actions every day, even when—by definition—we don’t notice them. These biases are shaped by our experiences and by cultural norms, and allow us to filter information and make quick decisions. We’ve evolved to trust our guts. But sometimes these mental shortcuts can lead us astray, especially when they cause us to misjudge people. In the workplace, for example, the halo effect can cause us to inflate performance ratings or in-group bias can lead us to overlook great talent.

Combatting our unconscious biases is hard, because they don’t feel wrong; they feel right. But it’s necessary to fight against bias in order to create a work environment that supports and encourages diverse perspectives and people. Not only is that the right thing to do, but without a diverse workforce, there’s a pretty good chance that our products—just like that early YouTube app—won’t work for everyone. That means we need to make the unconscious, conscious.

The first step is education; we need to help people identify and understand their biases so that they can start to combat them. So we developed a workshop, Unconscious Bias @ Work, in which more than 26,000 Googlers have taken part. And it’s made an impact: Participants were significantly more aware, had greater understanding, and were more motivated to overcome bias.

In addition to our workshop, we’re partnering with organizations like the Clayman Institute and the Ada Initiative to further research and awareness. We’re also taking action to ensure that the decisions we make at work—from promoting employees to marketing products—are objective and fair. Here are four ways we're working to reduce the influence of bias:

  • Gather facts. It’s hard to know you’re improving if you’re not measuring. We collect data on things like gender representation in our doodles and at our conferences.
  • Create a structure for making decisions. Define clear criteria to evaluate the merits of each option, and use them consistently. Using the same standards to evaluate all options can reduce bias. This is why we use structured interviews in hiring, applying the same selection and evaluation methods for all.
  • Be mindful of subtle cues. Who’s included and who’s excluded? In 2013, Googlers pointed out that of the dozens of conference rooms named after famous scientists, only a few were named after women. Was this our vision for the future? No. So we changed Ferdinand von Zeppelin to Florence Nightingale—along with many others—to create more balanced representation. Seemingly small changes can have big effects.
  • Foster awareness. Hold yourself—and your colleagues—accountable. We’re encouraging Googlers to call out bias. For example, we share a “bias busting checklist” at performance reviews, encouraging managers to examine their own biases and call out those of others.

As we shared back in May, we’re not where we should be when it comes to diversity. But in order to get there, we need to have this conversation. We have to figure out where our biases lie, and we have to combat them. Tackling unconscious bias at work is just one piece of making Google a diverse workplace, but it’s absolutely essential if we’re going to live up to our promise to build technology that makes life better for as many people as possible.

by Emily Wood (noreply@blogger.com) at September 25, 2014 09:00 AM

Chris Siebenmann

Why CGI-BIN scripts are an attractive thing for many people

The recent Bash vulnerability has people suddenly talking about CGI-BIN scripts, among other things, and so the following Twitter exchange took place:

@dreid: Don't use CGI scripts. For reals.

@thatcks: My lightweight and simple deployment options are CGI scripts or PHP code. I'll take CGI scripts as the lesser evil.

@eevee: i am pretty interested in solving this problem. what are your constraints

This really deserves more of a reply than I could give on Twitter, so here's my attempt at explaining the enduring attraction of CGI scripts.

In a nutshell, the practical attraction of CGI is that it starts really simple and then you can make things more elaborate if you need it. Once the web server supports it in general, the minimal CGI script deployment is a single executable file written in the language of your choice. For GET based CGI scripts, this program runs in a quite simple environment (call it an API if you want) for both looking at the incoming request and dumping out its reply (life is slightly more difficult if you're dealing with POST requests). Updating your CGI script is as simple as editing it or copying in a new version and your update is guaranteed to take effect immediately; deactivating your script is equally simple. If you're using at least Apache you can easily give your CGI script a simple authentication system (with HTTP Basic authentication).
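
To make the 'single executable file' point concrete, here is roughly the smallest useful CGI script I can think of, as a Bourne shell sketch; drop it into a cgi-bin directory, make it executable, and you are done:

#!/bin/sh
# Emit the required header, a blank line, and then the body.
echo "Content-Type: text/plain"
echo
echo "Hello from a CGI script."
echo "Query string was: $QUERY_STRING"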

In the category of easy deployment, Apache often allows you to exercise a lot of control over this process without needing to involve the web server administrator to change the global web server configuration. Given .htaccess control you can do things like add your own basic authentication, add access control, and do some URL rewriting. This is part of how CGI scripts allow you to make things more elaborate if you need to. In particular, if your 'CGI script' grows big enough you don't have to stick with a single file; depending on your language there are all sorts of options to expand into auxiliary files and quite complicated systems (my Rube Goldberg lashup is an extreme case).
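
As an illustration of the .htaccess side of this, adding HTTP Basic authentication to a directory of CGI scripts can be as small as the following (the AuthUserFile path is an example, and it assumes the server allows .htaccess overrides for authentication):

AuthType Basic
AuthName "Restricted scripts"
AuthUserFile /home/you/.htpasswd
Require valid-user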

Out of all of the commonly available web application systems (at least on Unix), the only one that has a similar feature list and a similar ease of going from small to large is PHP. Just like CGI scripts you can start with a single PHP file that you drop somewhere and then can grow in various ways, and PHP has a simple CGI-like API (maybe even simpler, since you can conveniently intermix PHP and HTML). Everything else has a more complex deployment process (especially if you're starting from scratch) and often a more complex management process.

CGI scripts are not ideal for large applications, to say the least. But they are great for small, quick things and they have an appealingly simple deployment story for even moderate jobs like a DHCP registration system.

By the way, this is part of the reason that people write CGI scripts in (Bourne) shell. Bourne shell is itself a very concise language for relatively simple things, and if all you are doing is something relatively simple, well, there you go. A Bourne shell script is likely to be shorter and faster to write than almost anything else.

(Expert Perl programmers can probably dash off Perl scripts that are about as compact as that, but I think there are more relatively competent Bourne shell scripters among sysadmins than there are relatively expert Perl programmers.)

by cks at September 25, 2014 06:05 AM

September 24, 2014

Ferry Boender

SSH port forwarding: bind: Cannot assign requested address

Just now I tried setting up an SSH tunnel, something I must have done at least a few tens of thousands of times in my career. But suddenly, it didn't work anymore:

$ ssh -L 8080:127.0.0.1:80 dev.local
bind: Cannot assign requested address

After checking that the local port was free, and the remote port had Apache listening on it, and testing some other things, I turned on verbose logging for SSH:

$ ssh -v -L 8080:127.0.0.1:80 dev.local
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Local connections to LOCALHOST:8080 forwarded to remote
        address 127.0.0.1:80
debug1: Local forwarding listening on 127.0.0.1 port 8080.
debug1: channel 0: new [port listener]
debug1: Local forwarding listening on ::1 port 8080.
bind: Cannot assign requested address

 

The problem becomes obvious: it's trying to bind on an IPv6 address, not IPv4. And since IPv6 is still not production ready, it fails miserably.

You can force the use of IPv4 on the commandline with the -4 switch:

$ ssh -4 -L 8080:127.0.0.1:80 dev.local
Linux dev.local 2.6.32-5-amd64 #1 SMP Mon Sep 23 22:14:43 UTC 2013 x86_64
[fboender@dev]~$

To permanently disable IPv6, edit your ~/.ssh/config and add:

$ vi ~/.ssh/config
Host *
    AddressFamily inet

That will make sure SSH never even tries anything with IPv6.
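
Alternatively, if you only want to fix a single tunnel rather than disabling IPv6 for everything, you can give an explicit IPv4 bind address in the forward specification itself:

$ ssh -L 127.0.0.1:8080:127.0.0.1:80 dev.local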

by admin at September 24, 2014 08:19 AM

Chris Siebenmann

One thing I've come to dislike about systemd

One of the standard knocks against systemd is that it keeps growing and expanding, swallowing more and more jobs and so on. I've come around to feeling that this is probably a problem, but not for the reasons you might expect. The short version is that the growing bits are not facing real competition and thus are not being proven and improved by it.

Say what you like about it, but the core design and implementation of systemd went through a relatively ruthlessly Darwinian selection process on the way to its current success. Or to put it more directly, it won a hard-fought fight against both Upstart and the status quo in multiple Linux distributions. This competition undoubtedly improved systemd and probably forced systemd to get a bunch of things right, and it's given us a reasonable assurance that systemd is probably the best choice today among the init systems that were viable options.

(Among other things, note that a number of well regarded Debian developers spent a bunch of time and effort carefully evaluating the various options and systemd came out on top.)

Unfortunately you cannot say the same thing about the components that systemd has added since then, for example the journal. These have simply not had to face real competition and probably never will (unless the uselessd fork catches on); instead they have been able to tacitly ride along with systemd because when you take systemd you more or less have to take the components. Is the systemd journal the best design possible? Is it the option that would win out if there was a free choice? It might be, but we fundamentally don't know. And to the extent that real competition pushes all parties to improve, well, the systemd journal has probably not had this opportunity. Realistically the systemd journal will probably never have any competition, which means that it's enough for it to merely work well enough and be good enough and it probably won't be pushed to be the best.

(Sure, in theory other people are free to write a better replacement for the journal and drop it in. In practice such a replacement project has exactly one real potential user, systemd itself, and the odds are that said user is not going to be particularly interested in replacing their existing solution with your new one, especially if your new one has a significantly different design.)

I would feel happier about systemd's ongoing habit of growing more and more tentacles if those tentacles faced real standalone competition. As it is, people are effectively being asked to take a slowly increasing amount of systemd's functionality on faith that it is good engineering and a good implementation. It may or may not be good engineering (I have no opinion on that), but I can't help but think that real competition would improve it. If nothing else, real competition would settle doubts.

(For instance, I have a reflexive dubiousness about the systemd journal (and I'm not alone in this). If I knew that outside third parties had evaluated it against other options and had determined that it was the best choice (and that the whole idea was worth doing), I would feel better about things.)

By the way, this possibility for separate and genuine competition is one good reason to stick to a system of loosely coupled pieces as much as possible. I say pieces instead of components deliberately, because people are much more likely to mix and match things that are separate projects than they are to replace a component of a project with something else.

(This too is one area where I've become not entirely happy with systemd. With systemd you have components, not pieces, and so in practice most people are going to take everything together. The systemd developers could change this if they wanted to because in large part this is a cultural thing with technical manifestations, such as do you distribute everything in one source tarball and have it all in one VCS repository.)

Sidebar: this doesn't apply to all systemd components

Note that some of what are now systemd components started out life as separate pieces that proved themselves independently and then got moved into systemd (udev is one example); they don't have this problem. Some components are probably either so simple or so easy to ignore that they don't really matter for this. The systemd journal is not an example of either.

by cks at September 24, 2014 05:45 AM

Raymii.org

Patch Shellshock with Ansible

This is a simple Ansible playbook to patch Debian, CentOS, Ubuntu and derivatives against the Shellshock vulnerability (CVE-2014-6271).
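
As a very rough sketch of the same idea for Debian/Ubuntu hosts only (not the playbook itself, and assuming an inventory file called hosts plus sudo access), an ad-hoc run of the apt module looks something like this:

ansible all -i hosts --sudo -m apt -a "name=bash state=latest update_cache=yes"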

September 24, 2014 12:00 AM

September 23, 2014

Ubuntu Geek


Administered by Joe. Content copyright by their respective authors.