Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

October 22, 2014

Adrian C.

SysV init on Arch Linux, and Debian

Arch Linux distributes systemd as its init daemon, and deprecated SysV init in June 2013. Debian is doing the same now and we see panic and terror sweep through that community, especially since this time thousands of my sysadmin colleagues are affected. But as with Arch Linux we are witnessing irrational behavior: loud protests, threats of exodus to the BSD camp, and public threats of forking Debian. Yet all that is needed (and, let's face it, much simpler to achieve) is organizing a specialized user group interested in keeping SysV (or your alternative) usable in your favorite GNU/Linux distribution, with members that support one another, exactly as I wrote back then about Arch Linux.

Unfortunately I'm not aware of any such group forming in the Arch Linux community around sysvinit, and I've been running SysV init alone as my PID 1 since then. It was not a big deal, but I don't always have the time or the willpower to break my personal systems after a 60-hour work week, and the real problems are yet to come anyway - if (when), for example, udev stops working without systemd as PID 1. If you had a support group, especially one with a few coding gurus among its members, chances are they would solve a difficult problem first, and everyone would benefit. On other occasions an enthusiastic user would solve it first, saving the gurus from a lousy weekend.

For anyone else left standing in the cheapest part of the stadium, like me, maybe uselessd as a drop-in replacement is the way to go after major subsystems stop working in our favorite GNU/Linux distributions. I personally like what they reduced systemd to (inspired by the suckless.org philosophy?), but chances are that without support the project ends inside two years, and we would be back here duct-taping in isolation.

by anrxc at October 22, 2014 09:28 PM

SysAdmin1138

Getting stuck in Siberia

I went on a bit of a twitter rant recently.

Good question, since that's a very different problem than the one I was ranting about. How do you deal with that?


I hate to break it to you, but if you're in the position where your manager is actively avoiding you, it's all on you to fix it. There are cases where it isn't up to you, such as if there are a lot of people being avoided and it's affecting the manager's work-performance, but that's a systemic problem. No, in this case I'm talking about you being avoided, and not your fellow direct-reports. It's personal, not systemic.

No, it's not fair. But you still have to deal with it.

You have a question to ask yourself:

Do I want to change myself to keep the job, or do I want to change my manager by getting a new job?

Because this shunning activity is done by managers who would really rather fire your ass, but can't or won't for some reason. Perhaps they don't have firing authority. Perhaps the paperwork is too much to bother with firing someone. Perhaps they're the conflict-avoidant type and pretending you don't exist is preferable to making you Very Angry by firing you.

You've been non-verbally invited to Go Away. You get to decide if that's what you want to do.

Going Away

Start job-hunting, and good riddance. They may even overlook job-hunt activities on the job, but don't push it.

Staying and Escalating

They can't/won't get rid of you, but you're still there. It's quite tempting to stick around and intimidate your way into their presence and force them to react. They're avoiding you for a reason, so hit those buttons harder. This is not the adult way to respond to the situation, but they started it.

I shouldn't have to say that, but this makes for a toxic work environment for everyone else so... don't do that.

Staying and Reforming

Perhaps the job itself is otherwise awesome-sauce, or maybe getting another job will involve moving and you're not ready for that. Time to change yourself.

Step 1: Figure out why the manager is hiding from you.
Step 2: Stop doing that.
Step 3: See if your peace-offering is accepted.

Figure out why they're hiding

This is key to the whole thing. Maybe they see you as too aggressive. Maybe you keep saying no and they hate that. Maybe you never give an unqualified answer and they want definites. Maybe you always say, 'that will never work,' to anything put before you. Maybe you talk politics in the office and they don't agree with you. Maybe you don't go paintballing on weekends. Whatever it is...

Stop doing that.

It's not always easy to know why someone is avoiding you. That whole avoidant thing makes it hard. Sometimes you can get intelligence from coworkers about what the manager has been saying when you're not around or what happens when your name comes up. Ask around, at least it'll show you're aware of the problem.

And then... stop doing whatever it is. Calm down. Say yes more often. Start qualifying answers only in your head instead of out loud. Say, "I'll see what I can do" instead of "that'll never work." Stop talking politics in the office. Go paintballing on weekends. Whatever it is, start establishing a new set of behaviors.

And wait.

Maybe they'll notice and warm up. It'll be hard, but you probably need the practice to change your habits.

See if your peace-offering is accepted

After your new leaf is turned over, it might pay off to draw their attention to it. This step definitely depends on the manager and the source of the problem, but demonstrating a new way of behaving before saying you've been behaving better can be the key to getting back into the communications stream. It also hangs a hat on the fact that you noticed you were in bad graces and made an effort to change.

What if it's not accepted?

Then learn to live in Siberia and work through proxies, or lump it and get another job.

by SysAdmin1138 at October 22, 2014 08:00 PM

Everything Sysadmin

Katherine Daniels (@beerops) interviews Tom Limoncelli

Katherine Daniels (known as @beerops on Twitter) interviewed me about the presentations I'll be doing at the upcoming Usenix LISA '14 conference. Check it out:

https://www.usenix.org/blog/interview-tom-limoncelli

Register soon! Seating in my tutorials is limited!

October 22, 2014 02:28 PM

Google Blog

An inbox that works for you

Today, we’re introducing something new. It’s called Inbox. Years in the making, Inbox is by the same people who brought you Gmail, but it’s not Gmail: it’s a completely different type of inbox, designed to focus on what really matters.

Email started simply as a way to send digital notes around the office. But fast-forward 30 years and with just the phone in your pocket, you can use email to contact virtually anyone in the world…from your best friend to the owner of that bagel shop you discovered last week.

With this evolution comes new challenges: we get more email now than ever, important information is buried inside messages, and our most important tasks can slip through the cracks—especially when we’re working on our phones. For many of us, dealing with email has become a daily chore that distracts from what we really need to do—rather than helping us get those things done.

If this all sounds familiar, then Inbox is for you. Or more accurately, Inbox works for you. Here are some of the ways Inbox is at your service:



Bundles: stay organized automatically
Inbox expands upon the categories we introduced in Gmail last year, making it easy to deal with similar types of mail all at once. For example, all your purchase receipts or bank statements are neatly grouped together so that you can quickly review and then swipe them out of the way. You can even teach Inbox to adapt to the way you work by choosing which emails you’d like to see grouped together.

Highlights: the important info at a glance
Inbox highlights the key information from important messages, such as flight itineraries, event information, and photos and documents emailed to you by friends and family. Inbox will even display useful information from the web that wasn’t in the original email, such as the real-time status of your flights and package deliveries. Highlights and Bundles work together to give you just the information you need at a glance.

Reminders, Assists, and Snooze: your to-do’s on your own terms
Inbox makes it easy to focus on your priorities by letting you add your own Reminders, from picking up the dry cleaning to giving your parents a call. No matter what you need to remember, your inbox becomes a centralized place to keep track of the things you need to get back to.

A sampling of Assists

And speaking of to-do’s, Inbox helps you cross those off your list by providing Assists—handy pieces of information you may need to get the job done. For example, if you write a Reminder to call the hardware store, Inbox will supply the store’s phone number and tell you if it's open. Assists work for your email, too. If you make a restaurant reservation online, Inbox adds a map to your confirmation email. Book a flight online, and Inbox gives a link to check-in.

Of course, not everything needs to be done right now. Whether you’re in an inconvenient place or simply need to focus on something else first, Inbox lets you Snooze away emails and Reminders. You can set them to come back at another time or when you get to a specific location, like your home or your office.

Get started with Inbox
Starting today, we’re sending out the first round of invitations to give Inbox a try, and each new user will be able to invite their friends. If Inbox can’t arrive soon enough for you, you can email us at inbox@google.com to get an invitation as soon as more become available.

When you start using Inbox, you’ll quickly see that it doesn’t feel the same as Gmail—and that’s the point. Gmail’s still there for you, but Inbox is something new. It’s a better way to get back to what matters, and we can’t wait to share it with you.



Cross-posted from the Official Gmail Blog

by Emily Wood (noreply@blogger.com) at October 22, 2014 11:03 AM

Chris Siebenmann

Exim's (log) identifiers are basically unique on a given machine

Exim gives each incoming email message an identifier; these look like '1XgWdJ-00020d-7g'. Among other things, this identifier is used for all log messages about the particular email message. Since Exim normally splits information about each message across multiple lines, you routinely need to reassemble or at least match multiple lines for a single message. As a result of this need to aggregate multiple lines, I've quietly wondered for a long time just how unique these log identifiers were. Clearly they weren't going to repeat over the short term, but if I gathered tens or hundreds of days of logs for a particular system, would I find repeats?

The answer turns out to be no. Under normal circumstances Exim's message IDs here will be permanently unique on a single machine, although you can't count on global uniqueness across multiple machines (although the odds are pretty good). The details of how these message IDs are formed are in the Exim documentation's chapter 3.4. On most Unixes and with most Exim configurations they are a per-second timestamp, the process PID, and a final subsecond timestamp, and Exim takes care to guarantee that the timestamps will be different for the next possible message with the same PID.

(Thus a cross-machine collision would require the same message time down to the subsecond component plus the same PID on both machines. This is fairly unlikely but not impossible. Exim has a setting that can force more cross-machine uniqueness.)
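
As a concrete illustration, here is a small Python sketch (mine, not from the Exim documentation) that splits an ID into its three fields and decodes them, assuming Exim's usual base-62 alphabet of digits, then uppercase, then lowercase:

# Decode an Exim message ID into its component parts.
# Assumption: the standard Exim base-62 alphabet (0-9, A-Z, a-z).
import datetime

BASE62 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def base62_decode(s):
    n = 0
    for ch in s:
        n = n * 62 + BASE62.index(ch)
    return n

def decode_exim_id(msgid):
    tstamp, pid, subsec = msgid.split("-")
    return (datetime.datetime.utcfromtimestamp(base62_decode(tstamp)),
            base62_decode(pid), base62_decode(subsec))

print(decode_exim_id("1XgWdJ-00020d-7g"))
# The first field of this example ID decodes to a timestamp in October 2014.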

This means that aggregation of multi-line logs can be done with simple brute force approaches that rely on ID uniqueness. Heck, to group all the log lines for a given message together you can just sort on the ID field, assuming you do a stable sort so that things stay in timestamp order when the IDs match.
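
For instance, a brute force aggregator can be sketched in a few lines of Python; the regular expression for picking the ID out of a line is my assumption about the log format, so check it against your own logs:

# Group Exim log lines for each message together via a stable sort.
import re, sys

IDRE = re.compile(r'\b(\w{6}-\w{6}-\w{2})\b')

def msgid(line):
    m = IDRE.search(line)
    return m.group(1) if m else ""

lines = sys.stdin.readlines()
# sorted() is stable, so lines with the same ID stay in timestamp order.
for line in sorted(lines, key=msgid):
    sys.stdout.write(line)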

(As they say, this is relevant to my interests and I finally wound up looking it up today. Writing it down here ensures I don't have to try to remember where I found it in the Exim documentation the next time I need it.)

PS: like many other uses of Unix timestamps, all of this uniqueness potentially goes out the window if you allow time on your machine to actually go backwards. On a moderate volume machine you'd still have to be pretty unlucky to have a collision, though.

by cks at October 22, 2014 04:21 AM

October 21, 2014

Yellow Bricks

What is coming for vSphere and VSAN? VMworld reveals…


I’ve been prepping a presentation for upcoming VMUGs, but wanted to also share this with my readers. The session is all about vSphere futures: what is coming soon? Before anyone says I am breaking NDA, I’ve harvested all of this info from public VMworld sessions. Except for the VSAN details, those were announced to the press at VMworld EMEA. Let’s start with Virtual SAN…

The Virtual SAN details were posted in this Computer Weekly article, and by the looks of it they interviewed VMware’s CEO Pat Gelsinger and Alberto Farronato from the VSAN product team. So what is coming soon?

  • All Flash Virtual SAN support
    Considering the price of MLC flash has dropped to roughly the same price per GB as SAS HDDs, I think this is a great new feature to have. Being able to build all-flash configurations at the price point of a regular configuration, and with probably many supported configurations, is a huge advantage of VSAN. I would expect VSAN to support various types of flash as the “capacity” layer, so this is an architect’s dream… designing your own all-flash storage system!
  • Virsto integration
    I played with Virsto when it was just released and was impressed by the performance and the scalability. Functions that were part of Virsto, such as snapshots and clones, have been built into VSAN, and this will bring VSAN to the next level!
  • JBOD support
    Something many have requested, and primarily to be able to use VSAN in Blade environments… Well with the JBOD support announced this will be a lot easier. I don’t know the exact details, but just the “JBOD” part got me excited.
  • 64 host VSAN cluster support
    VSAN doesn’t scale? Here you go.

That is a nice list by itself, and I am sure there is plenty more for VSAN. At VMworld, for instance, Wade Holmes also spoke about support for disk controller based encryption. Cool right?! So what about vSphere? Considering even the version number was dropped during the keynote, hinting at a major release, you would expect some big functionality to be introduced. Once again, all the stuff below is harvested from various public VMworld sessions:

  • VMFork aka Project Fargo – discussed here…
  • Increased scale!
    • 64-host HA/DRS clusters. I know a handful of customers who asked for 64-host clusters, so here it is guys… or better said: soon you will have it!
  • SMP vCPU FT – up to 4 vCPU support
    • I like FT from an innovation point of view, but it isn’t a feature I would personally use too much as I feel “fault tolerance” from an app perspective needs to be solved by the app. Now, I do realize that there are MANY legacy applications out there, and if you have a scale-up application which needs to be highly available then SMP FT is very useful. Do note that with this release the architecture of FT has changed. For instance you used to share the same “VMDK” for both primary and secondary, but that is no longer the case.
  • vMotion across anything
    • vMotion across vCenter instances
    • vMotion across Distributed Switch
    • vMotion across very large distance, support up to 100ms latency
    • vMotion to vCloud Air datacenter
  • Introduction of Virtual Datacenter concept in vCenter
    • Enhance “policy driven” experience within vCenter. Virtual Datacenter aggregates compute clusters, storage clusters, networks, and policies!
  • Content Library
    • Content Library provides storage and versioning of files including VM templates, ISOs, and OVFs.
      Includes powerful publish and subscribe features to replicate content
      Backed by vSphere Datastores or NFS
  • Web Client performance / enhancement
    • Recent tasks pane drops to the bottom instead of on the right
    • Performance vastly improved
    • Menus flattened
  • DRS placement “network aware”
    • Hosts with high network contention can show low CPU and memory usage; DRS will take network load into account when looking for VM placements
    • Provide network bandwidth reservation for VMs and migrate VMs in response to reservation violations!
  • vSphere HA component protection
    • Helps when hitting “all paths down” situations by allowing HA to take action on impacted virtual machines
  • Virtual Volumes, bringing the VSAN “policy goodness” to traditional storage systems

Of course there is more, but these are the ones that were discussed at VMworld… for the remainder you will have to wait until the next version of vSphere is released, or you can still sign up for the beta, I believe!

"What is coming for vSphere and VSAN? VMworld reveals…" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at October 21, 2014 12:55 PM

Chris Siebenmann

Some numbers on our inbound and outbound TLS usage in SMTP

As a result of POODLE, it's suddenly rather interesting to find out the volume of SSLv3 usage that you're seeing. Fortunately for us, Exim directly logs the SSL/TLS protocol version in a relatively easy to search for format; it's recorded as the 'X=...' parameter for both inbound and outbound email. So here's some statistics, first from our external MX gateway for inbound messages and then from our other servers for external deliveries.
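
Counting them up is straightforward; here is a rough Python sketch of the sort of tally involved (the mainlog path and the exact shape of the X= field are assumptions, so adjust both for your own Exim setup):

# Tally SSL/TLS protocol versions from the X=... field in Exim logs.
import collections, re

XRE = re.compile(r'X=([^:]+):')

counts = collections.Counter()
with open("/var/log/exim4/mainlog") as log:
    for line in log:
        m = XRE.search(line)
        if m:
            counts[m.group(1)] += 1

for proto, n in counts.most_common():
    print(proto, n)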

Over the past 90 days, we've received roughly 1.17 million external email messages. 389,000 of them were received with some version of SSL/TLS. Unfortunately our external mail gateway currently only supports up to TLS 1.0, so the only split I can report is that only 130 of these messages were received using SSLv3 instead of TLS 1.0. 130 messages is low enough for me to examine the sources by hand; the only particularly interesting and eyebrow-raising ones were a couple of servers at a US university and a .nl ISP.

(I'm a little bit surprised that our Exim doesn't support higher TLS versions, to be honest. We're using Exim on Ubuntu 12.04, which I would have thought would support something more than just TLS 1.0.)

On our user mail submission machine, we've delivered to 167,000 remote addresses over the past 90 days. Almost all of them, 158,000, were done with SSL/TLS. Only three of them used SSLv3 and they were all to the same destination; everything else was TLS 1.0.

(It turns out that very few of our user-submitted messages were received with TLS, only 0.9%. This rather surprises me but maybe many IMAP programs default to not using TLS even if the submission server offers it. All of this small number of submissions used TLS 1.0, as I'd hope.)

Given that our Exim version only supports TLS 1.0, these numbers are more boring than I was hoping they'd be when I started writing this entry. That's how it goes sometimes; the research process can be disappointing as well as educational.

(I did verify that our SMTP servers really only do support up to TLS 1.0 and it's not just that no one asked for a higher version than that.)

One set of numbers I'd like to get for our inbound email is how TLS usage correlates with spam score. Unfortunately our inbound mail setup makes it basically impossible to correlate the bits together, as spam scoring is done well after TLS information is readily available.

Sidebar: these numbers don't quite mean what you might think

I've talked about inbound message deliveries and outbound destination addresses here because that's what Exim logs information about, but of course what is really encrypted is connections. One (encrypted) connection may deliver multiple inbound messages and certainly may be handed multiple RCPT TO addresses in the same conversation. I've also made no attempt to aggregate this by source or destination, so very popular sources or destinations (like, say, Gmail) will influence these numbers quite a lot.

All of this means that this sort of numbers can't be taken as an indication of how many sources or destinations do TLS with us. All I can talk about is message flows.

(I can't even talk about how many outgoing messages are completely protected by TLS, because to do that I'd have to work out how many messages had no non-TLS deliveries. This is probably possible with Exim logs, but it's more work than I'm interested in doing right now. Clearly what I need is some sort of easy to use Exim log aggregator that will group all log messages for a given email message together and then let me do relatively sophisticated queries on the result.)

by cks at October 21, 2014 03:28 AM

October 20, 2014

Everything Sysadmin

See you tomorrow evening at the Denver DevOps Meetup!

Hey Denver folks! Don't forget that tomorrow evening (Tue, Oct 21) I'll be speaking at the Denver DevOps Meetup. It starts at 6:30pm! Hope to see you there!

http://www.meetup.com/DenverDevOps/events/213369602/

October 20, 2014 04:28 PM

Mark Shuttleworth

V is for Vivid

Release week! Already! I wouldn’t call Trusty ‘vintage’ just yet, but Utopic is poised to leap into the torrent stream. We’ve all managed to land our final touches to *buntu and are excited to bring the next wave of newness to users around the world. Glad to see the unicorn theme went down well, judging from the various desktops I see on G+.

And so it’s time to open the vatic floodgates and invite your thoughts and contributions to our soon-to-be-opened iteration next. Our ventrous quest to put GNU as you love it on phones is bearing fruit, with final touches to the first image in a new era of convergence in computing. From tiny devices to personal computers of all shapes and sizes to the ventose vistas of cloud computing, our goal is to make a platform that is useful, versal and widely used.

Who would have thought – a phone! Each year in Ubuntu brings something new. It is a privilege to celebrate our tenth anniversary milestone with such vernal efforts. New ecosystems are born all the time, and it’s vital that we refresh and renew our thinking and our product in vibrant ways. That we have the chance to do so is testament to the role Linux at large is playing in modern computing, and the breadth of vision in our virtual team.

To our fledgling phone developer community, for all your votive contributions and vocal participation, thank you! Let’s not be vaunty: we have a lot to do yet, but my oh my what we’ve made together feels fantastic. You are the vigorous vanguard, the verecund visionaries and our venerable mates in this adventure. Thank you again.

This verbose tract is a venial vanity, a chance to vector verbal vibes, a map of verdant hills to be climbed in months ahead. Amongst those peaks I expect we’ll find new ways to bring secure, free and fabulous opportunities for both developers and users. This is a time when every electronic thing can be an Internet thing, and that’s a chance for us to bring our platform, with its security and its long term support, to a vast and important field. In a world where almost any device can be smart, and also subverted, our shared efforts to make trusted and trustworthy systems might find fertile ground. So our goal this next cycle is to show the way past a simple Internet of things, to a world of Internet things-you-can-trust.

In my favourite places, the smartest thing around is a particular kind of monkey. Vexatious at times, volant and vogie at others, a vervet gets in anywhere and delights in teasing cats and dogs alike. As the upstart monkey in this business I can think of no better mascot. And so let’s launch our vicenary cycle, our verist varlet, the Vivid Vervet!

by mark at October 20, 2014 01:22 PM

Google Blog

DISTRICT VOICES: Inside Panem with our finest citizens

Meet District Voices, the latest campaign in our Art, Copy & Code project—where we explore new ways for brands to connect with consumers through experiences that people love, remember and share. District Voices was created in partnership with Lionsgate to promote the upcoming release of The Hunger Games: Mockingjay Part 1. -Ed.

Greetings, Citizens of Panem!

The Capitol has joined forces with Google and YouTube to celebrate the proud achievements of our strong, lively districts. Premiering today on YouTube, a new miniseries called DISTRICT VOICES will take you behind the scenes to meet some of Panem’s most creative—and loyal—citizens.

At 4 p.m. EDT/ 1 p.m. PDT every day this week, one of your favorite Citizen creators from YouTube will give you a never-before-seen tour of their districts. First, the Threadbanger textile experts of District 8 will show how utility meets beauty in this season’s fashion—plus, you’ll get a look at a new way to wear your Capitol pride. Tomorrow, District 2's Shane Fazen will provide a riveting demonstration of how we keep our noble peacekeepers in tip-top shape. On Wednesday, Derek Muller from District 5—Panem’s center of power generation—will give you a peek at a revolutionary new way to generate electricity. Thursday The Grain District’s own Feast of Fiction will show you how to bake one of beloved victor Peeta Mellark’s most special treats. And finally, iJustine, District 6’s liaison to the Capitol, will give you an exclusive glimpse at the majestic and powerful peacekeeper vehicles in action.

Tune in at CAPITOL TV. And remember—Love your labor. Take pride in your task. Our future is in your hands.

by Emily Wood (noreply@blogger.com) at October 20, 2014 10:05 AM

Tech Teapot

New Aviosys IP Power 9858 Box Opening

A series of box opening photos of the new Aviosys IP Power 9858 4 port network power switch. This model will in due course replace the Aviosys IP Power 9258 series of power switches. The 9258 series is still available in the meantime though, so don’t worry.

The new model supports WiFi (802.11n/b/g, with WPS for easy WiFi setup), auto reboot on ping failure, a time-of-day scheduler and an internal temperature sensor. Aviosys have also built apps for iOS and Android, so you can now manage your power switch on the move. Together with the 8 port Aviosys IP Power 9820 they provide very handy tools for remote power management of devices. Say goodbye to travelling to a remote site just to reboot a broadband router.

Photos: Aviosys IP Power 9858DX closed box, open box, front with WiFi aerial, front panel, rear panel, and a rear close-up.

The post New Aviosys IP Power 9858 Box Opening appeared first on Openxtra Tech Teapot.

by Jack Hughes at October 20, 2014 07:00 AM

Chris Siebenmann

Revisiting Python's string concatenation optimization

Back in Python 2.4, CPython introduced an optimization for string concatenation that was designed to reduce memory churn in this operation and I got curious enough about this to examine it in some detail. Python 2.4 is a long time ago and I recently was prompted to wonder what had changed since then, if anything, in both Python 2 and Python 3.

To quickly summarize my earlier entry, CPython only optimizes string concatenations by attempting to grow the left side in place instead of making a new string and copying everything. It can only do this if the left side string only has (or clearly will have) a reference count of one, because otherwise it's breaking the promise that strings are immutable. Generally this requires code of the form 'avar = avar + ...' or 'avar += ...'.

As of Python 2.7.8, things have changed only slightly. In particular, concatenation of Unicode strings is still not optimized; this remains a byte-string-only optimization. For byte strings there are two cases. Strings somewhat less than 512 bytes can sometimes be grown in place by a few bytes, depending on their exact sizes. Strings over that can be grown if the system realloc() can find empty space after them.

(As a trivial note, CPython also optimizes concatenating an empty string to something by just returning the other string with its reference count increased.)

In Python 3, things are more complicated but the good news is that this optimization does work on Unicode strings. Python 3.3+ has a complex implementation of (Unicode) strings, but it does attempt to do in-place resizing on them under appropriate circumstances. The first complication is that internally Python 3 has a hierarchy of Unicode string storage and you can't do an in-place concatenation of a more complex sort of Unicode string into a less complex one. Once you have compatible strings in this sense, in terms of byte sizes the relevant sizes are the same as for Python 2.7.8; Unicode string objects that are less than 512 bytes can sometimes be grown by a few bytes while ones larger than that are at the mercy of the system realloc(). However, how many bytes a Unicode string takes up depends on what sort of string storage it is using, which I think mostly depends on how big your Unicode characters are (see this section of the Python 3.3 release notes and PEP 393 for the gory details).

So my overall conclusion remains as before; this optimization is chancy and should not be counted on. If you are doing repeated concatenation you're almost certainly better off using .join() on a list; if you think you have a situation that's otherwise, you should benchmark it.
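
If you want to see how this plays out on your own interpreter, a quick sketch with timeit is enough (my example, with the usual caveat that the += result depends on your CPython version and your system realloc()):

# Compare repeated += concatenation against ''.join().
import timeit

def concat(n=10000):
    s = ""
    for i in range(n):
        s += "x"          # candidate for the in-place optimization
    return s

def join(n=10000):
    return "".join("x" for i in range(n))

print("+=  :", timeit.timeit(concat, number=100))
print("join:", timeit.timeit(join, number=100))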

(In Python 3, the place to start is PyUnicode_Append() in Objects/unicodeobject.c. You'll probably also want to read Include/unicodeobject.h and PEP 393 to understand this, and then see Objects/obmalloc.c for the small object allocator.)

Sidebar: What the funny 512 byte breakpoint is about

Current versions of CPython 2 and 3 allocate 'small' objects using an internal allocator that I think is basically a slab allocator. This allocator is used for all objects whose overall size is 512 bytes or less, and it rounds object sizes up to the next 8-byte boundary. This means that if you ask for, say, a 41-byte object you actually get one that can hold up to 48 bytes and thus can be 'grown' in place up to this size.
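
In toy form (an illustration of the rounding arithmetic only, not CPython's actual code):

# Toy illustration of small-object size rounding.
def alloc_size(requested):
    if requested > 512:
        return None  # falls through to the system malloc()/realloc()
    return (requested + 7) & ~7  # round up to the next 8-byte boundary

print(alloc_size(41))  # -> 48, so a 41-byte object can 'grow' to 48 bytes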

by cks at October 20, 2014 04:37 AM

October 19, 2014

Ubuntu Geek

Configuring layer-two peer-to-peer VPN using n2n

n2n is a layer-two peer-to-peer virtual private network (VPN) which allows users to exploit features typical of P2P applications at network instead of application level. This means that users can gain native IP visibility (e.g. two PCs belonging to the same n2n network can ping each other) and be reachable with the same network IP address regardless of the network where they currently belong. In a nutshell, as OpenVPN moved SSL from application (e.g. used to implement the https protocol) to network protocol, n2n moves P2P from application to network level.
(...)
Read the rest of Configuring layer-two peer-to-peer VPN using n2n (416 words)



by ruchi at October 19, 2014 11:20 PM

Evaggelos Balaskas

SatNOGS - Satellite Networked Open Ground Station

What started as a NASA Space Apps Challenge entry has now become an extraordinary open-source achievement, among the top five finalists on hackaday.io.

What is SatNOGS in non-technical words: imagine a cheap, mobile, open-hardware ground station that can collaborate through the internet with other ground stations and gather satellite signals all together, participating in a holistic open-source/open-data, publicly accessible database and site!

If you are thinking that can’t be right, the answer is that it is!!!

The amazing team behind SatNOGS is working around the clock, non-stop, with ONLY open hardware and free software, to do exactly that!

It is a fully modular system (you can choose your own antennas or base setup!). You can review the entire code on GitHub, watch high-quality videos and guides for every step and every process, and participate via comments, emails or even satellite signals!

satnogs_02.jpg

3D printing is one of the major components of their journey so far. They have already published every design they are using for the SatNOGS project on GitHub! You just need to print them. All the non-3D-printed hardware is available at any hardware store near you. The members of this project have published the Arduino code and schematics for the electronics too!!

Everything is fully documented in detail, everything is open source!

AMAZING!

satnogs.jpg

It seems that I may be biased, so don’t believe anything I am writing.
See for yourself and be mind-blowingly impressed by the quality of their hardware documentation.

Visit their Facebook account for news, and contact them if you have a brilliant idea about satellites or you just want a status update on their work.

How about the team?

I’ve met the entire team at the Athens Hackerspace, and the first thing that came to my mind (and it is most impressive) is the diversity of the members themselves.

Not only in age (most of them are university students, but older hobbyists are participating too) but also in their technical areas of expertise. This team can easily solve every practical problem they find in the process.

SatNOGS, as I’ve already mentioned, is fully active, and it all started (with the big bang, of course) with an idea: to reach and communicate with space (the final frontier). Satellites are sending signals 24/7, but ground stations can’t reach every satellite (I am not talking about geostationary satellites) and there is no one to acknowledge them. The problem that SatNOGS is solving is real.

And I hope that with this blog post more people can understand how important it is that this project scales to more hackerspaces around the globe.

To see more, just click here and you can monitor the entire process so far.

Tag(s): SatNOGS

October 19, 2014 09:28 PM

Ferry Boender

Bexec v0.8: Execute a vim buffer and capture output in split window

I released v0.8 of my Bexec vim plugin. The Bexec plugin allows the user to execute the current buffer if it contains a script with a shebang (#!/path/to/interpreter) on the first line or if the default interpreter for the script's type is known by Bexec. The output of the script will be grabbed and displayed in a separate buffer. 

New in this release:

  • Honor the splitbelow and splitright vim settings (patch by Christopher Pease).


Installation instructions:

  1. Download the Vimball
  2. Start vim with: vim bexec-v0.8.vmb
  3. In Vim, type: :source %
  4. Bexec is now installed. Type :Bexec to run it, or use <MapLeader>bx

by admin at October 19, 2014 01:22 PM

Chris Siebenmann

Vegeta, a tool for web server stress testing

Standard stress testing tools like siege (or the venerable ab, which you shouldn't use) are all systems that do N concurrent requests at once and see how your website stands up to this. This model is a fine one for putting a consistent load on your website for a stress test, but it's not actually representative of how the real world acts. In the real world you generally don't have, say, 50 clients all trying to repeatedly make and re-make one request to you as fast as they can; instead you'll have 50 new clients (and requests) show up every second.

(I wrote about this difference at length back in this old entry.)

Vegeta is an HTTP load and stress testing tool that I stumbled over at some point. What really attracted my attention is that it uses a 'N requests a second' model, instead of the concurrent request model. As a bonus it will report not just average performance but also outliers, in the form of 90th and 99th percentile latencies. It's written in Go, which some of my readers may find annoying but which I rather like.

I gave it a try recently and, well, it works. It does what it says it does, which means that it's now become my default load and stress testing tool; 'N new requests a second' is a more realistic and thus interesting test than 'N concurrent requests' for my software (especially here, for obvious reasons).
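
For the record, a typical invocation looks something like the following (a sketch from memory, with a made-up target URL, so check vegeta's README for the current flags):

# 50 new requests a second for 30 seconds, then a latency report.
echo "GET http://localhost:8080/" | vegeta attack -rate=50 -duration=30s | tee results.bin | vegeta report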

(I may still do N concurrent requests tests as well, but it'll probably mostly be to see if there are issues that come up under some degree of consistent load and if I have any obvious concurrency race problems.)

Note that as with any HTTP stress tester, testing with high load levels may require a fast system (or systems) with plenty of CPUs, memory, and good networking if applicable. And as always you should validate that vegeta is actually delivering the degree of load that it should be, although this is actually reasonably easy to verify for a 'N new request per second' tester.

(Barring errors, N new requests a second over an M second test run should result in N*M requests made and thus appearing in your server logs. I suppose the next time I run a test with vegeta I should verify this myself in my test environment. In my usage so far I just took it on trust that vegeta was working right, which in light of my ab experience may be a little bit optimistic.)

by cks at October 19, 2014 06:04 AM

October 18, 2014

SysAdmin1138

For other Movable Type blogs out there

If you're wondering why comments aren't working, as I was, and are on shared hosting, as I am, and get to looking at your error_log file and see something like this in it:

[Sun Oct 12 12:34:56 2014] [error] [client 192.0.2.5] 
ModSecurity: Access denied with code 406 (phase 2).
Match of "beginsWith http://%{SERVER_NAME}/" against "MATCHED_VAR" required.
[file "/etc/httpd/modsecurity.d/10_asl_rules.conf"] [line "1425"] [id "340503"] [rev "1"]
[msg "Remote File Injection attempt in ARGS (/cgi-bin/mt4/mt-comments.cgi)"]
[severity "CRITICAL"]
[hostname "example.com"]
[uri "/cgi-bin/mt/mt-comments.cgi"]
[unique_id "PIMENTOCAKE"]

It's not just you.

It seems that some webhosts have a mod_security rule in place that bans submitting anything through "mt-comments.cgi". As this is the main way MT submits comments, this kind of breaks things. Happily, working around a rule like this is dead easy.

  1. Rename your mt-comments.cgi file to something else
  2. Add "CommentScript ${renamed file}" to your mt-config.cgi file
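
For example, if you renamed the script to mt-cmts.cgi (a name I've made up for illustration), the matching mt-config.cgi line would be:

CommentScript mt-cmts.cgi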

And suddenly comments start working again!

Except for Google, since they're deprecating OpenID support.

by SysAdmin1138 at October 18, 2014 09:46 PM

Chris Siebenmann

During your crisis, remember to look for anomalies

This is a war story.

Today I had one of those valuable learning experiences for a system administrator. What happened is that one of our old fileservers locked up mysteriously, so we power cycled it. Then it locked up again. And again (and an attempt to get a crash dump failed). We thought it might be hardware related, so we transplanted the system disks into an entirely new chassis (with more memory, because there were some indications that it might be running out of memory somehow). It still locked up. Each lockup took maybe ten or fifteen minutes from the reboot, and things were all the more alarming and mysterious because this particular old fileserver only had a handful of production filesystems still on it; almost all of them had been migrated to one of our new fileservers. After one more lockup we gave up and went with our panic plan: we disabled NFS and set up to do an emergency migration of the remaining filesystems to the appropriate new fileserver.

Only as we started the first filesystem migration did we notice that one of the ZFS pools was completely full (so full it could not make a ZFS snapshot). As we were freeing up some space in the pool, a little light came on in the back of my mind; I remembered reading something about how full ZFS pools on our ancient version of Solaris could be very bad news, and I was pretty sure that earlier I'd seen a bunch of NFS write IO at least being attempted against the pool. Rather than migrate the filesystem after the pool had some free space, we selectively re-enabled NFS fileservice. The fileserver stayed up. We enabled more NFS fileservice. And things stayed happy. At this point we're pretty sure that we found the actual cause of all of our fileserver problems today.

(Afterwards I discovered that we had run into something like this before.)

What this has taught me is during an inexplicable crisis, I should try to take a bit of time to look for anomalies. Not specific anomalies, but general ones; things about the state of the system that aren't right or don't seem right.

(There is a certain amount of hindsight bias in this advice, but I want to mull that over a bit before I write more about it. The more I think about it the more complicated real crisis response becomes.)

by cks at October 18, 2014 04:55 AM

Giri Mandalika

Blast from the Past : The Weekend Playlist #7

Previous playlists:

    #1 (50s, 60s and 70s) | #2 (80s) | #3 (80s) | #4 (80s) | #5 (80s) | #6 (90s)

Audio-Visual material courtesy: YouTube. Other information: Wikipedia.

1. Fatboy Slim / Norman Cook - Brimful of Asha (1998)

A remix. Original by UK band Cornershop.

2. Vanilla Ice - Ice Ice Baby (1990)

3. Beck - Loser (1993)

4. Primus - Mr. Krinkle (1993)

5. Tool - Stinkfist (1996)

If you don't mind watching dark videos, look for the official Stinkfist video on YouTube.

6. P.M. Dawn - Set Adrift On Memory Bliss (1991)

7. Primitive Radio Gods - Standing Outside A Broken Phone Booth (1996)

No traces of the official video anywhere on the web, for some reason.

8. Blues Traveler - Run-Around (1995)

Grammy winner.

9. KoRn - A.D.I.D.A.S. (1997)

Under Pressure mix. Another dark song, one that has nothing to do with the sportswear brand Adidas.

10. Chumbawamba - Tubthumping (1997)

One-hit wonder.

by Giri Mandalika (noreply@blogger.com) at October 18, 2014 01:00 AM

October 17, 2014

Everything Sysadmin

Usenix LISA early registration discount expires soon!

Register by Mon, October 20 and take advantage of the early bird pricing.

I'll be teaching tutorials on managing oncall, team-driven sysadmin tools, upgrading live services and more. Please register soon and save!

https://www.usenix.org/conference/lisa14

October 17, 2014 05:28 PM

Standalone Sysadmin

VM Creation Day - PowerShell and VMware Automation

I should have ordered balloons and streamers, because Monday was VM creation day on my VMware cluster.

In addition to a 3-node production-licensed vSphere cluster, I run a 10-node cluster specifically for academic purposes. One of those purposes is building and maintaining classroom environments. A lot of professors maintain a server or two for their courses, but our Information Assurance program here goes above and beyond in terms of VM utilization. Every semester, I've got to deal with the added load, so I figured if I'm going to document it, I might as well get a blog entry while I'm at it.

Conceptually, the purpose of this process is to allow an instructor to create a set of virtual machines (typically between 1 and 4 of them), collectively referred to as a 'pod', which will serve as a lab for students. Once this set of VMs is configured exactly as the professor wants, and they have signed off on them, those VMs become the 'Gold Images', and then each student gets their own instance of these VMs. A class can have between 10 and 70 students, so this quickly becomes a real headache to deal with, hence the automation.

Additionally, because these classes are Information Assurance courses, it's not uncommon for the VMs to be configured in an insecure manner (on purpose) and to be attacked by other VMs, and to generally behave in a manner unbecoming a good network denizen, so each class is cordoned off onto its own VLAN, with its own PFsense box guarding the entryway and doing NAT for the several hundred VMs behind the wall. The script needs to automate the creation of the relevant PFsense configs, too, so that comes at the end.

I've written a relatively involved PowerShell script to do my dirty work for me, but it's still a long series of things to go from zero to working classroom environment. I figured I would spend a little time to talk about what I do to make this happen. I'm not saying it's the best solution, but it's the one I use, and it works for me. I'm interested in hearing if you've got a similar solution going on. Make sure to comment and let everyone know what you're using for these kinds of things.

The process is mostly automated hard parts separated by manual staging, because I want to verify sanity at each step. This kind of thing happens infrequently enough that I'm not completely trusting of the process yet, mostly due to my own ignorance of all of the edge cases that can cause failures. To the right, you'll see a diagram of the process.

In the script, the first thing I do is include functions that I stole from an awesome post on Subnet Math with PowerShell from Indented!, a software blog by Chris Dent. Because I'm going to be dealing with the DHCP config, it'll be very helpful to be able to have functions that understand what subnet boundaries are, and how to properly increment IP addresses.

I need to make sure that, if this PowerShell script is running, we are actually loading the VMware PowerCLI cmdlets. We can do that like this:


if ( ( Get-PSSnapin -name VMware.VimAutomation.Core -ErrorAction SilentlyContinue ) -eq $null ) {
    Add-PSSnapin VMware.VimAutomation.Core
}

For the class itself, this whole process consists of functions to do what needs to be done (or "do the needful" if you use that particular phrase), and it's fairly linear; each step requires the prior one to be completed. What I've done is to create an object that represents the course as a whole, and then add the appropriate properties and methods. I don't actually need a lot of the power of OOP, but it provides a convenient way to keep everything together. Here's an example of the initial class setup:


$IA = New-Object psobject

# Lets add some initial values
Add-Member -InputObject $IA -MemberType NoteProperty -Name ClassCode -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name Semester -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name Datastore -Value "FASTDATASTORENAME"
Add-Member -InputObject $IA -MemberType NoteProperty -Name Cluster -Value "IA Program"
Add-Member -InputObject $IA -MemberType NoteProperty -Name VIServer -Value "VSPHERE-SERVER"
Add-Member -InputObject $IA -MemberType NoteProperty -Name IPBlock -Value "10.0.1.0"
Add-Member -InputObject $IA -MemberType NoteProperty -Name SubnetMask -Value "255.255.0.0"
Add-Member -InputObject $IA -MemberType NoteProperty -Name Connected -Value $false
Add-Member -InputObject $IA -MemberType NoteProperty -Name ResourcePool -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name PodCount -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name GoldMasters -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name Folder -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name MACPrefix -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name ConfigDir -Value ""
Add-Member -InputObject $IA -MemberType NoteProperty -Name VMarray -Value @()

These are just the values that almost never change. Since we're using NAT, and we're not routing to that network, and every class has its own dedicated VLAN, we can use the same IP block every time without running into a problem. The blank values are there just as placeholders, and those values will be filled in as the class methods are invoked.

At the bottom of the script, which is where I spend most of my time, I set per-class settings:


$IA.ClassCode = "ia1234"
$IA.Semester = "Fall-2014"
$IA.PodCount = 35
$IA.GoldMasters = @(
    @{
        vmname = "ia1234-win7-gold-20141014"
        osname = "win7"
        tcp    = 3389
        udp    = ""
    },
    @{
        vmname = "ia1234-centos-gold-20141014"
        osname = "centos"
        tcp    = ""
        udp    = ""
    },
    @{
        vmname = "ia1234-kali-gold-20141014"
        osname = "kali"
        tcp    = "22"
        udp    = ""
    }
)

We set the class code, semester, and pod count simply. These will be used to create the VM names, the folders, and resource groups that the VMs live in. The GoldMaster array is a data structure that has an entry for each of the gold images that the professor has created. It contains the name of the gold image, plus a short code that will be used to name the VM instances coming from it, and has a placeholder for the tcp and udp ports which need to be forwarded from the outside to allow internal access. I don't currently have the code in place that allows me to specify multiple port forwards, but that's going to be added, because I had a professor request 7(!) forwarded ports per VM in one of their classes this semester.

As you can see in the diagram, I'm using Linked Clones to spin up the students' pods. This has the advantage of saving disk space and of completing quickly. Linked clones operate on a snapshot of the original disk image. Rather than actually have the VMs operate on the gold images, I do a full clone of the VM over to a faster datastore than the Ol' Reliable NetApp.

We add a method to the $IA object like this:


Add-Member -InputObject $IA -MemberType ScriptMethod -Name createLCMASTERs -Value {
    # This is the code that converts the gold images into LCMASTERs
    # Because you need to put a template somewhere, it makes sense to put it
    # into the folder that the VMs will eventually live in themselves (thus saving
    # yourself the effort of locating the right folder twice).
    Param()
    Process {
        ... stuff goes here
    }
}

The core of this method is the following block, which actually performs the clone:


if ( ! (Get-VM -Name $LCMASTERName) ) {
    try {
        $presnap = New-Snapshot -Name ("Autosnap: " + $(Get-Date).toString("yyyMMdd")) -VM $GoldVM -Confirm:$false

        $cloneSpec = New-Object VMware.Vim.VirtualMachineCloneSpec
        $cloneSpec.Location = New-Object VMware.Vim.VirtualMachineRelocateSpec
        $cloneSpec.Location.Pool = ($IA.ResourcePool | Get-View).MoRef
        $cloneSpec.Location.host = ($vm | Get-VMHost).MoRef
        $cloneSpec.Location.Datastore = ($IA.Datastore | Get-View).MoRef
        $cloneSpec.Location.DiskMoveType = [VMware.Vim.VirtualMachineRelocateDiskMoveOptions]::createNewChildDiskBacking
        $cloneSpec.Snapshot = ($GoldVM | Get-View).Snapshot.CurrentSnapshot
        $cloneSpec.PowerOn = $false

        ($GoldVM | Get-View).CloneVM( $LCMasterFolder.MoRef, $LCMASTERName, $cloneSpec)

        Remove-Snapshot -Snapshot $presnap -Confirm:$false
    }
    catch [Exception] {
        Write-Host "Error: " $_.Exception.Message
        exit
    }
} else {
    Write-Host "Template found with name $LCMasterName - not recreating"
}



If you're interested in doing this kind of thing, make sure you check out the docs for the createNewChildDiskBacking setting.

After the Linked Clone Masters have been created, then it's a simple matter of creating the VMs from each of them (using the $IA.PodCount value to figure out how many we need). They end up getting named something like $IA.ClassCode-$IA.Semester-$IA.GoldMasters[#].osname-pod$podcount which makes it easy to figure out what goes where when I have several classes running at once.

After the VMs have been created, we can start dealing with the network portion. I used to spin up all of the VMs, then loop through them and pull the MAC addresses to use with the DHCP config, but there were problems with that method. I found that a lot of the time I need to rerun this script a few times per class, either because I've screwed something up or the instructor needs to make changes to the pod. When that happened, EACH TIME I had to re-generate the DHCP config (which is easy) and then manually insert it into PFsense (which is super-annoying).

Rather than do that every time, I eventually realized that it's much easier just to dictate what the MAC address for each machine is; then it doesn't matter how often I rerun the script, the DHCP config doesn't change. (And yes, I'm using DHCP, but with static leases, which is necessary because of the port forwarding).

Here's what I do:

Add-Member -InputObject $IA -MemberType ScriptMethod -Name assignMACs -Value {
    Param()
    Process {
        $StaticPrefix = "00:50:56"
        if ( $IA.MACPrefix -eq "" ) {
            # Since there isn't already a prefix set, it's cool to make one randomly
            $IA.MACPrefix = $StaticPrefix + ":" + ("{0:X2}" -f (Get-Random -Minimum 0 -Maximum 63) )
        }
        $machineCount = 0
        $IA.VMarray | ForEach-Object {
            $machineAddr = $IA.MACPrefix + ":" + ("{0:X4}" -f $machineCount).Insert(2,":")

            $vm = Get-VM -Name $_.name
            $networkAdapter = Get-NetworkAdapter -VM $vm
            Write-Host "Setting $vm to $machineAddr"
            Set-NetworkAdapter -NetworkAdapter $networkAdapter -MacAddress $machineAddr -Confirm:$false
            $IA.VMarray[$machineCount].MAC = $machineAddr
            $IA.VMarray[$machineCount].index = $machineCount
            $machineCount++
        }
    }
}

As you can see, this randomly assigns a MAC address in the vSphere range. Sort of. The fourth octet is randomly selected between 00 and 3F, and then the last two octets are incremented starting from 00. Optionally, the fourth octet can be specified, which is useful in a re-run of the script so that the DHCP config doesn't need to be re-generated.

After the MAC addresses are assigned, the IPs can be determined using the network math:


Add-Member -InputObject $IA -MemberType ScriptMethod -Name assignIPs -Value {
    # This method really only assigns the IP to the object.
    Param()
    Process {
        # It was tempting to assign a sane IP block to this network, but given the
        # tendency to shove God-only-knows how many people into a class at a time,
        # let's not be bounded by reasonable or sane. /16 it is.
        # First 50 IPs are reserved for the gateway plus potential gold images.
        $currentIP = Get-NextIP $IA.IPBlock 2
        $IA.VMarray | ForEach-Object {
            $_.IPAddr = $currentIP
            $currentIP = Get-NextIP $currentIP 2
        }
    }
}

This is done by naively giving every other IP to a machine, leaving the odd IP addresses between them open. I've had to massage this before, where a large pod of 5-6 VMs all needed incremental IPs and then a skip between pods, but I've done those mostly as one-offs. I don't think I need to build in a lot of flexibility because those are relatively rare cases, but it wouldn't be that hard to develop a scheme for it if you needed one.

After the IPs are assigned, you can create the DHCP config. Right now, I'm using an ugly hack, where I basically just print out the top of the DHCP config, then loop through the VMs outputting XML the whole way. It's ugly, and I'm not going to paste it here, but if you download a DHCPD XML file from PFsense, then you can basically see what I'm doing. I then do the same thing with the NAT config.

Because I'm still running these functions manually, I have these XML-creation methods printing output, but it's easy to see how you could have them redirect output to a text file. And if you were super-cool, you could use something like this example from MSDN where you spin up an instance of IE:


$ie = new-object -com "InternetExplorer.Application"
$ie.navigate("http://localhost/MiniCalc/Default.aspx")
... and so on

Anyway, I've spun up probably thousands of VMs using this script (or previous instances of it). It's saved me a lot of time, and if you have to manage bulk-VMs using vSphere, and you're not automating it (using PowerCLI, or vCloud Director, or something else), you really should be. And if you DO, what do you do? Comment below and let me know!

Thanks for reading all the way through!

by Matt Simmons at October 17, 2014 03:16 PM

Google Blog

Through the Google lens: search trends October 10-16

Diet secrets from Zach Galifianakis, and cord cutting from a cable company?! Here's a look at another topsy-turvy week in search.

A cast of characters
Search will always have its fair share of characters and this week was no different. First up, moviegoers learned who’s next in line for Hollywood’s superhero treatment when Ezra Miller, star of The Perks of Being a Wallflower, landed the title role in the 2018 film The Flash. And whispers are swirling in Tinseltown that Gal Gadot's already impressive resume—she’s set to play the world’s most famous Amazonian, Wonder Woman—will soon get another stellar addition, the lead female role in a remake of Ben-Hur.

But they weren’t the only celebrities to get the Internet buzzing. Comedian and fan favorite Zach Galifianakis caused a stir on the trends charts after he revealed a much thinner version of himself on the red carpet of the New York Film Festival. When a reporter asked Galifianakis if he had made any lifestyle changes to lose the weight, he responded with a straight face, “No, I'm just... I'm dying.” Clearly Galifianakis isn’t sharing his weight loss secrets.

Out with the old, in with the new
HBO has seen the light! This week the premium television network announced that they will launch a new stand-alone service for fans of its TV shows. Soon, homes without a cable subscription can sign up for HBO Go and get their fill of Game of Thrones and other HBO shows with just an Internet connection—leading people to wonder if this is the beginning of the end for cable providers.

Consumers also had a lot of new mobile devices to choose from this week, starting with our own line of Nexus gadgets like the Nexus 6 running the latest version of Android, 5.0 Lollipop. Meanwhile, Apple announced an updated version of the iPad.

The show’s just getting started
Is it awards show season already? It’s not—but that’s not stopping searchers from looking ahead. The Internet rejoiced when How I Met Your Mother and Gone Girl star Neil Patrick Harris said “Hosting the 2015 Academy Awards? Challenge accepted!” But with the Oscars red carpet still months away, searchers had their sights set on another celebrity bash: Paul Rudd's keg party… at his mom’s house… in the suburbs of Kansas City. What else are you supposed to do when mom’s out of town and the KC Royals just punched a ticket to the World Series after a nearly 30-year hiatus?

Tip of the week
‘Tis the season for pumpkin spice beers? Next time you’re in a new town and looking to grab a cold one just say “Ok Google, show me pubs near my hotel” and find your new favorite haunt.


by Emily Wood (noreply@blogger.com) at October 17, 2014 02:36 PM

Chris Siebenmann

My experience doing relatively low level X stuff in Go

Today I wound up needing a program that spoke the current Firefox remote control protocol instead of the old -remote based protocol that Firefox Nightly just removed. I had my choice between either adding a bunch of buffer mangling to a very old C program that already did basically all of the X stuff necessary or trying to do low-level X things from a Go program. The latter seemed much more interesting and so it's what I did.

(The old protocol was pretty simple but the new one involves a bunch of annoying buffer packing.)

Remote controlling Firefox is done through X properties, which is a relatively low level part of the X protocol (well below the usual level of GUIs and toolkits like GTK and Qt). You aren't making windows or drawing anything; instead you're grubbing around in window trees and getting obscure events from other people's windows. Fortunately Go has low level bindings for X in the form of Andrew Gallant's X Go Binding and his xgbutil packages for them (note that the XGB documentation you really want to read is for xgb/xproto). Use of these can be a little bit obscure so it very much helped me to read several examples (for both xgb and xgbutil).

All told, the whole experience was pretty painless. Most of the stumbling blocks I ran into were because I don't really know X programming and because I was effectively translating from an older X API (Xlib), which my original C program was using, to XCB, which is what XGB's API is based on. This involved a certain amount of working out what the old functions the code was calling actually did, and then figuring out how to translate them into XGB and xgbutil stuff (mostly the latter, because xgbutil puts a nice veneer over a lot of painstaking protocol bits).

(I was especially pleased that my Go code for the annoying buffer packing worked the first time. It was also pretty easy and obvious to write.)

One of the nice little things about using Go for this is that XGB turns out to be a pure Go binding, which means it can be freely cross compiled. So now I can theoretically do Firefox remote control from essentially any machine I remotely log into around here. Someday I may have a use for this, perhaps for some annoying system management program that insists on spawning something to show me links.

(Cross machine remote control matters to me because I read my email on a remote machine with a graphical program, and of course I want to click on links there and have them open in my workstation's main Firefox.)

Interested parties who want either a functional and reasonably commented example of doing this sort of stuff in Go or a program to do lightweight remote control of Unix Firefox can take a look at the ffox-remote repo. As a bonus I have written down in comments what I now know about the actual Firefox remote control protocol itself.

by cks at October 17, 2014 04:55 AM

Byron Miller

Austin Puppet Users Group – Join our Meetup!

It’s been too long since our initial meetup, so I’m thrilled to be getting some dates on the calendar. Right now, we plan on having the meetup on the 2nd Tuesday of each month from 6:30 pm until 8ish, with a special meetup on the 28th of October so we can have a PuppetConf 2014 recap and […]

by byronm at October 17, 2014 12:41 AM

October 16, 2014

Ubuntu Geek

UbuTricks – Script to install the latest versions of several games and applications in Ubuntu

UbuTricks is a program that helps you install the latest versions of several games and applications in Ubuntu.

UbuTricks is a Zenity-based graphical script with a simple interface. Although early in development, it aims to provide a simple, graphical way of installing updated applications in Ubuntu 14.04 and future releases.
(...)
Read the rest of UbuTricks – Script to install the latest versions of several games and applications in Ubuntu (220 words)



by ruchi at October 16, 2014 11:17 PM

Aaron Johnson

Day 6: Volcanos, bubbling mud pots, swimming in nature baths and cows pooping: Iceland

NO VOMIT! We had an entire night with no one vomiting! Breakfast bright and early again and then we hit the road, big day today.

We drove for about 30 minutes and bagged our first geocache at a place called Tveir Brú and then did a nice hike up to a waterfall:

Said geocache was NOT kid friendly so again, if you’re ever in Iceland driving around Tveir Brú with small children, make sure they walk up to the edge of the 50 foot cliff where the cache is hidden with their head up, not staring at the GPS.

Second stop was at Krafla which apparently is a “caldera”, where we did a short (2 miles?) walk out over a flat plain:

to a small hill with a bunch of fissure vents and bubbling pools of water:

On the way to the hill Reed made friends with about 10 lovely Japanese people, all of whom gave him (or he gave them?) a high five.

A 10-minute drive later and we were at Námafjall, which is a geothermal area with boiling mudpools and steaming fumaroles and, if you’re not wearing disposable booties over your shoes like all the people getting off the tour buses were, a giant mess for the car. Beck couldn’t take the smell and had to go back to the car (pretty sure he just wanted to finish the book he was reading) but we forced the little dudes to walk around with us. Mud pots were cool but I think this was my favorite:

We did lunch out of the trunk (peanut butter and jelly, apples and if you finished your sandwich, a cookie or two) and then we were back on the road again on our way to Lake Mývatn. Our first stop was at a place called Hverfjall, which is “… a tephra cone or tuff ring volcano in northern Iceland, to the east of Mývatn” as my friends at Wikipedia say. We did the 4×4 gravel road out to the base of the volcano and after a bit of prodding to get up this trail:

everyone made it to the top of the mountain, although not without us having to break out the “You’re a mountain lion! Growl like one!” strategy to get certain people up the hill that weren’t excited about climbing the mountain:

which everyone had to join in on:

but eventually all led to this:

Cool spot.

Our slave driver navigator then proceeded to direct us to Dimmuborgir (a large area of unusually shaped lava fields east of Mývatn) where we did yet another short hike out to see one of the structures which supposedly looks like an old cathedral:

Pretty sure at this point that everyone was really cold and tired and we completely lucked out because the very next thing we did after getting back into the car was a bath in the natural hot springs:

which was AMAZING for all involved, especially people that had floaties. Highly recommended if you’re ever driving through northern Iceland after a long day of volcano hiking and mud pot viewing.

The day only got better, though, because I found an amazing restaurant / farm on Foursquare called Vogafjos Cowshed Cafe, where we (adults) not only had an EPIC meal (braised lamb shanks and pan-fried Arctic char) but we got to watch cows pooping and peeing and then eventually get milked, which if you’re between the ages of 2 and 5 is PRETTY COOL:

Finally, we drove to the hotel, which was another 30 miles away and everyone hit the sack.
Stats:

  • Weird mud pots and fissures: too many to count
  • People that had to be encouraged to make mountain lion sounds to bring out their inner lion to make it up the mountain: 1
  • People vomiting at night: 0 (WOOHOO!)
  • Geocaches: 2

by ajohnson at October 16, 2014 10:28 PM

Day 5: The Farm and lots of driving: Iceland

We spent the night at a farm (Brunnholl) that, like most places, was in the absolute middle of nowhere and got there the night before after it got dark so when we woke up (after the middle one threw up a minimum of 10 times in the middle of the night) we had some breakfast and then took a bit of time to look around. First thing we noticed outside was a border collie named Mila, who was carrying a stick and very obviously wanted me to play with her. The dudes and I exited through the side of the dining room and stumbled on to not only a very playful and determined dog but also a trampoline (partially frozen) and a sandbox full of black sand, which all combined to occupy us for at least 15 minutes before we discovered the other side of the farm, starting with the ATV:

which I’m pretty sure Reed would have figured out how to drive away if the key had been in the ignition. But better than the ATV were the frozen puddles, all of which had to be stomped on or hit with a stick and the Icelandic horses that they had in pasture, one of which I became very close friends with:

Pretty sure she wanted to get in the car with us to go on the rest of the trip, sadly we had no room for her. We ended up sticking around the farm and meeting the cows, hanging with the horses and generally having a relaxing farm morning until about 10:30 or 11am, much later than our normal departures.

This turned out to be a good thing because there wasn’t much to see, or at least there wasn’t much that we stopped to see, on Day 5. We ended up nabbing a geocache (always good to get out, stretch the legs, pee and get some little dude energy out) that was on a side road and then made it to have lunch at Kaffi Steinn in Djúpivogur, which is a teensy little town right on the water.

Back in the saddle an hour later, we drove and drove… and then on a whim I pulled off at a black sand beach that turned out to have some good climbing and rock throwing facilities that gave everyone a breather from being in the car:

and then we turned inland and drove through some beautiful mountain ranges:

although the entire country is a giant beautiful mountain range in some ways (that would be a good bet, by the way: I doubt you could be anywhere in Iceland on a clear day and not be able to see a giant mountain range somewhere).

We eventually made it to our final destination, which turned out to be a newly renovated hotel called Gistihúsið Egilsstöðum in Egilsstaðir which was VERY nice compared to where we had been staying. We dropped our stuff off and then immediately got back in the car to go and see a lake that supposedly had a monster in it, dropped off our first “trackable” geocache on the way to that:

and then drove over to see a waterfall that ended up being a hike that we couldn’t make before the sun went down. Dinner at Subway because it was cheap. Note: no meatball subs in Europe.
Stats:

  • Ice puddles smashed: too many to count
  • Icelandic horses that are my best friend: 1
  • People vomiting at night: 1
  • Geocaches: 2

by ajohnson at October 16, 2014 09:43 PM

SysAdmin1138

The new economy and systems administration

"Over the next few decades demand in the top layer of the labor market may well centre on individuals with high abstract reasoning, creative, and interpersonal skills that are beyond most workers, including graduates."
-Economist, vol413/num8907, Oct 4, 2014, "Special Report: The Third Great Wave. Productivity: Technology isn't Working"

The rest of the Special Report lays out a convincing argument that people who have automation-creation as part of their primary job duties are in for quite a bit of growth, and that people in industries subject to automation are going to have a hard time of it. This has a direct impact on sysadminly career direction.

In the past decade Systems Administration has been moving away from mechanics who deploy hardware, install software and fix problems, and towards Engineers who are able to build automation for provisioning new computing instances and installing application frameworks, and who know how to troubleshoot problems with all of that. In many ways we're a specialized niche of Software Engineering now, and that means we can ride the rocket with them. If you want to continue to have a good job in the new industrial revolution, keep plugging along and don't become the dragon in the datacenter people don't talk to.

Abstract Reasoning

Being able to comprehend how a complex system works is a prime example of abstract reasoning. Systems Administration is more than just knowing the arcana of init, grub, or WMI; we need to know how systems interact with each other. This is a skill that has been a pre-requisite for Senior Sysadmins for several decades now, so isn't new. It's already on our skill-path. This is where System Engineers make their names, and sometimes become Systems Architects.

Creative

This has been less on our skill-path, but is definitely something we've been focusing on in the past decade or so. Building large automation systems, even with frameworks such as Puppet or Chef, takes a fair amount of both abstract reasoning and creativity. If you're good at this, you've got 'creative' down.

This has implications for the lower rungs of the sysadmin skill-ladder. Brand new sysadmins are going to be doing less racking-and-stacking and more parsing and patching of Ruby or Ruby-like DSLs.

Interpersonal Skills

This is where sysadmins tend to fall down. A lot of us got into this gig because we didn't have to talk to people who weren't other sysadmins. Technology made sense, people didn't.

This skill is more a reflection of the service-oriented economy, and sysadmins are only sort of that, but our role in product creation and maintenance is ever more social these days. If you're one of two sysadmin-types in a company with 15 software engineers, you're going to have to learn how to have a good relationship with software engineers. In olden days, only very senior sysadmins had to have the Speaker to Management skill, now even mid-levels need to be able to speak coherently to technical and non-technical management.

It is no coincidence that many of the tutorials at conferences like LISA are aimed at building business and social skills in sysadmins. It's worth your time to attend them, since your career advancement depends on it.


Yes, we're well positioned to do well in the new economy. We just have to make a few changes we've known about for a while now.

by SysAdmin1138 at October 16, 2014 05:10 PM

Rands in Repose

The First Addition to the Cloud Classification System in Half a Century

But soon after launching the site, Pretor-Pinney received a couple pictures that didn’t quite fit into existing classifications. One image, taken from the 12th floor of an office building in Cedar Rapids, Iowa, looked positively apocalyptic — a violent and undulating thing menacing the city skyline. “They struck me as being rather different from the normal undulatus clouds,”


by rands at October 16, 2014 03:44 PM

Everything Sysadmin

Results of the PuppetConf 2014 Raffle

If you recall, the fine folks at Puppet Labs gave me a free ticket to PuppetConf 2014 to give away to a reader of this blog. Here's a report from our lucky winner!


Conference Report: PuppetConf 2014

by Anastasiia Zhenevskaia

You never know when you'll be lucky enough to win a ticket to PuppetConf, one of the greatest conferences of the year. My "moment" happened just 3 weeks before the conference and let me dive into things I'd never thought about.

As someone who has worked mostly in front-end development, I was always a little scared and puzzled by more complicated things. Fortunately, the conference helped me understand how important, and how simple, all these processes can be. I was impressed by the personality of every speaker. Their eyes were full of passion; their presentations were clear, informative and breathtaking. Their attitude towards the things they work on: exceptional. These are people you want to work with, share ideas with, and create amazing things with.

I'm so glad that I got this opportunity, and I wish everybody could get this chance and taste the atmosphere of Puppet!

October 16, 2014 02:28 PM

Server Density

A guide to handling incidents, downtime and outages

Outages and downtime are inevitable. Designing your systems to handle failure is a key part of modern infrastructure architecture, and it makes it possible to survive most problems. However, there will be incidents you didn't think about, software bugs you didn't catch, and other events that result in downtime for your service.

Microsoft, Amazon and Google spend $billions every quarter and even they still have outages. How much do you spend?

Some companies seem to have constant problems and suffer for it unnecessarily. Regular outages ultimately become unacceptable, but if you adopt a few key principles and design your systems properly, customers can forgive you the few times you do have service incidents.

Step 1: Planning

If critical alerts result in panic and chaos then you deserve to suffer from the incident! There are a number of things you can do in advance to ensure that when something does go wrong, everyone on your team knows what they should be doing.

  • Put in place the right documentation. This should be easily accessible, searchable and up to date. We use Google Docs for this.
  • Use proper config management, be it Puppet, Chef, Ansible, SaltStack or some other system, to be able to make mass changes to your infrastructure in a controlled manner. It also helps your team understand novel issues, because the code that defines the setup is easily accessible.

Unexpected failures

Be aware of your whole system. Unexpected failures can come from unusual places. Are you hosted on AWS? What happens if they suffer an outage and you need to use Slack or Hipchat for internal communication? Are you hosted on Google Cloud? What happens if your GMail is unavailable during a Google Cloud outage? Are you using a data center within the city you live in? What happens if there’s a weather event and the phone service is knocked out?

Step 2: Be ready to handle the alerts

Some people hate being on call, others love it! Either way, you need a system to handle on call rotations, escalating issues to other members of the team, planning for reachability and allowing people to go off-call after incidents. We use PagerDuty on a weekly rotation through the team and consider things like who is available, internet connectivity, illness, holidays and looping in product engineering so issues waking people up can be resolved quickly.

(Screenshot: PagerDuty on-call calendar)

More and more outages are being caused by software bugs getting into production, because it's never just a single thing that goes wrong: a cascade of problems culminates in downtime. So you need rotations amongst different teams, such as frontend engineering, not just ops.

Step 3: Deal with it, using checklists

Have a defined process in place, ready to run through whenever the alerts go off. Using a checklist removes unnecessary thinking so you can focus on the real problem, and it ensures key actions are taken and not forgotten. Have a channel for communication, both internally and externally: there's nothing worse than being the customer of a service that is down and having no idea whether they're working on it or not.

(Screenshot: incident-handling checklist in Google Docs)

Step 4: Write up a detailed postmortem

This is the opportunity to win back trust. If you follow the steps above and provide accurate, useful information during the outage so people know what is going on, this is the chance to write it up, explain what happened, what went wrong and crucially, what you are going to do to prevent it from happening again. Outages highlight unknown system flaws and it’s important to tell your users that the hole no longer exists, or is in the process of being closed.

Interested in learning more?

We are going live on the internet in the form of a Q&A webinar on the 11th November 2014 @ 18:30 BST. We’ll be discussing things to consider when handling incidents, on-call rotations and outage status page communications. Join us for free!

The post A guide to handling incidents, downtime and outages appeared first on Server Density Blog.

by David Mytton at October 16, 2014 12:00 PM

Chris Siebenmann

Don't use dd as a quick version of disk mirroring

Suppose, not entirely hypothetically, that you initially set up a server with one system disk but have come to wish that it had a mirrored pair of them. The server is in production and in-place migration to software RAID requires a downtime or two, so as a cheap 'in case of emergency' measure you stick in a second disk and then clone your current system disk to it with dd (remember to fsck the root filesystem afterwards).

(This has a number of problems if you ever actually need to boot from the second disk, but let's set them aside for now.)

Unfortunately, on a modern Linux machine you have just armed a time bomb that is aimed at your foot. It may never go off, or it may go off more than a year and a half later (when you've forgotten all about this), or it may go off the next time you reboot the machine. The problem is that modern Linux systems identify their root filesystem by its UUID, not its disk location, and because you cloned the disk with dd you now have two different filesystems with the same UUID.

(Unless you do something to manually change the UUID on the cloned copy, which you can. But you have to remember that step. On extN filesystems, it's done with tune2fs's -U argument; you probably want '-U random'.)
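
Concretely, the check and the fix look something like this (device names illustrative):

# blkid will show you the duplicated UUID:
blkid /dev/sda1 /dev/sdb1

# fsck the clone, then give it a fresh random UUID:
e2fsck -f /dev/sdb1
tune2fs -U random /dev/sdb1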

Most of the time, the kernel and initramfs will probably see your first disk first and inventory the UUID on its root partition first and so on, and thus boot from the right filesystem on the first disk. But this is not guaranteed. Someday the kernel may get around to looking at sdb1 before it looks at sda1, find the UUID it's looking for, and mount your cloned copy as the root filesystem instead of the real thing. If you're lucky, the cloned copy is so out of date that things fail explosively and you notice immediately (although figuring out what's going on may take a bit of time and in the mean time life can be quite exciting). If you're unlucky, the cloned copy is close enough to the real root filesystem that things mostly work and you might only have a few little anomalies, like missing log files or mysteriously reverted package versions or the like. You might not even really notice.

(This is the background behind my recent tweet.)

by cks at October 16, 2014 06:14 AM

Google Webmasters

Best practices for XML sitemaps & RSS/Atom feeds

Webmaster level: intermediate-advanced

Submitting sitemaps can be an important part of optimizing websites. Sitemaps enable search engines to discover all pages on a site and to download them quickly when they change. This blog post explains which fields in sitemaps are important, when to use XML sitemaps and RSS/Atom feeds, and how to optimize them for Google.

Sitemaps and feeds

Sitemaps can be in XML sitemap, RSS, or Atom formats. The important difference between these formats is that XML sitemaps describe the whole set of URLs within a site, while RSS/Atom feeds describe recent changes. This has important implications:

  • XML sitemaps are usually large; RSS/Atom feeds are small, containing only the most recent updates to your site.
  • XML sitemaps are downloaded less frequently than RSS/Atom feeds.

For optimal crawling, we recommend using both XML sitemaps and RSS/Atom feeds. XML sitemaps will give Google information about all of the pages on your site. RSS/Atom feeds will provide all updates on your site, helping Google to keep your content fresher in its index. Note that submitting sitemaps or feeds does not guarantee the indexing of those URLs.

Example of an XML sitemap:

<?xml version="1.0" encoding="utf-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
 <url>
   <loc>http://example.com/mypage</loc>
   <lastmod>2011-06-27T19:34:00+01:00</lastmod>
   <!-- optional additional tags -->
 </url>
 <url>
   ...
 </url>
</urlset>

Example of an RSS feed:

<?xml version="1.0" encoding="utf-8"?>
<rss>
 <channel>
   <!-- other tags -->
   <item>
     <!-- other tags -->
     <link>http://example.com/mypage</link>
     <pubDate>Mon, 27 Jun 2011 19:34:00 +0100</pubDate>
   </item>
   <item>
     ...
   </item>
 </channel>
</rss>

Example of an Atom feed:

<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
 <!-- other tags -->
 <entry>
   <link href="http://example.com/mypage" />
   <updated>2011-06-27T19:34:00+01:00</updated>
   <!-- other tags -->
 </entry>
 <entry>
   ...
 </entry>
</feed>

“other tags” refers to tags that are either optional or required by their respective standards. We recommend specifying the required tags for Atom/RSS, as they will help you appear on other properties that might use these feeds, in addition to Google Search.

Best practices

Important fields

XML sitemaps and RSS/Atom feeds are, at their core, lists of URLs with metadata attached to them. The two most important pieces of information for Google are the URL itself and its last modification time:

URLs

URLs in XML sitemaps and RSS/Atom feeds should adhere to the following guidelines:

  • Only include URLs that can be fetched by Googlebot. A common mistake is including URLs disallowed by robots.txt (which cannot be fetched by Googlebot) or URLs of pages that don't exist.
  • Only include canonical URLs. A common mistake is to include URLs of duplicate pages. This increases the load on your server without improving indexing.

Last modification time

Specify a last modification time for each URL in an XML sitemap and RSS/Atom feed. The last modification time should be the last time the content of the page changed meaningfully. If a change is meant to be visible in the search results, then the last modification time should be the time of this change.

  • XML sitemaps use <lastmod>
  • RSS uses <pubDate>
  • Atom uses <updated>

Be sure to set or update the last modification time correctly:

  • Specify the time in the correct format: W3C Datetime for XML sitemaps, RFC3339 for Atom and RFC822 for RSS.
  • Only update modification time when the content changed meaningfully.
  • Don’t set the last modification time to the current time whenever the sitemap or feed is served.

XML sitemaps

XML sitemaps should contain URLs of all pages on your site. They are often large and update infrequently. Follow these guidelines:

  • For a single XML sitemap: update it at least once a day (if your site changes regularly) and ping Google after you update it.
  • For a set of XML sitemaps: maximize the number of URLs in each XML sitemap. The limit is 50,000 URLs or a maximum size of 10MB uncompressed, whichever is reached first. Ping Google each time an XML sitemap is updated (or once for the sitemap index, if that's used), as shown below. A common mistake is to put only a handful of URLs into each XML sitemap file, which usually makes it harder for Google to download all of these XML sitemaps in a reasonable time.
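
(At the time of writing, pinging Google is just an HTTP GET against the sitemap ping endpoint, with your sitemap's full URL as the parameter, along the lines of: http://www.google.com/ping?sitemap=http://example.com/sitemap.xml)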

RSS/Atom

RSS/Atom feeds should convey recent updates of your site. They are usually small and updated frequently. For these feeds, we recommend:

  • When a new page is added or an existing page meaningfully changed, add the URL and the modification time to the feed.
  • So that Google doesn't miss updates, the RSS/Atom feed should contain all updates made since at least the last time Google downloaded it. The best way to achieve this is by using PubSubHubbub. The hub will propagate the content of your feed to all interested parties (RSS readers, search engines, etc.) in the fastest and most efficient way possible.

Generating both XML sitemaps and Atom/RSS feeds is a great way to optimize crawling of a site for Google and other search engines. The key information in these files is the canonical URL and the time of the last modification of pages within the website. Setting these properly, and notifying Google and other search engines through sitemap pings and PubSubHubbub, will allow your website to be crawled optimally and represented accordingly in search results.

If you have any questions, feel free to post them here, or to join other webmasters in the webmaster help forum section on sitemaps.

by Google Webmaster Central (noreply@blogger.com) at October 16, 2014 05:24 AM


Administered by Joe. Content copyright by their respective authors.