Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

December 19, 2014

Chris Siebenmann

Our likely long road to working 10G-T on OmniOS

I wrote earlier about our problems with Intel 10G-T on our OmniOS fileservers and how we've had to fall back to 1G networking. Obviously we'd like to change that and go back to 10G-T. The obvious option was another sort of 10G-T chipset besides Intel's. Unfortunately, as far as we can see Intel's chipsets are the best supported option; e.g. Broadcom seems even less likely to work well (or at all, and we later had problems with even a Broadcom 1G chipset under OmniOS). So we've scratched that idea; at this point it's Intel or bust.

We really want to reproduce our issues outside of production. While we've set up a test environment and put load on it, we've so far been unable to make it fall over in any clearly networking related way (OmniOS did lock up once under extreme load, but that might not be related at all). We're going to have to keep trying in the new year; I don't know what we'll do if we can't reproduce things.

(We also aren't currently trying to reproduce the dual port card issue. We may switch to this at some point.)

As I said in the earlier entry, we no longer feel that we can trust the current OmniOS ixgbe driver in production. That means going back to production needs an updated driver. At the moment I don't think anyone in the Illumos community is actively working on this (which I can't blame them for), although I believe there's some interest in doing a driver update at some point.

It's possible that we could find some money to sponsor work on updating the ixgbe driver to the current upstream Intel version, and so get it done that way (assuming that this sort of work can be sponsored for what we can afford, which may be dubious). Unfortunately our constrained budget situation means that I can't argue very persuasively for exploring this until we have some confidence that the current upstream Intel driver would fix our issues. This is hard to get without at least some sort of reproduction of the problem.

(What this says to me is that I should start trying to match up driver versions and read driver changelogs. My guess is that the current Linux driver is basically what we'd get if the OmniOS driver was resynchronized, so I can also look at it for changes in the areas that I already know are problems, such as the 20msec stall while fondling the X540-AT2 ports.)

While I don't want to call it 'ideal', I would settle for a way to reproduce the dual card issue with simple artificial TCP network traffic. We could then change the server from OmniOS to an up-to-date Linux to see if the current Linux driver avoids the problem under the same load, then use this as evidence that commissioning an OmniOS driver update would get us something worthwhile.
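(As a toy stand-in for load generators like iperf, here's a self-contained Python sketch of artificial TCP traffic: a sink that counts bytes and a client that blasts a known volume at it. This is purely illustrative, not our actual test harness; to stress a NIC you'd run many copies in parallel across the real network instead of localhost.)

```python
import socket
import threading

def sink_server(host="127.0.0.1"):
    """Accept one connection and count the bytes it delivers."""
    srv = socket.socket()
    srv.bind((host, 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    result = {}

    def run():
        conn, _ = srv.accept()
        total = 0
        while True:
            buf = conn.recv(65536)
            if not buf:
                break
            total += len(buf)
        conn.close()
        result["bytes"] = total

    t = threading.Thread(target=run)
    t.start()
    return srv, srv.getsockname()[1], t, result

def blast(port, mb):
    """Push `mb` megabytes of zeros at the sink as fast as possible."""
    c = socket.create_connection(("127.0.0.1", port))
    chunk = b"\x00" * 65536
    for _ in range(mb * 16):     # 16 x 64 KB chunks per megabyte
        c.sendall(chunk)
    c.close()

srv, port, t, result = sink_server()
blast(port, mb=4)
t.join()
srv.close()
```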

None of this seems likely to be very fast. At this point, getting 10G-T back in six months seems extremely optimistic.

(The pessimistic view of when we might get our new fileserver environment back to 10G-T is obviously 'never'. That has its own long-term consequences that I don't want to think about right now.)

Sidebar: the crazy option

The crazy option is to try to learn enough about building and working on OmniOS so that I can build new ixgbe driver versions myself, and then attempt either spot code modifications or my own hack at a larger scale driver resynchronization. While there is a part of me that finds this idea both nifty and attractive, my realistic side argues strongly that it would take far too much of my time for too little reward. Becoming a vaguely competent Illumos kernel coder doesn't seem like it's exactly going to be a small job, among other issues.

(But if there is an easy way to build new OmniOS kernel components, it'd be useful to learn at least that much. I've looked into this a bit but not very much.)

by cks at December 19, 2014 06:02 AM

Ubuntu Geek

Attic – Deduplicating backup program

Attic is a deduplicating backup program written in Python. The main goal of Attic is to provide an efficient and secure way to backup data. The data deduplication technique used makes Attic suitable for daily backups since only the changes are stored.
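(A deduplicating store in miniature, purely illustrative: Attic's real design uses content-defined chunking and encryption, while this sketch uses fixed-size chunks. It shows why "only the changes are stored" falls out of deduplication naturally: each unique chunk is keyed by its hash and stored once.)

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real tools use KB-to-MB chunks

def backup(data: bytes, store: dict) -> list:
    """Split data into fixed-size chunks, store each unique chunk once
    (keyed by its SHA-256 digest), and return the chunk-id list that
    describes this backup."""
    ids = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # only new chunks consume space
        ids.append(digest)
    return ids

def restore(ids: list, store: dict) -> bytes:
    return b"".join(store[d] for d in ids)

store = {}
day1 = backup(b"aaaabbbbcccc", store)   # stores 3 chunks
day2 = backup(b"aaaabbbbdddd", store)   # only the changed chunk is added
```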


by ruchi at December 19, 2014 12:31 AM

December 18, 2014


WARNING: The following packages cannot be authenticated!

We run several (read: hundreds) of servers that are still running Debian 6 (Squeeze). A few months ago, we started seeing the following errors coming from the daily apt cronjob: "WARNING: The following packages cannot be authenticated!" When running apt-get update, the following errors dump out:

W: GPG error: squeeze-backports Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AD6246925553
W: GPG error: squeeze-lts Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AD6246925553
W: GPG error: squeeze-updates Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AD6246925553

There are two ways to solve the problem:

  1. apt-get install debian-archive-keyring will install all the keys you need.
  2. If you want to install a specific key, then apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8B48AD6246925553 will do what you need. Obviously, adjust the key (and your preferred keyserver) accordingly.

by Scott Hebert at December 18, 2014 05:33 PM

Chris Siebenmann

The potential end of public clients at the university?

Recently, another department asked our campus-wide sysadmin mailing list for ideas on how to deal with keyloggers, after having found one. They soon clarified that they meant physical keyloggers, because that's what they'd found. As I read the ensuing discussion I had an increasing sinking feeling that the answer was basically 'you can't' (which was pretty much the consensus answer; no one had really good ideas and several people knew things that looked attractive but didn't fully work). And that makes me pretty unhappy, because it means that I'm not sure public clients are viable any more.

Here at the university there's long been a tradition and habit of various sorts of public client machines, ranging from workstations in computer labs in various departments to terminals in libraries. All of these uses depend crucially on the machines being at least non-malicious, where we can assure users that using the machine in front of them is not going to give them massive problems like compromised passwords and everything that ensues from that.

(A machine being non-malicious is different from it being secure, although secure machines are usually non-malicious as well. A secure machine is doing only what you think it should be, while a non-malicious machine is at least not screwing its user. A machine that does what the user wants instead of what you want is insecure but hopefully not malicious (and if it is malicious, well, the user did it to themselves, which is admittedly not a great comfort).)

Keyloggers, whether software or physical, are one way to create malicious machines. Once upon a time they were hard to get, expensive, and limited. These days, well, not so much, based on some hardware projects I've heard of; I'm pretty sure you could build a relatively transparent USB keylogger with tens of megabytes of logging capacity as an undergrad final project with inexpensive off the shelf parts. Probably you can already buy fully functional ones for cheap on eBay. What was once a pretty rare and exclusive preserve is now available to anyone who is bored and sufficiently nasty to go fishing. As this incident illustrates, some number of our users probably will do so (and it's only going to get worse as this stuff gets easier to get and use).

If we can't feasibly keep public machines from being made malicious, it's hard to see how we can keep offering and operating them at all. I'm now far from convinced that this is possible in most settings. Pessimistically, it seems like we may have reached the era where it's much safer to tell people to bring their own laptops, tablets, or phones (which they often will anyways, and will prefer using).

(I'm not even convinced it's a good idea to have university provided machines in graduate student offices, many of which are shared and in practice are often open for people who look like they belong to stroll through and fiddle briefly with a desktop.)

PS: Note that keyloggers are on the easy scale of the damage you can do with nasty USB hardware. There's much worse possible, but of course people really want to be able to plug their own USB sticks and so on into your public machines.

Sidebar: Possible versus feasible here

I'm pretty sure that you could build a kiosk style hardware enclosure that would make a desktop's actual USB ports and so on completely inaccessible, so that people couldn't unplug the keyboard and plug in their keylogger. I'm equally confident that this would be a relatively costly piece of custom design and construction that would also consume a bunch of extra physical space (and the physical space needed for public machines is often a big limiting factor on how many seats you can fit in).

by cks at December 18, 2014 04:44 AM

Yellow Bricks

Virtualization networking strategies…

I was asked a question on LinkedIn about the different virtualization networking strategies from a host point of view. The question came from someone who recently had 10GbE infrastructure introduced into his data center; the network was originally architected with 6 x 1 Gbps carved up into three bundles of 2 x 1 Gbps. Three types of traffic each use their own pair of NICs: Management, vMotion and VM. 10GbE was added to the current infrastructure and the question which came up was: should I use 10GbE while keeping my 1 Gbps links for things like management, for instance? The classic model has a nice separation of network traffic, right?

Well, I guess from a visual point of view the classic model is nice, as it provides a lot of clarity around which type of traffic uses which NIC and which physical switch port. However, in the end you typically still end up leveraging VLANs, so on top of the physical separation you also provide a logical separation. This logical separation is the most important part if you ask me. Especially when you leverage Distributed Switches and Network IO Control, you can create a great, simple architecture which is fairly easy to maintain and implement, both from a physical and virtual point of view. Yes, from a visual perspective it may be a bit more complex, but I think the flexibility and simplicity that you get in return definitely outweigh that. I would recommend, in almost all cases, keeping it simple. Converge physically, separate logically.

"Virtualization networking strategies…" originally appeared on Follow me on twitter - @DuncanYB.

Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at December 18, 2014 12:47 AM

Daniel E. Markle

Wargame Trilogy by Eugen

As more games become available for Linux on Steam, I've been adding to my collection. Although I don't get much time to play, I like to support bringing titles to the platform with my dollars. Of the ones I have tried, the Wargame series by Eugen Systems has been a standout.

Purchasing the trilogy during a sale, I started with Wargame: AirLand Battle due to it having a tutorial as a starting point. Going into this game thinking it was a more typical RTS, I quickly found out just tossing your units at the enemy results in a quick loss. The game has a dizzying array of units modeled on real world counterparts, and requires knowledge of how to use reconnaissance and correct unit selection and tactics. Unlike other games where one can just crank out more units if a strategy doesn't work, this game has 'decks' of available units. Just like in a real battle resources for each action are limited, and there's no time spent building bases and collecting resources.

In one of the tutorial missions I went in not realizing how the game works, and began to use the usual RTS tactic of building tanks then working my way up the map to clear each area in turn. By the time I got to the top, I had plenty of 'resources' (action points in this game), but was confused; why couldn't I build more tanks? By this time the AI was tearing my tanks to pieces with their air force and I was getting quickly overrun by their tanks. After reading more online about why I lost and how the game actually works, I tried another strategy next time: instead of wasting resources fighting battles that didn't matter, I flanked the enemy and cut off their air corridor. This time, with logical tactics, I won easily. I've been hooked ever since.

by (Daniel E. Markle) at December 18, 2014 12:35 AM

December 17, 2014

Chris Siebenmann

Does having a separate daemon manager help system resilience?

One of the reasons usually put forward for having a separate daemon manager process (instead of having PID 1 do this work) is that doing so increases overall system resilience. As the theory goes, PID 1 can be made minimal and extremely unlikely to crash (unlike a more complex PID 1), while if the more complicated daemon manager does crash it can be restarted.

Well, maybe. The problem is the question of how well you can actually take over from a crashed daemon manager. Usually this won't be an orderly takeover and you can't necessarily trust anything in any auxiliary database that the daemon manager has left behind (since it could well have been corrupted before or during the crash). You need to have the new manager process step in and somehow figure out what was (and is) running and what isn't, then synchronize the state of the system back to what it's supposed to be, then pick up monitoring everything.

The simple case is a passive init system. Since the init system does not explicitly track daemon state, there is no state to recover on a daemon manager restart and resynchronization can be done simply by trying to start everything that should be started (based on runlevel and so on). We can blithely assume that the 'start' action for everything will do nothing if the particular service is already started. Of course this is not very realistic, as passive init systems generally don't have daemon manager processes that can crash in the first place.

For an active daemon manager, I think that at a minimum what you need is some sort of persistent and stable identifier for groups of processes that can be introspected and monitored from an arbitrary process. The daemon manager starts processes for all services under an identifier determined from their service name; then when it crashes and you have to start a new one, the new one can introspect the identifiers for all of the groups to determine what services are (probably) running. Unfortunately there are lots of complications here, including that this doesn't capture the state of 'one-shot' services without persistent processes. This is of course not a standard Unix facility, so no fully portable daemon manager can do this.
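(As a sketch of the recovery step, purely illustrative: it mimics a simplified /sys/fs/cgroup-style layout using an ordinary directory tree so it can run anywhere without privileges. A restarted manager walks the per-service groups left behind by its crashed predecessor and rebuilds a view of what is running; note how an empty group for a one-shot service tells it nothing definitive.)

```python
import os
import tempfile

def surviving_services(cgroup_root: str) -> dict:
    """Recover service -> member PIDs from per-service groups on disk.
    Each subdirectory is named after a service and holds a 'procs'
    file listing member PIDs (a stand-in for cgroup.procs)."""
    state = {}
    for svc in os.listdir(cgroup_root):
        procs = os.path.join(cgroup_root, svc, "procs")
        if not os.path.isfile(procs):
            continue
        with open(procs) as f:
            state[svc] = [int(line) for line in f if line.strip()]
    return state

# Demo: a fake tree such as a restarted manager might find after a crash.
root = tempfile.mkdtemp()
for svc, pids in {"sshd": [123], "cron": [456, 789], "oneshot": []}.items():
    os.makedirs(os.path.join(root, svc))
    with open(os.path.join(root, svc, "procs"), "w") as f:
        f.write("".join("%d\n" % p for p in pids))

state = surviving_services(root)
```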

It's certainly the case that a straightforward, simple daemon manager will not be able to take over from a crashed instance of itself. Being able to do real takeover requires both system-specific features and a relatively complex design and series of steps on startup, and still leaves you with uncertain or open issues. In short, having a separate daemon manager does not automatically make the system any more resilient under real circumstances. A crashing daemon manager is likely to force a system reboot just as much as a crashing PID 1 does.

However I think it's fair to say that under normal circumstances a separate daemon manager process crashing (instead of PID 1 crashing) will buy you more time to schedule a system outage. If the only thing that needs the daemon manager running is starting or stopping services and you already have all normal services started up, your system may be able to run for days before you need to reboot it. If your daemon manager is more involved in system operation or is routinely required to restart services, well, you're going to have (much) less time depending on the exact details.

by cks at December 17, 2014 04:54 AM

Ubuntu Geek

December 16, 2014

Chris Siebenmann

How a Firefox update just damaged practical security

Recently, Mozilla pushed out Firefox 34 as one of their periodic regular Firefox updates. Unfortunately this shipped with a known incompatible change that broke several extensions, including the popular Flashblock extension. Mozilla had known about this problem for months before the release; in fact the bug report was essentially filed immediately after the change in question landed in the tree, and the breakage was known when the change was proposed. Mozilla people didn't care enough to do anything in particular about this beyond (I think) blacklisting the extension as non-functional in Firefox 34.

I'm sure that this made sense internally in Mozilla and was justified at the time. But in practice this was a terrible decision, one that's undoubtedly damaged pragmatic Firefox security for some time to come. Given that addons create a new browser, the practical effect of this decision is that Firefox's automatic update to Firefox 34 broke people's browsers. When your automatic update breaks people's browsers, congratulations, you have just trained them to turn your updates off. And turning automatic updates off has very serious security impacts.

The real world effect of Mozilla's decision is that Mozilla has now trained some number of users that if they let Mozilla update Firefox, things break. Since users hate having things break, they're going to stop allowing those updates to happen, which will leave them exposed to real Firefox security vulnerabilities that future updates would fix (and we can be confident that there will be such updates). Mozilla did this damage not for a security critical change but for a long term cleanup that they decided was nice to have.

(Note that Mozilla could have taken a number of methods to fix the popular extensions that were known to be broken by this change, since the actual change required to extensions is extremely minimal.)

I don't blame Mozilla for making the initial change; trying to make this change was sensible. I do blame Mozilla's release process for allowing this release to happen knowing that it broke popular extensions and doing nothing significant about it, because Mozilla's release process certainly should care about the security impact of Mozilla's decisions.

by cks at December 16, 2014 03:15 AM

Yellow Bricks

Geek Whisperers episode: Marketing, Blogging & Community

I had the honor a couple of weeks ago to be on the Geek Whisperers podcast. It was a very entertaining conversation with John, Amy and Matt. The podcast is about how I got started with blogging and communities, and many other random topics.

There are a couple things which I wanted to share. First and foremost, blogging and social media have nothing to do with marketing for me personally; it is what I do, it is who I am. Everyone has a different way of digesting information, learning new things, dealing with complex matters or even dealing with emotions… Some sit down behind a whiteboard, some discuss it with their colleagues; I write / share.

Secondly, when it comes to social media I am (more and more) a believer in the “social aspect”. I’ve seen the rise of the “message boards” and online communities and all the flame wars that came with them, and I’ve seen the same on twitter / facebook etc. Recently I decided to be more hardline when it comes to social media and following people / accepting friend requests. If you look at facebook for instance, which is more personal for me than twitter, I have pictures of my kids up there, so in that case I want to make sure I “trust” the person before I accept. And then there is the whole unfollowing / unfriending thing… Anyway, enough said… just have a listen.

And euuhm, thanks Matt for the nice pic of me riding a unicorn shooting rainbows, not sure what to think of it yet :)

"Geek Whisperers episode: Marketing, Blogging & Community" originally appeared on Follow me on twitter - @DuncanYB.


by Duncan Epping at December 16, 2014 01:12 AM

Google Blog

A Year in Search: the moments that defined 2014

Every year, we reflect on the moments that made us laugh, smile from ear to ear, or stay gripped to our screens in our annual Year in Search. In 2014, we were struck by the death of a beloved comedian, and watched news unfold about a horrific plane crash and a terrifying disease. We were captivated by the beautiful game, and had fun with birds, a bucket of ice, and a frozen princess.

Watch our video to rediscover the events, people and topics that defined 2014:

Wishing the genie goodbye
“You're only given a little spark of madness. You mustn't lose it.” The passing of beloved comedian and actor Robin Williams shook the world, bringing many people online to search for more information and to remember—and putting Williams in the #1 spot on our global trends charts. There was even an uptick in searches related to depression tests and mental health in the days following his death. We revisited his iconic roles in movies like Aladdin and Dead Poets Society and found solace in gifs and memes that captured Williams’ spirit.

All the world’s a stage
Nothing brings people together like sports, and 2014 had one of the biggest athletic events in recent memory. The World Cup in Brazil had its fair share of unforgettable moments and had everyone glued to their TVs and mobile devices all summer. From Luis Suarez’s bite heard around the world, to Tim Howard's superman performance vs. Belgium, to Germany’s incredible run to their fourth title, the competition certainly lived up to its reputation and topped the charts.

While sports brought people together, so did a good cause. This year, awareness for Amyotrophic Lateral Sclerosis, better known as ALS or Lou Gehrig’s Disease, reached an all-time high around the world due to the viral ALS Ice Bucket Challenge. As celebrities and everyday people alike braved a bucket of ice cold water for a cause, donations to help find a cure for the illness hit almost $100 million.

Into the unknown
How could a plane just vanish into thin air? In the wake of the disappearance of Malaysia Airlines Flight 370, that question propelled the mystery to the global trends charts. As the investigation continued on the ground and online, people stayed hopeful for a happy ending despite the dim odds: searches for “mh370 found” outnumbered searches for “mh370 lost.”

The full list of our top 10 global trending searches, along with more on these top searches and more, is on our Year in Search site.

Explore the stories from the year, one chapter at a time
On our Year in Search site, you can take an in-depth look at the stories that made 2014 unforgettable. From the rise of the selfie, to understanding if we search for “how” more than “why,” each chapter shares a glimpse into the people and events that drove this year forward.
We've also made it easier to find the trending topics of the year directly from Google Search. For the first time, a simple search for [google 2014] will give you a peek at what made the top trending lists from around the world. And you can follow more insights from the year with #YearInSearch. So take a moment to appreciate what this year had to offer. It’ll be 2015 before you know it.

by Emily Wood at December 16, 2014 12:00 AM

December 15, 2014


Google Webmasters

Google Public DNS and Location-Sensitive DNS Responses

Webmaster level: advanced

Recently the Google Public DNS team, in collaboration with Akamai, reached an important milestone: Google Public DNS now propagates client location information to Akamai nameservers. This effort significantly improves the accuracy of approximately 30% of the location-sensitive DNS responses returned by Google Public DNS. In other words, client requests to Akamai hosted content can be routed to closer servers with lower latency and greater data transfer throughput. Overall, Google Public DNS resolvers serve 400 billion responses per day and more than 50% of them are location-sensitive.

DNS is often used by Content Distribution Networks (CDNs) such as Akamai to achieve location-based load balancing by constructing responses based on clients’ IP addresses. However, CDNs usually see the DNS resolvers’ IP address instead of the actual clients’ and are therefore forced to assume that the resolvers are close to the clients. Unfortunately, the assumption is not always true. Many resolvers, especially those open to the Internet at large, are not deployed at every single local network.

To solve this issue, a group of DNS and content providers, including Google, proposed an approach to allow resolvers to forward the client’s subnet to CDN nameservers in an extension field in the DNS request. The subnet is a portion of the client’s IP address, truncated to preserve privacy. The approach is officially named edns-client-subnet or ECS.

This solution requires that both resolvers and CDNs adopt the new DNS extension. Google Public DNS resolvers automatically probe to discover ECS-aware nameservers and have observed the footprint of ECS support from CDNs expanding steadily over the past years. By now, more than 4000 nameservers from approximately 300 content providers support ECS. The Google-Akamai collaboration marks a significant milestone in our ongoing efforts to ensure DNS contributes to keeping the Internet fast. We encourage more CDNs to join us by supporting the ECS option.

For more information about Google Public DNS, please visit our website. For CDN operators, please also visit “A Faster Internet” for more technical details.

by Google Webmaster Central at December 15, 2014 08:00 AM

Chris Siebenmann

Why your 64-bit Go programs may have a huge virtual size

For various reasons, I build (and rebuild) my copy of the core Go system from the latest development source on a regular basis, and periodically rebuild the Go programs I use from that build. Recently I was looking at the memory use of one of my programs with ps and noticed that it had an absolutely huge virtual size (Linux ps's VSZ field) of around 138 GB, although it had only a moderate resident set size. This nearly gave me a heart attack, since a huge virtual size with a relatively tiny resident set size is one classical sign of a memory leak.

(Builds with earlier versions of Go tended to have much more modest virtual sizes, on the order of 32 MB to 128 MB depending on how long the program had been running.)

Fortunately this was not a memory leak. In fact, experimentation soon demonstrated that even a basic 'hello world' program had that huge a virtual size. Inspection of the process's /proc/<pid>/smaps file (cf) showed that basically all of the virtual space used was coming from two inaccessible mappings, one roughly 8 GB long and one roughly 128 GB. These mappings had no access permissions (they disallowed reading, writing, and executing) so all they did was reserve address space (without ever using any actual RAM). A lot of address space.

It turns out that this is how Go's current low-level memory management likes to work on 64-bit systems. Simplified somewhat, Go does low level allocations in 8 KB pages taken from a (theoretically) contiguous arena; what pages are free versus allocated is stored in a giant bitmap. On 64-bit machines, Go simply pre-reserves the entire memory address space for both the bitmaps and the arena itself. As the runtime and your Go code starts to actually use memory, pieces of the arena bitmap and the memory arena will be changed from simple address space reservations into memory that is actually backed by RAM and being used for something.

(Mechanically, the bitmap and arena are initially mmap()'d with PROT_NONE. As memory is used, it is remapped with PROT_READ|PROT_WRITE. I'm not confident that I understand what happens when it's freed up, so I'm not going to say anything there.)
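(You can see the same trick from any language on Linux. Below is an illustrative ctypes sketch, not the Go runtime's actual code, that reserves a gigabyte of address space with PROT_NONE; the constants are the Linux x86-64 values and differ on other Unixes.)

```python
import ctypes

libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]

PROT_NONE = 0                # no read, write, or execute permission
MAP_PRIVATE = 0x02           # Linux x86-64 values
MAP_ANONYMOUS = 0x20
MAP_FAILED = ctypes.c_void_p(-1).value

SIZE = 1 << 30               # reserve 1 GiB of address space

# This only reserves address space: it raises the process's virtual
# size (ps VSZ) by 1 GiB but consumes no RAM and no RSS unless the
# region is later remapped PROT_READ|PROT_WRITE and actually touched.
# The kernel reclaims the reservation when the process exits.
addr = libc.mmap(None, SIZE, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
```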

All of this is the case for the current post Go 1.4 development version of Go. Go 1.4 and earlier behave differently with much lower virtual sizes for running 64-bit programs, although in reading the Go 1.4 source code I'm not sure I understand why.

As far as I can tell, one of the interesting consequences of this is that 64-bit Go programs can use at most 128 GB of memory for most of their allocations (perhaps all of them that go through the runtime, I'm not sure).

For more details on this, see the comments in src/runtime/malloc2.go and in mallocinit() in src/runtime/malloc1.go.

I have to say that this turned out to be more interesting and educational than I initially expected, even if it means that watching ps is no longer a good way to detect memory leaks in your Go programs (mind you, I'm not sure it ever was). As a result, the best way to check this sort of memory usage is probably some combination of runtime.ReadMemStats() (perhaps exposed through net/http/pprof) and Linux's smem program or the like to obtain detailed information on meaningful memory address space usage.

PS: Unixes are generally smart enough to understand that PROT_NONE mappings will never use up any memory and so shouldn't count against things like system memory overcommit limits. However they generally will count against a per-process limit on total address space, which likely means that you can't really use such limits and run post 1.4 Go programs. Since total address space limits are rarely used, this is probably not likely to be an issue.

Sidebar: How this works on 32-bit systems

The full story is in the mallocinit() comment. The short version is that the runtime reserves a large enough arena to handle 2 GB of memory (which 'only' takes 256 MB) but only reserves 512 MB of address space out of the 2 GB it could theoretically use. If the runtime later needs more memory, it asks the OS for another block of address space and hopes that it is in the remaining 1.5 GB of address space that the arena covers. Under many circumstances the odds are good that the runtime will get what it needs.

by cks at December 15, 2014 06:18 AM

Ubuntu Geek

Install Unsettings (Unity GUI) in Ubuntu 14.10

Unsettings is a graphical configuration program for the Unity desktop environment that lets you change some of the Unity settings. Unsettings can only change your user's settings; you can't use it to change global settings or do anything else that needs root privileges. You can use Unsettings to change the themes for GTK, window manager, icons and cursors, but it doesn't support the installation of new themes.


by ruchi at December 15, 2014 12:59 AM

December 14, 2014

Adams Tech Talk

Sniffing the Network

This article is intended to provide a simple demonstration of how easy it is to sniff/intercept traffic on various types of networks, and serve as a warning to utilize secure methods of communication on a) untrusted networks and b) known networks with the potential for untrusted clients or administrators.

The first consideration is the topology of the network we’re connected to. Consider five common scenarios:

  1. Wired ethernet hub network: Hubs are becoming more and more obsolete as they are replaced with switches. Multiple devices can be connected to a hub, and any data received by the hub from one device is broadcast out to all other devices. This means that all devices receive all network traffic. Not only is this an inefficient use of bandwidth, but each device is trusted to accept traffic destined for itself and to ignore traffic destined for another node. To sniff such a network, a node simply needs to switch its network interface card to “promiscuous mode”, meaning that it accepts all traffic received.
  2. Wired ethernet switched network: Multiple devices can be connected to a switch, however a switch has greater intelligence than a hub. The switch will inspect the traffic sent on each port, and learn the hardware (MAC) address of the client connected to a particular port. Once learned, the switch will inspect any frames it receives on a port, and forward that frame to the known recipient’s port alone. Other devices connected to the switch will not receive traffic that is not destined for them. This offers enhanced bandwidth usage over a hub. Switches rely on ARP packets, which are easily forged, in order to learn which devices are on which ports.
  3. Wireless open networks: Multiple devices can connect to an open wireless network. All data is broadcast across the network in plain text, and any attacker can sniff/intercept traffic being broadcast across the network. An open wireless network may present the user with a form of hotspot login page before granting internet access, however this does not detract from the network itself being open.
  4. WEP encrypted wireless network: A WEP encrypted network requires a WEP key to encrypt and decrypt network traffic. WEP has long been an outdated and insecure method of wireless network protection, and cracking a wireless network’s WEP key is fast and requires little skill. WEP is not secure. In addition, all clients connected to the network use the same WEP key to connect. That results in any user on the network with the WEP key being able to view any traffic transmitted to and from other nodes on the network.
  5. WPA/WPA2 encrypted network: A WPA/WPA2 encrypted network is significantly more secure than a WEP network. Whilst attacks exist on parts of the protocol, and extensions such as WPS, no known attack is able to recover a complex WPA/WPA2 password within an acceptable period of time. Whilst all clients connect to the network with the same password, the protocol is engineered to create different keystreams between each connected client and the access point. This means that simple sniffing in the traditional sense is not possible on the network.
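To make scenario 1 concrete: once a card is in promiscuous mode, the sniffer simply receives raw frames and decodes them. A minimal sketch of that decoding step (the frame below is synthetic; a real capture would come from a raw packet socket, which needs root):

```python
import struct

def parse_ethernet(frame):
    # The first 14 bytes of every Ethernet frame: destination MAC,
    # source MAC, and the EtherType identifying the payload protocol.
    dst, src, ethertype = struct.unpack('!6s6sH', frame[:14])
    mac = lambda raw: ':'.join('%02x' % octet for octet in raw)
    return mac(dst), mac(src), ethertype, frame[14:]

# A synthetic broadcast frame carrying an ARP payload (EtherType 0x0806):
frame = (b'\xff' * 6                      # dst: broadcast
         + b'\x00\x11\x22\x33\x44\x55'    # src: made-up MAC
         + struct.pack('!H', 0x0806)      # EtherType: ARP
         + b'...payload...')

dst, src, ethertype, payload = parse_ethernet(frame)
print(dst, src, hex(ethertype))
# -> ff:ff:ff:ff:ff:ff 00:11:22:33:44:55 0x806
```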

Now to look at some of the network attacks that can be leveraged in the above scenarios:

  1. Switches maintain a table of MAC addresses, and once this table is full, some switches will revert to hub activity. This allows an attacker to fill the MAC address table with bogus information via spoofed ARP packets, forcing the switch to act as a hub at which point traffic can be sniffed. Another attack involves flooding the switch with ARP packets, claiming that an existing node’s MAC address is in fact on the port that we are connected to. This causes traffic originally destined for the legitimate node to be directed to our switch port allowing us to intercept it. This type of attack can be mitigated using “port security”, where the switch is either manually configured with allowed MAC addresses on each port, or it is set to learn the first MAC address on the port that it receives and refuse to accept other conflicting ARP packets. ARP spoofing applies to wireless and wired networks equally.
  2. Most networks are configured to use DHCP which allows each node to dynamically gain its network configuration settings from a server on the fly. Setting up a rogue DHCP server is trivial, most commonly instructing the nodes to use a malicious gateway that intercepts traffic. This type of attack can be mitigated using “DHCP snooping”.
  3. Wireless clients can communicate with each other, and therefore a malicious client can launch attacks against other clients on a wireless network. This type of attack is mitigated by utilizing “client isolation” – this is typically implemented by the access point intercepting ARP requests for other IPs on the network and responding with its own MAC address. This prevents clients on the network from communicating with each other – they are only permitted to communicate with the access point.
  4. Although WPA/WPA2 networks prohibit simple sniffing, ARP spoofing is a common technique to intercept traffic on such networks.
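The ARP spoofing mentioned in points 1 and 4 is simple to do by hand. A sketch of constructing the spoofed reply (the addresses are made up for illustration; actually transmitting it requires a raw socket and root, which is what tools like arpspoof handle for you):

```python
import socket
import struct

def forge_arp_reply(attacker_mac, spoofed_ip, victim_mac, victim_ip):
    # The 28-byte ARP payload of a spoofed "is-at" reply: it claims that
    # spoofed_ip lives at attacker_mac, poisoning the victim's ARP cache.
    return struct.pack(
        '!HHBBH6s4s6s4s',
        1,                             # hardware type: Ethernet
        0x0800,                        # protocol type: IPv4
        6, 4,                          # hardware/protocol address lengths
        2,                             # operation: reply ("is-at")
        attacker_mac,                  # sender MAC: the attacker
        socket.inet_aton(spoofed_ip),  # sender IP: the impersonated gateway
        victim_mac,
        socket.inet_aton(victim_ip),
    )

# Hypothetical addresses purely for illustration:
pkt = forge_arp_reply(b'\x00\x11\x22\x33\x44\x55', '192.168.0.1',
                      b'\x66\x77\x88\x99\xaa\xbb', '192.168.0.23')
assert len(pkt) == 28
```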

It’s important to reiterate that “secured” networks such as WPA/WPA2 are only secure for the user as far as they trust other users on the network, and the network operator. For that reason an encrypted VPN tunnel or restricting all traffic to encrypted rather than plaintext protocols is essential.

Let’s get into an example: sniffing traffic on a WPA2 network. Unauthorized subversion of traffic on a network is illegal, and so this demonstration was performed on a private test network.

First, we need to connect to the WPA2 network using wpa_supplicant. I created a sample configuration in /etc/wpa_supplicant/test_network.conf containing my network name and PSK (password). wpa_supplicant has a number of configuration options; the simplest configuration is as follows:
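The shape of that minimal configuration is a single network block (the SSID and passphrase here are placeholders, not the values from my test network):

```
network={
    ssid="test_network"
    psk="placeholder-passphrase"
}
```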


Now, to connect to the network:

# wpa_supplicant -Dwext -iwlan0 -c /etc/wpa_supplicant/test_network.conf

Once connected, I gain an IP address using dhclient:

# dhclient wlan0

I’m assigned an IP address and a default gateway. Running tcpdump -i wlan0 -n shows sporadic network broadcasts, but nothing of interest. This makes sense: as per the above explanation, all connected clients are communicating with the access point using different keys and therefore I am unable to decrypt their traffic. Instead, if I use ARP spoofing to claim the MAC address currently assigned to the router, clients will begin routing their internet-bound traffic to my node. Before doing so, I need to enable IP forwarding, allowing my node to forward the clients’ traffic out to the legitimate gateway so that they can continue accessing the internet:

# echo 1 > /proc/sys/net/ipv4/ip_forward
# arpspoof -i wlan0

arpspoof immediately begins sending ARP packets to the network, instructing nodes that the router is found at the MAC address of my own wireless adapter wlan0.

Let’s now use tcpdump again to view any HTTP traffic on the network in ASCII format:

# tcpdump -i wlan0 -n -s 1500 -A tcp port 80

tcpdump soon begins providing output:

21:47:09.699770 IP > Flags [P.], seq 0:1243, ack 1, win 229, options [nop,nop,TS val 11685115 ecr 2023438326], length 1243
..L.x.7.GET /favicon.ico HTTP/1.1
Connection: keep-alive
User-Agent: Mozilla/5.0 (Linux; U; Android 4.1.2; en-gb; GT-I8160 Build/JZO54K) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30
Accept-Encoding: gzip,deflate
Accept-Language: en-GB, en-US
Accept-Charset: utf-8, iso-8859-1, utf-16, *;q=0.7
Accept: */*
Cookie: SID=PTDDDCEBAAB6MKOSEFJolaj8Cuh7lQoJ10TjxIOTd2ekS6blnoAOtdt4q11yArTcyQdluS_56mKZRBDY45AUFnlXwUVytp4F5cj5GlRxMTb2ZoZiPfruVp0CA5_j7T294Ncakx5ymzmN8lTaj8m8EFNGFOOLtgE69YnnlJpYnFWMQqTE8Ux_3kkRQbzjMsJmXmWDQHlQ_a5JENcz7J9ttDq30FhgZYFu7aQItP5m965jrA_WBbTot0jUdZnUov2Wy9Ph0TCAPeTc4lXLYvCSD6Ymzvnw-9F5wrUBovQjiRADmHEuX9V8V9M0LyhyqAI_Cmw_jWVh0SQNpINbnW0oGmMsTUwhLb3BZoJgoGN-O5OTYbfmGRFVyhCLK2i6L2gcxNKCToCA0zWzKBSZjzZi_G7bX2Sqjy6k; HSID=ANi1GHnDIS9IjCj2s; APISID=jB_NbBrcih9k50kY/Ae-busQdQ82QWJWTo; PREF=ID=2BE55a438f68ebb4:U=ce7b384a9ae16929:FF=0:LD=en:TM=1411032446:LM=1411595388:S=bePFJjWQuBb51peq; NID=67=H1W_AjsPoXfHMEWGtLYQM9trpjiQDU2Rp2TPwn9sv6xsMoDmHPdHyO1DvB8WHQGmcDgtpZPa_Ydm5Mb6inwalLc5Gct7N5XnzTFNtk9-9wGCxQC-kmkRR6aQZnHwFRjHOrqkeqNNUUR3MZ160YPSw0gjRkuM4domcv_XJY0LnEAjB20d4rWUHQvien4wPYME0CX5
If-Modified-Since: Tue, 14 Aug 2012 15:19:23 GMT

We can see that we’ve captured a full HTTP request, including the complete cookie data. An attacker could then hijack and replay that cookie to impersonate our session.
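A sketch of what that replay looks like at the protocol level (the host and the truncated cookie value below are placeholders):

```python
# Replaying a sniffed session cookie: the attacker re-sends it verbatim
# and the server resumes the victim's session, no password required.
stolen_cookie = 'SID=PTDDDCEBAAB6...'   # truncated; as captured above
replay_request = (
    'GET / HTTP/1.1\r\n'
    'Host: example.com\r\n'             # placeholder host
    'Cookie: ' + stolen_cookie + '\r\n'
    'Connection: close\r\n'
    '\r\n'
)
# Written to a TCP socket connected to the server, this request is
# indistinguishable from one sent by the victim's own browser.
```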

It’s easier to view and filter traffic visually using Wireshark:


Wireshark has been started and is listening on wlan0 whilst arpspoof continues to run. I’ve set a display filter on the HTTP Host header to show only traffic for the site of interest.

Now to view the stream, we right click on a packet and click “Follow TCP Stream”:



Wireshark now presents us with the entire TCP conversation. Different filters can be set to filter out IMAP or POP3 traffic for example. This attack was conducted on a WPA2 network using ARP spoofing, and can be easily applied to the other network scenarios described above unless adequate precautions are taken.

More advanced attack options

Firesheep is a browser plugin that demonstrates HTTP session hijacking and highlights the insecurities of using unencrypted protocols on open networks.

The scenario above can be extended by using iptables to redirect outbound traffic through application proxies running on our malicious host. sslstrip by Moxie Marlinspike acts as an HTTP proxy server that parses HTML and converts any HTTPS links to HTTP; its intention is to force unaware users to browse the HTTP versions of sites rather than the encrypted HTTPS versions. iptables would be used to redirect outbound traffic through sslstrip.

Fake local services could be run, with traffic redirected through them in the same way as with sslstrip above, to present users with fake inboxes and email scenarios, for example.

Lastly, an intercepting HTTPS server could be utilized that generates fake, self-signed certificates for the host name being requested. Users on the network would be presented with an SSL warning indicating that the certificate was not signed by a trusted CA; however, many users would blindly accept and continue, not understanding the implications of the warning and simply wanting to access the desired remote site.

The best advice for staying secure on untrusted networks is a) using an encrypted VPN tunnel to a trusted remote host with both client and server side validation, and b) using encrypted protocols such as HTTPS, IMAPS, POP3S and SMTPS. It’s crucial not to accept SSL warnings without a thorough understanding of the situation and why the message has arisen.


by Adam Palmer at December 14, 2014 11:54 PM

Chris Siebenmann

How init wound up as Unix's daemon manager

If you think about it, it's at least a little bit odd that PID 1 wound up as the de facto daemon manager for Unix. While I believe that the role itself is part of the init system as a whole, this is not the same thing as having PID 1 do the job and in many ways you'd kind of expect it to be done in another process. As with many things about Unix, I think that this can be attributed to the historical evolution Unix has gone through.

As I see the evolution of this, things start in V7 Unix (or maybe earlier) when Research Unix grew some system daemons, things like crond. Something had to start these, so V7 had init run /etc/rc on boot as the minimal approach. Adding networking to Unix in BSD Unix increased the number of daemons to start (and was one of several changes that complicated the whole startup process a lot). Sun added even more daemons with NFS and YP and so on and either created or elaborated interdependencies among them. Finally System V came along and made everything systematic with rcN.d and so on, which was just in time for yet more daemons.

(Modern developments have extended this even further to actively monitoring and restarting daemons if you ask them to. System V init could technically do this if you wanted, but people generally didn't use inittab for this.)
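For the record, the inittab mechanism was a one-line affair; a hypothetical respawn entry (the daemon name here is made up) would look like:

```
# /etc/inittab format: id:runlevels:action:command
# 'respawn' makes init restart the daemon whenever it exits.
md:2345:respawn:/usr/sbin/mydaemon -f
```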

At no point in this process was it obvious to anyone that Unix was going through a major sea change. It's not as if Unix went in one step from no daemons to a whole bunch of daemons; instead there was a slow but steady growth in both the number of daemons and the complexity of system startup in general, and much of this happened on relatively resource-constrained machines where extra processes were a bad idea. Had there been a single giant step, maybe people would have sat down and asked themselves if PID 1 and a pile of shell scripts were the right approach and said 'no, it should be a separate process'. But that moment never happened; instead Unix basically drifted into the current situation.

(Technically speaking you can argue that System V init actually does do daemon 'management' in another process. System V init doesn't directly start daemons; instead they're started several layers of shell scripts away from PID 1. I call it part of PID 1 because there is no separate process that really has this responsibility, unlike the situation in eg Solaris SMF.)

by cks at December 14, 2014 05:56 AM

December 13, 2014

Aaron Johnson

Links: 12-12-2014

  • The Importance of What You Say | Scott Berkun
    Quote: "Despite dreams of a world in which the best ideas win simply because they should, we live in a world where the fate of ideas hinges on how well you talk about what you’ve made, or what you want to make." Just convinced me to buy his book.
    (categories: speaking writing ideas )

by ajohnson at December 13, 2014 06:30 AM

Chris Siebenmann

There are two parts to making your code work with Python 3

In my not terribly extensive experience so far, in the general case porting your code to Python 3 is really two steps in one, not a single process. First, you need to revise your code so that it runs on Python 3 at all; it uses print(), it imports modules under their new names, and so on. Some amount of this can be automated by 2to3 and similar tools, although not all of it. As I discovered, a great deal of this is basically synonymous with modernizing your code to the current best practice for Python 2.7. I believe that almost all of the necessary changes will still work on Python 2.7 without hacks (certainly things like print() will with the right imports from __future__).

Once your code will theoretically run at all, you need to revise it so that it handles strings as Unicode, and this second step is why calling the whole process 'porting' is not really a good label. The moment you deal with Unicode you need to consider both character encoding conversion points and what you do on errors. Dealing with Unicode is extra work, and confronting it may well require at least a thorough exploration of your code and perhaps a deep rethink of your design. This is not at all like the effort of revising your code to Python 3 idioms.
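A minimal illustration of the decisions Python 3 forces on you at those conversion points (the byte string here is contrived: valid UTF-8 with one bad byte appended):

```python
raw = b'caf\xc3\xa9 \xff'   # 'café ' in UTF-8, plus one invalid byte

# The default policy raises, so encoding errors surface immediately:
try:
    raw.decode('utf-8')
    outcome = 'decoded cleanly'
except UnicodeDecodeError:
    outcome = 'strict decode failed'

# The alternatives are explicit choices, not silent defaults:
replaced = raw.decode('utf-8', errors='replace')   # bad byte -> U+FFFD
ignored = raw.decode('utf-8', errors='ignore')     # bad byte dropped

assert outcome == 'strict decode failed'
assert replaced == 'café \ufffd'
assert ignored == 'café '
```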

(And some people will have serious problems, although future Python 3 versions are dealing with some of the problems.)

Code that has already been written to the latest Python 2.7 idioms will need relatively little revision for Python 3's basic requirements, although I think it always needs some just to cope with renamed modules. Code that was already dealing very carefully with Unicode on Python 2.7 will need little or no revision to deal with Python 3's more forced Unicode model, because it's already effectively operating in that model anyways (although possibly imperfectly in ways that were camouflaged by Python 2.7's handling of this issue).

The direct corollary is that both the amount and type of work you need to do to get your code running under Python 3 depends very much on what it does today with strings and Unicode on Python 2. 'Clean' code that already lives in a Unicode world will have one experience; 'sloppy' code will have an entirely different one. This means that the process and experience of making code work on Python 3 is not at all monolithic. Different people with different code bases will have very different experiences, depending on what their code does (and on how much they need to consider corner cases and encoding errors).

(I think that Python 3 basically just works for almost all string handling if your system locale is a UTF-8 one and you never deal with any input that isn't UTF-8 and so never are confronted with decoding errors. Since this describes a great many people's environments and assumptions, simplistic Python 3 code can get very far. If you're in such a simple environment, the second step of Python 3 porting also disappears; your code works on Python 3 the moment it runs, possibly better than it did on Python 2.)

by cks at December 13, 2014 06:13 AM


ZFS: RAID-Z Resilvering

Solaris 11.2 introduced a new ZFS pool version: 35, Sequential Resilver.

The new feature is supposed to make disk resilvering (disk replacement, hot-spare synchronization, etc.) much faster. It achieves this by first reading ahead some metadata and then trying to read the data to be resilvered in a sequential manner. And it does work!

Here is a real world case, with real data: over 150 million different-sized files, most relatively small. Many of them were deleted and new ones written, etc., so I expect that the data is already fragmented in the pool. The server is a Sun/Oracle X4-2L with 26x 1.2TB 2.5" 10k SAS disks. The 24 disks in front are presented in pass-thru mode and managed by ZFS, configured as 3 RAID-Z pools; the other 2 disks in the rear are configured in RAID-1 by the raid controller and used for the OS. A disk in one of the pools failed, and a hot spare automatically attached:

# zpool status -x
status: One or more devices is currently being resilvered. The pool will
continue to function in a degraded state.
action: Wait for the resilver to complete.
Run 'zpool status -v' to see device specific details.
scan: resilver in progress since Fri Dec 12 21:02:58 2014
3.60T scanned
45.9G resilvered at 342M/s, 9.96% done, 2h45m to go

raidz1-0 DEGRADED 0 0 0
spare-0 DEGRADED 0 0 0
c0t5000CCA01D5EAE50d0 UNAVAIL 0 0 0
c0t5000CCA01D5EED34d0 DEGRADED 0 0 0 (resilvering)
c0t5000CCA01D5BF56Cd0 ONLINE 0 0 0
c0t5000CCA01D5E91B0d0 ONLINE 0 0 0
c0t5000CCA01D5F9B00d0 ONLINE 0 0 0
c0t5000CCA01D5E87E4d0 ONLINE 0 0 0
c0t5000CCA01D5E95B0d0 ONLINE 0 0 0
c0t5000CCA01D5F8244d0 ONLINE 0 0 0
c0t5000CCA01D58B3A4d0 ONLINE 0 0 0
c0t5000CCA01D5EED34d0 INUSE
c0t5000CCA01D5E1E3Cd0 AVAIL

errors: No known data errors

Let's see I/O statistics for the involved disks:

# iostat -xnC 1 | egrep "device| c0$|c0t5000CCA01D5EAE50d0|c0t5000CCA01D5EED34d0..."
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
16651.6 503.9 478461.6 69423.4 0.2 26.3 0.0 1.5 1 19 c0
2608.5 0.0 70280.3 0.0 0.0 1.6 0.0 0.6 3 36 c0t5000CCA01D5E95B0d0
2582.5 0.0 66708.5 0.0 0.0 1.9 0.0 0.7 3 39 c0t5000CCA01D5F9B00d0
2272.6 0.0 68571.0 0.0 0.0 2.9 0.0 1.3 2 50 c0t5000CCA01D5E91B0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t5000CCA01D5EAE50d0
0.0 503.9 0.0 69423.8 0.0 9.7 0.0 19.3 2 100 c0t5000CCA01D5EED34d0
2503.5 0.0 66508.4 0.0 0.0 2.0 0.0 0.8 3 41 c0t5000CCA01D58B3A4d0
2324.5 0.0 67093.8 0.0 0.0 2.1 0.0 0.9 3 44 c0t5000CCA01D5F8244d0
2285.5 0.0 69192.3 0.0 0.0 2.3 0.0 1.0 2 45 c0t5000CCA01D5E87E4d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t5000CCA01D5E1E3Cd0
1997.6 0.0 70006.0 0.0 0.0 3.3 0.0 1.6 2 54 c0t5000CCA01D5BF56Cd0
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
25150.8 624.9 499295.4 73559.8 0.2 33.7 0.0 1.3 1 22 c0
3436.4 0.0 68455.3 0.0 0.0 3.3 0.0 0.9 2 51 c0t5000CCA01D5E95B0d0
3477.4 0.0 71893.7 0.0 0.0 3.0 0.0 0.9 3 48 c0t5000CCA01D5F9B00d0
3784.4 0.0 72370.6 0.0 0.0 3.6 0.0 0.9 3 56 c0t5000CCA01D5E91B0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t5000CCA01D5EAE50d0
0.0 624.9 0.0 73559.8 0.0 9.4 0.0 15.1 2 100 c0t5000CCA01D5EED34d0
3170.5 0.0 72167.9 0.0 0.0 3.5 0.0 1.1 2 55 c0t5000CCA01D58B3A4d0
3881.4 0.0 72870.8 0.0 0.0 3.3 0.0 0.8 3 55 c0t5000CCA01D5F8244d0
4252.3 0.0 70709.1 0.0 0.0 3.2 0.0 0.8 3 53 c0t5000CCA01D5E87E4d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t5000CCA01D5E1E3Cd0
3063.5 0.0 70380.1 0.0 0.0 4.0 0.0 1.3 2 60 c0t5000CCA01D5BF56Cd0
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
17190.2 523.6 502346.2 56121.6 0.2 31.0 0.0 1.8 1 18 c0
2342.7 0.0 71913.8 0.0 0.0 2.9 0.0 1.2 3 43 c0t5000CCA01D5E95B0d0
2306.7 0.0 72312.9 0.0 0.0 3.0 0.0 1.3 3 43 c0t5000CCA01D5F9B00d0
2642.1 0.0 68822.9 0.0 0.0 2.9 0.0 1.1 3 45 c0t5000CCA01D5E91B0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t5000CCA01D5EAE50d0
0.0 523.6 0.0 56121.2 0.0 9.3 0.0 17.8 1 100 c0t5000CCA01D5EED34d0
2257.7 0.0 71946.9 0.0 0.0 3.2 0.0 1.4 2 44 c0t5000CCA01D58B3A4d0
2668.2 0.0 72685.4 0.0 0.0 2.9 0.0 1.1 3 43 c0t5000CCA01D5F8244d0
2236.6 0.0 71829.5 0.0 0.0 3.3 0.0 1.5 3 47 c0t5000CCA01D5E87E4d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t5000CCA01D5E1E3Cd0
2695.2 0.0 72395.4 0.0 0.0 3.2 0.0 1.2 3 45 c0t5000CCA01D5BF56Cd0
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
31265.3 578.9 342935.3 53825.1 0.2 18.3 0.0 0.6 1 15 c0
3748.0 0.0 48255.8 0.0 0.0 1.5 0.0 0.4 2 42 c0t5000CCA01D5E95B0d0
4367.0 0.0 47278.2 0.0 0.0 1.1 0.0 0.3 2 35 c0t5000CCA01D5F9B00d0
4706.1 0.0 50982.6 0.0 0.0 1.3 0.0 0.3 3 37 c0t5000CCA01D5E91B0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t5000CCA01D5EAE50d0
0.0 578.9 0.0 53824.8 0.0 9.7 0.0 16.8 1 100 c0t5000CCA01D5EED34d0
4094.1 0.0 48077.3 0.0 0.0 1.2 0.0 0.3 2 35 c0t5000CCA01D58B3A4d0
5030.1 0.0 47700.1 0.0 0.0 0.9 0.0 0.2 3 33 c0t5000CCA01D5F8244d0
4939.9 0.0 52671.2 0.0 0.0 1.1 0.0 0.2 3 33 c0t5000CCA01D5E87E4d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t5000CCA01D5E1E3Cd0
4380.1 0.0 47969.9 0.0 0.0 1.4 0.0 0.3 3 36 c0t5000CCA01D5BF56Cd0

These are pretty amazing numbers for RAID-Z - and the only reason why a single disk drive can do so many thousands of reads per second is that most of them must be almost ideally sequential. From time to time I see even more amazing numbers:

                    extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
73503.1 3874.0 53807.0 19166.6 0.3 9.8 0.0 0.1 1 16 c0
9534.8 0.0 6859.5 0.0 0.0 0.4 0.0 0.0 4 30 c0t5000CCA01D5E95B0d0
9475.7 0.0 6969.1 0.0 0.0 0.4 0.0 0.0 4 30 c0t5000CCA01D5F9B00d0
9646.9 0.0 7176.4 0.0 0.0 0.4 0.0 0.0 3 31 c0t5000CCA01D5E91B0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t5000CCA01D5EAE50d0
0.0 3478.6 0.0 18040.0 0.0 5.1 0.0 1.5 2 98 c0t5000CCA01D5EED34d0
8213.4 0.0 6908.0 0.0 0.0 0.8 0.0 0.1 3 38 c0t5000CCA01D58B3A4d0
9671.9 0.0 6860.5 0.0 0.0 0.4 0.0 0.0 3 30 c0t5000CCA01D5F8244d0
8572.7 0.0 6830.0 0.0 0.0 0.7 0.0 0.1 3 35 c0t5000CCA01D5E87E4d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t5000CCA01D5E1E3Cd0
18387.8 0.0 12203.5 0.0 0.1 0.7 0.0 0.0 7 57 c0t5000CCA01D5BF56Cd0

It is really good to see the new feature work so well in practice. This feature is what makes RAID-Z much more usable in many production environments. The other feature which complements this one, and also makes RAID-Z much more practical to use, is the RAID-Z/mirror hybrid allocator introduced in Solaris 11 (pool version 29). It makes accessing metadata in RAID-Z much faster.

Both features are only available in Oracle Solaris 11 and not in OpenZFS derivatives, although OpenZFS has interesting new features of its own.

by milek ( at December 13, 2014 01:27 AM

December 12, 2014

Google Blog

Through the Google lens: search trends December 6-11

From The Colbert Report to astronomer Annie Jump Cannon, here's a look at this week's search stars.

The presidency is just my day job
Being the President of the United States is no easy task, but Barack Obama may have just faced his toughest test yet...a seat on The Colbert Report. This is the Commander-in-Chief’s third time on the show, but it’s still no cakewalk with every topic up for grabs—including Obama’s less than ideal approval ratings and his graying hair. Obama proved himself up for the challenge, though, kicking Colbert off his segment and making it his own.

Not only does Obama moonlight as a comedian—it turns out he can also code. At a White House event with 30 middle school students, the President kicked off Hour of Code, a program that encourages young people to develop their computer and software programming skills. With a little help from one of the kids, Obama wrote a single line of JavaScript, “moveForward(100),” to move the tutorial’s character 100 pixels to the right, in the process becoming the first U.S. president to write a computer program.
Winners and losers
Competition on The Voice is heating up; the three finalists were revealed this week. But there’s a twist in the show’s seventh season: to spice things up, its creators introduced a new wildcard spot, bringing the total number of potential finalists to four. Now the nine remaining contestants who didn’t make it to the top three will duke it out for that fourth spot and a shot at singing glory.

While The Voice contestants still have a chance to take home the grand prize, other stars were left out in the cold this week when the Golden Globe nominations included several snubs. Names left off the selection sheet were Angelina Jolie and her upcoming film Unbroken, Christopher Nolan and his much-hyped Interstellar, and Bradley Cooper, who gained 40 pounds to portray Chris Kyle in the biopic film American Sniper. Oh well—there's still the Oscars. Meanwhile, movies Birdman and Boyhood snapped up seven and five nominations, respectively—and the TV category is staying interesting with nods for several Netflix original series, Amazon’s first appearance with Transparent, and two surprise nominations for the CW’s quirky Jane the Virgin.

The sky above
This week, searchers spent a good chunk of their time looking up. The weather was top of mind as the Pineapple Express—no, not the film—hit the San Francisco Bay Area, causing flooding and power outages. The phenomenon gets its name from its origins in the waters near Hawaii, a.k.a. the Pineapple State, where it develops before heading towards the U.S. Pacific Coast.

Even for those of us trapped indoors, searchers got a chance to look at the stars...on our homepage at least. Searchers looked for more information about astronomer Annie Jump Cannon after a Google doodle marked her 151st birthday. Cannon—who was deaf for most of her adult life, and often overshadowed by her colleague Edward C. Pickering—was instrumental in the development of the Harvard Classification system, which categorizes stars by their temperature (whether or not they were nominated for a Golden Globe).

Tip of the week
Need to find something in the apps on your Android phone? Now you can ask your Google app for help—even if it’s in another app. Just say “Ok Google” and then “search YouTube for holiday decorating ideas” or “search Tumblr for Taylor Swift” and jump straight to those results within the other app (if you have it installed).

And come back next week for Google's Year in Search—a review of the people, moments, and events that captured the world's attention.

by Emily Wood ( at December 12, 2014 01:54 PM

Administered by Joe. Content copyright by their respective authors.