Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

October 25, 2014

Carl's Whine Rack

CentOS 7, openssh/openssl

Yesterday I finally gave CentOS 7 a try as a VirtualBox VM. (In the following, when I talk about a guest or a host, it's in the virtualization vernacular.)

I did what I usually do with VB guests: I gave it two network interfaces. The first is configured as NAT, so that the guest can reach the internet without the host needing a second IP for a bridged interface (bridging would be fine at home, but might cause me some trouble at work). The second is configured as host-only with a static IP, so that the host (and other guests) can initiate to the guest. (There’s probably a much easier way of doing this, but it’s worked so far.)

My CentOS experience is primarily with CentOS 5, and several things were really different in C7. (CentOS and Red Hat documentation is typically pretty good and will no doubt help me through some of the following. These are just some of the things I’m stumbling on at the moment.)

There’s no /etc/cron.daily/rpm script, which on C5 writes a list of installed packages to /var/log/rpmpkgs. I use that list a lot, so I copied the script over from a C5 box.

I had a pretty hard time with networking. Neither interface seemed to come up on its own at first. I had to set ONBOOT=yes in the corresponding /etc/sysconfig/network-scripts files, and then the second interface mangled the first interface’s NAT connection. I ended up setting ONBOOT to yes for the first interface (the NAT connection) and to no for the second (host-only) interface. I put an ifconfig statement in rc.local to bring up the second interface, and that (eventually) worked.
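
For reference, the relevant bits of the two ifcfg files ended up looking something like this (device names as on my guest; the address is just an example, using VirtualBox's usual host-only range):

```shell
# /etc/sysconfig/network-scripts/ifcfg-enp0s3 -- first (NAT) interface
DEVICE=enp0s3
BOOTPROTO=dhcp
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-enp0s8 -- second (host-only) interface
DEVICE=enp0s8
BOOTPROTO=static
IPADDR=192.168.56.10    # example address in VirtualBox's default host-only range
NETMASK=255.255.255.0
ONBOOT=no               # brought up from rc.local instead
```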

ifconfig, netstat, and probably a bunch of other useful stuff are in the net-tools package, which isn’t included in a minimal install. And although there’s an rc.local, it’s not executable, and won’t run at boot until you “chmod +x” the thing.

And the interface names are now really weird. Instead of something memorable, traditional, predictable, and sensible like eth0 and eth1, now they are called enp0s3 and enp0s8. (I just had to look those up, because I couldn’t remember them.)

The new C7 guest has a very long list of iptables rules, but /etc/sysconfig/iptables doesn’t exist, so I don’t know where those rules are coming from. Thankfully port 22 is open by default, but I don’t like to run openssh on the default port, so at some point I’ll need to figure out how to fiddle with iptables rules.
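
Presumably those rules come from firewalld, C7's default firewall front-end, which generates iptables rules instead of reading a static /etc/sysconfig/iptables. If so, moving ssh to a nonstandard port would look something like this (port 2222 is just an example):

```shell
# open the new port and (optionally) close the default ssh service
firewall-cmd --permanent --add-port=2222/tcp
firewall-cmd --permanent --remove-service=ssh
firewall-cmd --reload

# SELinux also pins sshd to port 22; label the new port too
# (semanage lives in the policycoreutils-python package)
semanage port -a -t ssh_port_t -p tcp 2222
```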

I use GNU screen all the time. (I know the cool kids like tmux, but, frankly, screw them.) I typically have a screen session in which I’m logged in to several different hosts, and each window is named for the remote host. C7 rewrites the window name in screen, so “Ctrl-A ‘, hostname” no longer works. I don’t know if I need to (somehow) tell screen (on my Ubuntu host) not to allow the window process to rewrite the window title, or if I need to (somehow) tell bash in the C7 guest to be less assertive.
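
My guess is the C7 side: the stock /etc/bashrc on recent Red Hat releases sets PROMPT_COMMAND to emit an escape sequence that retitles the window on every prompt. If that's what's happening here, overriding it per-user on the guest should quiet things down (a sketch, not a tested recipe):

```shell
# ~/.bashrc on the C7 guest: stop the prompt from rewriting screen's window title
unset PROMPT_COMMAND
PS1='[\u@\h \W]\$ '
```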

I’m also having some trouble building openssh from source in C7. The version of openssh that comes with C5 lacks some desirable features in the newer versions, so we tend to build it from source. In just the last version or two of openssh, something has changed such that it won’t build against the version of openssl that comes with C5. So the other day (before messing with C7) I built the newest version of openssl on a C5 box and built openssh against that. That worked, but I see now that by default openssl doesn’t create shared libraries, so the openssh I built linked to openssl statically (which made sshd nearly three times bigger than a dynamically-linked sshd).
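
For the record, openssl will build shared libraries if you ask its config script for them; something like this should get a dynamically-linked sshd (versions and prefix are just examples):

```shell
# build openssl with shared libraries under its own prefix
cd openssl-1.0.1j
./config shared --prefix=/usr/local/openssl
make && make install

# point the openssh build at that openssl, with an rpath so sshd
# finds the shared libraries at runtime
cd ../openssh-6.7p1
./configure --with-ssl-dir=/usr/local/openssl \
            LDFLAGS="-Wl,-rpath,/usr/local/openssl/lib"
make
```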

So far I’ve been unable to build openssh against a source-built openssl on C7. I get one error if I try to link statically, and another error if I try to link dynamically. The version of openssl that comes with C7 is pretty current, so I could just build against that and probably have no problem. Likewise I could just use C7’s version of openssh. But although I’ve enjoyed the stability of C5, everything about it is pretty old at this point. I think that in the future I’d like to build many or all network services from source. Since openssh, apache, stunnel, and several others need openssl, I’d like to keep that current, too.

So I have some work ahead of me. I think C5 hits end-of-life some time in 2017, so I’ve got some time, but the C5 EOL will probably sneak up on me if I let it.

by mbrisby (noreply@blogger.com) at October 25, 2014 12:34 PM

Geeking with Greg

At what point is an over-the-air TV antenna too long to be legal?

You can get over-the-air HDTV signals using an antenna. This antenna gets a better, stronger signal with less interference if it is direct line-of-sight and as near as possible to the broadcast towers. So, you might want an antenna that is up high or even some distance away to get the best signal.

But if you try to do this, you immediately run into a question: At what point does that antenna become too long to be legal, or its signal transmitted in a way that is no longer legal?

Let's say I put an antenna behind my TV hooked up with a wire. That's obviously legal and what many people currently do.

Let's say I put an antenna outside on top of a tree or my garage and run a wire inside. Still seems obviously legal.

Let's say I put an antenna on top of my roof. Still clearly fine.

Let's say I put it on my neighbor's roof and run a wire to my TV. Still ok?

Let's say I put the antenna on my neighbor's roof, but have the antenna connect to my WiFi network and transmit the signal using my local area network instead of using a direct wired cable connection. Still ok?

Let's say I put the antenna on my neighbor's roof, but have the antenna connect to my neighbor's WiFi network and transmit the signal over their WiFi, over the internet, then to my WiFi, instead of using a direct wired cable connection. Still ok?

Let's say I put my antenna on my neighbor's roof, but my neighbor won't do this for free. I have to pay a small amount of rent to my neighbor for the space on his roof used by my antenna. I also have the antenna connect to my neighbor's WiFi network and transmit its signal over their WiFi, over the internet, then to my WiFi, instead of using a direct wired cable connection. Still ok?

Let's say, like before, I put my antenna on my neighbor's roof, pay the neighbor rent for the space on his roof, use the internet to transmit the antenna's signal. But, this time, I buy the antenna from my neighbor at the beginning (and, like before, I own it now). Is that okay?

Let's say I put my antenna on my neighbor's roof, pay the neighbor rent for the space on his roof, use the internet to transmit the antenna's signal, but now I rent or lease the antenna from my neighbor. Still ok? If this is not ok, which part is not ok? Is it suddenly ok if I replace the internet connection with a direct microwave relay or hardwired connection?

Let's say I do all of the last one, but use a neighbor's roof three houses away. Still ok?

Let's say I do all of the last one, but use a roof on a building five blocks away. Still ok?

Let's say I rent an antenna on top of a skyscraper in downtown Seattle and have the signal sent to me over the internet. Not ok?

The Supreme Court recently ruled Aereo is illegal. Aereo put small antennas in a building and rented them to people. The only thing they did beyond the last scenario above is time-shifting: they would not necessarily send the signal from the antenna immediately, but instead store it and only transmit it when demanded.

You might think it's the time shifting that's the problem, but that didn't seem to be what the Supreme Court said. Rather, they said the intent of the 1976 amendments to US copyright law prohibits community antennas (one antenna that sends its signal to multiple homes), labelling those a "public performance". They said Aereo's system was similar in function to a community antenna, despite actually having multiple antennas, and violated the intent of the 1976 law.

So, the question is, where is the line? At what point does my antenna become so distant, transmit by the wrong methods, or involve so many payments to third parties that it becomes illegal? Can it not be longer than X meters? Not transmit its signal in particular ways? Not require rent for the equipment or space on which the antenna sits? Not store the signal at the antenna and transmit it only on demand? What is the line?

I think this question is interesting for two reasons. First, as an individual, I would love to have a personal-use over-the-air HDTV antenna that gets a much better reception than the obstructed and inefficient placement behind my TV, but I don't know at what point it becomes illegal for me to place an antenna far away from the TV. Second, I suspect many others would like a better signal from their HDTV antenna too, and I'd love to see a startup (or any group) that helped people set up these antennas, but it is very unclear what it might be legal for a startup to do.

Thoughts?

by Greg Linden (noreply@blogger.com) at October 25, 2014 09:22 AM

Chris Siebenmann

The difference in available pool space between zfs list and zpool list

For a while I've noticed that 'zpool list' would report that our pools had more available space than 'zfs list' did, and I've vaguely wondered why. We recently had a very serious issue due to a pool filling up, so suddenly I became very interested in the whole issue and did some digging. It turns out that there are two sources of the difference, depending on how your vdevs are set up.

For raidz vdevs, the simple version is that 'zpool list' reports more or less the raw disk space before the raidz overhead while 'zfs list' applies the standard estimate that you expect (ie that N disks worth of space will vanish for a raidz level of N). Given that raidz overhead is variable in ZFS, it's easy to see why the two commands are behaving this way.
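
As a back-of-the-envelope illustration (the disk count and sizes here are made up):

```shell
# a hypothetical 6-disk raidz2 vdev of 4 TB disks
disks=6; parity=2; size_tb=4
echo "zpool list sees roughly raw:  $((disks * size_tb)) TB"
echo "zfs list sees roughly usable: $(( (disks - parity) * size_tb )) TB"
```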

In addition, in general ZFS reserves a certain amount of pool space for various reasons, for example so that you can remove files even when the pool is 'full' (since ZFS is a copy on write system, removing files requires some new space to record the changes). This space is sometimes called 'slop space'. According to the code this reservation is 1/32nd of the pool's size. In my actual experimentation on our OmniOS fileservers this appears to be roughly 1/64th of the pool and definitely not 1/32nd of it, and I don't know why we're seeing this difference.
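
The gap between the two figures is easy to put numbers on; for a hypothetical 10 TiB pool:

```shell
pool=$((10 * 1024 * 1024 * 1024 * 1024))          # 10 TiB in bytes
echo "1/32 slop (per the code):   $((pool / 32)) bytes"   # 320 GiB
echo "1/64 slop (what I observe): $((pool / 64)) bytes"   # 160 GiB
```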

(I found out all of this from a Ben Rockwood blog entry and then found the code in the current Illumos codebase to see what the current state was (or is).)

The actual situation with what operations can (or should) use what space is complicated. Roughly speaking, user level writes and ZFS operations like 'zfs create' and 'zfs snapshot' that make things should use the 1/32nd reserved space figure, file removes and 'neutral' ZFS operations should be allowed to use half of the slop space (running the pool down to 1/64th of its size), and some operations (like 'zfs destroy') have no limit whatever and can theoretically run your pool permanently and unrecoverably out of space.

The final authority is the Illumos kernel code and its comments. These days it's on Github so I can just link to the two most relevant bits: spa_misc.c's discussion of spa_slop_shift and dsl_synctask.h's discussion of zfs_space_check_t.

(What I'm seeing with our pools would make sense if everything was actually being classified as an 'allowed to use half of the slop space' operation. I haven't traced the Illumos kernel code at this level so I have no idea how this could be happening; the comments certainly suggest that it isn't supposed to be.)

(This is the kind of thing that I write down so I can find it later, even though it's theoretically out there on the Internet already. Re-finding things on the Internet can be a hard problem.)

by cks at October 25, 2014 06:06 AM

October 24, 2014

Ubuntu Geek

New Features in Ubuntu 14.10 Desktop and Server

Codenamed "Utopic Unicorn", 14.10 continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.
(...)
Read the rest of New Features in Ubuntu 14.10 Desktop and Server (673 words)


© ruchi for Ubuntu Geek, 2014.

by ruchi at October 24, 2014 11:16 PM

Aaron Johnson

Day 9 and 10: Borgarnes, Mosfellsbær, Bláa Lónið and Reykjavík: Iceland

Woke up not too early at Sundabakki Guesthouse, had a nice breakfast and enjoyed homemade “mama” cakes from the owner of the house, who has 5 children of her own and 14 grandchildren, 13 of whom are boys, so we got extra special nice treatment. Hit the road at 8:30ish so that we could get back to Reykjavík with a chance to see some of the sights in the city.

First stop on way back was at a museum in Borgarnes called the Settlement Center which had two very nice walk through exhibits with audio guides which our band of vikings didn’t make it completely through. We did have a very nice snack break in their coffee shop while the vikings played on the floor.

Next, Karen really wanted to get an Icelandic sweater so she found THE place (called Álafoss) in a little town called Mosfellsbær, which turned out to be a really neat stop. She shopped while I walked the vikings around, and then we (the vikings) discovered a shop where a guy (Palli Kristjánsson) made and sold all kinds of custom knives, which was really interesting for me and the oldest, not so much the little ones, who just wanted to put sheep horns on their heads:

I ended up buying a really beautiful Santoku knife with a handle made from the horn of a reindeer, the hoof of an Icelandic horse, ebony and marbled padauk. I’m keeping it in a box until we get back home.

We finally found Mommy, who got her sweater, and then piled back into the car to drive the rest of the way to Reykjavík. The Hvalfjörður tunnel was closed for re-paving, which added an hour or so to the drive, but we got back to Reykjavík in the afternoon, checked back into our hotel, and drove over to the Perlan to look out over all of Reykjavík:

and also had ice cream. :)

Finally, on our last drive we headed out to jump into the tourist trap that is Bláa Lónið (The Blue Lagoon) which is about 40 minutes from downtown but was well worth it for the weary travelers:

and then ended up back in the city for dinner at Íslenski barinn, which was fantastic.

On our last day (Sunday), we had breakfast, returned the rental car and then walked around downtown, checking out Sólfar (Sun Voyager):

visiting the Maritime Museum (free for adults with kids, not highly recommended but nice if you need to waste an hour before you have to get on a plane), getting hot dogs (which are apparently some kind of Icelandic specialty but weren’t any better than what you’d get in Chicago):

and then spending our last bits of cash at a crepe shop which just happened to arrive with a giant pile of ice cream:

All told, a great trip, highly recommended, even in October although I think we got really lucky with the weather. I’d love to go back in July or August and hike around some glaciers and spend some time in the highlands, maybe in a couple years.

Stats:

  • Museums : 2
  • Hot dogs : 5
  • Giant piles of ice cream: 2
  • Geocaches: 0!

by ajohnson at October 24, 2014 07:33 PM

Byron Miller

#DOES14 Conference Notes

DevOps is a thing in the Enterprise, and DevOps Enterprise Summit #DOES14 certainly made the case, showing organizations such as Disney, GE, Target and groups within the US Government working on DevOps-styled initiatives. I got home super late and I’m super drained from too much allergy/sinus meds, but I wanted to share some thoughts here and […]

by byronm at October 24, 2014 04:36 PM

Rands in Repose

Ship Sizes Across the Universe(s)

This might be my favorite thing ever. You must click and scroll:

Ship Sizes Across the Universe(s)

#

by rands at October 24, 2014 02:56 PM

Everything Sysadmin

How to make change when handed a $20... and help democracy

If someone owes you $5.35 and hands you a $20 bill, every reader of this blog can easily make change. You have a calculator, a cash register, or you do it in your head.

However there is a faster way that I learned when I was 12.

Today it is rare to get home delivery of a newspaper, but if you do, you probably pay by credit card directly to the newspaper company. It wasn't always like that. When I was 12 years old I delivered newspapers for The Daily Record. Back then payments were collected by visiting each house every other week. While I did eventually switch to leaving envelopes for people to leave payments for me, there was a year or so where I visited each house and collected payment directly.

Let's suppose someone owed me $5.35 and handed me a $20 bill. Doing math in real time is slow and error prone, especially if you are 12 years old and tired from lugging newspapers around.

Instead of thinking in terms of $20 minus $5.35, think in terms of equilibrium. They are handing you $20 and you need to hand back $20... the $5.35 in newspapers they've received plus the change that will total $20 and reach equilibrium.

So you basically count starting at $5.35. You say out loud, "5.35" then hand them a nickel and say "plus 5 makes 5.40". Next you hand them a dime and say "plus 10 makes 5.50". Now you can hand them 50 cents, and say "plus 50 cents makes 6". Getting from 6 to 20 is a matter of handing them 4 singles and counting out loud "7, 8, 9, and 10" as you hand them each single. Next you hand them a 10 and say "and 10 makes 20".

Notice that the complexity of subtraction has been replaced by counting, which is much easier. This technique is less prone to error, and makes it easier for the customer to verify what you are doing in real time because they see what you are doing along the way. It is more transparent.

Buy a hotdog from a street vendor and you'll see them do the same thing. It may cost $3, and they'll count starting at 3 as they hand you bills, "3..., 4, 5, and 5 is 10, and 10 is 20."
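
The counting-up idea mechanizes nicely. Here's a small sketch in shell, working in cents; the denominations are one plausible cash drawer, and this greedy variant hands over big bills first rather than the smallest-coins-first order in the story, but the running count and the total come out the same:

```shell
owed=535    # $5.35
paid=2000   # $20.00
current=$owed
change=0
# hand over the largest denomination that doesn't overshoot, counting up as we go
for d in 1000 500 100 25 10 5 1; do
    while [ $((current + d)) -le "$paid" ]; do
        current=$((current + d))
        change=$((change + d))
        echo "plus $d makes $current"
    done
done
echo "total change: $change cents"   # 1465 cents, i.e. $14.65
```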

I'm sure that a lot of people reading this blog are thinking, "But subtraction is so easy!" Well, it is but this is easiER and less error prone. There are plenty of things you could do the hard way and I hope you don't.

It is an important life skill to be able to do math without a calculator and this is one of the most useful tricks I know.

So why is this so important that I'm writing about it on my blog?

There are a number of memes going around right now that claim the Common Core curriculum standards in the U.S. are teaching math "wrong". They generally show a math homework assignment like 20-5.35 as being marked "wrong" because the student wrote 14.65 instead of .05+.10+.50+4+10.

What these memes aren't telling you is they are based on a misunderstanding of the Common Core requirements. The requirement is that students are to be taught both ways, and that the "new way" is such that they can do math without a calculator. It is important that, at a young age, children learn that there are multiple equivalent ways of getting the same answer in math. The multi-connectedness of mathematics is an important concept, much more important than the rote memorization of addition and multiplication tables.

If you've ever mocked the way people are being trained to "stop thinking and just press buttons on a cash register" then you should look at this "new math" as a way to turn that around. If not, what do you propose? Not teaching them to think about math in higher terms?

In the 1960s there was the "new math" movement, which was mocked extensively. However, if you look at what "new math" was trying to do, it was trying to prepare students for the mathematics required for the space age, where engineering and computer science would be primary occupations. I think readers of this blog should agree that is a good goal.

One of the 1960s "new math" ideas that was mocked was that it tried to teach Base 8 math in addition to normal Base 10. This was called "crazy" at the time. It wasn't crazy at all. Educators recognized that computers were going to be a big deal in the future (correct) and that to be a software developer you needed to understand binary and octal (mostly correct) or at least have an appreciation for them (absolutely correct). History has proven the naysayers to be wrong.

When I was in 5th grade (1978-9) my teacher taught us base 8, 2 and 12. He told us this was not part of the curriculum but he felt it was important. He was basically teaching us "new math" even though it was no longer part of the curriculum. Later, when I was learning about computers, the concepts of binary and hexadecimal didn't faze me because I had already been exposed to other bases. While other computer science students were struggling, I had an advantage because I had been exposed to these strange base systems.
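
Bash still makes it easy to play with the same idea; its base#digits form evaluates a number in any base from 2 to 64:

```shell
echo $((8#377))     # octal 377 is 255 in decimal
echo $((2#101010))  # binary 101010 is 42
printf '%o\n' 255   # and back: decimal 255 prints as octal 377
```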

One of these anti-Common Core memes includes a note from a father who claims he has a Bachelor of Science degree in Electronics Engineering, which included an extensive study of differential equations, and that even he is unable to explain the Common Core. Well, he must be a terrible engineer, since the question was not about doing the math, but about finding the off-by-one error in the diagram. To quote someone on G+, "The supposed engineer must suck at his work if he can't follow the process, debug each step, and find the off-by-one error."

Beyond the educational value or non-value of Common Core, what really burns my butt is the fact that all these memes come from one of 3 sources:

  • Organizations that criticize anything related to public education while at the same time they criticize any attempt to improve it. You can't have it both ways.
  • Organizations that just criticize anything Obama is for, to the extent that if Obama changes his mind they flip and reverse their position too.
  • Organizations backed by companies that either benefit from ignorance, or profit from the privatization of education. This is blatant and cynical.

Respected computer scientist, security guru, and social commentator Gene "Spaf" Spafford recently blogged "There is an undeniable, politically-supported growth of denial -- and even hatred -- of learning, facts, and the educated. Greed (and, most likely, fear of minorities) feeds demagoguery. Demagoguery can lead to harmful policies and thereafter to mob actions."

These math memes are part of that problem.

A democracy only works if the populace is educated. Education makes democracy work. Ignorance robs us of freedom because it permits us to be controlled by fear. Education gives us economic opportunities and jobs, which permit us to maintain our freedom to move up in social strata. Ignorance robs people of the freedom to have economic mobility. The best way we can show our love for our fellow citizens, and all people, is to ensure that everyone receives the education they need to do well today and in the future. However it is not just about love. There is nothing more greedy you can do than to make sure everyone is highly educated because it grows the economy and protects your own freedom too.

Sadly, Snopes and skeptics.stackexchange.com can only do so much. Fundamentally we need a much bigger solution.

October 24, 2014 02:28 PM

Google Blog

Through the Google lens: search trends October 17-23

So what’s the word on the (internet) street these days? Search trends has you covered with the latest news that had everyone talking this past week.

The hard goodbye
This week, searchers paid their respects to legendary clothing designer Oscar de la Renta, who passed away on Monday at the age of 82. Once called “The Sultan of Suave,” de la Renta was known for evening gowns that regularly graced the red carpets of Hollywood–and the closets of the White House. From Jackie Kennedy to Michelle Obama, de la Renta dressed every First Lady since the 1960s.

Speaking of Washington bigwigs, we also said goodbye to Ben Bradlee, storied editor of The Washington Post. Bradlee is remembered for his courageous journalism; during his tenure as editor of the Post, the outlet published the “Pentagon Papers” and reported on the Watergate Scandal. Always chasing a good story, Bradlee coined the term “mego” (“my eyes glaze over”) for any reporting that bored him—unknowingly foreshadowing Internet-speak.

Is that you Betty Sue?
Back from a long career hiatus, Renee Zellweger stepped back into the spotlight in L.A. and came out with a bang—or shall we say, a new look. People were shocked to see Zellweger… looking a bit different from what they remember. The star’s reemergence caused a spike in searches for her hit movie Bridget Jones’s Diary (that was her, right?). But Zellweger is taking the stares and comments in stride, stating she’s happy that she looks different because she’s living a happier and more fulfilling life—no shame in your game, Renee–whatever makes you feel complete.

Gone in sixty seconds
If you blinked, you already missed this trend. Toys “R” Us decided to pull a line of Breaking Bad action figures after an online petition asking the store to stop selling the toys received more than 9,000 signatures. So what was all the hoopla about? Susan Schrivjer, the Florida mom who started the petition, felt the dolls–which came with a plastic sack of cash and mock drugs—deviated from the company’s family values. Toys “R” Us agreed and put the figures on an “indefinite sabbatical”–Walter White-style.

Crime and Punishment
It was a week of crime on the trends charts as people were searching for more information about a gunman who shot and killed Cpl. Nathan Cirillo, a soldier of the Canadian army, at Ottawa's National War Memorial. This was the latest assault on a member of the Canadian armed forces in recent times and has stirred debate about extremism in the West.

...As the Black Eyed Peas would say
With the World Series underway, people were ready to scream and shout for their favorite team. Searches for the San Francisco Giants and the Kansas City Royals hit a high as the two teams began their battle for The Commissioner's Trophy. And that’s not the only party going on these days. Diwali, a Hindu holiday also known as the “Festival of Lights,” started this past Tuesday. The celebrations will continue until this Saturday—so you still have time to check out photos of the stunning light displays around the world.

Tip of the week
First there was Angry Birds, then there was Candy Crush, which was swiftly followed by Flappy Bird–it’s kind of hard to stay on top of the latest video game trends. Now when you search for video games on Google, a panel will appear with all the info you need to stay in the know.


by Emily Wood (noreply@blogger.com) at October 24, 2014 01:18 PM

Standalone Sysadmin

Accidental DoS during an intentional DoS

Funny, I remember always liking DOS as a kid...

Anyway, on Tuesday, I took a day off, but ended up getting a call at home from my boss at 4:30pm or so. We were apparently causing a DoS attack, he said, and the upstream university had disabled our net connection. He was trying to conference in the central network (ITS) admins so we could figure out what was going on.

I sat down at my computer and was able to connect to my desktop at work, so the entire network wasn't shut down. It looked like what they had done was actually turn off out-bound DNS, which made me suspect that one of the machines on our network was performing a DNS reflection attack, but this was just a sign of my not thinking straight. If that had been the case, they would have shut down inbound DNS rather than outbound.

After talking with them, they saw that something on our network had been initiating a denial of service attack on DNS servers using hundreds of spoofed source IPs. Looking at graphite for that time, I suspect you'll agree when I say, "yep":

Initially, the malware was spoofing IPs from all kinds of IP ranges, not just things in our block. As it turns out, I didn't have the sanity check on my egress ACLs on my gateway that said, "nothing leaves that isn't in our IP block", which is my bad. As soon as I added that, a lot of the traffic died. Unfortunately, because the university uses private IP space in the 10.x.x.x range, I couldn't block that outbound. And, of course, the malware quickly caught up to speed and started exclusively using 10.x addresses to spoof from. So we got shut down again.
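
For anyone wanting the same sanity check, the egress rule amounts to something like this (iptables syntax; the interface name and address block are placeholders, using a documentation range):

```shell
# drop anything leaving the border interface that isn't sourced from our block
iptables -A FORWARD -o eth0 ! -s 203.0.113.0/24 -j DROP
```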

Over the course of a day, here's what the graph looked like:

Now, on the other side of the coin, I'm sure you're screaming "SHUT DOWN THE STUPID MACHINE DOING THIS", because I was too. The problem was that I couldn't find it. Mostly because of my own ineptitude, as we'll see.

Alright, it's clear from the graph above that there were some significant bits being thrown around. That should be easy to track. So, let's fire up graphite and figure out what's up.

Most of my really useful graphs are thanks to the ironically named Unhelpful Graphite Tip #6, where Jason Dixon describes the "mostDeviant" function, which is pure awesome. The idea is that, if you have a BUNCH of metrics, you probably can't see much useful information because there are so many lines. So instead, you probably want the few weirdest metrics out of that collection, and that's what you get. Here's how it works.

In the graphite box, set the time frame that you're looking for:

Then add the graph data that you're looking for. Wildcards are super-useful here. Since the uplink graph above is a lot of traffic going out of the switch (tx), I'm going to be looking for a lot of data coming into the switch (rx). The metric that I'll use is:


CCIS.systems.linux.Core*.snmp.if_octets-Ethernet*.rx

That metric, by itself, looks like this:

There's CLEARLY a lot going on there. So we'll apply the mostDeviant filter:

and we'll select the top 4 metrics. At this point, the metric line looks like this:


mostDeviant(4,CCIS.systems.linux.Core*.snmp.if_octets-Ethernet*.rx)

and the graph is much more manageable:

Plus, most usefully, now I have port numbers to investigate. Back to the hunt!
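If you'd rather script this than click through the composer, the same query can go straight at Graphite's render API. A minimal sketch (the hostname is hypothetical; the target string mirrors the one built above):

```python
from urllib.parse import urlencode

# Hypothetical Graphite host; substitute your own.
GRAPHITE = "http://graphite.example.com/render"

params = {
    # Same mostDeviant(n, seriesList) call as in the composer above.
    "target": "mostDeviant(4,CCIS.systems.linux.Core*.snmp.if_octets-Ethernet*.rx)",
    "from": "-24hours",   # the time frame set in the graphite box
    "format": "json",     # raw datapoints instead of a PNG
}

url = GRAPHITE + "?" + urlencode(params)
print(url)
```

Pulling JSON instead of a PNG also means you can sort the returned series yourself and get the port numbers programmatically.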

As it turns out, those two ports are running to...another switch. An old switch that isn't being used by more than a couple dozen hosts. It's destined for the scrap heap, and because of that, when I was setting up collectd to monitor the switches using the snmp plugin, I neglected to add this switch. You know, because I'm an idiot.

So, I quickly modified the collectd config and pushed the change up to the puppet server, then refreshed the puppet agent on the host that does snmp monitoring and started collecting metrics. Except that, at the moment, the attack had stopped...so it was a waiting game for an attack that might never come again. As luck would have it, the attack started again, and I was able to trace it to a port:

Gotcha!

(notice how we actually WERE under attack when I started collecting metrics? It was just so tiny compared to the full on attack that we thought it might have been normal baseline behavior. Oops)

So, checking that port led to...a VM host. And again, I encountered a road block.

I've been having an issue with some of my VMware ESXi boxes where they will encounter occasional extreme disk latency and fall out of the cluster. There are a couple of knowledgebase articles ([1] [2]) that sort-of kind-of match the issue, but not entirely. In any event, I haven't ironed it out. The VMs are fine during the disconnected phase, and the fix is to restart the management agents through the console, which I was able to do and then I could manage the host again.

Once I could get a look, I could see that there wasn't a lot on that machine - around half a dozen VMs. Unfortunately, because the host had been disconnected from the vCenter Server, stats weren't being collected on the VMs, so we had to wait a little bit to figure out which one it was. But we finally did.

In the end, the culprit was a NetApp Balance appliance. There's even a knowledge base article on it being vulnerable to ShellShock. Oops. And why was that machine even available to the internet at large? Double oops.

I've snapshotted that machine and paused it. We'll probably have some of the infosec researchers do forensics on it, if they're interested, but that particular host wasn't even being used. VM cruft is real, folks.

Now, back to the actual problem...

The network uplink to the central network happens over a pair of 10Gb/s fiber links. According to the graph, the VM was pushing about 100MB/s (800Mb/s). This is clearly Bad(tm), but it's not world-ending bad for the network, right? Right. Except...

Upstream of us, we go through an in-line firewall (which, like OUR equipment, was not filtering egress traffic for spoofed source IPs - oops, but not mine this time, finally!). We are assigned to one of five virtual firewalls on that one physical piece of hardware, but the limit of around a couple hundred thousand concurrent sessions applies to the physical hardware as a whole.

For a network this size, that is probably(?) reasonable, but a session counts as a stream of packets between a source IP and a destination IP. Every time you change the source IP, you get a new session, and when you spoof thousands of source IPs...guess what? And since it's a per-physical-device limit, our one rogue VM managed to take out the resources of the big giant firewall.
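The arithmetic here is brutal. Back-of-the-envelope numbers (assumed for illustration, not from the firewall's actual spec sheet) show how fast spoofed sources chew through a per-device session table:

```python
# All numbers are illustrative assumptions.
session_limit = 200_000    # "a couple hundred thousand concurrent sessions"
spoofed_sources = 150_000  # distinct spoofed 10.x source IPs in flight
targets = 2                # "a couple of hosts in China"

# Each (source IP, destination IP) pair counts as a new session,
# so spoofing multiplies sessions by the number of fake sources.
sessions = spoofed_sources * targets
print(sessions, "sessions;", "table exhausted" if sessions > session_limit else "ok")
```

One host randomizing its source address can therefore exhaust a table sized for an entire campus.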

In essence, this one intentional DoS attack on a couple of hosts in China successfully DoS'd our university as sheer collateral damage. Oops.

So, we're working on ways to fix things. A relatively simple step is to prevent egress traffic from IPs that aren't our own; that's done now. We've also been told that we need to block egress DNS traffic, except from known hosts or to public DNS servers. That's in place too, but I really question its efficacy: DNS is hardly the only UDP protocol that can be abused this way, and NTP reflection attacks are a thing. Anyway, we're now blocking egress DNS, and I've had to special-case a half-dozen research projects, but that's fine by me.
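The actual fix lives in the gateway ACLs, but the check itself is simple enough to sketch in a few lines (the address block below is a documentation range standing in for our real allocation):

```python
import ipaddress

# Hypothetical allocation; substitute your institution's real blocks.
OUR_BLOCKS = [ipaddress.ip_network("192.0.2.0/24")]

def legitimate_egress(src_ip: str) -> bool:
    """The gateway rule in miniature: nothing leaves that isn't in our block."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in OUR_BLOCKS)

print(legitimate_egress("192.0.2.17"))  # one of ours: forward it
print(legitimate_egress("10.4.5.6"))    # spoofed 10.x source: drop it
```

Note that this is exactly the check that doesn't help once the malware switches to spoofing addresses inside your own ranges, which is why the 10.x traffic got through.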

In terms of things that will make an actual difference, we're going to be re-evaluating the policies in place for putting VMs on publicly-accessible networks, and I think it's likely that there will need to be justification for providing external access to new resources, whereas in the past, it's just been the default to leave things open because we're a college, and that's what we do, I guess. I've never been a fan of that, from a security perspective, so I'm glad it's likely to change now.

So anyway, that's how my week has been. Fortunately, it's Friday, and my sign is up to "It has been [3] Days without a Network Apocalypse".

by Matt Simmons at October 24, 2014 09:26 AM

Ubuntu Geek

Ubuntu 14.10 (Utopic Unicorn) released and Download Link included

Codenamed "Utopic Unicorn", 14.10 continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.
(...)
Read the rest of Ubuntu 14.10 (Utopic Unicorn) released and Download Link included (169 words)


© ruchi for Ubuntu Geek, 2014.

by ruchi at October 24, 2014 07:50 AM

Geeking with Greg

Why can't I buy a solar panel somewhere else in the US and get a credit for the electricity from it?

Seattle City Light has a clever project where, instead of installing solar panels on your house where they might be obscured by trees or buildings, you can buy into a solar panel installation on top of a building in a more efficient location and get a credit for the electricity generated on your electric bill.

Why stop there? Why can't I buy a solar panel in a very different location and get the electricity from it?

Phoenix, Arizona has about twice the solar energy efficiency of Seattle. Why can't I buy a solar panel and enjoy the electricity credit from that solar panel when it is installed in a nice sunny spot in the Southwest?

This doesn't require shipping the actual electricity to your home. Instead, you fund an installation of solar panels on top of a building in an area of the US with high solar energy efficiency, then get a credit for that electricity on your monthly electricity bill.

I suppose, at some boring financing level, this starts to resemble a corporate bond, with an initial payment yielding a stream of payments over time, but people wouldn't see it that way. The attraction would be installing solar panels and getting a credit on your energy bill without installing solar panels on your own home. Perhaps the firm arranging the installations and working out the deals with local utilities could be treating the entire thing as the equivalent of marketing bonds to people who like solar energy, but the attraction to people is that visceral appeal of a near $0 electricity bill they see every month from the solar panels they feel like they own and installed.

Even with the overhead pulled out by the company selling this and arranging deals with local utilities so this all appears on your local electricity bill, the credit on your electricity bill still should be much higher than you could possibly get installing panels on your own home with all its obstructions and cloudy weather. Solar generation in an ideal location in the US easily can generate twice as much power as what is available locally, on your rooftop.
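That "twice as much" claim is easy to sanity-check with rough annual yields per installed kilowatt (the figures below are assumptions for illustration, not utility data):

```python
# Assumed annual output per kW of installed panel (kWh per kW per year).
# Illustrative numbers only; real yields depend on site and equipment.
annual_yield = {"Seattle": 900, "Phoenix": 1800}

system_kw = 5.0   # a typical rooftop-sized purchase
rate = 0.10       # assumed $/kWh credit on the electricity bill

for city, yield_per_kw in annual_yield.items():
    kwh = system_kw * yield_per_kw
    print(f"{city}: {kwh:.0f} kWh/yr -> ${kwh * rate:.0f} annual credit")
```

Even if the arranging company skims a healthy margin off the Phoenix installation, the remote panels can still credit more than local ones.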

So, why hasn't someone done this? Why can't I buy solar panels and have them installed not on my own home, but in some much better spot?

by Greg Linden (noreply@blogger.com) at October 24, 2014 08:48 AM

Chris Siebenmann

In Go I've given up and I'm now using standard packages

In my Go programming, I've come around to an attitude that I'll summarize as 'there's no point in fighting city hall'. What this means is that I'm now consciously using standard packages that I don't particularly like just because they are the standard packages.

I'm on record as disliking the standard flag package, for example, and while I still believe in my reasons for this I've decided that it's simply not worth going out of my way over it. The flag package works and it's there. Similarly, I don't think that the log package is necessarily a great solution for emitting messages from Unix style command line utilities but in my latest Go program I used it anyways. It was there and it wasn't worth the effort to code warn() and die() functions and so on.

Besides, using flag and log is standard Go practice so it's going to be both familiar to and expected by anyone who might look at my code someday. There's a definite social benefit to doing things the standard way for anything that I put out in public, much like most everyone uses gofmt on their code.

In theory I could find and use some alternate getopt package (these days the go-to place to find one would be godoc.org). In practice I find using external packages too much of a hassle unless I really need them. This is an odd thing to say about Go, considering that it makes them so easy and accessible, but depending on external packages comes with a whole set of hassles and concerns right now. I've seen a bit too much breakage to want that headache without a good reason.

(This may not be a rational view for Go programming, given that Go deliberately makes using people's packages so easy. Perhaps I should throw myself into using lots of packages just to get acclimatized to it. And in practice I suspect most packages don't break or vanish.)

PS: note that this is different from the people who say you should, e.g., use the testing package for your testing because you don't really need anything more than what it provides, and stick with the standard library's HTTP stuff rather than getting a framework. As mentioned, I still think that flag is not the right answer; it's just not wrong enough to be worth fighting city hall over.

Sidebar: Doing standard Unix error and warning messages with log

Here's what I do:

log.SetPrefix("<progname>: ")
log.SetFlags(0)

If I was doing this better I would derive the program name from os.Args[0] instead of hard-coding it, but if I did that I'd have to worry about various special cases and no, I'm being lazy here.

by cks at October 24, 2014 05:16 AM

October 23, 2014

Ubuntu Geek

Eric – A Full featured Python and Ruby editor and IDE

Eric is a full featured Python and Ruby editor and IDE, written in Python. It is based on the cross-platform Qt GUI toolkit, integrating the highly flexible Scintilla editor control. It is designed to be usable as an everyday quick-and-dirty editor as well as a professional project management tool integrating many advanced features Python offers the professional coder. eric4 includes a plugin system, which allows easy extension of the IDE functionality with plugins downloadable from the net.
(...)
Read the rest of Eric – A Full featured Python and Ruby editor and IDE (405 words)


© ruchi for Ubuntu Geek, 2014.

by ruchi at October 23, 2014 11:14 PM

Aaron Johnson

Day 8: The Longest Drive: Iceland

We thought going into the trip that there’d be a couple days of long driving but for the most part we were able to make a bunch of stops every day and see a bunch of things… except this day. Think the Google Maps estimate for this day was north of 4 hours so we tried to get an early start so that we could get somewhere and maybe do something in the latter half of the day. The guesthouse we stayed at didn’t have a formal breakfast but he provided tokens for us to use at a local “bakarí”, which was fantastic. We had donuts and ham and cheese croissants at Aðalbakarí and then hit the road.

Our first stop was for an easy to find geocache that was at a statue in the middle of nowhere, everyone got to stretch their legs for a bit and then we packed it in again and drove on roads like this:

at which point I must now pause and say that driving around Iceland was like watching a really long and slow but extremely beautiful nature movie. I thought a number of times that Iceland is like the island that resulted from Hawaii (volcanoes, oceans, etc.) and Alaska (glaciers, snow, fishing, etc..) and some state in the middle of the US (cows, sheep, horses, etc..) all getting together and saying “let’s make an island that has the best parts of all of what we have.”, which is to say that driving wasn’t a chore at all, except for the gravel roads in some places… OH and the precarious cliffs that we drove right on top of to get out of Siglufjörður, other than that though, amazing.

Second stop, which I can't remember how we found (think it was the navigator), turned out to be really cool. I think she was looking for geocaches as a place for us to stop and she found this geocache at an abandoned house (it had been abandoned for 70 years) that you had to drive off road to even get close to. Apparently its name is Svarðbæli, but you can read more about it here. We drove out the 4×4 road to about 1/2 mile away and then hoofed it on the gravel road the rest of the way. The geocache itself was a bear to find (we had to enter the really old house, climb up to the second floor, and then it was hidden away in the rafters; I couldn't find it, Karen found it later) but the views and the walk were brilliant:

Everyone got to horse around a bit and get their wiggles out which was nice. Didn’t see an option to put an offer down on the house but if I was a hermit, I think I’d want to live here:

We made our picnic lunch in the back of the car, if I remember correctly this was the day that someone decided that they didn’t want a PB&J for lunch which meant that they had to wait until dinner for food. Doesn’t pay to be fussy in our family.
And then we drove:

until our next stop, a hill called Helgafell ("holy mountain"), a 227 meter high volcanic cone that, drum roll, had a geocache on top, which we found and then got to enjoy the views from the top:

A short drive later and we were at our “hotel” for the evening in another little town, this one called Stykkishólmur, in probably the smallest of all the rooms we had on the trip but we made do. We got there a bit early hoping to find something to do but there wasn’t much open at all (more stuff in the summer) so we hiked up to the top of the lighthouse:

which is called Súgandisey and tried to make sure no one fell off a cliff, then hoofed it back a mile or so into the little town center where the only place to eat that was open at 5pm was a little pizza shop called Stykkið, which ended up having GREAT pizza.

Kids went to bed, I missed a geocache on top of the lighthouse and had to go back at night with my headlamp to find it:

which wasn’t too hard to do, short of the wind and cold.
Stats:

  • Light houses: 1
  • Abandoned houses: 1
  • Geocaches: 4

by ajohnson at October 23, 2014 09:59 PM

Day 7: Húsavík, Goðafoss, Akureyri and Siglufjörður: Iceland

Already back from the trip, didn’t have time at night to do days 7, 8, 9 and 10 but I’m trying now. It’s never as good days after when you can’t remember all the little details though. Either way, Day 7 started early just like every morning on this trip since everyone was in the same room and the littlest person that lives with us just cannot keep his wiggles and sounds to himself and MUST share them with EVERYONE else in the room as soon as it turns 6am, sometimes earlier. I tried to keep him in bed and quiet for as long as possible, didn’t work all that well.

We had breakfast at our “hotel”, which was really a house converted into a “hotel”, which was like most of the places we stayed. I remember that pickled herring was on the table set out for folks if they were in the mood, I had a pickled herring plate for dinner on the first night we were in town but could never pull the trigger for breakfast.

After breakfast I took the boys out for a short walk around the harbor in Húsavík, both because it was a short walk and because there were a couple geocaches, but mostly because it's impossible to pack up the room while three crazy horses are trampling everything. We found two on our short walk: one at an old church built out of Norwegian wood, supposedly in a Swiss style, although that's been debated in the comments on the geocache log (you can see pictures there), and the other right outside the whale museum, which we would have loved to have visited but it opened at 10am and we had plans for the day (lots of driving!). Here's a shot of one of the boats in the harbor that we walked / skipped / ran by:

The dudes and I walked around a bit more (saw some REALLY interesting looking shops right on the waterfront where guys were working on fixing boats) and then headed back to the guesthouse to help Mommy finish the packing.

Our first stop for the day was at another amazing waterfall (Goðafoss), which also happened to be REALLY cold, so cold in fact that the mist from the waterfall was frozen on top of the sand and rocks leading up to the waterfall which made walking a bit of an adventure.

The biggest one and I went and tracked down the geocache that was at this waterfall and we took a bunch of pictures:

got some wiggles out and then packed it in for the long drive up to the northernmost point that we'd hit on the trip. We needed a break on the way, so we ended up spending a bunch of the afternoon walking around the second biggest "city" in Iceland (Akureyri), where we had a great lunch at a hostel / restaurant, took pictures with a couple trolls (Karen hasn't uploaded her pictures yet), found a couple geocaches (had to walk the dogs, err kids) and then went to this "christmas shop" (in Icelandic: Jólahúsið) which was supposed to be really great for kids. It ended up being just ok for them, and not something that was going to keep their attention for longer than 15 minutes, but that was long enough for us to get our Christmas ornament for the year (hi Grammie!).

Finally, we did the drive up to Siglufjörður (66° north!), which is a teensy little town right on the water that I read later is sometimes not accessible at all in the winter and it was here that we had our first bit of rain (we were very lucky the entire trip and didn’t get much rain at all even though October is supposedly the rainiest of all seasons in Iceland). We got in pretty late (6:30pm), dropped off our bags and walked down to the harbor to find some dinner:

The guy that rented us our house for the night said there were only two restaurants open in the winter, one that had homemade food and another that had fried food and pizza. Sadly the homemade food place was closed but we ended up having a great experience at Veitingastaðurinn Torgið where the hostess and waitress brought us out crayons and TOYS and generally made our dinner really fun. Food was good and we ended up having to run back up the hill to the house because it started hailing on us.

We stayed the night at the The Herringhouse, which if you ever get a chance to stay at, is very nice and has a shower that’s to die for.

If I ever get to go back, it looks like there are some really really neat hiking trails that are easily accessible from the town, would be amazing to hike up into the hills at sunset in the summer:

Stats:

  • Trolls: 2
  • Waterfalls: 1
  • Amazing showers: 1
  • Geocaches: 5

by ajohnson at October 23, 2014 09:09 PM

Chris Siebenmann

The clarity drawback of allowing comparison functions for sorting

I've written before about my unhappiness that Python 3 dropped support for using a comparison function. Well, let me take that back a bit, because I've come around to the idea that there are some real drawbacks to supporting a comparison function here. Not drawbacks in performance (which are comparatively unimportant here) but drawbacks in code clarity.

DWiki's code is sufficiently old that it uses only .sort() cmp functions simply because, well, that's what I had (or at least that's what I was used to). As a result, in two widely scattered spots in different functions its code base contains the following lines:

def func1(...):
    ....
    dl.sort(lambda x,y: cmp(y.timestamp, x.timestamp))
    ....

def func2(...):
    ....
    coms.sort(lambda x,y: cmp(x.time, y.time))
    ....

Apart from the field name, did you see the difference there? I didn't today while I was doing some modernization in DWiki's codebase and converted both of these to the '.sort(key=lambda x: x.FIELD)' form. The difference is that the first is a reverse sort, not a forward sort, because it flips x and y in the cmp().

(This code predates .sort() having a reverse= argument or at least my general awareness and use of it.)

And that's the drawback of allowing or using a sort comparison function: it's not as clear as directly saying what you mean. Small things in the comparison function can have big impacts and they're easy to overlook. By contrast, my intentions and what's going on are clearly spelled out when these things are rewritten into the modern form:

   dl.sort(key=lambda x: x.timestamp, reverse=True)
   coms.sort(key=lambda x: x.time)

Anyone, a future me included, is much less likely to miss the difference in sort order when reading (or skimming) this code.

I now feel that in practice you want to avoid using a comparison function as much as possible even if one exists for exactly this reason. Try very hard to directly say what you mean instead of hiding it inside your cmp function unless there's no way out. A direct corollary of this is that sorting interfaces should try to let you directly express as much as possible instead of forcing you to resort to tricks.

(Note that there are some cases where you must use a comparison function in some form (see especially the second comment).)

PS: I still disagree with Python 3 about removing the cmp argument entirely. It hasn't removed the ability to have custom sort functions; it's just forced you to write a lot more code to enable them and the result is probably even less efficient than before.

by cks at October 23, 2014 04:15 AM

October 22, 2014

Adrian C.

SysV init on Arch Linux, and Debian

Arch Linux distributes systemd as its init daemon, and deprecated SysV init in June 2013. Debian is doing the same now, and we see panic and terror sweep through that community, especially since this time thousands of my sysadmin colleagues are affected. But as with Arch Linux we are witnessing irrational behavior, loud protests all the way to the BSD camp, and public threats of forking Debian. Yet all that is needed, and let's face it much simpler to achieve, is organizing a specialized user group interested in keeping SysV (or your alternative) usable in your favorite GNU/Linux distribution, with members that support one another, exactly as I wrote back then about Arch Linux.

Unfortunately I'm not aware of any such group forming in the Arch Linux community around sysvinit, and I've been running SysV init alone as my PID 1 since then. It was not a big deal, but I don't always have time or the willpower to break my personal systems after a 60 hour work week, and the real problems are yet to come anyway - if (when) for example udev stops working without systemd PID 1. If you had a support group, and especially one with a few coding gurus among you most of the time chances are they would solve a difficult problem first, and everyone benefits. On some other occasions an enthusiastic user would solve it first, saving gurus from a lousy weekend.

For anyone else left standing at the cheapest part of the stadium, like me, maybe uselessd as a drop-in replacement is the way to go after major subsystems stop working in our favorite GNU/Linux distributions. I personally like what they reduced systemd to (inspired by suckless.org philosophy?), but chances are without support the project ends inside 2 years, and we would be back here duct taping in isolation.

by anrxc at October 22, 2014 09:28 PM

SysAdmin1138

Getting stuck in Siberia

I went on a bit of a twitter rant recently.

Good question, since that's a very different problem than the one I was ranting about. How do you deal with that?


I hate to break it to you, but if you're in the position where your manager is actively avoiding you it's all on you to fix it. There are cases where it isn't up to you, such as if there are a lot of people being avoided and it's affecting the manager's work-performance, but that's a systemic problem. No, for this case I'm talking about you are being avoided, and not your fellow direct-reports. It's personal, not systemic.

No, it's not fair. But you still have to deal with it.

You have a question to ask yourself:

Do I want to change myself to keep the job, or do I want to change my manager by getting a new job?

Because this shunning activity is done by managers who would really rather fire your ass, but can't or won't for some reason. Perhaps they don't have firing authority. Perhaps the paperwork is too much to bother with. Perhaps they're the conflict-avoidant type and pretending you don't exist is preferable to making you Very Angry by firing you.

You've been non-verbally invited to Go Away. You get to decide if that's what you want to do.

Going Away

Start job-hunting, and good riddance. They may even overlook job-hunt activities on the job, but don't push it.

Staying and Escalating

They can't/won't get rid of you, but you're still there. It's quite tempting to stick around and intimidate your way into their presence and force them to react. They're avoiding you for a reason, so hit those buttons harder. This is not the adult way to respond to the situation, but they started it.

I shouldn't have to say that, but this makes for a toxic work environment for everyone else so... don't do that.

Staying and Reforming

Perhaps the job itself is otherwise awesome-sauce, or maybe getting another job will involve moving and you're not ready for that. Time to change yourself.

Step 1: Figure out why the manager is hiding from you.
Step 2: Stop doing that.
Step 3: See if your peace-offering is accepted.

Figure out why they're hiding

This is key to the whole thing. Maybe they see you as too aggressive. Maybe you keep saying no and they hate that. Maybe you never give an unqualified answer and they want definites. Maybe you always say, 'that will never work,' to anything put before you. Maybe you talk politics in the office and they don't agree with you. Maybe you don't go paintballing on weekends. Whatever it is...

Stop doing that.

It's not always easy to know why someone is avoiding you. That whole avoidant thing makes it hard. Sometimes you can get intelligence from coworkers about what the manager has been saying when you're not around or what happens when your name comes up. Ask around, at least it'll show you're aware of the problem.

And then... stop doing whatever it is. Calm down. Say yes more often. Start qualifying answers only in your head instead of out loud. Say, "I'll see what I can do" instead of "that'll never work." Stop talking politics in the office. Go paintballing on weekends. Whatever it is, start establishing a new set of behaviors.

And wait.

Maybe they'll notice and warm up. It'll be hard, but you probably need the practice to change your habits.

See if your peace-offering is accepted

After your new leaf is turned over, it might pay off to draw their attention to it. This step definitely depends on the manager and the source of the problem, but demonstrating a new way of behaving before saying you've been behaving better can be the key to get back into the communications stream. It also hangs a hat on the fact that you noticed you were in bad graces and took effort to change.

What if it's not accepted?

Then learn to live in Siberia and work through proxies, or lump it and get another job.

by SysAdmin1138 at October 22, 2014 08:00 PM

Everything Sysadmin

Katherine Daniels (@beerops) interviews Tom Limoncelli

Katherine Daniels (known as @beerops on Twitter) interviewed me about the presentations I'll be doing at the upcoming Usenix LISA '14 conference. Check it out:

https://www.usenix.org/blog/interview-tom-limoncelli

Register soon! Seating in my tutorials is limited!

October 22, 2014 02:28 PM

Google Blog

An inbox that works for you

Today, we’re introducing something new. It’s called Inbox. Years in the making, Inbox is by the same people who brought you Gmail, but it’s not Gmail: it’s a completely different type of inbox, designed to focus on what really matters.

Email started simply as a way to send digital notes around the office. But fast-forward 30 years and with just the phone in your pocket, you can use email to contact virtually anyone in the world…from your best friend to the owner of that bagel shop you discovered last week.

With this evolution comes new challenges: we get more email now than ever, important information is buried inside messages, and our most important tasks can slip through the cracks—especially when we’re working on our phones. For many of us, dealing with email has become a daily chore that distracts from what we really need to do—rather than helping us get those things done.

If this all sounds familiar, then Inbox is for you. Or more accurately, Inbox works for you. Here are some of the ways Inbox is at your service:



Bundles: stay organized automatically
Inbox expands upon the categories we introduced in Gmail last year, making it easy to deal with similar types of mail all at once. For example, all your purchase receipts or bank statements are neatly grouped together so that you can quickly review and then swipe them out of the way. You can even teach Inbox to adapt to the way you work by choosing which emails you’d like to see grouped together.

Highlights: the important info at a glance
Inbox highlights the key information from important messages, such as flight itineraries, event information, and photos and documents emailed to you by friends and family. Inbox will even display useful information from the web that wasn’t in the original email, such as the real-time status of your flights and package deliveries. Highlights and Bundles work together to give you just the information you need at a glance.
Reminders, Assists, and Snooze: your to-do’s on your own terms
Inbox makes it easy to focus on your priorities by letting you add your own Reminders, from picking up the dry cleaning to giving your parents a call. No matter what you need to remember, your inbox becomes a centralized place to keep track of the things you need to get back to.
A sampling of Assists
And speaking of to-do’s, Inbox helps you cross those off your list by providing Assists—handy pieces of information you may need to get the job done. For example, if you write a Reminder to call the hardware store, Inbox will supply the store’s phone number and tell you if it's open. Assists work for your email, too. If you make a restaurant reservation online, Inbox adds a map to your confirmation email. Book a flight online, and Inbox gives a link to check-in.

Of course, not everything needs to be done right now. Whether you’re in an inconvenient place or simply need to focus on something else first, Inbox lets you Snooze away emails and Reminders. You can set them to come back at another time or when you get to a specific location, like your home or your office.

Get started with Inbox
Starting today, we’re sending out the first round of invitations to give Inbox a try, and each new user will be able to invite their friends. If Inbox can’t arrive soon enough for you, you can email us at inbox@google.com to get an invitation as soon as more become available.

When you start using Inbox, you’ll quickly see that it doesn’t feel the same as Gmail—and that’s the point. Gmail’s still there for you, but Inbox is something new. It’s a better way to get back to what matters, and we can’t wait to share it with you.



Cross-posted from the Official Gmail Blog

by Emily Wood (noreply@blogger.com) at October 22, 2014 11:03 AM

Chris Siebenmann

Exim's (log) identifiers are basically unique on a given machine

Exim gives each incoming email message an identifier; these look like '1XgWdJ-00020d-7g'. Among other things, this identifier is used for all log messages about the particular email message. Since Exim normally splits information about each message across multiple lines, you routinely need to reassemble or at least match multiple lines for a single message. As a result of this need to aggregate multiple lines, I've quietly wondered for a long time just how unique these log identifiers were. Clearly they weren't going to repeat over the short term, but if I gathered tens or hundreds of days of logs for a particular system, would I find repeats?

The answer turns out to be no. Under normal circumstances Exim's message IDs here will be permanently unique on a single machine, although you can't count on global uniqueness across multiple machines (although the odds are pretty good). The details of how these message IDs are formed are in the Exim documentation's chapter 3.4. On most Unixes and with most Exim configurations they are a per-second timestamp, the process PID, and a final subsecond timestamp, and Exim takes care to guarantee that the timestamps will be different for the next possible message with the same PID.

(Thus a cross-machine collision would require the same message time down to the subsecond component plus the same PID on both machines. This is fairly unlikely but not impossible. Exim has a setting that can force more cross-machine uniqueness.)

This means that aggregation of multi-line logs can be done with simple brute force approaches that rely on ID uniqueness. Heck, to group all the log lines for a given message together you can just sort on the ID field, assuming you do a stable sort so that things stay in timestamp order when the IDs match.
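For example, with GNU sort, grouping a mainlog by the message ID can be as simple as the following sketch (the log path and sample lines here are invented for illustration; point it at your real Exim mainlog):

```shell
# Group all log lines for each message together by stably sorting on the
# Exim message ID, which is field 3 of a standard mainlog line. -s keeps
# timestamp order within each ID; -k3,3 sorts on the ID field only.
cat > /tmp/sample-mainlog <<'EOF'
2014-10-22 04:21:01 1XgWdJ-00020d-7g <= from@example.com
2014-10-22 04:21:02 1XgWdK-00020e-Ab <= other@example.com
2014-10-22 04:21:03 1XgWdJ-00020d-7g => to@example.net
2014-10-22 04:21:04 1XgWdK-00020e-Ab => dest@example.net
EOF
sort -s -k3,3 /tmp/sample-mainlog
```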

(As they say, this is relevant to my interests and I finally wound up looking it up today. Writing it down here ensures I don't have to try to remember where I found it in the Exim documentation the next time I need it.)

PS: like many other uses of Unix timestamps, all of this uniqueness potentially goes out the window if you allow time on your machine to actually go backwards. On a moderate volume machine you'd still have to be pretty unlucky to have a collision, though.

by cks at October 22, 2014 04:21 AM

October 21, 2014

Yellow Bricks

What is coming for vSphere and VSAN? VMworld reveals…


I’ve been prepping a presentation for upcoming VMUGs, but wanted to also share this with my readers. The session is all about vSphere futures: what is coming soon? Before anyone says I am breaking NDA, I’ve harvested all of this info from public VMworld sessions, except for the VSAN details, which were announced to the press at VMworld EMEA. Let’s start with Virtual SAN…

The Virtual SAN details were posted in this Computer Weekly article, and by the looks of it they interviewed VMware’s CEO Pat Gelsinger and Alberto Farronato from the VSAN product team. So what is coming soon?

  • All Flash Virtual SAN support
    Considering the price of MLC flash has dropped to roughly the same price per GB as SAS HDDs, I think this is a great new feature to have. Being able to build all-flash configurations at the price point of a regular configuration, with probably many supported configurations, is a huge advantage for VSAN. I would expect VSAN to support various types of flash as the “capacity” layer, so this is an architect’s dream… designing your own all-flash storage system!
  • Virsto integration
    I played with Virsto when it was just released and was impressed by the performance and the scalability. Functions that were part of Virsto, such as snapshots and clones, have been built into VSAN, and they will bring VSAN to the next level!
  • JBOD support
    Something many have requested, and primarily to be able to use VSAN in Blade environments… Well with the JBOD support announced this will be a lot easier. I don’t know the exact details, but just the “JBOD” part got me excited.
  • 64 host VSAN cluster support
    VSAN doesn’t scale? Here you go.

That is a nice list by itself, and I am sure there is plenty more for VSAN. At VMworld, for instance, Wade Holmes also spoke about support for disk-controller-based encryption. Cool, right?! So what about vSphere? Considering that even the version number was dropped during the keynote, hinting at a major release, you would expect some big functionality to be introduced. Once again, all the stuff below is harvested from various public VMworld sessions:

  • VMFork aka Project Fargo – discussed here…
  • Increased scale!
    • 64 host HA/DRS cluster, I know a handful of customers who asked for 64 host clusters, so here it is guys… or better said: soon you will have it!
  • SMP vCPU FT – up to 4 vCPU support
    • I like FT from an innovation point of view, but it isn’t a feature I would personally use too much as I feel “fault tolerance” from an app perspective needs to be solved by the app. Now, I do realize that there are MANY legacy applications out there, and if you have a scale-up application which needs to be highly available then SMP FT is very useful. Do note that with this release the architecture of FT has changed. For instance you used to share the same “VMDK” for both primary and secondary, but that is no longer the case.
  • vMotion across anything
    • vMotion across vCenter instances
    • vMotion across Distributed Switch
    • vMotion across very large distance, support up to 100ms latency
    • vMotion to vCloud Air datacenter
  • Introduction of Virtual Datacenter concept in vCenter
    • Enhance “policy driven” experience within vCenter. Virtual Datacenter aggregates compute clusters, storage clusters, networks, and policies!
  • Content Library
    • Content Library provides storage and versioning of files including VM templates, ISOs, and OVFs.
      Includes powerful publish and subscribe features to replicate content
      Backed by vSphere Datastores or NFS
  • Web Client performance / enhancement
    • Recent tasks pane drops to the bottom instead of on the right
    • Performance vastly improved
    • Menus flattened
  • DRS placement “network aware”
    • Hosts with high network contention can show low CPU and memory usage, DRS will look for more VM placements
    • Provide network bandwidth reservation for VMs and migrate VMs in response to reservation violations!
  • vSphere HA component protection
    • Helps when hitting “all paths down” situations by allowing HA to take action on impacted virtual machines
  • Virtual Volumes, bringing the VSAN “policy goodness” to traditional storage systems

Of course there is more, but these are the ones that were discussed at VMworld… for the remainder you will have to wait until the next version of vSphere is released, or, I believe, you can still sign up for the beta!

"What is coming for vSphere and VSAN? VMworld reveals…" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at October 21, 2014 12:55 PM

Chris Siebenmann

Some numbers on our inbound and outbound TLS usage in SMTP

As a result of POODLE, it's suddenly rather interesting to find out the volume of SSLv3 usage that you're seeing. Fortunately for us, Exim directly logs the SSL/TLS protocol version in a relatively easy to search for format; it's recorded as the 'X=...' parameter for both inbound and outbound email. So here's some statistics, first from our external MX gateway for inbound messages and then from our other servers for external deliveries.

Over the past 90 days, we've received roughly 1.17 million external email messages. 389,000 of them were received with some version of SSL/TLS. Unfortunately our external mail gateway currently only supports up to TLS 1.0, so the only split I can report is that only 130 of these messages were received using SSLv3 instead of TLS 1.0. 130 messages is low enough for me to examine the sources by hand; the only particularly interesting and eyebrow-raising ones were a couple of servers at a US university and a .nl ISP.
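A tally like this can be pulled from Exim logs with standard tools by extracting the protocol name from the X= parameter. A minimal sketch (the log path and sample lines below are invented; use your real mainlog):

```shell
# Count SSL/TLS protocol versions seen in an Exim mainlog. The X= field
# has the form X=<protocol>:<cipher>:<bits>, so we keep just the protocol
# part and count each distinct value. Sample log lines are made up.
cat > /tmp/tls-mainlog <<'EOF'
2014-10-20 10:00:01 1Xfoo1-000001-AA <= a@example.com X=TLS1.0:AES256-SHA:256
2014-10-20 10:00:02 1Xfoo2-000002-BB <= b@example.com X=TLS1.0:AES128-SHA:128
2014-10-20 10:00:03 1Xfoo3-000003-CC <= c@example.com X=SSLv3:DES-CBC3-SHA:168
2014-10-20 10:00:04 1Xfoo4-000004-DD <= d@example.com
EOF
grep -o 'X=[^:]*' /tmp/tls-mainlog | sort | uniq -c | sort -rn
```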

(I'm a little bit surprised that our Exim doesn't support higher TLS versions, to be honest. We're using Exim on Ubuntu 12.04, which I would have thought would support something more than just TLS 1.0.)

On our user mail submission machine, we've delivered to 167,000 remote addresses over the past 90 days. Almost all of them, 158,000, were done with SSL/TLS. Only three of them used SSLv3 and they were all to the same destination; everything else was TLS 1.0.

(It turns out that very few of our user submitted messages were received with TLS, only 0.9%. This rather surprises me but maybe many IMAP programs default to not using TLS even if the submission server offers it. All of these small number of submissions used TLS 1.0, as I'd hope.)

Given that our Exim version only supports TLS 1.0, these numbers are more boring than I was hoping they'd be when I started writing this entry. That's how it goes sometimes; the research process can be disappointing as well as educational.

(I did verify that our SMTP servers really only do support up to TLS 1.0 and it's not just that no one asked for a higher version than that.)

One set of numbers I'd like to get for our inbound email is how TLS usage correlates with spam score. Unfortunately our inbound mail setup makes it basically impossible to correlate the bits together, as spam scoring is done well after TLS information is readily available.

Sidebar: these numbers don't quite mean what you might think

I've talked about inbound message deliveries and outbound destination addresses here because that's what Exim logs information about, but of course what is really encrypted is connections. One (encrypted) connection may deliver multiple inbound messages and certainly may be handed multiple RCPT TO addresses in the same conversation. I've also made no attempt to aggregate this by source or destination, so very popular sources or destinations (like, say, Gmail) will influence these numbers quite a lot.

All of this means that these sorts of numbers can't be taken as an indication of how many sources or destinations do TLS with us. All I can talk about is message flows.

(I can't even talk about how many outgoing messages are completely protected by TLS, because to do that I'd have to work out how many messages had no non-TLS deliveries. This is probably possible with Exim logs, but it's more work than I'm interested in doing right now. Clearly what I need is some sort of easy to use Exim log aggregator that will group all log messages for a given email message together and then let me do relatively sophisticated queries on the result.)

by cks at October 21, 2014 03:28 AM

October 20, 2014

Everything Sysadmin

See you tomorrow evening at the Denver DevOps Meetup!

Hey Denver folks! Don't forget that tomorrow evening (Tue, Oct 21) I'll be speaking at the Denver DevOps Meetup. It starts at 6:30pm! Hope to see you there!

http://www.meetup.com/DenverDevOps/events/213369602/

October 20, 2014 04:28 PM

Mark Shuttleworth

V is for Vivid

Release week! Already! I wouldn’t call Trusty ‘vintage’ just yet, but Utopic is poised to leap into the torrent stream. We’ve all managed to land our final touches to *buntu and are excited to bring the next wave of newness to users around the world. Glad to see the unicorn theme went down well, judging from the various desktops I see on G+.

And so it’s time to open the vatic floodgates and invite your thoughts and contributions to our soon-to-be-opened iteration next. Our ventrous quest to put GNU as you love it on phones is bearing fruit, with final touches to the first image in a new era of convergence in computing. From tiny devices to personal computers of all shapes and sizes to the ventose vistas of cloud computing, our goal is to make a platform that is useful, versal and widely used.

Who would have thought – a phone! Each year in Ubuntu brings something new. It is a privilege to celebrate our tenth anniversary milestone with such vernal efforts. New ecosystems are born all the time, and it’s vital that we refresh and renew our thinking and our product in vibrant ways. That we have the chance to do so is testament to the role Linux at large is playing in modern computing, and the breadth of vision in our virtual team.

To our fledgling phone developer community, for all your votive contributions and vocal participation, thank you! Let’s not be vaunty: we have a lot to do yet, but my oh my what we’ve made together feels fantastic. You are the vigorous vanguard, the verecund visionaries and our venerable mates in this adventure. Thank you again.

This verbose tract is a venial vanity, a chance to vector verbal vibes, a map of verdant hills to be climbed in months ahead. Amongst those peaks I expect we’ll find new ways to bring secure, free and fabulous opportunities for both developers and users. This is a time when every electronic thing can be an Internet thing, and that’s a chance for us to bring our platform, with its security and its long term support, to a vast and important field. In a world where almost any device can be smart, and also subverted, our shared efforts to make trusted and trustworthy systems might find fertile ground. So our goal this next cycle is to show the way past a simple Internet of things, to a world of Internet things-you-can-trust.

In my favourite places, the smartest thing around is a particular kind of monkey. Vexatious at times, volant and vogie at others, a vervet gets in anywhere and delights in teasing cats and dogs alike. As the upstart monkey in this business I can think of no better mascot. And so let’s launch our vicenary cycle, our verist varlet, the Vivid Vervet!

by mark at October 20, 2014 01:22 PM

Google Blog

DISTRICT VOICES: Inside Panem with our finest citizens

Meet District Voices, the latest campaign in our Art, Copy & Code project—where we explore new ways for brands to connect with consumers through experiences that people love, remember and share. District Voices was created in partnership with Lionsgate to promote the upcoming release of The Hunger Games: Mockingjay Part 1. -Ed.

Greetings, Citizens of Panem!

The Capitol has joined forces with Google and YouTube to celebrate the proud achievements of our strong, lively districts. Premiering today on YouTube, a new miniseries called DISTRICT VOICES will take you behind the scenes to meet some of Panem’s most creative—and loyal—citizens.

At 4 p.m. EDT/ 1 p.m. PDT every day this week, one of your favorite Citizen creators from YouTube will give you a never-before-seen tour of their districts. First, the Threadbanger textile experts of District 8 will show how utility meets beauty in this season’s fashion—plus, you’ll get a look at a new way to wear your Capitol pride. Tomorrow, District 2's Shane Fazen will provide a riveting demonstration of how we keep our noble peacekeepers in tip-top shape. On Wednesday, Derek Muller from District 5—Panem’s center of power generation—will give you a peek at a revolutionary new way to generate electricity. Thursday The Grain District’s own Feast of Fiction will show you how to bake one of beloved victor Peeta Mellark’s most special treats. And finally, iJustine, District 6’s liaison to the Capitol, will give you an exclusive glimpse at the majestic and powerful peacekeeper vehicles in action.

Tune in at CAPITOL TV. And remember—Love your labor. Take pride in your task. Our future is in your hands.

by Emily Wood (noreply@blogger.com) at October 20, 2014 10:05 AM

Tech Teapot

New Aviosys IP Power 9858 Box Opening

A series of box opening photos of the new Aviosys IP Power 9858 4 port network power switch. This model will in due course replace the Aviosys IP Power 9258 series of power switches. The 9258 series is still available in the meantime, though, so don’t worry.

The new model supports WiFi (802.11 b/g/n, with WPS for easy WiFi setup), auto-reboot on ping failure, a time-of-day scheduler and an internal temperature sensor. Aviosys have also built apps for iOS and Android, so you can now manage your power switch on the move. Together with the 8 port Aviosys IP Power 9820, they provide very handy tools for remote power management of devices. Say goodbye to travelling to a remote site just to reboot a broadband router.

[Photo gallery: Aviosys IP Power 9858DX closed box, open box, front with WiFi aerial, front panel, rear panel, and a rear close-up]

 

The post New Aviosys IP Power 9858 Box Opening appeared first on Openxtra Tech Teapot.

by Jack Hughes at October 20, 2014 07:00 AM

Chris Siebenmann

Revisiting Python's string concatenation optimization

Back in Python 2.4, CPython introduced an optimization for string concatenation that was designed to reduce memory churn in this operation and I got curious enough about this to examine it in some detail. Python 2.4 is a long time ago and I recently was prompted to wonder what had changed since then, if anything, in both Python 2 and Python 3.

To quickly summarize my earlier entry, CPython only optimizes string concatenations by attempting to grow the left side in place instead of making a new string and copying everything. It can only do this if the left side string only has (or clearly will have) a reference count of one, because otherwise it's breaking the promise that strings are immutable. Generally this requires code of the form 'avar = avar + ...' or 'avar += ...'.

As of Python 2.7.8, things have changed only slightly. In particular concatenation of Unicode strings is still not optimized; this remains a byte string only optimization. For byte strings there are two cases. Strings under somewhat less than 512 bytes can sometimes be grown in place by a few bytes, depending on their exact sizes. Strings over that can be grown if the system realloc() can find empty space after them.

(As a trivial case, CPython also optimizes concatenating an empty string to something by just returning the other string with its reference count increased.)

In Python 3, things are more complicated but the good news is that this optimization does work on Unicode strings. Python 3.3+ has a complex implementation of (Unicode) strings, but it does attempt to do in-place resizing on them under appropriate circumstances. The first complication is that internally Python 3 has a hierarchy of Unicode string storage and you can't do an in-place concatenation of a more complex sort of Unicode string into a less complex one. Once you have compatible strings in this sense, in terms of byte sizes the relevant sizes are the same as for Python 2.7.8; Unicode string objects that are less than 512 bytes can sometimes be grown by a few bytes while ones larger than that are at the mercy of the system realloc(). However, how many bytes a Unicode string takes up depends on what sort of string storage it is using, which I think mostly depends on how big your Unicode characters are (see this section of the Python 3.3 release notes and PEP 393 for the gory details).

So my overall conclusion remains as before; this optimization is chancy and should not be counted on. If you are doing repeated concatenation you're almost certainly better off using .join() on a list; if you think you have a situation that's otherwise, you should benchmark it.
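As a small illustration of that advice (the part sizes here are arbitrary), the two approaches give identical results, but only ''.join() has dependable performance:

```python
# Compare repeated += against ''.join(). In CPython the += form may be
# grown in place, but only under the conditions described above (a sole
# reference, compatible string storage, a cooperative realloc()), so it
# can silently degrade to quadratic copying elsewhere.
def concat_plus(parts):
    s = ""
    for p in parts:
        s += p        # candidate for CPython's in-place growth optimization
    return s

def concat_join(parts):
    return "".join(parts)   # one allocation pass; the dependable choice

parts = ["abc"] * 1000
assert concat_plus(parts) == concat_join(parts) == "abc" * 1000
```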

(In Python 3, the place to start is PyUnicode_Append() in Objects/unicodeobject.c. You'll probably also want to read Include/unicodeobject.h and PEP 393 to understand this, and then see Objects/obmalloc.c for the small object allocator.)

Sidebar: What the funny 512 byte breakpoint is about

Current versions of CPython 2 and 3 allocate 'small' objects using an internal allocator that I think is basically a slab allocator. This allocator is used for all overall objects that are 512 bytes or less and it rounds object size up to the next 8-byte boundary. This means that if you ask for, say, a 41-byte object you actually get one that can hold up to 48 bytes and thus can be 'grown' in place up to this size.

by cks at October 20, 2014 04:37 AM

October 19, 2014

Ubuntu Geek

Configuring layer-two peer-to-peer VPN using n2n

n2n is a layer-two peer-to-peer virtual private network (VPN) which allows users to exploit features typical of P2P applications at network instead of application level. This means that users can gain native IP visibility (e.g. two PCs belonging to the same n2n network can ping each other) and be reachable with the same network IP address regardless of the network where they currently belong. In a nutshell, as OpenVPN moved SSL from application (e.g. used to implement the https protocol) to network protocol, n2n moves P2P from application to network level.
(...)
Read the rest of Configuring layer-two peer-to-peer VPN using n2n (416 words)
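As a sketch of the workflow described above, based on the classic n2n command-line syntax (the community name, key, addresses and hostname below are made-up examples), a minimal setup looks roughly like this. These commands need the n2n package installed and root privileges, so they are shown for illustration rather than as a runnable test:

```shell
# On a publicly reachable machine, start the supernode (the rendezvous
# point that introduces peers to each other):
supernode -l 1234

# On each peer, join the "mynetwork" community with a shared key and a
# static VPN address; peers in the same community can then ping each
# other directly at their 10.1.2.x addresses:
edge -a 10.1.2.1 -c mynetwork -k mysecretkey -l supernode.example.com:1234
```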



by ruchi at October 19, 2014 11:20 PM

Evaggelos Balaskas

SatNOGS - Satellite Networked Open Ground Station

What started as a NASA Space Apps Challenge entry has now become an extraordinary open-source achievement and one of the top five finalists on hackaday.io.

What is SatNOGS in non-technical words: imagine a cheap, mobile, open-hardware ground station that can collaborate through the internet with other ground stations and gather satellite signals, all together participating in a holistic, open-source/open-data and publicly accessible database/site!

If you are thinking that can’t be right, the answer is that it is!

The amazing team behind SatNOGS is working around the clock, non-stop, with only open hardware and free software, to do exactly that!

It is a fully modular system (you can choose your own antennas or base setup). You can review the entire code on GitHub, watch high-quality videos and guides for every step of the process, and participate via comments, emails or even satellite signals!

satnogs_02.jpg

3D printing is one of the major components of their journey so far. They have already published every design they are using for the SatNOGS project on GitHub! You just need to print them. All the non-3D-printed hardware is available at any hardware store near you. The members of this project have published the Arduino code and schematics for the electronics too!

Everything is fully documented in detail, and everything is open source!

AMAZING!

satnogs.jpg

It seems that I may be biased, so don’t believe anything I am writing.
See for yourself and be mind-blowingly impressed by the quality of their hardware documentation.

Visit their Facebook page for news, and contact them if you have a brilliant idea about satellites or just want a status update on their work.

How about the team?

I’ve met the entire team at Athens Hackerspace, and the first thing that came to my mind (and it is most impressive) was the diversity of the members themselves.

Not only in age (most of them are university students, but older hobbyists are participating too) but also in their technical areas of expertise. This team can easily solve every practical problem they encounter in the process.

SatNOGS, as I’ve already mentioned, is fully active, and it all started (with the big bang, of course) with an idea: to reach and communicate with space (the final frontier). Satellites are sending signals 24/7, but ground stations can’t reach every satellite (I am not talking about geostationary satellites), and there is no one to acknowledge that. The problem that SatNOGS is solving is real.

And I hope that with this blog post, more people can understand how important it is that this project scales to more hackerspaces around the globe.

To see more, just click here; you can monitor the entire process so far.

Tag(s): SatNOGS

October 19, 2014 09:28 PM

Ferry Boender

Bexec v0.8: Execute a vim buffer and capture output in split window

I released v0.8 of my Bexec vim plugin. The Bexec plugin allows the user to execute the current buffer if it contains a script with a shebang (#!/path/to/interpreter) on the first line or if the default interpreter for the script's type is known by Bexec. The output of the script will be grabbed and displayed in a separate buffer. 

New in this release:

  • Honor splitbelow and splitright vim setting (patch by Christopher Pease).

bexec

Installation instructions:

  1. Download the Vimball
  2. Start vim with: vim bexec-v0.8.vmb
  3. In Vim, type: :source %
  4. Bexec is now installed. Type :Bexec to run it, or use <MapLeader>bx
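For a quick test after installing, any buffer whose first line is a shebang will do; for example (the script contents are arbitrary):

```shell
#!/bin/sh
# A minimal buffer for Bexec to execute: the shebang on line 1 selects the
# interpreter, and :Bexec (or <MapLeader>bx) captures this output in a
# separate split-window buffer.
echo "hello from Bexec"
```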

 

 

by admin at October 19, 2014 01:22 PM


Chris Siebenmann

Vegeta, a tool for web server stress testing

Standard stress testing tools like siege (or the venerable ab, which you shouldn't use) are all systems that do N concurrent requests at once and see how your website stands up to this. This model is a fine one for putting a consistent load on your website for a stress test, but it's not actually representative of how the real world acts. In the real world you generally don't have, say, 50 clients all trying to repeatedly make and re-make one request to you as fast as they can; instead you'll have 50 new clients (and requests) show up every second.

(I wrote about this difference at length back in this old entry.)

Vegeta is an HTTP load and stress testing tool that I stumbled over at some point. What really attracted my attention is that it uses an 'N requests a second' model instead of the concurrent request model. As a bonus it will also report not just average performance but also tail latencies, in the form of 90th and 99th percentiles. It's written in Go, which some of my readers may find annoying but which I rather like.

I gave it a try recently and, well, it works. It does what it says it does, which means that it's now become my default load and stress testing tool; 'N new requests a second' is a more realistic and thus interesting test than 'N concurrent requests' for my software (especially here, for obvious reasons).

(I may still do N concurrent requests tests as well, but it'll probably mostly be to see if there are issues that come up under some degree of consistent load and if I have any obvious concurrency race problems.)

Note that as with any HTTP stress tester, testing with high load levels may require a fast system (or systems) with plenty of CPUs, memory, and good networking if applicable. And as always you should validate that vegeta is actually delivering the degree of load that it should be, although this is actually reasonably easy to verify for a 'N new request per second' tester.

(Barring errors, N new requests a second over an M second test run should result in N*M requests made and thus appearing in your server logs. I suppose the next time I run a test with vegeta I should verify this myself in my test environment. In my usage so far I just took it on trust that vegeta was working right, which in light of my ab experience may be a little bit optimistic.)
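That arithmetic can be written down as a tiny sanity check. In this sketch the rate, duration, and log lines are all hypothetical stand-ins for a real vegeta run and a real access log:

```python
# Sanity check for an 'N new requests a second' tester: barring errors,
# N requests/second over an M-second run should leave N*M request lines
# in the server's logs.
def count_requests(log_lines, path="/"):
    """Count log lines that record a GET for the given path."""
    return sum(1 for line in log_lines if f"GET {path} " in line)

rate, duration = 50, 30        # e.g. hypothetical -rate=50 -duration=30s
expected = rate * duration     # 1500 requests should appear in the logs

# Stand-in access log; in practice, read your web server's real log file.
sample_log = [f'127.0.0.1 - - "GET / HTTP/1.1" 200 {i}' for i in range(1500)]
assert count_requests(sample_log) == expected
```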

by cks at October 19, 2014 06:04 AM

October 18, 2014

SysAdmin1138

For other Movable Type blogs out there

If you're wondering why comments aren't working, as I was, and are on shared hosting, as I am, and get to looking at your error_log file and see something like this in it:

[Sun Oct 12 12:34:56 2014] [error] [client 192.0.2.5] 
ModSecurity: Access denied with code 406 (phase 2).
Match of "beginsWith http://%{SERVER_NAME}/" against "MATCHED_VAR" required.
[file "/etc/httpd/modsecurity.d/10_asl_rules.conf"] [line "1425"] [id "340503"] [rev "1"]
[msg "Remote File Injection attempt in ARGS (/cgi-bin/mt4/mt-comments.cgi)"]
[severity "CRITICAL"]
[hostname "example.com"]
[uri "/cgi-bin/mt/mt-comments.cgi"]
[unique_id "PIMENTOCAKE"]

It's not just you.

It seems that some webhosts have a mod_security rule in place that bans submitting anything through "mt-comments.cgi". As this is the main way MT submits comments, this kind of breaks things. Happily, working around a rule like this is dead easy.

  1. Rename your mt-comments.cgi file to something else
  2. Add "CommentScript ${renamed file}" to your mt-config.cgi file
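Concretely, the two steps might look like this. The new script name mt-talk.cgi is just an example (any name the mod_security rule won't match will do), and the first line only creates a stand-in for a real MT install so the sketch is self-contained:

```shell
# Stand-in for an existing Movable Type cgi-bin layout; in reality you'd
# work inside your real install's directories.
mkdir -p cgi-bin/mt && touch cgi-bin/mt/mt-comments.cgi

# 1. Rename mt-comments.cgi to a name the mod_security rule won't match.
mv cgi-bin/mt/mt-comments.cgi cgi-bin/mt/mt-talk.cgi

# 2. Point Movable Type at the renamed script in mt-config.cgi.
echo 'CommentScript mt-talk.cgi' >> mt-config.cgi
```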

And suddenly comments start working again!

Except for Google, since they're deprecating OpenID support.

by SysAdmin1138 at October 18, 2014 09:46 PM



Administered by Joe. Content copyright by their respective authors.