Planet Sysadmin               

          blogs for sysadmins, chosen by sysadmins...

November 27, 2014

Chris Siebenmann

Using iptables to block traffic that's not protected by IPSec

When I talk about my IPSec setup, I often say that I use GRE over IPSec (or 'an IPSec based GRE tunnel'). However, this is not really what is going on; a more accurate but more opaque description is that I have a GRE tunnel that is encrypted and protected by IPSec. The problem, and the reason that the difference matters, is that there is nothing that intrinsically ties the two pieces together, unlike something where you are genuinely running X over Y such as 'forwarding X11 over SSH'. In the X11 over SSH case, if SSH is not working you do not get anything. But in my case if IPSec isn't there for some reason my GRE tunnel will cheerfully continue working, just without any protection against either eavesdropping or impersonation.

In theory this is undoubtedly not supposed to happen, since you (I) designed your GRE setup to work in conjunction with IPSec. Unfortunately, in practice there are any number of ways for IPSec to go away on you, possibly without destroying the GRE tunnel in the process. Your IPSec IKE daemon probably removes the IPSec security policies that reject unencrypted traffic when it shuts down, for example, and if you're manually configuring IPSec with setkey you can do all sorts of fun things like accidentally leaving a 'spdflush;' command in a control file that only (re)loads keys and is no longer used to set up the security policies.

The obvious safety method is to add some iptables rules that block unencrypted GRE traffic. If you are like me, you'll start out by writing the obvious iptables ruleset:

iptables -A INPUT -p esp -j ACCEPT
iptables -A INPUT -p gre -j DROP

This doesn't work. As far as I can tell, the Linux IPSec system effectively re-injects the decrypted packets into the IP stack, where they will be seen in their unencrypted state by iptables rules (as well as by tcpdump, which can be both confusing and alarming). The result is that after the re-injection the iptables rules see a plain GRE packet and drop it.

Courtesy of this netfilter mailing list message, it turns out that what you need is to match packets that will be or have been processed by IPSec. This is done with a policy match:

iptables -A INPUT -m policy --dir in --pol ipsec -j ACCEPT
iptables -A INPUT -p gre -j DROP

# and for outgoing packets:
iptables -A OUTPUT -m policy --dir out --pol ipsec -j ACCEPT
iptables -A OUTPUT -p gre -j DROP

Reading the iptables-extensions manpage suggests that I should add at least '--proto esp' to the policy match for extra paranoia.

I've tested these rules and they work. They pass GRE traffic that is protected by IPSec, but if I remove the IPSec security policies that force IPSec for my GRE traffic these iptables rules block the unprotected GRE traffic as I want.
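
For reference, the '--proto esp' refinement mentioned above would presumably look like the following (a sketch only, not tested the way the rules above were):

iptables -A INPUT -m policy --dir in --pol ipsec --proto esp -j ACCEPT
iptables -A INPUT -p gre -j DROP

iptables -A OUTPUT -m policy --dir out --pol ipsec --proto esp -j ACCEPT
iptables -A OUTPUT -p gre -j DROP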

(Extension to non-GRE traffic is left as an exercise to the reader. I have a simple IPSec story in that I'm only using it to protect GRE and I never want GRE traffic to flow without IPSec to any destination under any circumstances. Note that there are potentially tricky rule ordering issues here and you probably want to always put this set of rules at the end of your processing.)

by cks at November 27, 2014 04:16 AM

November 26, 2014

UnixDaemon

Use Ansible to Expand CloudFormation Templates

After a previous comment about "templating CloudFormation JSON from a tool higher up in your stack" I had a couple of queries about how I'm doing this. In this post I'll show a small example that explains the workflow. We're going to create a small CloudFormation template, with a single embedded Jinja2 directive, and call it from an example playbook.

This template creates an S3 bucket resource and dynamically sets the "DeletionPolicy" attribute based on a value in the playbook. We use a file extension of '.json.j2' to distinguish our pre-expanded templates from those that need no extra work. The line of interest in the template itself is "DeletionPolicy": "{{ deletion_policy }}". This is a Jinja2 directive that Ansible will interpolate and replace with a literal value from the playbook, helping us move past a CloudFormation Annoyance, Deletion Policy as a Parameter. Note that this template has no parameters; we're doing the work in Ansible itself.



    $ cat templates/deletion-policy.json.j2 

    {
      "AWSTemplateFormatVersion": "2010-09-09",

      "Description": "Retain on delete jinja2 template",

      "Resources": {

        "TestBucket": {
          "DeletionPolicy": "{{ deletion_policy }}",
          "Type": "AWS::S3::Bucket",
          "Properties": {
            "BucketName": "my-test-bucket-of-54321-semi-random-naming"
          }
        }

      }
    }


Now we move on to the playbook. The important part of the preamble is the deletion_policy variable, where we set the value for later use in the template. We then move on to the two essential tasks and one housekeeping task.



    $ cat playbooks/deletion-policy.playbook 
    ---
    - hosts: localhost
      connection: local
      gather_facts: False 
      vars:
        template_dir: "../templates"
        deletion_policy: "Retain" # also takes "Delete" or "Snapshot"


Because the Ansible CloudFormation module doesn't have an inbuilt option to process Jinja2 we create the stack in two stages. First we process the raw jinja JSON document and create an intermediate file. This will have the directives expanded. We then run the CloudFormation module using the newly generated file.



  tasks:
  - name: Expand the CloudFormation template for future use.
    local_action: template src={{ template_dir }}/deletion-policy.json.j2 dest={{ template_dir }}/deletion-policy.json

  - name: Create a simple stack
    cloudformation: >
      stack_name=deletion-policy
      state=present
      region="eu-west-1"
      template={{ template_dir }}/deletion-policy.json


The final task is an optional little bit of housekeeping. We remove the file we generated earlier.



  - name: Clean up the local, generated, file
    file: name={{ template_dir }}/deletion-policy.json state=absent


We've only covered a simple example here but if you're willing to commit to preprocessing your templates you can add a lot of flexibility, and heavily reduce the line count, using techniques like this. Creating multiple subnets in a VPC, adding route associations and such is another good place to introduce these techniques.
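
For reference, a minimal invocation sketch for the playbook above (assuming it is run from the repository root, with a throwaway 'localhost,' inventory and AWS credentials already present in the environment):

    $ ansible-playbook -i 'localhost,' playbooks/deletion-policy.playbook

    # for disposable stacks, override the variable on the command line instead
    $ ansible-playbook -i 'localhost,' playbooks/deletion-policy.playbook -e deletion_policy=Delete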

November 26, 2014 02:27 PM

Chris Siebenmann

Using go get alone is a bad way to keep track of interesting packages

When I was just starting with Go, I kept running into interesting Go packages that I wanted to keep track of and maybe use someday. 'No problem', I thought, 'I'll just go get them so I have them sitting around and maybe I'll look at them too'.

Please allow yourself to learn from my painful experience here and don't do this. Specifically, don't rely on 'go get' as your only way to keep track of packages you want to keep an eye on, because in practice doing so is a great way to forget what those packages are. There's no harm in go get'ing packages you want to have handy to look through, but do something in addition to keep track of what packages you're interested in and why.

At first, there was nothing wrong with what I was doing. I could easily look through the packages and even if I didn't, they sat there in $GOPATH/src so I could keep track of them. Okay, they were about three levels down from $GOPATH/src itself, but no big deal. Then I started getting interested in Go programs like vegeta, Go Package Store, and delve, plus I was installing and using more mundane programs like goimports and golint. The problem with all of these is that they have dependencies of their own, and all of these dependencies wind up in $GOPATH/src too. Pretty soon my Go source area was a dense thicket of source trees that intermingled programs, packages I was interested in in their own right, and dependencies of these first two.

After using Go seriously for not very long I've wound up with far too many packages and repos in $GOPATH/src to keep any sort of track of, and especially to remember off the top of my head which packages I was interested in. Since I was relying purely on go get to keep track of interesting Go packages, I have now essentially lost track of most of them. The interesting packages I wanted to keep around because I might use them have become lost in the noise of the dependencies, because I can't tell one from the other without going through all 50+ of the repos to read their READMEs.

As you might guess, I'd be much better off if I'd kept an explicit list of the packages I found interesting in some form. A text file of URLs would be fine; adding notes about what they did and why I thought they were interesting would be better. That would make it trivial to sort out the wheat from the chaff that's just there because of dependencies.

(These days I've switched to doing this for new interesting packages I run across, but there's some number of packages from older times that are lost somewhere in the depths of $GOPATH/src.)

PS: This can happen with programs too, but at least there tends to be less in $GOPATH/bin than in $GOPATH/src so it's easier to keep track of them. But if you have an ever-growing $GOPATH/bin with an increasing number of programs you don't actually care about, there's the problem again.

by cks at November 26, 2014 06:45 AM


Ubuntu Geek

Install Owncloud on Ubuntu 14.10 (Utopic Unicorn)

ownCloud is open source file sync and share software for everyone from individuals operating the free ownCloud Community Edition, to large enterprises and service providers operating the ownCloud Enterprise Edition. ownCloud provides a safe, secure, and compliant file synchronization and sharing solution on servers that you control.

With ownCloud you can share one or more files and folders on your computer, and synchronize them with your ownCloud server. Place files in your local shared directories, and those files are immediately synchronized to the server and to other devices using the ownCloud Desktop Client. Not near a device running a desktop client? No problem! Simply log in using the ownCloud web client and manage your files from there. The ownCloud Android and iOS mobile applications enable you to browse, download, and upload photos and videos. On Android, you can also create, download, edit, and upload any other files, as long as the correct software is installed.
(...)
Read the rest of Install Owncloud on Ubuntu 14.10 (Utopic Unicorn) (167 words)



by ruchi at November 26, 2014 12:43 AM

November 25, 2014

Rands in Repose

Organizational Lessons from Slime Mold

Via Kabir Chibber on Quartz:

Explore, remove hierarchies, and remember what you did wrong and tell someone:


by rands at November 25, 2014 03:34 PM

Chris Siebenmann

My Linux IPSec/VPN setup and requirements

In response to my entry mentioning perhaps writing my own daemon to rekey my IPSec tunnel, a number of people made suggestions in comments. Rather than write a long response, I've decided to write up how my current IPSec tunnel works and what my requirements are for it or any equivalent. As far as I know these requirements rule out most VPN software, at least in its normal setup.

My IPSec based GRE tunnel runs between my home machine and my work machine and its fundamental purpose is to cause my home machine to appear on the work network as just another distinct host with its own IP address. Importantly this IP address is publicly visible, not just an internal one. My home machine routes some but not all of its traffic over the IPSec tunnel and for various reasons I need full dual identity routing for it; traffic to or from the internal IP must flow over the IPSec tunnel while traffic to or from the external IP must not. My work machine also has additional interfaces that I need to isolate, which can get a bit complicated.

(The actual setup of this turns out to be kind of tangled, with some side effects.)

This tunnel is normally up all of the time, although under special circumstances it needs to be pulled down locally on my work machine (and independently on my home machine). Both home and work machines have static IPs. All of this works today; the only thing that my IPSec setup lacks is periodic automatic rekeying of the IPSec symmetric keys used for encryption and authentication.

Most VPN software that I've seen wants to either masquerade your traffic as coming from the VPN IP itself or to make clients appear on a (virtual) subnet behind the VPN server with explicit routing. Neither is what I want. Some VPNs will bridge networks together; this is not appropriate either because I have no desire to funnel all of the broadcast traffic running around on the work subnet over my DSL PPPoE link. Nor can I use pure IPSec alone, due to a Linux proxy ARP limitation (unless this has been fixed since then).

I suspect that there is no way to tell IKE daemons 'I don't need you to set things up, just to rekey this periodically'; this would be the minimally intrusive change. There is probably a way to configure a pair of IKE daemons to do everything, so that they fully control the whole IPSec and GRE tunnel setup; there is probably even a way to tell them to kick off the setup of policy based routing when a connection is negotiated. However for obvious reasons my motivation for learning enough about IKE configuration to recreate my whole setup is somewhat low, as much of the work is pure overhead that's required just to get me to where I already am now. On the other hand, if a working IKE based configuration for all of this fell out of the sky I would probably be perfectly happy to use it; I'm not intrinsically opposed to IKE, just far from convinced that investing a bunch of effort into decoding how I need to set it up will get me much or be interesting.

(It would be different if documentation for IKE daemons was clear and easy to follow, but so far I haven't found any that is. Any time I skim any of it I can see a great deal of annoyance in my future.)

PS: It's possible that my hardcoded IPSec setup is not the most modern in terms of security, since it dates from many years ago. Switching to a fully IKE-mediated setup would in theory give me a free ride on future best practices for IPSec algorithm choices so I don't have to worry about this.

Sidebar: why I feel that writing my own rekeying daemon is safe

The short version is that the daemon would not be involved in setting up the secure tunnel itself, just getting new keys from /dev/urandom, telling the authenticated other end about them, writing them to a setkey script file, and running the necessary commands to (re)load them. I'd completely agree with everyone who is telling me to use IKE if I was attempting to actively negotiate a full IPSec setup, but I'm not. The IPSec setup is very firmly fixed; the only thing that varies is the keys. There are ways to lose badly here, but they're almost entirely covered by using a transport protocol with strong encryption and authentication and then insisting on fixed IP addresses on top of it.

(Note that I won't be negotiating keys with the other end as such. Whichever end initiates a rekey will contact the other end to say more or less 'here are my new keys, now use them'. And I don't intend to give the daemon the ability to report on the current keys. If I need to know them I can go look outside of the daemon. If the keys are out of sync or broken, well, the easy thing is to force an immediate rekey to fix it, not to read out current keys to try to resync each end.)

by cks at November 25, 2014 05:26 AM

November 24, 2014

Chris Siebenmann

Delays on bad passwords considered harmful, accidental reboot edition

Here is what I just did to myself, in transcript form:

$ /bin/su
Password: <type>
[delay...]
['oh, I must have mistyped the password']
[up-arrow CR to repeat the su]
bash# reboot <CR>

Cue my 'oh damn' reaction.

The normal root shell is bash and it had a bash history file with 'reboot' as the most recent command. When my su invocation didn't drop me into a root shell immediately I assumed that I'd fumbled the password and it was forcing a retry delay (as almost all systems are configured to do). These retry delays have trained me so that practically any time su stalls on a normal machine I just repeat the su; in a regular shell session this is through my shell's interactive history mechanism with an up-arrow and a CR, which I can type ahead before the shell prompt reappears (and so I do).

Except this time around su had succeeded and either the machine or the network path to it was slow enough that it had just looked like it had failed, so my 'up-arrow CR' sequence was handled by the just started root bash and was interpreted as 'pull the last command out of history and repeat it'. That last command happened to be a 'reboot', because I'd done that to the machine relatively recently.

(The irony here is that following my own advice I'd turned the delay off on this machine. But we have so many others with the delay on that I've gotten thoroughly trained in what to do on a delay.)

by cks at November 24, 2014 09:01 PM

Yellow Bricks

Sharing VMUG presentation “vSphere futures”


Last week I presented at the UK VMUG, Nordic VMUG and VMUG Belgium. My topic was vSphere futures… I figured I would share the deck publicly. The deck is based on this blog post and is essentially a collection of what was revealed at the last VMworld. Considering the number of announcements, I am guessing that this deck is a nice summary of what is coming; feel free to use it, share it, comment, etc.

Once again, I would like to thank the folks of the VMUG organizations throughout EMEA for inviting me; three great events last week with very passionate people. One thing I want to call out in particular that struck me last week: Erik from the VMUG in Belgium has created a charity program where he asks sponsors (and attendees) to contribute to charity. At the last event he collected over 8000 euros, which went to a local charity; it was the biggest donation that this particular charity had received in a long time and you can imagine they were very thankful… all of this while keeping the event free for attendees. Great work Erik! Thanks for giving back to the community in various ways.

See you next time.

"Sharing VMUG presentation “vSphere futures”" originally appeared on Yellow-Bricks.com. Follow me on twitter - @DuncanYB.


Pre-order my upcoming book Essential Virtual SAN via Pearson today!

by Duncan Epping at November 24, 2014 04:02 PM

Adams Tech Talk

SSH Fingerprint and Hostkey with Paramiko in Python

Following on from SSH and SFTP with Paramiko & Python, I recently had the need to gain a remote SSH server’s fingerprint and hostkey for verification purposes. This is achievable through setting up a socket, and then applying paramiko.Transport over our established socket. First, we include the various bits and pieces we’ll need:

import socket
import paramiko
import hashlib
import base64

Next, we establish a socket connection ‘mySocket’ to “localhost” on port 22 – our dummy SSH server. We then use paramiko.Transport to gain access to paramiko’s core SSH protocol options on the socket.

mySocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mySocket.connect(("localhost", 22))
myTransport = paramiko.Transport(mySocket)
myTransport.start_client()

To get the remote hostkey, we call myTransport.get_remote_server_key():

sshKey = myTransport.get_remote_server_key()

At this point sshKey.__str__() contains the binary representation of the hostkey. Calling print sshKey.__str__() will output said binary data to our terminal.

We can then immediately close both the paramiko transport and the socket, as sshKey contains the information we need.

myTransport.close()
mySocket.close()

For a printable (base64 encoded) representation:

printableKey = '%s %s' %(sshKey.get_name(), base64.encodestring(sshKey.__str__()).replace('\n', ''))
print printableKey

In my case, this returns:

>>> print printableKey
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0JCrB2J5YWa3m6buNqQ5/Scd/X9xs0gvDVZhokPZtFdtMgYnZhpAge3WZyFZxnt8ToE8K8d+haEQW5PaZykqD61Ur7gzW3KSZkA9L1q3IqIFLkUcI/Db82j5+rZ0w8W8oARfoOb4aR9Q+N4FbnMxO4FUzlCD1LpDA2XMnoOyDrr7WzYoopJencPuCCLm56+40QuHRoEI3gkUg34Utq8pV9vOqEBNMK21LEeG82CIIPB1nASNsaTfbAj1K9RBKrlobLiLFVPagxJY+Vd5lUPY/0RGgRBRofy7hRA4JUTPTqhkTg582Kw1GRNRDCf2AMIuPZLe3qqPWmJZ8fV897U2QQ==

To gain the host fingerprint, which is an MD5 hash of the key:

sshFingerprint = hashlib.md5(sshKey.__str__()).hexdigest()

This returns the 32-character hex digest of the 16-byte MD5 hash:

>>> print sshFingerprint
5203d8c22dec1016514334668f4d42f9

Lastly, to convert that to the colon separated fingerprint that we’re familiar with:

printableFingerprint = ':'.join(a+b for a,b in zip(sshFingerprint[::2], sshFingerprint[1::2]))
>>> print printableFingerprint
52:03:d8:c2:2d:ec:10:16:51:43:34:66:8f:4d:42:f9
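
As an aside, paramiko's key objects also provide helpers that can shortcut some of the manual work above; a sketch, assuming get_base64() and get_fingerprint() are available in your paramiko version:

# equivalent shortcuts (assumption: these helpers exist in your paramiko build)
printableKey = '%s %s' %(sshKey.get_name(), sshKey.get_base64())
printableFingerprint = ':'.join('%02x' % ord(c) for c in sshKey.get_fingerprint())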

To put it all together:

import socket
import paramiko
import hashlib
import base64
import sys

if len(sys.argv) != 3:
	print "Usage: %s <ip> <port>" % sys.argv[0]
	quit()

try:
	mySocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
	mySocket.connect((sys.argv[1], int(sys.argv[2])))
except socket.error:
	print "Error opening socket"
	quit()

try:
	myTransport = paramiko.Transport(mySocket)
	myTransport.start_client()
	sshKey = myTransport.get_remote_server_key()
except paramiko.SSHException:
	print "SSH error"
	quit()

myTransport.close()
mySocket.close()


printableType = sshKey.get_name()
printableKey = base64.encodestring(sshKey.__str__()).replace('\n', '')
sshFingerprint = hashlib.md5(sshKey.__str__()).hexdigest()
printableFingerprint = ':'.join(a+b for a,b in zip(sshFingerprint[::2], sshFingerprint[1::2]))
print "HostKey Type: %s, Key: %s (Fingerprint: %s)" %(printableType, printableKey, printableFingerprint)

Which is run as follows:

root@w:~/tmp# ./pyHostKey.py localhost 22
HostKey Type: ssh-rsa, Key: AAAAB3NzaC1yc2EAAAABIwAAAQEA0JCrB2J5YWa3m6buNqQ5/Scd/X9xs0gvDVZhokPZtFdtMgYnZhpAge3WZyFZxnt8ToE8K8d+haEQW5PaZykqD61Ur7gzW3KSZkA9L1q3IqIFLkUcI/Db82j5+rZ0w8W8oARfoOb4aR9Q+N4FbnMxO4FUzlCD1LpDA2XMnoOyDrr7WzYoopJencPuCCLm56+40QuHRoEI3gkUg34Utq8pV9vOqEBNMK21LEeG82CIIPB1nASNsaTfbAj1K9RBKrlobLiLFVPagxJY+Vd5lUPY/0RGgRBRofy7hRA4JUTPTqhkTg582Kw1GRNRDCf2AMIuPZLe3qqPWmJZ8fV897U2QQ== (Fingerprint: 52:03:d8:c2:2d:ec:10:16:51:43:34:66:8f:4d:42:f9)

by Adam Palmer at November 24, 2014 04:00 PM

UnixDaemon

CloudFormation Annoyance: Deletion Policy as a Parameter

You can create some high value resources using CloudFormation that you'd like to ensure exist even after a stack has been removed. Imagine being the admin who accidentally deletes the wrong stack and has to watch as the RDS master, and all the prod data, slowly vanishes into the void of AWS reclaimed volumes. Luckily AWS provides a way to reduce this risk, the DeletionPolicy Attribute. By specifying this on a resource you can ensure that if your stack is deleted then certain resources survive and function as usual. This also helps keep down the number of stacks you have in the "DELETE_FAILED" state if you try to remove a shared security group or such.


    "Resources": {

      "TestBucket": {
        "DeletionPolicy": "Retain",
        "Type": "AWS::S3::Bucket",
        "Properties": {
          "BucketName": "MyTestBucketOf54321SemiRandomName"
        }
      }

    }


Once you start sprinkling this attribute through your templates you'll probably feel the need to have it vary between staging and prod. While it's a lovely warm feeling to have your RDS masters in prod be a little harder to accidentally kill, you'll want a clean teardown of any frequently created staging or developer stacks, for example. The easiest way to do this is to make the DeletionPolicy take its value from a parameter, probably using code like that below.



    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Description" : "Retain on delete test template",

      "Parameters" : {

        "RetainParam": {
          "Type": "String",
          "AllowedValues": [ "Retain", "Delete", "Snapshot" ],
          "Default": "Delete"
        }

      },
      "Resources": {

        "TestBucket": {
          "DeletionPolicy": { "Ref" : "RetainParam" },
          "Type": "AWS::S3::Bucket",
          "Properties": {
            "BucketName": "MyTestBucketOf54321SemiRandomName"
          }
        }

      }
    }


Unfortunately this doesn't work. If you try to validate your template (and we always do that, right?) you'll get an error that looks something like 'cfn-validate-template: Malformed input-Template format error: Every DeletionPolicy member must be a string.'

There are a couple of ways around this; the two I've used are: templating your CloudFormation JSON from a tool higher up in your stack, Ansible for example. The downside is that your templates are unrunnable without expansion. A second approach is to double up on some resource declarations and use CloudFormation Conditionals. You can then create the same resource, with the DeletionPolicy set to the appropriate value, based off the value of a parameter. I'm uncomfortable using this because of the risk of resource removal on stack updates if the wrong parameters are ever passed to your stack. I prefer the first option.

Even though there are ways to work around this limitation it really feels like it's something that 'Should Just Work' and as a CloudFormation user I'll be a lot happier when it does.

November 24, 2014 01:27 PM


Chris Siebenmann

Using the SSH protocol as a secure transport protocol

I have an IPSec problem: my IPSec tunnel uses constant keys, with no periodic automatic rekeying. While IPSec has an entire protocol to deal with this called IKE, in practice IKE daemons (at least on Linux) are such a swamp to wade into that I haven't been willing to spend that much time on it. Recently I had a realization; rather than wrestle with IKE, I could just write a special purpose daemon to rekey the tunnel for me. Since both ends of the IPSec tunnel need to know the same set of keys, I need to run the daemon at either end and the ends have to talk to each other. Since they'll be passing keys back and forth, this conversation needs to be encrypted and authenticated.

The go-to protocol for encryption and authentication is TLS. But TLS has a little problem for my particular needs here in that it very much wants to do authentication through certificate authorities. I very much don't want to. The two endpoints are fixed and so are their authentication keys, and I don't want to have to make up a CA to sign two certificates and (among other things) leave myself open to this CA someday signing a third key. In theory TLS can be used with direct verification of certificate identities, but in practice TLS libraries generally make this either hard or impossible depending on their APIs.

(I've written about this before.)

As I was thinking about this it occurred to me that there is already a secure transport protocol that does authentication exactly the way I want it to work: SSH. SSH host keys and SSH public key authentication are fundamentally based on known public keys, not on CAs. I don't want to literally run my rekeying over SSH for various reasons (including security), but these days many language environments have SSH libraries with support for both the server and client sides. The SSH protocol even has a 'do command' operation that can be naturally used to send operations to a server, get responses, and perhaps supply additional input.

It's probably a little bit odd to use SSH as a secure transport protocol for your own higher level operations that have nothing to do with SSH's original purpose. But on the other hand, why not? If the protocol fits my needs, I figure that I might as well be flexible and repurpose it for this.

(The drawback is that SSH is relatively complex at the secure transport layer if all that you want is to send some text back and forth. Hopefully the actual code complexity will be minimal.)

by cks at November 24, 2014 05:16 AM

Ubuntu Geek

How to install Cacti (Monitoring tool) on ubuntu 14.10 server

Cacti is a complete network graphing solution designed to harness the power of RRDTool's data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy to use interface that makes sense for LAN-sized installations up to complex networks with hundreds of devices.
(...)
Read the rest of How to install Cacti (Monitoring tool) on ubuntu 14.10 server (683 words)



by ruchi at November 24, 2014 12:24 AM

November 23, 2014

Adams Tech Talk

Linux Namespaces

Starting from kernel 2.6.24, Linux supports 6 different types of namespaces. Namespaces are useful in isolating processes from the rest of the system, without needing to use full low level virtualization technology.

  • CLONE_NEWIPC: IPC Namespaces: SystemV IPC and POSIX Message Queues can be isolated.
  • CLONE_NEWPID: PID Namespaces: PIDs are isolated, meaning that a PID inside of the namespace can conflict with a PID outside of the namespace. PIDs inside the namespace will be mapped to other PIDs outside of the namespace. The first PID inside the namespace will be ‘1’ which outside of the namespace is assigned to init
  • CLONE_NEWNET: Network Namespaces: Networking (/proc/net, IPs, interfaces and routes) are isolated. Services can be run on the same ports within namespaces, and “duplicate” virtual interfaces can be created.
  • CLONE_NEWNS: Mount Namespaces. We have the ability to isolate mount points as they appear to processes. Using mount namespaces, we can achieve similar functionality to chroot() however with improved security.
  • CLONE_NEWUTS: UTS Namespaces. This namespace’s primary purpose is to isolate the hostname and NIS domain name.
  • CLONE_NEWUSER: User Namespaces. Here, user and group IDs are different inside and outside of namespaces and can be duplicated.

Let’s look first at the structure of a C program, required to demonstrate process namespaces. The following has been tested on Debian 6 and 7.

First, we need to allocate a page of memory on the stack, and set a pointer to the end of that memory page. We use alloca to allocate stack memory rather than malloc which would allocate memory on the heap.

void *mem = alloca(sysconf(_SC_PAGESIZE)) + sysconf(_SC_PAGESIZE);

Next, we use clone to create a child process, passing the location of our child stack ‘mem’, as well as the required flags to specify a new namespace. We specify ‘callee’ as the function to execute within the child space:

mypid = clone(callee, mem, SIGCHLD | CLONE_NEWIPC | CLONE_NEWPID | CLONE_NEWNS | CLONE_FILES, NULL);

After calling clone we then wait for the child process to finish, before terminating the parent. If not, the parent execution flow will continue and terminate immediately after, clearing up the child with it:

while (waitpid(mypid, &r, 0) < 0 && errno == EINTR)
{
	continue;
}

Lastly, we’ll return to the shell with the exit code of the child:

if (WIFEXITED(r))
{
	return WEXITSTATUS(r);
}
return EXIT_FAILURE;

Now, let’s look at the callee function:

static int callee()
{
	int ret;
	mount("proc", "/proc", "proc", 0, "");
	setgid(u);
	setgroups(0, NULL);
	setuid(u);
	ret = execl("/bin/bash", "/bin/bash", NULL);
	return ret;
}

Here, we mount a /proc filesystem, and then set the uid (User ID) and gid (Group ID) to the value of ‘u’ before spawning the /bin/bash shell.

Let’s put it all together, setting ‘u’ to 65534 which is user “nobody” and group “nogroup” on Debian:

#define _GNU_SOURCE
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/mount.h>
#include <grp.h>
#include <alloca.h>
#include <errno.h>
#include <sched.h>

static int callee();
const int u = 65534;

int main(int argc, char *argv[])
{
	int r;
	pid_t mypid;
	void *mem = alloca(sysconf(_SC_PAGESIZE)) + sysconf(_SC_PAGESIZE);

	mypid = clone(callee, mem, SIGCHLD | CLONE_NEWIPC | CLONE_NEWPID | CLONE_NEWNS | CLONE_FILES, NULL);
	while (waitpid(mypid, &r, 0) < 0 && errno == EINTR)
	{
		continue;
	}
	if (WIFEXITED(r))
	{
		return WEXITSTATUS(r);
	}
	return EXIT_FAILURE;
}

static int callee()
{
	int ret;
	mount("proc", "/proc", "proc", 0, "");
	setgid(u);
	setgroups(0, NULL);
	setuid(u);
	ret = execl("/bin/bash", "/bin/bash", NULL);
	return ret;
}

To execute the code produces the following:

root@w:~/pen/tmp# gcc -O -o ns -Wall -Werror -ansi ns.c
root@w:~/pen/tmp# ./ns
nobody@w:~/pen/tmp$ id
uid=65534(nobody) gid=65534(nogroup)
nobody@w:~/pen/tmp$ ps auxw
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
nobody       1  0.0  0.0   4620  1816 pts/1    S    21:21   0:00 /bin/bash
nobody       5  0.0  0.0   2784  1064 pts/1    R+   21:21   0:00 ps auxw
nobody@w:~/pen/tmp$ 

Notice that the UID and GID are set to that of nobody and nogroup. Specifically notice that the full ps output shows only two running processes and that their PIDs are 1 and 5 respectively.

LXC is an OS level virtualization tool utilizing cgroups and namespaces for resource isolation.

Now, let’s move on to using ip netns to work with network namespaces. First, let’s confirm that no namespaces exist currently:

root@w:~# ip netns list
Object "netns" is unknown, try "ip help".

In this case, either ip needs an upgrade, or the kernel does. Assuming you have a kernel newer than 2.6.24, it’s most likely ip. After upgrading, ip netns list should by default return nothing.

Let’s add a new namespace called ‘ns1’:

root@w:~# ip netns add ns1
root@w:~# ip netns list
ns1

First, let’s list the current interfaces:

root@w:~# ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 1000
    link/ether 00:0c:29:65:25:9e brd ff:ff:ff:ff:ff:ff

Now to create a new virtual interface, and add it to our new namespace. Virtual interfaces are created in pairs, and are linked to each other – imagine a virtual crossover cable:

root@w:~# ip link add veth0 type veth peer name veth1
root@w:~# ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 1000
    link/ether 00:0c:29:65:25:9e brd ff:ff:ff:ff:ff:ff
3: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether d2:e9:52:18:19:ab brd ff:ff:ff:ff:ff:ff
4: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether f2:f7:5e:e2:22:ac brd ff:ff:ff:ff:ff:ff

ifconfig -a will also now show the addition of both veth0 and veth1.

Great, now to assign our new interfaces to the namespace. Note that ip netns exec is used to execute commands within the namespace:

root@w:~# ip link set veth1 netns ns1
root@w:~# ip netns exec ns1 ip link list
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether d2:e9:52:18:19:ab brd ff:ff:ff:ff:ff:ff

ifconfig -a will now only show veth0, as veth1 is in the ns1 namespace.

Should we want to delete veth0/veth1:

ip netns exec ns1 ip link del veth1

We can now assign IP address 192.168.5.5/24 to veth0 on our host:

ifconfig veth0 192.168.5.5/24

And assign veth1 192.168.5.10/24 within ns1:

ip netns exec ns1 ifconfig veth1 192.168.5.10/24 up

To execute ip addr list on both our host and within our namespace:

root@w:~# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:65:25:9e brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.122/24 brd 192.168.3.255 scope global eth0
    inet6 fe80::20c:29ff:fe65:259e/64 scope link 
       valid_lft forever preferred_lft forever
6: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 86:b2:c7:bd:c9:11 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.5/24 brd 192.168.5.255 scope global veth0
    inet6 fe80::84b2:c7ff:febd:c911/64 scope link 
       valid_lft forever preferred_lft forever
root@w:~# ip netns exec ns1 ip addr list
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 12:bd:b6:76:a6:eb brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.10/24 brd 192.168.5.255 scope global veth1
    inet6 fe80::10bd:b6ff:fe76:a6eb/64 scope link 
       valid_lft forever preferred_lft forever

To view routing tables inside and outside of the namespace:

root@w:~# ip route list
default via 192.168.3.1 dev eth0  proto static 
192.168.3.0/24 dev eth0  proto kernel  scope link  src 192.168.3.122 
192.168.5.0/24 dev veth0  proto kernel  scope link  src 192.168.5.5 
root@w:~# ip netns exec ns1 ip route list
192.168.5.0/24 dev veth1  proto kernel  scope link  src 192.168.5.10 

Lastly, to connect our physical and virtual interfaces, we’ll require a bridge. Let’s bridge eth0 and veth0 on the host, and then use DHCP to gain an IP within the ns1 namespace:

root@w:~# brctl addbr br0
root@w:~# brctl addif br0 eth0
root@w:~# brctl addif br0 veth0
root@w:~# ifconfig eth0 0.0.0.0
root@w:~# ifconfig veth0 0.0.0.0
root@w:~# dhclient br0
root@w:~# ip addr list br0
7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 00:0c:29:65:25:9e brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.122/24 brd 192.168.3.255 scope global br0
    inet6 fe80::20c:29ff:fe65:259e/64 scope link 
       valid_lft forever preferred_lft forever

br0 has been assigned an IP of 192.168.3.122/24. Now for the namespace:

root@w:~# ip netns exec ns1 dhclient veth1
root@w:~# ip netns exec ns1 ip addr list
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 12:bd:b6:76:a6:eb brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.248/24 brd 192.168.3.255 scope global veth1
    inet6 fe80::10bd:b6ff:fe76:a6eb/64 scope link 
       valid_lft forever preferred_lft forever

Excellent! veth1 has been assigned 192.168.3.248/24
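
To undo the experiment afterwards, something like the following should work (a rough sketch; deleting the namespace destroys veth1 and its peer veth0 with it, and the exact cleanup depends on how your host normally configures eth0):

ip netns delete ns1
ifconfig br0 down
brctl delif br0 eth0
brctl delbr br0
dhclient eth0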

by Adam Palmer at November 23, 2014 11:46 PM

Google Blog

'Tis the season for mobile shopping

It used to be that heading out to stores on Black Friday -- one of the biggest holiday shopping days of the season -- was the best way to find great deals. Now, we may be carrying the best tool for finding deals in our pockets.

This coming weekend, expect to see many of your fellow shoppers checking for deals on their smartphones while braving the lines and crowds at the mall. Nearly 50% of 25-34 year-olds use their phone to shop online while standing in line at a store. And because we want to help you research products more easily this holiday weekend, we’re rolling out new mobile features to Google Shopping.

Starting this week, when you search for a specific product on your smartphone or tablet you’ll see more detailed information about the product and where to buy it, like which stores have it available and product reviews from customers. You’ll also be able to rotate selected products on Google Shopping in 360 degrees to see them in more detail.




Getting a head start on Black Friday
Shoppers are already prepping for Black Friday shopping by researching purchases and deals online. We found that 27% of shoppers have already begun hunting for Black Friday deals online. Here are the top questions people are asking about Black Friday on Google Search. For more trends, visit our Shopping blog.

  • what time do stores open on black friday
  • what time does black friday start
  • when does black friday end
  • what to buy on black friday


Let Google Shopping and your smartphone help you check off what’s on that shopping list of yours and go enjoy everything else about the “most wonderful time of the year.”


by Emily Wood (noreply@blogger.com) at November 23, 2014 09:01 PM

Adams Tech Talk

SSH and SFTP with Paramiko & Python

Paramiko is a Python implementation of SSH with a whole range of supported features. To start, let’s look at the simplest example – connecting to a remote SSH server and gathering the output of ls /etc/

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
        ssh.connect('localhost', username='testuser', password='t3st@#test123')
except paramiko.SSHException:
        print "Connection Failed"
        quit()

stdin,stdout,stderr = ssh.exec_command("ls /etc/")

for line in stdout.readlines():
        print line.strip()
ssh.close()

After importing paramiko, we create a new variable ‘ssh’ to hold our SSHClient. ssh.set_missing_host_key_policy automatically adds our server’s host key without prompting. For security, this is not a good idea in production, and host keys should be added manually. Should a host key change unexpectedly, it could indicate that the connection has been compromised and is being diverted elsewhere.

Next, we create 3 variables, stdin, stdout and stderr allowing us to access the respective streams when calling ls /etc/

Finally, for each "\n"-terminated line on stdout, we print the line, stripping the trailing "\n" (as print adds one), and then we close the SSH connection.

Let’s look at another example, where we communicate with stdin.

cat in its simplest form will print what it receives on stdin to stdout. This can be shown as follows:

root@w:~# echo "test" | cat
test

Simply, we can use:

stdin,stdout,stderr = ssh.exec_command("cat")
stdin.write("test")

To allow us to communicate with stdin. But wait: now the program hangs indefinitely. cat is waiting for an EOF to be received; to send one, we must close the write side of the channel:

stdin.channel.shutdown_write()
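
With the write side closed, cat sees EOF and we can read back what it echoed (a small addition to the example above):

print stdout.read() # should print "test"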

Now, let’s extend the example to read a colon separated username and password from a file:

import paramiko
import sys

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

if len(sys.argv) != 2:
	print "Usage %s <filename>" % sys.argv[0]
	quit()

try:
	fd = open(sys.argv[1], "r")
except IOError:
	print "Couldn't open %s" % sys.argv[1]
	quit()

username,passwd = fd.readline().strip().split(":") #TODO: add error checking!

try:
	ssh.connect('localhost', username=username, password=passwd)
	stdin,stdout,stderr = ssh.exec_command("ls /tmp")
	for line in stdout.readlines():
		print line.strip()
	ssh.close()
except paramiko.AuthenticationException:
	print "Authentication Failed"
	quit()
except:
	print "Unknown error"
	quit()


Lastly, let’s look at reading a remote directory over SFTP:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
	ssh.connect('localhost', username='testuser', password='t3st@#test123')
except paramiko.SSHException:
	print "Connection Error"
sftp = ssh.open_sftp()
sftp.chdir("/tmp/")
print sftp.listdir()
ssh.close()

Paramiko supports far more SFTP options, including of course the upload and download of files.
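
For illustration, a minimal sketch of those transfer calls using SFTPClient's put() and get(), assuming an SSHClient connected as above and made-up file paths:

sftp = ssh.open_sftp()
sftp.put("local.txt", "/tmp/remote.txt")      # upload a local file
sftp.get("/tmp/remote.txt", "local-copy.txt") # download it back
sftp.close()
ssh.close()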

by Adam Palmer at November 23, 2014 06:58 PM

Sysadmin Ninja

Semi-irregular Sysadmin Ninja's Github Digest (Vol. 15)

Hi All,

Let me introduce you to Volume 15 of Sysadmin Ninja's Github Digest! (as usual, in no particular order).
But why volume 15, if the previous one was 13, and where are 11 and 14 then?
Well, because 11 and 14 are still in my drafts, and although I doubt I'll ever publish them... well, let's at least be consistent - I'll try to continue publishing the digest on a weekly basis.

1. KeyBox.
A web-based SSH console that executes commands on multiple shells. KeyBox allows you to manage keys, share terminal commands, and upload files to multiple systems simultaneously. http://sshkeybox.com
https://github.com/skavanagh/KeyBox
"A web-based ssh console to execute commands and manage multiple systems simultaneously. KeyBox allows you to share terminal commands and upload files to all your systems. Once the sessions have been opened you can select a single system or any combination to run your commands. Additional system administrators can be added and their terminal sessions and history can be audited. Also, KeyBox can manage and distribute public keys that have been setup and defined."
Can be useful for a small distributed team of sysadmins across the globe.

2. FNordmetric
FnordMetric allows you to collect and visualize timeseries data with SQL. http://fnordmetric.io
https://github.com/paulasmuth/fnordmetric
Hot stuff. A client-server application which "aims to be a StatsD+graphite competitor, it implements a wire compatible StatsD API". The main idea behind it is that you write your query in an SQL-like language called "ChartSQL" and get your graph back in SVG format. It has quite a large number of graphing modes, but almost no functions for now. Written in C++, so quite fast, but good luck porting the Graphite function library. Maybe it's a good idea to port Graphite as a backend for it. Looks promising anyway.

And talking about Graphite -
3. graphite-stresser
A stress testing tool for Graphite
https://github.com/feangulo/graphite-stresser
Nothing unusual, but a nice tool if you want to stress test your Graphite instance. Check the author's blog entry for details.

4. pstop
pstop - a top-like program for MySQL
https://github.com/sjmudd/pstop
"pstop is a program which collects information from MySQL 5.6+'s performance_schema database and uses this information to display server load in real-time." For example, you can get IOPS for innodb file, or locks / operations / latencies per table. At least, you can start using performance schema in your MySQL 5.6 instance for something useful. Another useful P_S tool is "sys schema", you can read recent entry in Percona blog about it.

5. Consul-template
Generic template rendering and notifications with Consul
https://github.com/hashicorp/consul-template
A quite recent addition to the service discovery tool Consul.io - now you can use it for any service which doesn't understand service discovery through DNS: you can format config file templates for that service and reload it when Consul sees a configuration change. Again, the blog post is better than 1000 words.

6. VCLfiddle
VclFiddle is hosted at http://www.vclfiddle.net/
https://github.com/vclfiddle/vclfiddle

"VclFiddle is an online tool for experimenting with the Varnish Cache HTTP reverse-proxy in a sandboxed environment. The name comes from a combination of the Varnish Configuration Language (VCL) and another tool that inspired this project, JSFiddle."
I.e. you can edit your VCL config online, using a web editor, and check how it caches your website.

7. Rancher.io
Rancher is an open source project that provides infrastructure services designed specifically for Docker. http://www.rancher.io
https://github.com/rancherio/rancher
A quite ambitious project for creating an AWS-like environment, but for Docker containers.

8. Atlas
A high-performance and stable proxy for MySQL
https://github.com/Qihoo360/Atlas
Another MySQL proxy. Well... Personally I have never seen any production running on a MySQL-proxy solution (not even on MySQL Fabric), but a Chinese company named Qihoo360 developed this one and insists that it's running on their production infrastructure.

9. Bosun
An advanced, open-source monitoring and alerting system by Stack Exchange http://bosun.io
https://github.com/bosun-monitor/bosun
Another Graphite competitor - an OpenTSDB-backed service with its own system metrics collector, scollector, and a graphing and alerting interface. Written in Go. Looks neat and scalable.


And speaking about Go -
10. Go-opstocat
Collection of Ops related patterns for Go apps at GitHub.
https://github.com/github/go-opstocat
and
11. Delve
Delve is a Go debugger, written in Go.
https://github.com/derekparker/delve

Going further.
12. bup
Very efficient backup system based on the git packfile format, providing fast incremental saves and global deduplication (among and within files, including virtual machine images).
https://bup.github.io/

https://github.com/bup/bup
Interesting new backup tool, quite green and fresh, but looks promising.

13. osquery
SQL powered operating system instrumentation, monitoring, and analytics.
http://osquery.io

https://github.com/facebook/osquery
Facebook quite recently open-sourced this tool. The idea looks very promising - again, presenting system state as SQL tables on which you can run queries, either from an interactive console or automatically, as a daemon.


And fun section
15. C4
C in four functions
https://github.com/rswier/c4
A C compiler in 500 lines of C. Reading its source is quite fun. :)

16. Gravity
An orbital simulation game written in Elm
https://github.com/stephenbalaban/Gravity
You can play it here.

17. Convergence
Python/OpenCl Cellular Automata design & manipulation tool
https://github.com/InfiniteSearchSpace/PyCl-Convergence
Looks like fun, but I can't run it on my Mac for some reason, so no screenshots.

by Denis Zhdanov (noreply@blogger.com) at November 23, 2014 02:21 PM


Chris Siebenmann

I'm happier ignoring the world of spam and anti-spam

As I've mentioned a couple of times, I'm currently running a sinkhole SMTP server to collect spam samples. Doing this has let me learn or relearn a valuable lesson about anti-spam work.

My sinkhole SMTP server has several sorts of logging and monitoring, including a log of SMTP commands, and of course I can run it or turn it off as I feel like. When I first set it up, I configured it to be auto-started on system reboot and I watched the SMTP command log a lot of the time with 'tail -f'. The moment a new spam sample showed up I'd go read it.

The problem with this wasn't the time it took. Instead the problem is simpler; actively monitoring my sinkhole SMTP server all the time made me think about spam a lot, and it turns out that having spam on my mind wasn't really a great experience. In theory, well, I told myself that watching all of the spam attempts was somewhere between interesting (to see their behavior) and amusing (when they failed in various ways). In practice it was quietly wearying. Not in any obvious way that I really noticed much; instead it was a quiet drag that got me a little bit down.

Fortunately I did notice it a bit, so at a couple of points I decided to just turn things off (once this was prompted by a persistent, unblockable run of uninteresting spam that was getting on my nerves). What I found is that I was happier when I wasn't keeping an eye on the sinkhole SMTP server all the time, or even checking in on it very much. Pretty much the less I looked at the sinkhole server, the better or at least more relaxed I felt.

So what I (re)learned from all of this is that not thinking very much about the cat and mouse game played between spammers and everyone else makes me happier. If I can ignore the existence of spammers entirely, that's surprisingly fine.

As a result of this my current approach with my sinkhole SMTP server is to ignore it as much as possible. Currently I'm mostly managing to only check new samples once every few days and not to do too much with them.

(I probably wouldn't have really learned this without my sinkhole SMTP server because it has the important property that I can vary the attention I pay to it without any bad consequences for my real mail. Even running it at all is completely optional, so sometimes I don't.)

by cks at November 23, 2014 07:26 AM

John Resig

Low-cost .com Domains with Whois Privacy

In an effort to be more privacy conscious I’ve been looking to transition to having Domain Privacy enabled on all the domains that I own. As it turns out many domain registrars, including my current one, charge an additional fee for this service. In an effort to save some money I did a price comparison at some of the most popular domain registrars and came up with the following list (as of November 22, 2014).

I can’t vouch for the quality of the particular services, only that this is their stated price for renewing a .com domain through their service. I opted to only focus on the cost of renewing the domain as often-times registrars will have a much-lower cost during the first year of registration and then later increase the price (as with any privacy services). Hope this can be useful to others who are researching the price of private domain registration!

Update: After compiling this list I found this master list on Registrarowl.com, which includes many more registrars (along with privacy price information).

by John Resig at November 23, 2014 01:00 AM

Raymii.org

Build a FreeBSD 10.1-release Openstack Image with bsd-cloudinit

We are going to prepare a FreeBSD image for Openstack deployment. We do this by creating a FreeBSD 10.1-RELEASE instance, installing it and converting it using bsd-cloudinit. We'll use the CloudVPS public Openstack cloud for this. We'll be using the Openstack command line tools, like nova, cinder and glance.

November 23, 2014 12:00 AM

November 22, 2014

Adams Tech Talk

Simple IMAP Account Verification in Python

imaplib is a great library for handling IMAP communication. It supports both plaintext IMAP and IMAP over SSL (IMAPS) with ease. Connecting to an IMAP server is achieved as follows:

import imaplib

host = "mx.sasdataservices.com"
port = 143
ssl = 0

try:
	if ssl:
		imap = imaplib.IMAP4_SSL(host, port)
	else:
		imap = imaplib.IMAP4(host, port)
	welcomeMsg = imap.welcome
	print "IMAP Banner: %s" %(welcomeMsg)
except:
	print "Connection Failed"
	quit()

This results in the following output: “IMAP Banner: * OK [CAPABILITY IMAP4rev1 UIDPLUS CHILDREN NAMESPACE THREAD=ORDEREDSUBJECT THREAD=REFERENCES SORT QUOTA IDLE ACL ACL2=UNION STARTTLS] Courier-IMAP ready. Copyright 1998-2011 Double Precision, Inc. See COPYING for distribution information.” Now, to log in:

username="user@email.com"
password="password"

try:
	loginMsg = imap.login(username, password)
	print "Login Message: %s" %(loginMsg[1])
except:
	print "Login Failed"
	quit()

With acceptable credentials, the response is: “Login Message: [‘LOGIN Ok.’]”. Lastly, to print a list of all mailboxes in the account:

try:
	mBoxes = imap.list()
	for mBox in mBoxes[1]:
		print mBox
except:
	print "Couldn't get Mail Boxes"
quit()

In my case, this returns:

Mailbox: (\Unmarked \HasNoChildren) "." "INBOX.Drafts"
Mailbox: (\Unmarked \HasNoChildren) "." "INBOX.Trash"
Mailbox: (\Unmarked \HasNoChildren) "." "INBOX.Sent Messages"
Mailbox: (\Unmarked \HasChildren) "." "INBOX"
Mailbox: (\Unmarked \HasNoChildren) "." "INBOX.Deleted Messages"
Mailbox: (\Unmarked \HasNoChildren) "." "INBOX.Sent"
Mailbox: (\Unmarked \HasNoChildren) "." "INBOX.Notes"

For more information on imaplib’s feature set visit the documentation.

by Adam Palmer at November 22, 2014 07:15 PM

UnixDaemon

AWS CloudFormation Parameters Tips: Size and AWS Types

While AWS CloudFormation is one of the best ways to ensure your AWS environments are reproducible it can also be a bit of an awkward beast to use. Here are a couple of simple time saving tips for refining your CFN template parameters.

The first one is also the simplest: always define at least a MinLength property on your parameters and ideally an AllowedValues or AllowedPattern. This ensures that your stack will fail early if no value is provided. Once you start using other tools, like Ansible, to glue your stacks together it becomes very easy to create a stack parameter that has an undefined value. Without one of the above properties CloudFormation will happily use the null and you'll either get an awkward failure later in the stack creation or a stack that doesn't quite work.
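
As an illustrative sketch (the parameter name and values here are made up), a parameter that fails fast when left unset:

    "Environment": {
      "Description": "Deployment environment",
      "Type": "String",
      "MinLength": "1",
      "AllowedValues": [ "staging", "production" ]
    }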

The second tip is for the parameter's Type property. While it's possible to use a 'Type' of 'String' and an 'AllowedPattern' to ensure a value looks like an AWS resource, such as a subnet id, the addition of AWS-specific types, available from November 2014, allows you to get a lot more specific:



  # note the value of "Type"
  "Parameters" : {

    "KeyName" : {
      "Description" : "Name of an existing EC2 KeyPair",
      "Type" : "AWS::EC2::KeyPair::KeyName",
      "Default" : "i-am-the-gate-keeper" 
    }

  }


This goes one step beyond 'Allowed*' and actually verifies that the resource exists in the user's account. It doesn't do this at the template validation stage, which would be -really- nice, but it does it early in stack creation, so you don't have a long wait and a failed, rolled-back set of resources.



    # a parameter with a default key name that does not exist in aws
    "KeyName" : {
      "Description" : "Name of an existing EC2 KeyPair",
      "Type" : "AWS::EC2::KeyPair::KeyName",
      "MinLength": "1",
      "Default" : "non-existent-key"
    }

    # validate shows no errors
    $ aws cloudformation validate-template --template-body file://constraint-tester.json
    {
        "Description": "Test an AWS-specific type constraint", 
        "Parameters": [
            {
                "NoEcho": false, 
                "Description": "Name of an existing EC2 KeyPair", 
                "ParameterKey": "KeyName"
            }
        ], 
        "Capabilities": []
    }

    # but after we start stack creation and check the dashboard
    # CloudFormation shows an error as the second line in events
    ROLLBACK_IN_PROGRESS    AWS::CloudFormation::Stack      dsw-test-sg
    Parameter value non-existent-key for parameter name KeyName
    does not exist. . Rollback requested by user.


Neither of these tips will prevent you from making the error in the first place, nor, unfortunately, will they catch it at validation time. They will, however, surface the issue much more quickly during actual stack creation and make your templates more robust. Here's a list of the available AWS-specific parameter types, in the table under the 'Type' property, and you can find more details in the 'AWS-Specific Parameter Types' section.

November 22, 2014 04:27 PM

Adams Tech Talk

DNS Black List / RBL Checking in Python

Following on from performing basic DNS Lookups in Python, it’s relatively trivial to begin testing DNS Block Lists/Real Time Black Lists for blocked mail server IP addresses. To assist in preventing spam, a number of public and private RBLs are available. These track the IP addresses of mail servers that are known to produce spam, thus allowing recipient mail servers to deny delivery from known spammers.

RBLs operate over DNS. In order to test an RBL, a DNS query is made. As an example, zen.spamhaus.org is a popular RBL. If I wanted to test IP address 148.251.196.147 against the zen.spamhaus.org blocklist, I would reverse the octets in the IP address and then append ‘.zen.spamhaus.org’, i.e. 147.196.251.148.zen.spamhaus.org. I then perform an ‘A’ record lookup on that host:

root@w:~/tmp# host -t a 147.196.251.148.zen.spamhaus.org
Host 147.196.251.148.zen.spamhaus.org not found: 3(NXDOMAIN)

Excellent. IP 148.251.196.147 was not found in zen.spamhaus.org. NXDOMAIN is returned.

Now, to take a known spammer’s IP: 144.76.252.9:

root@w:~/tmp# host -t a 9.252.76.144.zen.spamhaus.org
9.252.76.144.zen.spamhaus.org has address 127.0.0.4

IP 144.76.252.9 IS listed at zen.spamhaus.org. We can now query the TXT record to find out any accompanying data that zen.spamhaus.org provides:

root@w:~/tmp# host -t txt 9.252.76.144.zen.spamhaus.org
9.252.76.144.zen.spamhaus.org descriptive text "http://www.spamhaus.org/query/bl?ip=144.76.252.9"

Moving on, we can now implement these tests programmatically in Python. Here's a commented example:

import dns.resolver
bl = "zen.spamhaus.org"
myIP = "144.76.252.9"

try:
	my_resolver = dns.resolver.Resolver() #create a new resolver
	query = '.'.join(reversed(str(myIP).split("."))) + "." + bl #convert 144.76.252.9 to 9.252.76.144.zen.spamhaus.org
	answers = my_resolver.query(query, "A") #perform a record lookup. A failure will trigger the NXDOMAIN exception
	answer_txt = my_resolver.query(query, "TXT") #No exception was triggered, IP is listed in bl. Now get TXT record
	print 'IP: %s IS listed in %s (%s: %s)' %(myIP, bl, answers[0], answer_txt[0])
except dns.resolver.NXDOMAIN:
	print 'IP: %s is NOT listed in %s' %(myIP, bl)

This code produces output:

IP: 144.76.252.9 IS listed in zen.spamhaus.org (127.0.0.4: "http://www.spamhaus.org/query/bl?ip=144.76.252.9")

Finally, we can implement multiple blocklists and have the script accept command line input:

import dns.resolver
import sys

bls = ["zen.spamhaus.org", "spam.abuse.ch", "cbl.abuseat.org", "virbl.dnsbl.bit.nl", "dnsbl.inps.de", 
	"ix.dnsbl.manitu.net", "dnsbl.sorbs.net", "bl.spamcannibal.org", "bl.spamcop.net", 
	"xbl.spamhaus.org", "pbl.spamhaus.org", "dnsbl-1.uceprotect.net", "dnsbl-2.uceprotect.net", 
	"dnsbl-3.uceprotect.net", "db.wpbl.info"]

if len(sys.argv) != 2:
	print 'Usage: %s <ip>' %(sys.argv[0])
	quit()

myIP = sys.argv[1]

for bl in bls:
	try:
		my_resolver = dns.resolver.Resolver()
		query = '.'.join(reversed(str(myIP).split("."))) + "." + bl
		answers = my_resolver.query(query, "A")
		answer_txt = my_resolver.query(query, "TXT")
		print 'IP: %s IS listed in %s (%s: %s)' %(myIP, bl, answers[0], answer_txt[0])
	except dns.resolver.NXDOMAIN:
		print 'IP: %s is NOT listed in %s' %(myIP, bl)

This produces the following output:

root@w:~/tmp# ./bl.py 144.76.252.9
IP: 144.76.252.9 IS listed in zen.spamhaus.org (127.0.0.4: "http://www.spamhaus.org/query/bl?ip=144.76.252.9")
IP: 144.76.252.9 is NOT listed in spam.abuse.ch
IP: 144.76.252.9 IS listed in cbl.abuseat.org (127.0.0.2: "Blocked - see http://cbl.abuseat.org/lookup.cgi?ip=144.76.252.9")
IP: 144.76.252.9 is NOT listed in virbl.dnsbl.bit.nl
IP: 144.76.252.9 is NOT listed in dnsbl.inps.de
IP: 144.76.252.9 IS listed in ix.dnsbl.manitu.net (127.0.0.2: "Your e-mail service was detected by mx.selfip.biz (NiX Spam) as spamming at Sat, 22 Nov 2014 11:17:11 +0100. Your admin should visit http://www.dnsbl.manitu.net/lookup.php?value=144.76.252.9")
IP: 144.76.252.9 IS listed in dnsbl.sorbs.net (127.0.0.6: "Currently Sending Spam See: http://www.sorbs.net/lookup.shtml?144.76.252.9")
IP: 144.76.252.9 is NOT listed in bl.spamcannibal.org
IP: 144.76.252.9 IS listed in bl.spamcop.net (127.0.0.2: "Blocked - see http://www.spamcop.net/bl.shtml?144.76.252.9")
IP: 144.76.252.9 IS listed in xbl.spamhaus.org (127.0.0.4: "http://www.spamhaus.org/query/bl?ip=144.76.252.9")
IP: 144.76.252.9 is NOT listed in pbl.spamhaus.org
IP: 144.76.252.9 IS listed in dnsbl-1.uceprotect.net (127.0.0.2: "IP 144.76.252.9 is UCEPROTECT-Level 1 listed. See http://www.uceprotect.net/rblcheck.php?ipr=144.76.252.9")
IP: 144.76.252.9 is NOT listed in dnsbl-2.uceprotect.net
IP: 144.76.252.9 is NOT listed in dnsbl-3.uceprotect.net
IP: 144.76.252.9 IS listed in db.wpbl.info (127.0.0.2: "Spam source - http://wpbl.info/record?ip=144.76.252.9")
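
One caveat with the loop above: a slow or unreachable blocklist raises a timeout rather than NXDOMAIN, and a list can in principle return an A record without a TXT record; either case would escape the single except clause. Here's a hedged sketch of a per-list check that also handles those situations, using dnspython's documented exception classes (the three-second timeout is an arbitrary choice):

import dns.resolver
import dns.exception

def check_bl(ip, bl):
	my_resolver = dns.resolver.Resolver()
	my_resolver.timeout = 3		#seconds to wait on each nameserver
	my_resolver.lifetime = 3	#total seconds allowed per query
	query = '.'.join(reversed(str(ip).split("."))) + "." + bl
	try:
		answers = my_resolver.query(query, "A")
	except dns.resolver.NXDOMAIN:
		print 'IP: %s is NOT listed in %s' %(ip, bl)
		return
	except (dns.exception.Timeout, dns.resolver.NoNameservers, dns.resolver.NoAnswer):
		print 'IP: %s - no usable answer from %s' %(ip, bl)
		return
	try:
		answer_txt = my_resolver.query(query, "TXT")[0]
	except (dns.exception.Timeout, dns.resolver.NXDOMAIN, dns.resolver.NoNameservers, dns.resolver.NoAnswer):
		answer_txt = "no TXT record available"
	print 'IP: %s IS listed in %s (%s: %s)' %(ip, bl, answers[0], answer_txt)

check_bl("144.76.252.9", "zen.spamhaus.org")

Dropping a function like this into the loop keeps a single dead blocklist from stalling or crashing the whole run.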

by Adam Palmer at November 22, 2014 04:25 PM

Aaron Johnson

Links: 11-21-2014

  • Against Productivity — The Message — Medium
    Quote: "We should spend more time wasting time. We all need to be bored more. We all need to spend more time looking quizzically at birds we don’t recognize. We all need a little more time to connect the dots and see if they matter. I don’t know how much more, but sometimes you have to do things without knowing how much you need."
    (categories: productivity work life )

by ajohnson at November 22, 2014 06:30 AM

Chris Siebenmann

The effects of a moderate Hacker News link to here

A few days ago my entry on Intel screwing up their DC S3500 SSDs was posted to Hacker News here and rose moderately high up the rankings, although I don't think it made the front page (I saw it on the second page at one point). Fulfilling an old promise, here's a report of what the resulting traffic volume looked like.

First, some crude numbers from this Monday onwards for HTTP requests for Wandering Thoughts, excluding Atom feed requests. As a simple measurement of how many new people visited, I've counted unique IPs fetching my CSS file. So the numbers:

Day            That entry   Other pages   CSS fetches
November 17             0          5041           453
November 18         18255          6178         13585
November 19         17112         10141         11940
November 20           908          6341           876
November 21           228          4811           530

(Some amount of my regular traffic is robots and some of it is from regular visitors who already have my CSS file cached and don't re-fetch it.)

Right away I can say that it looks like people spilled over from the directly linked entry to other parts of Wandering Thoughts. The logs suggest that this mostly went to the blog's main page and my entry on our OmniOS fileservers, which was linked to in the entry (much less traffic went to my entry explaining why 4K disks can be a problem for ZFS). Traffic for the immediately preceding and following entries also went up, pretty much as I'd expect, but this is nowhere near all of the extra traffic so people clearly did explore around Wandering Thoughts to some extent.

Per-day request breakdowns are less interesting for load than per minute or even per second breakdowns. At peak activity, I was seeing six to nine requests for the entry per second and I hit 150 requests for it a minute (for only one minute). The activity peak came very shortly after I started getting any significant volume of hits; things start heating up around 18:24 on the 18th, go over 100 views a minute at 18:40, peak at 19:03, and then by 20:00 or so I'm back down to 50 a minute. Unfortunately I don't have latency figures for DWiki so I don't know for sure how well it responded while under this load.

(Total page views on the blog go higher than this but track the same activity curve. CSpace as a whole was over 100 requests a minute by 18:39 and peaked at 167 requests at 19:05.)

The most surprising thing to me is the amount of extra traffic to things other than that entry on the 19th. Before this happened I would have (and did) predict a much more concentrated load profile, with almost all of the traffic going to the directly linked entry. This is certainly the initial pattern on the 18th, but then something clearly changed.

(I was surprised by the total amount of traffic and how many people seem to have visited, but that's just on a personal level; it's surprising to me that so many people are interested in looking at something I've written.)

This set of stats may well still leave people with questions. If so, let me know and I'll see if I can answer them. Right now I've stared at enough Apache logs for one day and I've run out of things to say, so I'm stopping this entry here.

Sidebar: HTTP Referers

HTTP Referers for that entry over the 18th to the 20th are kind of interesting. There were 17,508 requests with an empty Referer, 13,908 from the HTTPS Hacker News front page, 592 from a google.co.uk redirector of some sort, 314 from the t.co link in this HN repeater tweet, and then we're down to a longer tail (including reddit's /r/sysadmin, where it was also posted). The Referers feature a bunch of alternate interfaces and apps for Hacker News and so on (pipes.yahoo.com was surprisingly popular). Note that there were basically no Referers from any Hacker News page except the front page, even though, as far as I know, the story never made it to the front page. I don't have an explanation for this.

by cks at November 22, 2014 05:47 AM

November 21, 2014

Adams Tech Talk

Performing DNS Queries in Python

dnspython provides a detailed interface into DNS. In its simplest form, it’s possible to perform queries in only a couple of lines of code. Here’s a commented example:

import dns.resolver #import the module
myResolver = dns.resolver.Resolver() #create a new instance named 'myResolver'
myAnswers = myResolver.query("google.com", "A") #Lookup the 'A' record(s) for google.com
for rdata in myAnswers: #for each response
    print rdata #print the data

The results in my case are:

173.194.125.3
173.194.125.7
173.194.125.4
173.194.125.8
173.194.125.9
173.194.125.5
173.194.125.2
173.194.125.0
173.194.125.6
173.194.125.1
173.194.125.14


In the same way, we can perform MX and NS queries with:

myAnswers = myResolver.query("google.com", "MX")

and

myAnswers = myResolver.query("google.com", "NS")

We can easily look up TXT records, which will contain SPF records for a domain if present:

myAnswers = myResolver.query("iodigitalsec.com", "TXT")

Which results in:

"v=spf1 mx a ptr ip4:148.251.196.144/28 ip4:85.10.227.160/28 ip4:85.10.227.160/28 ~all"

These are some of the more common types; however, DNS is an expansive protocol and further information on query types can be found here.

When it comes to reverse DNS (IP to hostname), it’s not as simple as performing an A record lookup on the IP address. We need to perform a PTR lookup instead, but not just on the IP address. The IP needs to be reversed, and have “.in-addr.arpa” appended to it.

To resolve the IP 173.194.125.3 to a hostname, we use the code:

myAnswers = myResolver.query("3.125.194.174.in-addr.arpa", "PTR")

We can handle the crafting of the request programmatically as follows:

ip = "173.194.125.3"
req = '.'.join(reversed(ip.split("."))) + ".in-addr.arpa"
myAnswers = myResolver.query(req, "PTR")

The DNS resolver also gives us the option of specifying our own nameservers. This can be achieved by using:

myResolver = dns.resolver.Resolver()
myResolver.nameservers = ['8.8.8.8', '8.8.4.4']

Including an error catch, we can put the whole thing together with:

import dns.resolver

myResolver = dns.resolver.Resolver()
myResolver.nameservers = ['8.8.8.8', '8.8.4.4']

try:
        myAnswers = myResolver.query("google.com", "A")
        for rdata in myAnswers:
                print rdata
except:
        print "Query failed"

Now try to put together an application that performs command-line DNS queries (one possible sketch is shown after the examples), e.g.:

./pydnslookup.py A google.com
./pydnslookup.py MX msn.com
./pydnslookup.py PTR 8.8.8.8
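
If you want a starting point to compare against, here's one possible minimal sketch; the PTR branch reuses the octet-reversal trick from above and the error handling is deliberately simple:

import sys
import dns.resolver

if len(sys.argv) != 3:
	print 'Usage: %s <type> <name-or-ip>' %(sys.argv[0])
	quit()

qtype = sys.argv[1].upper()
target = sys.argv[2]

myResolver = dns.resolver.Resolver()
myResolver.nameservers = ['8.8.8.8', '8.8.4.4']

if qtype == "PTR":
	#turn an IP into its reversed in-addr.arpa form first
	target = '.'.join(reversed(target.split("."))) + ".in-addr.arpa"

try:
	myAnswers = myResolver.query(target, qtype)
	for rdata in myAnswers:
		print rdata
except dns.resolver.NXDOMAIN:
	print "No such name"
except Exception, e:
	print "Query failed: %s" %(e)

Usage matches the examples above, e.g. ./pydnslookup.py MX msn.com.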

by Adam Palmer at November 21, 2014 11:55 PM

Google Blog

Through the Google lens: search trends November 14-20

Devastating snowstorms, bizarre interviews and addictive podcasts? It was an unusual week on the search charts this time around.

A frosty reception
If you looked on Maps for Buffalo, you wouldn’t find it. OK, that’s a bit of an exaggeration, but the city is buried underneath six feet of snow… literally. While people across the country are just getting ready for Turkey Day, Buffalonians are dealing with a snowstorm that’s set to break several records and may keep them trapped in their houses for a while–white Thanksgiving, anyone?

In the court of public opinion
People were searching for more information about famed comedian Bill Cosby this week after sexual abuse allegations made headlines.

And in the political world, Democrats in the Senate blocked the Keystone XL proposal, a hotly contested initiative to build an oil pipeline from Canada to Nebraska. While searchers were wondering how this bill would affect gas prices, the door is closed on the issue at the moment.

Teens aren't what they used to be
A toymaker with a mission decided it was makeover time for Barbie, the doll everyone loves to hate. Nickolay Lamm created “normal Barbie,” a doll that everyone could relate to: less “material girl” and more “girl next door,” non-size-zero waist included. She reflects the body of the average 19-year-old woman, and both parents and kids have taken a liking to the fact that the toy actually...looks like a real person (she looks like my sister!). With freckles and acne sticker expansion packs also in the mix, we think Lamm’s got the awkward teenage years down pat.

Speaking of teenagers: 16-year-old and 14-year-old celebrity siblings Jaden and Willow Smith, heirs to The Fresh Prince of Bel Air’s throne, were in the spotlight this week after giving what some might describe as a pretty spacey interview to the New York Times’ T Magazine. The wide-ranging piece covered their thoughts on topics like Prana energy (what?), the duality of the mind (how??) and goals of imprinting yourself on everything (why???) — and baffled social media and searchers alike. Time Magazine got in on the fun and released a poem generator made from the interview’s most interesting quotes. Here’s our Jaden and Willow Smith haiku (spoiler: it doesn’t make any sense).

             Babies remember
             The most craziest person of all time
             Driver’s ed? What’s up?

Colonel Mustard in the library
There’s always time for a tale of murder and mystery. This week the Internet played the role of detective as people were curious to learn more about NPR’s new Serial podcast, which explores a 15-year-old real-life homicide case. The series is insanely popular, hitting the 5 million downloads and streams mark more quickly than any other podcast before it, but not without its fair share of controversy. The victim’s family members have expressed concern about the sensationalization of the case.

Tip of the week
Bored on the bus or subway? Just say “OK Google, flip a coin.” What do you have to lose?

by Emily Wood (noreply@blogger.com) at November 21, 2014 01:21 PM

Chris Siebenmann

Lisp and data structures: one reason it hasn't attracted me

I've written before about some small scale issues with reading languages that use Lisp style syntax, but I don't think I've said what I did the other day on Twitter, which is that the syntax of how Lisp languages are written is probably the primary reason that I slide right off any real interest in them. I like the ideas and concepts of Lisp style languages, the features certainly sound neat, and I often use all of these in other languages when I can, but actual Lisp syntax languages have been a big 'nope' for a long time.

(I once wrote some moderately complex Emacs Lisp modules, so I'm not coming from a position of complete ignorance on Lisp. Although my ELisp code didn't exactly make use of advanced Lisp features.)

I don't know exactly why I really don't like Lisp syntax and find it such a turn-off, but I had an insight on Twitter. One of the things about the syntax of S-expressions is that they very clearly are a data structure. Specifically, they are a list. In effect this gives lists (yes, I know, they're really cons cells) a privileged position in the language. Lisp is lists; you cannot have S-expressions without them. Other languages are more neutral on what they consider to be fundamental data structures; there is very little in the syntax of, say, C that privileges any particular data structure over another.

(Languages like Python privilege a few data structures by giving them explicit syntax for initializers, but that's about it. The rest is in the language environment, which is subject to change.)

Lisp is very clearly in love with lists. If it's terribly in love with lists, it doesn't feel as if it can be fully in love with other data structures; whether or not it's actually true, it feels like other data structures are going to be second class citizens. And this matters to how I feel about the language, because lists are often not the data structure I want to use. Even being second class in just syntax matters, because syntactic sugar matters.

(In case it's not clear, I do somewhat regret that Lisp and I have never clicked. Many very smart people love Lisp a lot and so it's clear that there are very good things there.)

by cks at November 21, 2014 06:36 AM

Daniel E. Markle

Now on Linked In

I have resisted creating a LinkedIn profile for a while now, as I didn't want to manage yet another social media site I wouldn't use. However, as it seems to be a prominent place for career connection building, I decided to give it a try.

Especially as it is more about connections than posting content, it will be interesting to see if it brings any friendships or opportunities my way.

by dmarkle@ashtech.net (Daniel E. Markle) at November 21, 2014 03:02 AM

Ubuntu Geek

Install webmin on ubuntu 14.10 (Utopic Unicorn) Server

Webmin is a web-based interface for system administration for Unix. Using any modern web browser, you can setup user accounts, Apache, DNS, file sharing and much more. Webmin removes the need to manually edit Unix configuration files like /etc/passwd, and lets you manage a system from the console or remotely.
(...)
Read the rest of Install webmin on ubuntu 14.10 (Utopic Unicorn) Server (141 words)


by ruchi at November 21, 2014 12:15 AM

November 20, 2014

SLAPTIJACK

OS X Not Appending Search Domains - Yosemite Edition

It seems this problem has resurfaced with the new version of Mac OS X. More specifically, it seems to affect appending search domains when the hostname contains a dot. In Yosemite (10.10), mDNSResponder has been replaced with discoveryd. Fortunately, all we need to do here is add the --AlwaysAppendSearchDomains argument to the LaunchDaemon startup file and everything should work as expected.

  1. Before you do anything, make sure you have updated to at least OS X 10.10.1.
  2. You will need to edit /System/Library/LaunchDaemons/com.apple.discoveryd.plist. Add <string>--AlwaysAppendSearchDomains</string> to the list of strings in the ProgramArguments <array>.
  3. Restart discoveryd to see your changes take effect.
    sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.discoveryd.plist
    sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.discoveryd.plist
  4. Profit!

by Scott Hebert at November 20, 2014 04:20 PM


Administered by Joe. Content copyright by their respective authors.