Blog

My random thoughts, musings and rants about Linux, Software Engineering, Hardware and anything else which comes to mind.

A Simple Shorewall Firewall

I've built Linux / IPTables based routers / firewalls many times over the years. I figured it was probably time I documented building a simple SOHO solution.

I use Shorewall because it makes dealing with IPTables simple. As much as I like IPTables, its rule syntax is f**king awful. Shorewall offers a layer of abstraction over IPTables and makes common use cases trivial, while offering more features than alternatives such as ufw.

For the sake of this example, our Linux box has the following network interfaces:

  • ppp0 - the intertubes
  • eth0 - our private internal network
  • eth1 - our public guest network
  • tun0 - our OpenVPN server

Core shorewall config

In the main shorewall configuration file (/etc/shorewall/shorewall.conf), ensure the following properties are set as follows:

STARTUP_ENABLED=Yes
IP_FORWARDING=On

Setting up zones

Shorewall's world is all about zones: a zone is merely a network that we are going to firewall between. In this example we have the following zones:

  • net - the internet
  • loc - our local network
  • gst - our guest network
  • vpn - our VPN network

The zones configuration file (/etc/shorewall/zones) will look like:

#
# Shorewall version 4 - Zones File
#
# For information about this file, type "man shorewall-zones"
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-zones.html
#
###############################################################################
#ZONE   TYPE            OPTIONS         IN                      OUT
#                                       OPTIONS                 OPTIONS
fw       firewall
net      ipv4
loc      ipv4
gst      ipv4
vpn      ipv4

Note that the fw zone means this local machine.

Now we have zones defined, we need to assign our interfaces to our zones. The file (/etc/shorewall/interfaces) configures these assignments and will look like:

#
# Shorewall version 4 - Interfaces File
#
# For information about entries in this file, type "man shorewall-interfaces"
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-interfaces.html
#
###############################################################################
#ZONE   INTERFACE       BROADCAST       OPTIONS
net     ppp0            detect
loc     eth0            detect
gst     eth1            detect
vpn     tun0            detect

Setting up policies

Policies specify the default rule action for traffic between zones. In our example by default we will:

  • permit traffic from the local network to the internet
  • permit traffic from the guest network to the internet
  • permit traffic from the vpn network to the local network

The file (/etc/shorewall/policy) will look like:

#
# Shorewall version 4 - Policy File
#
# For information about entries in this file, type "man shorewall-policy"
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-policy.html
#
###############################################################################
#SOURCE DEST    POLICY          LOG     LIMIT:          CONNLIMIT:
#                               LEVEL   BURST           MASK
loc     net     ACCEPT
gst     net     ACCEPT
vpn     loc     ACCEPT
fw      all     ACCEPT
net     all     DROP
all     all     REJECT

Note that all is a pseudo-zone meaning any zone; as such, the last line means that by default traffic between zones will be rejected.

Setting up rules

Rules are exceptions to policy, defining specific traffic which will be allowed through. In this example, we are going to permit ICMP ping and SSH traffic from any network to the local machine. We will also forward ports 80 and 443 to a specific server on the local network.

The rules configuration file (/etc/shorewall/rules) will look like:

#
# Shorewall version 4 - Rules File
#
# For information on the settings in this file, type "man shorewall-rules"
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-rules.html
#
######################################################################################################################################################################################
#ACTION         SOURCE                  DEST                    PROTO   DEST          SOURCE            ORIGINAL        RATE       USER/    MARK    CONNLIMIT       TIME         HEADERS         SWITCH
#                                                                       PORT          PORT(S)           DEST            LIMIT      GROUP
#SECTION ALL
#SECTION ESTABLISHED
#SECTION RELATED
SECTION NEW
ACCEPT          all                     fw                      tcp     22            -                 -
ACCEPT          all                     fw                      icmp    0,8           -                 -
DNAT:info       net                     loc:172.30.14.187       tcp     80,443        -                 -

Setting up NAT

Due to the joys of IPv4, we need to masquerade local traffic to access the internet. The Shorewall masq configuration file (/etc/shorewall/masq) will look like:

#
# Shorewall version 4 - Masq file
#
# For information about entries in this file, type "man shorewall-masq"
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-masq.html
#
#############################################################################################
#INTERFACE:DEST         SOURCE          ADDRESS         PROTO   PORT(S) IPSEC   MARK    USER/
#                                                                                       GROUP
ppp0                    172.30.0.0/16

Note the specified source network matches traffic from our local and guest networks.

That's all folks

It really is that straightforward: simply run shorewall restart to make the new ruleset active.
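
It's worth validating the configuration before applying it. A minimal sketch of the sequence (run as root; the prompt is illustrative):

firewall:~ # shorewall check
firewall:~ # shorewall restart

shorewall check compiles the ruleset without touching the live firewall, so a typo in one of the config files won't lock you out of the box.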

Brew A Beer Day

Yesterday rather than the usual daily grind I went over to Sadler's Ales in Stourbridge for a Brew A Beer Day (it was a Christmas present). It turned out to be rather more hands-on than I expected and was immense fun. As someone who enjoys beer, it was fascinating to get involved with the process and to chat with a brewer.

Time to mash

The day kicked off by meeting the two other people who were on the same event: Martin and Peter. We didn't know each other beforehand and had all been given the experience as a present.

All beer starts off with malt, malted barley to be precise.

Our first order of the day was to mash 300kg of malted barley with around four barrels (a barrel is 147l) of water at 71°C in the mash tun. The mash is essentially a porridge, which extracts the sugar from the barley and also converts the starch in the barley to sugar. Malted barley has started to germinate, so it contains the enzymes needed to convert starch to sugar.

Barley can be roasted (a bit like coffee) in different grades: caramalt, chocolate malt, etc. This roasting gives the malt a dark colour and a caramel to bitter taste. A pale ale is made using just plain malt; other malts are added to create amber ales, porters and stouts. A stout would have a proportion of chocolate malt. We were brewing an amber ale which had 25kg of caramalt in it. There is also a small proportion of wheat added, which helps in forming the head of the beer.

This involved loading 300kg of malt into the mash tun, in 25kg sacks. Each sack had to be lugged up a ladder and poured into a hopper. The hopper helps to evenly mix the malt with the hot water.

The mash is left for 1 hour to steep, giving us time for breakfast and a beer.

Tapping the wort

After breakfast it was time to extract the wort (the liquid from the mash) and transfer it to the kettle, where it is boiled up with hops added for bitterness and aroma. The mash is also sparged during this process: a sparge arm rotates around the mash tun, drizzling hot water over the mash to extract more sugar from the malt.

At this stage the wort is surprisingly sweet, with an almost syrup-like consistency and malty overtones.

Whilst the pump was busy transferring the wort into the kettle, it was time to prepare the hops. After malt, hops are the other key ingredient in beer, providing all the bittering and aroma. Different varieties of hops, and hops from different climes, vary in their level of bitterness and in aroma.

The hops came vacuum packed, so we needed to flake them whilst weighing them out. It was surprising how sticky our hands were after this, covered in a kind of green resin; smelt rather good though!

Emptying the mash tun

With all the wort extracted from the mash, it was time to empty the mash tun. That involved shovelling out 300kg of wet barley into sacks. Not particularly glamorous and a bit of hard work, but fun all the same.

With the mash tun empty and the wort busy getting up to boiling point, it was time to sit down and relax over lunch, and some more beer.

Adding the hops

Lunch over, it was time to add the hops to the boiling wort. Hops are added at various times during the boil. Hops added early on (and boiled for a while) add bitterness to the beer; hops added late on add aroma. We were brewing a golden beer, which had about 3kg of bittering hops added early on and 5kg of aroma hops added late on.

The brew house was filled with a wonderful aroma when the hops were added.

Fermenting

With the wort all boiled and infused with the hops, it is time to transfer it to the fermentation vessel. Over three days in the fermentation vessel the yeast will turn the sugar into ethanol!

Having just been boiled the wort is rather hot, far too hot to add the yeast to directly. The wort is pumped through a heat exchanger en route to the fermentation vessel, with cold water pumped through the other channel. This results in quickly cooled wort at 20°C and hot water at 71°C, ready for the following day's brew.

The final and somewhat critical step is to add the yeast.

The last, and by far the least enviable, task is to clean out the kettle; thankfully we were spared that job. To be honest I'm not sure I'd be able to get in through the opening to clean it out.

Pulling my first pint

At this point, I can safely say I've (helped) brew roughly 3,000 pints of beer. With the yeast bubbling away in the fermentation vessel, we retired to the bar for a farewell pint and I got the chance to pull my first pint (well, a half actually).

All in all, I had a wonderful day, it was fun all around and I learnt a fair bit too. The people at Sadler's Ales were really friendly and made it an excellent day. I'd recommend it to anyone who likes beer, or as a present to anyone who knows someone who likes beer.

openSUSE on the Odroid U3

A while back I got an Odroid U3, an ARM based single board computer, much like the Raspberry Pi, only a lot more powerful: the Odroid U3 has a quad core 1.7GHz ARM CPU and 2GB RAM.

However, only Xubuntu images are provided officially, and being an openSUSE user I wanted to run openSUSE on the Odroid U3. I managed to hack together an image a few months back, but sadly didn't really document how I did it, rather silly of me.

The basic approach is to hack together an image from the Xubuntu Odroid U3 image and the openSUSE JeOS root file system image. This allows the use of the Odroid U3 customised kernels with an openSUSE based user space. We can then use the Odroid U3 kernel update script, once our image boots, to get the latest kernel images.

To build our image, we are going to need a machine with a few gigs of free space to assemble the image and then copy it over. You will also need a USB micro-SD card reader to copy the image to the card.

All steps should be executed as root!

First we need to download some stuff that we need to build our franken-image:

buildhost:~ # wget http://dn.odroid.com/4412/Linux/ubuntu-u2-u3/xubuntu-13.10-desktop-armhf_odroidu_20140211.img.xz
buildhost:~ # xz -d xubuntu-13.10-desktop-armhf_odroidu_20140211.img.xz
buildhost:~ # wget http://download.opensuse.org/ports/armv7hl/distribution/13.1/appliances/openSUSE-13.1-ARM-JeOS.armv7-rootfs.armv7l-1.12.1-Build33.1.tbz
buildhost:~ # wget http://builder.mdrjr.net/tools/boot.scr_opensuse.tar
buildhost:~ # wget http://builder.mdrjr.net/tools/kernel-update.sh
buildhost:~ # wget http://builder.mdrjr.net/kernel-3.8/00-LATEST/odroidu2.tar.xz
buildhost:~ # wget http://builder.mdrjr.net/tools/firmware.tar.xz
buildhost:~ # xz -d odroidu2.tar.xz
buildhost:~ # xz -d firmware.tar.xz

Now take a copy of the Xubuntu image; this will form the basis for our custom image. By starting with a working image, we have the partitioning and bootloader already in place. As I understand it, the first 1MiB of the image is taken up by the phase one bootloader and the position of the partitions is important. The phase one bootloader reads boot.scr from the vfat partition, which then loads the zImage (compressed kernel).

buildhost:~ # cp xubuntu-13.10-desktop-armhf_odroidu_20140211.img custom_opensuse.img

Next we need to ensure that the loop device module is loaded.

buildhost:~ # modprobe loop

Now we can mount our image.

buildhost:~ # losetup /dev/loop0 custom_opensuse.img
buildhost:~ # kpartx -a /dev/loop0
buildhost:~ # mkdir tmp_boot
buildhost:~ # mkdir tmp_root
buildhost:~ # mount /dev/mapper/loop0p1 tmp_boot
buildhost:~ # mount /dev/mapper/loop0p2 tmp_root

Now empty out the entire root file system; we are going to replace it with the openSUSE root file system and then patch in the latest kernels.

buildhost:~ # cd tmp_root
buildhost:~/tmp_root # rm -rf *

Now extract the openSUSE root file system into our image.

buildhost:~/tmp_root # tar -xjf ~/openSUSE-13.1-ARM-JeOS.armv7-rootfs.armv7l-1.12.1-Build33.1.tbz

Now we can patch in the Odroid kernel.

buildhost:~/tmp_root # cd boot
buildhost:~/tmp_root/boot # cp ../../boot.scr_opensuse.tar ./
buildhost:~/tmp_root/boot # tar -xvf boot.scr_opensuse.tar
buildhost:~/tmp_root/boot # rm boot.scr_opensuse.tar
buildhost:~/tmp_root/boot # rm -rvf x
buildhost:~/tmp_root/boot # mv x2u2/* ./
buildhost:~/tmp_root/boot # rm -rvf x2u2
buildhost:~/tmp_root/boot # cp boot-hdmi-720p60hz.scr boot.scr
buildhost:~/tmp_root/boot # cd ..
buildhost:~/tmp_root # cd lib/modules
buildhost:~/tmp_root/lib/modules # rm -rf *
buildhost:~/tmp_root/lib/modules # cd ../..
buildhost:~/tmp_root # cp ../odroidu2.tar ./
buildhost:~/tmp_root # tar -xvf odroidu2.tar
buildhost:~/tmp_root # rm odroidu2.tar
buildhost:~/tmp_root # cd lib/firmware
buildhost:~/tmp_root/lib/firmware # rm -rf *
buildhost:~/tmp_root/lib/firmware # cp ../../../firmware.tar ./
buildhost:~/tmp_root/lib/firmware # tar -xvf firmware.tar
buildhost:~/tmp_root/lib/firmware # rm firmware.tar
buildhost:~/tmp_root/lib/firmware # cd ../../..
buildhost:~ # cd tmp_boot
buildhost:~/tmp_boot # rm -rvf *
buildhost:~/tmp_boot # cp ../tmp_root/boot/* ./

Now we need to set up fstab: edit ~/tmp_root/etc/fstab to have the following contents:

devpts  /dev/pts          devpts  mode=0620,gid=5 0 0
proc    /proc             proc    defaults        0 0
sysfs   /sys              sysfs   noauto          0 0
debugfs /sys/kernel/debug debugfs noauto          0 0
usbfs   /proc/bus/usb     usbfs   noauto          0 0
tmpfs   /run              tmpfs   noauto          0 0
UUID=e139ce78-9841-40fe-8823-96a304a09859 / ext4 defaults 0 0
/dev/mmcblk0p1 /boot vfat defaults 0 0

Note: use blkid to check that the UUID of the root fs (/dev/mapper/loop0p2) is e139ce78-9841-40fe-8823-96a304a09859.
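
For example (the UUID shown is the one used in the fstab above):

buildhost:~ # blkid /dev/mapper/loop0p2
/dev/mapper/loop0p2: UUID="e139ce78-9841-40fe-8823-96a304a09859" TYPE="ext4"

If blkid reports a different UUID, update the root entry in fstab to match.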

Now let's make some changes to the systemd journal config: edit ~/tmp_root/etc/systemd/journald.conf and change the following variables:

Storage=volatile
ForwardToConsole=yes
TTYPath=/dev/tty12

This allows us to see the journal on TTY 12 (CTRL-ALT-F12), which is handy for debugging; it also avoids storing the journal on disk (as SD cards can be horrifically slow).

Now we need to make some changes to the systemd config:

buildhost:~/tmp_root # cd etc/systemd/system
buildhost:~/tmp_root/etc/systemd/system # cd default.target.wants
buildhost:~/tmp_root/etc/systemd/system/default.target.wants # rm -rvf *
buildhost:~/tmp_root/etc/systemd/system/default.target.wants # cd ..
buildhost:~/tmp_root/etc/systemd/system # cd multi-user.target.wants
buildhost:~/tmp_root/etc/systemd/system/multi-user.target.wants # rm -f wpa_supplicant.service
buildhost:~/tmp_root/etc/systemd/system/multi-user.target.wants # rm -f remote-fs.target
buildhost:~/tmp_root/etc/systemd/system/multi-user.target.wants # cd ../..
buildhost:~/tmp_root/etc/systemd # ln -sf /usr/lib/systemd/system/multi-user.target default.target

It's handy to copy in the kernel update script:

buildhost:~/tmp_root/root # cp ../../kernel-update.sh ./
buildhost:~/tmp_root/root # chmod +x kernel-update.sh

Finally, we can unmount and flash the image:

buildhost:~ # sync
buildhost:~ # umount tmp_root
buildhost:~ # umount tmp_boot
buildhost:~ # kpartx -d /dev/loop0
buildhost:~ # losetup -d /dev/loop0
buildhost:~ # dd if=./custom_opensuse.img of=/dev/sdb bs=32M

Note: make sure /dev/sdb is the SD card you want to write to, check dmesg if need be.

Note: you're liable to get odd errors from the SD card if you fail to set bs; a 32MiB block size seemed the fastest for me.

Now you should be able to boot your Odroid U3 into openSUSE 13.1 (console). Once booted you can use zypper to update any RPMs or to install a GUI. The root password is linux. Note that SSH is enabled by default.
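
For example, to bring the system up to date, run the following on the Odroid as root (the hostname is illustrative):

odroidu3:~ # zypper refresh
odroidu3:~ # zypper update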

It is worth noting that the I/O performance of cheap SD cards is pretty terrible (or at least it is for the SD card I have); remember to be patient.

Any issues and I can probably give you a hand in the #suse IRC channel, or tweet me @intrbiz.

TL;DR

Download my image: odroid_u3_opensuse_131.img.xz. The image is around 4GiB uncompressed and the SHA256 hash is 96a236f7779d08f1cba25a91ef6375f0b000eafb94e2018ec8a9ace67e995272.

Then extract and flash to your SD card:

buildhost:~ # xz -d odroid_u3_opensuse_131.img.xz
buildhost:~ # dd if=./odroid_u3_opensuse_131.img of=/dev/sdb bs=32M

Note: make sure /dev/sdb is the SD card you want to write to, check dmesg if need be.

Plug in, power up and enjoy openSUSE on your Odroid U3. The root password is linux; note that SSH is enabled by default.

My Blog, Reborn

This blog has been broken for a while and I've finally gotten around to fixing it. However, rather than just reviving the old blog, I thought it was time for a change. Whilst it may look the same, under the hood it's a total rewrite.

I didn't dislike the previous WordPress incarnation, although there were some nasty hacks in it. But given I've developed Balsa, a fast and lightweight Java web application framework, I figured I should use it for my own stuff.

So, this blog is now being served from a Balsa application, the content is written in Markdown and stored in Git. It only took a day to knock up the application, Balsa has good support for rendering Markdown content.

Creating a post is now as simple as:

  • Firing up Kate (a rather lovely text editor)
  • Attempting to extract my thoughts in a coherent manner (the hardest part for me)
  • Finally the git commit; git push origin

It's refreshingly simple to add a post now: just write, commit and push.

I've even put the code behind my blog on GitHub for people who are really interested.

Java 8

I've been using Java 8 since it was released earlier this year and have found some of the new features game changing, to the extent that I've now moved most of my projects to make use of them. Support for Java 8 in Eclipse Luna is good and I've not run into any major issues.

Lambda Expressions

The single biggest feature in Java 8 is support for Lambda expressions, which further enable more functional programming styles in Java. Java has always had a form of closure via its anonymous classes, which have often been used to implement callbacks.

Lambda expressions are extremely useful when working with collections. As a simple example, filtering a List prior to Java 8 would require something along the lines of:

// List<String> input;
List<String> filtered = new LinkedList<String>();
for (String e : input)
{
    if ("filter".equals(e))
        filtered.add(e);
}

With Java 8 we can transform this into:

// List<String> input;
List<String> filtered = input.stream()
                        .filter((e) -> { return "filter".equals(e); })
                        .collect(Collectors.toList());

Certainly a major improvement in semantics and readability. While you might expect Lambda expressions to be slower, Java 8 has taken care to implement them as efficiently as possible, making use of InvokeDynamic and compiling each Lambda expression to a synthetic method.
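
As an aside, the filter in the example above can be written even more tersely using a method reference, which Java 8 also introduces:

List<String> filtered = input.stream()
                        .filter("filter"::equals)
                        .collect(Collectors.toList());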

Having been using Java 8 for the last few months, I can honestly say that Lambda expressions have changed how I code. The addition of Lambda expressions has made as much of an impact as adding generics in Java 5 did.

Default Methods

Default methods allow concrete method implementations to be added to interfaces.

Prior to Java 8, the methods of an interface could only be abstract. Interfaces only defined how objects should be interacted with; they were Java's solution to multiple inheritance, attempting to avoid some of its issues.

I've always liked the simplicity of the Java object model; however, at times it can be a straitjacket for certain use cases.

Default methods seem like a good compromise between flexibility and simplicity.

I've found them useful for avoiding having to copy and paste trivial code.

For example:

public interface Parameterised
{
    List<Parameter> getParameters();

    default Parameter getParameter(String name)
    {
        // note: get() will throw NoSuchElementException if no parameter
        // matches; use orElse(null) if a null result is preferable
        return this.getParameters().stream()
                .filter((p) -> { return name.equals(p.getName()); })
                .findFirst()
                .get();
    }
}
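
As a sketch of how this is consumed (Parameter is assumed to be a simple bean with a getName() method, and HttpRequest is just an illustrative name), an implementing class only has to provide the one abstract method:

public class HttpRequest implements Parameterised
{
    private final List<Parameter> parameters = new LinkedList<Parameter>();

    public List<Parameter> getParameters()
    {
        return this.parameters;
    }

    // getParameter(String) is inherited from the interface for free
}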

Repeatable Annotations

I really like annotations in Java: they allow metadata to be added to code elements. This is really handy for frameworks, which can then use this information to customise how objects are interacted with, allowing for more declarative coding.

Since annotations were added in Java 5, I've never understood why they were not repeatable; it seems obvious that they should be. It's a shame that it has taken until Java 8 to address this limitation.

I make heavy use of annotations in Balsa to declare routes (a route handles a specific HTTP request for an application). Annotations give a rather nice way to declare this routing information, making routes simple and readable to declare and allowing developers to focus upon the actual functionality of the application.

Prior to Java 8, to make annotations repeatable you needed to define another annotation to contain them. The user would then need to use both annotations on whatever they were annotating.

For example, the API developer defines the following annotations:

public @interface RequirePermission
{
    String value();
}

public @interface RequirePermissions
{
    RequirePermission[] value();
}

To consume the API, we would then do:

@RequirePermissions({
    @RequirePermission("ui.access"),
    @RequirePermission("ui.read")
})
public void myHttpRoute()
{
}

With Java 8, the API developer only needs to annotate the singular annotation as repeatable:

@Repeatable(RequirePermissions.class)
public @interface RequirePermission
{
    String value();
}

This has the advantage for the API developer that it doesn't alter how they process the annotations. However, for the API consumer life is a little easier, as we can now do:

@RequirePermission("ui.access")
@RequirePermission("ui.read")
public void myHttpRoute()
{
}

That makes things a fair bit easier and doesn't have any backwards compatibility problems; quite a clever solution really.
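
On the processing side, Java 8 also adds getAnnotationsByType(), which returns the repeated annotations whether they were declared directly or via the container annotation. A minimal sketch, assuming both annotations are declared with @Retention(RetentionPolicy.RUNTIME) and that myHttpRoute lives in a hypothetical MyRoutes class:

// returns the annotations from both the old and new declaration styles
RequirePermission[] permissions = MyRoutes.class
        .getMethod("myHttpRoute")
        .getAnnotationsByType(RequirePermission.class);
for (RequirePermission permission : permissions)
{
    System.out.println(permission.value());
}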

Nashorn

Nashorn is a new Javascript engine for Java; it is fast and easy to work with.

It boasts performance comparable to that of Google's V8 and has the massive advantage of being able to make use of any Java APIs from Javascript, including threading. Again it makes use of InvokeDynamic for performance. It is usable via the ScriptEngine API as well as directly from the command line.

The quickest way to have a play with Nashorn is via jjs on the command line:

jjs> print("Hello World");
Hello World
jjs> exit();

It isn't that hard to execute a script from Java either:

// create the script engine
ScriptEngineManager factory = new ScriptEngineManager();
ScriptEngine script = factory.getEngineByName("nashorn");
// execute
script.eval("print(\"Hello World\");");

To pass variables into the ScriptEngine, we need to set up some bindings:

SimpleBindings bindings = new SimpleBindings();
bindings.put("message", "Hello World");
script.setBindings(bindings, ScriptContext.ENGINE_SCOPE);

Variables are contained by a ScriptEngine context and are not shared across different ScriptEngine instances. We can change the previous example to:

// create the script engine
ScriptEngineManager factory = new ScriptEngineManager();
ScriptEngine script = factory.getEngineByName("nashorn");
// bindings
SimpleBindings bindings = new SimpleBindings();
bindings.put("message", "Hello World");
script.setBindings(bindings, ScriptContext.ENGINE_SCOPE);
// execute
script.eval("print(message);");

As mentioned, Nashorn allows Javascript to inter-operate with Java: Javascript can invoke Java methods and Java can invoke Javascript functions. Nashorn also automatically maps functions to single method interfaces. For example, we can create a new thread to print Hello World twice a second:

jjs> new java.lang.Thread(function() { while (1) { print("Hello World"); java.lang.Thread.sleep(500); } }).start();

Children Of The Grave

This is a bit of a rehash of a post I originally wrote 18 months ago, before the revelations of Edward Snowden and the more recent DRIP débâcle.

We've seen time and time again, the power of the Internet in disseminating information, especially during chaotic times, the most recent example being in the Ukraine.

The Internet offers us a utopia where information can be freely shared, without restriction. Where people can communicate with each other. Where geography does not exist. It is by definition transnational. It offers all of us freedom.

By enabling anyone to communicate with anyone else, without prejudice and without interference, the Internet represents the single most powerful tool humanity has.

This power, these freedoms, seem even more significant given recent events.

The Internet can provide you and me directly with raw information, when it happens, where it happens. No media organisation, whatever their motives, can compete with that.

Recent events in the UK have reignited debate over the interception and retention of communications data.

My view remains unchanged, my thoughts have not been swayed. I remain firmly opposed to any capability for Government to snoop on its citizens.

To paraphrase a book I recently read:

This post is not a manifesto. There is no time for that. This is a warning.

The book in question: Cypherpunks - Freedom and the Future of the Internet. It was a book I found immense satisfaction in reading, and one I would recommend every Internet user read.

The Internet is humanity's best chance at a transnational utopia. Yet it is a paradox: it is also a transnational surveillance dystopia.

The Internet's offer of free communication also enables total surveillance, leading to totalitarian control by an elite.

It is incumbent upon every single person who uses the Internet to realise and understand their freedom and, most importantly, to defend it.

We must not enable a minority to snoop on our activities, to ultimately control and dominate us. Let us understand our freedom, let us embrace it, let us be defiant.

We must not let our politicians react disproportionately. Existing powers are too wide reaching and invasive; we must fight to get these reduced. But above all we need debate. We cannot let politicians collude and pass these draconian powers without any consultation or proper consideration.

The title of this post is a track by Black Sabbath. What does Heavy Metal have to do with politics, I hear you ask? It's merely my interpretation of the lyrics, but I feel it embodies my point. Even though it was composed in a different era (has that much really changed?), I feel it is still relevant.

"Revolution in their minds - the children start to march 
 Against the world in which they have to live

 Show the world that love is still alive you must be brave 
 Or you children of today are 
 Children of the Grave"

50 years on

This weekend, 27th October 2012, marks 50 years since the Cuban missile crisis.

Arguably the closest the world has come to the Mutually Assured Destruction of a nuclear war.

To my generation the thought of a nuclear war seems mad. It seems unimaginable, a relic of history. I never lived through the events my parents did. I never felt the fear that their generation did. While I'm interested in, and somewhat well informed on, the subject, I'm in many ways atypical.

Yet, while the world has moved on and progressed, nuclear weapons are still a reality. Britain still has Trident II D5 missiles, stationed on board Vanguard class SSBNs (submarines). Right now there are probably 16 missiles somewhere in the Atlantic, each with up to 14 warheads.

With tensions between the west and Iran growing, is the ignorance of my generation acceptable? I suspect not; my generation will have to tackle these issues head on in the near future.

Soon a decision over Trident will need to be made. Should we be renewing, extending or decommissioning our nuclear deterrent? This is a massively complex decision, a question to which I don't think there is an easy answer.

I would rule out renewing Trident. I don't think we should be creating any more nuclear weapons. But I honestly remain undecided as to whether we should decommission Trident.

It would be a bold move by Britain if we decided to decommission Trident. That might be a good thing.

Ultimately it's probably time we had serious international accords to completely disarm all states with nuclear weapons and finally lay to rest the cold war.

How the Iran situation is handled has a significant impact on this. I've read about proposals for an independent international uranium bank, which could provide fuel for peaceful nuclear projects.

Whatever happens, we must ensure that we have an intelligent, well informed discussion over the issues leading to a carefully thought out decision.

Shropgeek (R)evolution 2012

On Friday I attended Shropgeek (R)evolution (a web conference in Shrewsbury). It was an excellent way to spend the evening: a friendly and fun atmosphere, with beer and interesting conversation. There were a number of talks throughout the evening and plenty of time for networking / drinking.

On arrival, we were given some goodies: a name badge (with lanyard) and a Sharpie keyring, to write our names and Twitter handles (I'm @intrbiz) on our badges. This was followed by a trip to the bar for a pint of EPA.

Neil Kinnish got the event under way with an interesting talk on Shropshire Screen, a simple site to aggregate rural cinema listings across Shropshire. A neat site, which shows how easily WordPress can be abused to fit a wide array of needs.

The next hour allowed for some drinking and networking, a chance for me to catch up with a number of friends I don't get to see that often.

Next up was Jake Smith talking about the D&AD awards system they built. It was a fascinating talk, even if I'm still not sure who D&AD are and the videos dragged on a little. It sure looked an interesting project to have been involved in. Their attitude that the project is more important than the client is refreshing, and their approach of fully involving the client during the analysis and modelling stages of a project is something I agree with.

Neil Kinnish and Mike Kus followed on with a brave presentation on WorkFu and how/why it failed. It's not often that people stand up and talk about how they failed, making it an unusual topic for a conference. However, they made some very interesting points. One point which stuck was launching the minimum viable product, in order to minimise your losses in case it doesn't work.

The last talk of the evening was Paul Annett, talking about gov.uk.

While at Shropshire I had followed the gov.uk project closely. It was amusing listening to how Government has, to an extent, resisted gov.uk; it's good to finally see the government focusing on the user, for a change. Directgov had the glimmer of a good idea, just implemented utterly the wrong way. It was interesting hearing about how gov.uk is using their design principles to change the fundamental workings of government.

Following the talks, a number of people hung around to socialise into the night. I even got to see Paul Annett do some magic tricks.

Throughout the evening, one topic seemed to keep getting mentioned: iterate quickly and often. Make it easy to deploy your application. Listen to feedback. Code, test, deploy, listen and repeat.

Lastly, thanks to the Shropgeek team, sponsors and speakers for putting on a great evening. I look forward to next year.

Setting the CPUID of a XEN guest

After some reading around I've discovered it is possible to configure the CPU vendor and model information that a XEN guest sees.

Most Linux sysadmins will be familiar with cat /proc/cpuinfo to get information about the system's processors. This gives information about the CPU vendor, model and features. For example my desktop gives:

[cellis@cedesktop ~]$ cat /proc/cpuinfo 
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 67
model name      : AMD Athlon(tm) 64 X2 Dual Core Processor 5200+
- snip -

This information is actually exposed via the cpuid instruction, which takes a command in the EAX register and populates the EAX, EBX, ECX and EDX registers with the requested data.

If EAX is set to zero, the CPU vendor string is returned. This is a 12 byte ASCII string, which is stored in EBX, EDX, ECX, in that order (certainly logical; I suspect a hangover from the circuit complexity).

The CPU model string is a little more complex: it is a 48 byte ASCII string, obtained by executing the cpuid instruction 3 times, with EAX set to 0x80000002, 0x80000003 and 0x80000004.

XEN has the cpuid config option, which defines the values of EAX, EBX, ECX and EDX for specific values of EAX.

Let's take the following XEN config:

cpuid=['0:eax=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,ebx=01110010011101000110111001001001, 
ecx=00000000000000000000000000000000,edx=00000000011110100110100101100010']

This causes the XEN guest to see the CPU vendor string 'Intrbiz', by providing the cpuid register values for EAX = 0.

As such, my XEN VM now reports:

xenvm1:~ # cat /proc/cpuinfo
processor       : 0
vendor_id       : Intrbiz
cpu family      : 0
model           : 0
model name      : Intrbiz XEN virtual CPU
stepping        : 0

The hard part of setting this up is working out the values of the registers; it takes time to convert the text to binary and get the orderings correct.

Instead, you can generate the XEN config right here: simply enter your desired CPU vendor and model strings and the XEN config will be generated.
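
Under the hood the conversion is just byte packing. Here's a minimal Java sketch of the vendor string half (the class and method names are mine; EAX for leaf 0 is left as the placeholder from the config above, and the model string works the same way across leaves 0x80000002 to 0x80000004):

public class CpuidGen
{
    // pack a 4 byte slice of the vendor string into a 32 bit
    // little-endian register value, rendered as 32 binary digits
    static String reg(String vendor, int offset)
    {
        int value = 0;
        for (int i = 3; i >= 0; i--)
        {
            int c = (offset + i) < vendor.length() ? vendor.charAt(offset + i) : 0;
            value = (value << 8) | (c & 0xFF);
        }
        String bin = Integer.toBinaryString(value);
        return "00000000000000000000000000000000".substring(bin.length()) + bin;
    }

    public static void main(String[] args)
    {
        String vendor = "Intrbiz"; // up to 12 ASCII bytes, zero padded
        // the vendor string lives in EBX, EDX, ECX, in that order
        System.out.println("cpuid=['0:eax=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
            + ",ebx=" + reg(vendor, 0)
            + ",ecx=" + reg(vendor, 8)
            + ",edx=" + reg(vendor, 4) + "']");
    }
}

Running this with 'Intrbiz' reproduces the ebx, ecx and edx values in the config above.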

Leaping CPU

Like many people around the world, I've found a number of my computers suffering high CPU load since the Leap Second.

Specifically, I've found the following to be using well over 100% CPU:

  • Java
  • Ruby
  • MySQL
  • Firefox
  • Akonadi (Uses MySQL)

Researching the issue, it seems there is a bug in the Linux kernel affecting futexes around the Leap Second. This is a new issue, not to be confused with other issues which have previously been patched. Futexes are a form of userspace lock, used heavily by the likes of Java, etc. This flaw seems to be present in essentially every kernel since 2.6.22.

Note: This is a kernel bug, it is not a bug in Java or any other application.

A patch is already on the LKML: [PATCH] [RFC] Potential fix for leapsecond caused futex related load spikes

There is a workaround for the issue, which is to simply set the date on the server, using the following:

date `date +"%m%d%H%M%C%y.%S"`

or (if you prefer)

reboot

Setting the time certainly sorted my problems: it took Firefox from 163% CPU to 1%, with similar results for Ruby and Java. Note that just restarting the Java (or whatever) process will not solve the problem.

This is because setting the system time calls the kernel function clock_was_set(), ensuring the hrtimer subsystem is correct. Futexes often use the hrtimer subsystem in a loop; these sub-second waits were expiring immediately, causing the high CPU usage. More detail can be found in the LKML post linked above.