Scripted HTTP Checks

Bergamot Monitoring V2.0.0 (Yellow Sun) introduces a scripted HTTP check engine, which allows you to control HTTP checks via Javascript. So... what is so cool about this?

Well, it allows you to implement checks which call an HTTP-based API using nothing but configuration in Bergamot Monitoring. If an application, product or service has an HTTP API, you can implement a customised check really easily, without the need to deploy anything.

As an example of this, a number of RabbitMQ checks are provided in the default Bergamot Monitoring site configuration templates. These checks use a Javascript snippet within the command definition to define the logic of the check. The check makes a call to the RabbitMQ HTTP REST API, which returns a JSON response. This JSON is then parsed (just as you would in the browser) and the check logic applied.

Obviously this technique can be used to implement a whole raft of checks, especially with the growing number of things which provide an HTTP API.

Implementing A Check

So, let's look at how we implement a check. The following is the definition of a command to check the number of active connections to a RabbitMQ server.

    <command name="rabbitmq_active_connections" extends="http_script_check">
        <summary>RabbitMQ Active Connections</summary>
        <parameter name="host">#{host.address}</parameter>
        <parameter name="port">15672</parameter>
        <parameter name="username">monitor</parameter>
        <parameter name="password">monitor</parameter>
        <parameter description="Warning threshold" name="warning">20</parameter>
        <parameter description="Critical threshold" name="critical">50</parameter>
        <script>
        <![CDATA[
            /* Validate parameters */
            bergamot.require('host');
            bergamot.require('port');
            bergamot.require('username');
            bergamot.require('password');
            bergamot.require('warning');
            bergamot.require('critical');

            /* Call the RabbitMQ HTTP API */
            http.check()
            .connect(check.getParameter('host'))
            .port(check.getIntParameter('port'))
            .get('/api/overview')
            .basicAuth(check.getParameter('username'), check.getParameter('password'))
            .execute(
                function(r) {
                    if (r.status() == 200)
                    { 
                        var res = JSON.parse(r.content());
                        bergamot.publish(
                            bergamot.createResult().applyGreaterThanThreshold(
                                res.object_totals.connections,
                                check.getIntParameter('warning'),
                                check.getIntParameter('critical'),
                                'Active connections: ' + res.object_totals.connections
                            )
                        );
                        bergamot.publishReadings(
                            bergamot.createLongGaugeReading('connections', null, res.object_totals.connections, check.getLongParameter('warning'), check.getLongParameter('critical'), null, null)
                        );
                    }
                    else
                    {
                        bergamot.error('RabbitMQ API returned: ' + r.status());
                    }
                }, 
                function(e) { 
                    bergamot.error(e); 
                }
            );
        ]]>
        </script>
        <description>Check RabbitMQ active connections</description>
    </command>

No doubt the above block of XML configuration is somewhat bewildering at first glance, so let's break it down. For the purpose of this article, we will only look at the code defined in the script element.

First off, the script starts with some basic validation; the following lines simply require that a value is specified for each parameter.

bergamot.require('host');
bergamot.require('port');
bergamot.require('username');
bergamot.require('password');
bergamot.require('warning');
bergamot.require('critical');

Next, we construct the HTTP call to make; this uses a fluent-style interface to build the HTTP request.

http.check()
.connect(check.getParameter('host'))
.port(check.getIntParameter('port'))
.get('/api/overview')
.basicAuth(check.getParameter('username'), check.getParameter('password'))

When constructing the HTTP request, parameters are fetched using:

check.getParameter('host')
check.getIntParameter('port')

Once the HTTP request is defined, it is executed asynchronously. One of two functions will be called back when the request is complete: the first function defines the on-success callback, the second the on-error callback.

.execute(
    function(r) {
        if (r.status() == 200)
        { 
            var res = JSON.parse(r.content());
            bergamot.publish(
                bergamot.createResult().applyGreaterThanThreshold(
                    res.object_totals.connections,
                    check.getIntParameter('warning'),
                    check.getIntParameter('critical'),
                    'Active connections: ' + res.object_totals.connections
                )
            );
            bergamot.publishReadings(
                bergamot.createLongGaugeReading('connections', null, res.object_totals.connections, check.getLongParameter('warning'), check.getLongParameter('critical'), null, null)
            );
        }
        else
        {
            bergamot.error('RabbitMQ API returned: ' + r.status());
        }
    }, 
    function(e) { 
        bergamot.error(e); 
    }
);

In the event the HTTP call returns 200 (OK), we publish a result based on the data returned. First we need to parse the JSON response using the normal JSON.parse method; here r.content() returns the content of the response as a string. Once we've parsed the response, we apply a threshold decision based on the object_totals.connections property of the response. The warning and critical threshold parameters are used to decide the state of the check: if the value is greater than the critical threshold a critical result is published; if the value is greater than the warning threshold a warning result is published; otherwise an OK result is published.

var res = JSON.parse(r.content());
bergamot.publish(
    bergamot.createResult().applyGreaterThanThreshold(
        res.object_totals.connections,
        check.getIntParameter('warning'),
        check.getIntParameter('critical'),
        'Active connections: ' + res.object_totals.connections
    )
);

After the result has been published, a metric reading is published; this is used to build a graph of the active connections into RabbitMQ. The function bergamot.publishReadings publishes a set of readings, and a long gauge reading is created using bergamot.createLongGaugeReading. This takes a few arguments: the name, the unit of measure, the value, the warning threshold, the critical threshold, the minimum and the maximum. In this instance the reading name is connections, there is no unit of measure, the value is taken from the object_totals.connections property, the warning and critical thresholds are taken from the defined parameters, and min and max are null as they are not applicable in this use case. Note that all value arguments to create a long gauge must be of type long (or null). Note also that the reading name must be unique within a command definition: you can't publish two readings with the same name, and you cannot change the type of a reading.

bergamot.publishReadings(
    bergamot.createLongGaugeReading('connections', null, res.object_totals.connections, check.getLongParameter('warning'), check.getLongParameter('critical'), null, null)
);

In the event the HTTP API does not return a 200 (OK) response, an error result is published.

bergamot.error('RabbitMQ API returned: ' + r.status());

In the event we hit any other exception, for example not being able to connect to the host or an error in the Javascript, the on-error callback function is invoked. This callback simply publishes an error result, using the exception as the error message.

bergamot.error(e);

The great thing about this approach is that new checks can be defined using only configuration; nothing needs to be deployed to worker servers or to the hosts being checked.

Developing A Monitoring System

I currently spend most of my spare time developing Bergamot Monitoring. Developing a monitoring system throws up some interesting challenges, and I want to discuss some of the things that, as a web developer, I've realised during the course of this project.

Caching

Caching is a technique often used by web applications to improve performance. It is often applied at multiple levels within a web application: data layer caching, view caching, etc. For most web applications, the caching of a rendered page provides a massive performance gain. However, for a monitoring system, caching is next to useless. The key issue with monitoring systems is that everything changes, and changes often.

In the worst case (with defaults) a check could be executed every minute by Bergamot. Oh, and users need to know the second that something changes; after all, that is the point of a monitoring system. This means it is guaranteed that a view will change within one minute, so there is little point in caching it.

This problem is compounded by group views, where the results of multiple checks are displayed. Even on a modest sized system, these views could change every 10 seconds. On larger deployments, the state of a group can change multiple times a second.

The core issue with monitoring is that stuff is changing all the time.

Coherency

Falling out of the caching problem and the constant change problem is the fact that users need to see consistent results. To scale, the resources of multiple servers are needed, but unlike simpler web applications, coherency needs to be managed across these machines.

When the result of a check is processed, all data caches need to be updated and invalidated coherently across the cluster. This is further complicated by the result transition logic being transactional.

Recursive SQL

Groups form a hierarchical tree, where child groups exist under a parent; fairly normal stuff really. However, the state of the child groups needs to be encompassed by the parent group. In other words, to compute the state of a group at any moment in time, we need to compute the state of the whole tree.

If you naively attempt to solve that problem in your application, be prepared for the performance and scalability hit. The latency of round-tripping to the database (even on localhost) completely kills performance, due to the sheer volume of queries that need to be executed (even for a trivially small system).

Enter the joys of recursive SQL, where we can compute the state of the entire graph with a single query (albeit about 30 lines of SQL). SQL is an often underused powerhouse of data querying and manipulation; fuck that ORM and spend the time learning proper SQL, you'll thank yourself that you did.
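As a rough illustration of the idea (this is not Bergamot's actual schema; the table and column names are made up and :root_group_id is just a placeholder parameter), a recursive CTE lets us walk the whole group tree and roll up the worst state in a single round trip:

-- Hypothetical schema: groups(id, parent_id), check_state(group_id, state),
-- where a higher state value means a worse state (OK < WARNING < CRITICAL).
WITH RECURSIVE group_tree AS (
    SELECT g.id
      FROM groups g
     WHERE g.id = :root_group_id
    UNION ALL
    SELECT c.id
      FROM groups c
      JOIN group_tree p ON c.parent_id = p.id
)
SELECT max(cs.state) AS group_state
  FROM group_tree gt
  JOIN check_state cs ON cs.group_id = gt.id;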

Message Queues

Message queues are awesome. Bergamot makes use of RabbitMQ to pass messages between multiple nodes; this is how Bergamot distributes work across multiple servers.

We push a lot of routing logic down into RabbitMQ, using features such as exchange-to-exchange bindings, alternate exchanges, per-message time-to-live and dead lettering. This allows Bergamot to build a really flexible routing model without having to deal with any of the mechanics of it.
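To make that concrete, here is a minimal sketch (not Bergamot's actual topology; the exchange and queue names here are invented) of declaring that kind of routing with the standard RabbitMQ Java client:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.util.HashMap;
import java.util.Map;

public class RoutingSketch
{
    public static void main(String[] args) throws Exception
    {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        // a catch-all exchange, receiving anything the main exchange cannot route
        channel.exchangeDeclare("checks.unrouted", "fanout", true);
        // the main exchange, with the alternate exchange wired in via its arguments
        Map<String, Object> exchangeArgs = new HashMap<>();
        exchangeArgs.put("alternate-exchange", "checks.unrouted");
        channel.exchangeDeclare("checks", "topic", true, false, exchangeArgs);
        // a per-site exchange fed via an exchange-to-exchange binding
        channel.exchangeDeclare("checks.site1", "topic", true);
        channel.exchangeBind("checks.site1", "checks", "site1.#");
        // a worker queue with per-message TTL and dead lettering back to the catch-all
        Map<String, Object> queueArgs = new HashMap<>();
        queueArgs.put("x-message-ttl", 30000);
        queueArgs.put("x-dead-letter-exchange", "checks.unrouted");
        channel.queueDeclare("checks.site1.worker", true, false, false, queueArgs);
        channel.queueBind("checks.site1.worker", "checks.site1", "site1.#");
        channel.close();
        connection.close();
    }
}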

A word of advice: get a piece of paper and sketch out your routing before you code it up.

Websockets

Websockets are seriously cool; they allow Bergamot to update checks in the UI in real time. Websockets implement true push messaging for the web and the technology should not be overlooked, as it is super easy to use from the browser.

The server side, however, is a little more complex. Websockets rely upon a long-running TCP / HTTP connection, so you need to ensure that the backend server is non-blocking / event based (and likewise for all servers in the connection path).

Programming for non-blocking / event-based servers is very different from programming for threaded servers. Bergamot makes use of Netty to handle websockets; Netty is an event-based networking library for Java with support for websockets. Bergamot uses Netty to bridge between websockets and message queues: the change in state of a check is published to a message queue, and Netty is used to listen to these messages and transmit them to browsers.
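A minimal sketch of that bridge might look like the following (this is not Bergamot's actual code; the class and method names are invented, and it assumes the websocket handshake has already been handled earlier in the Netty pipeline):

import io.netty.channel.Channel;
import io.netty.channel.group.ChannelGroup;
import io.netty.channel.group.DefaultChannelGroup;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;
import io.netty.util.concurrent.GlobalEventExecutor;

public class StateUpdateBridge
{
    // all currently connected websocket channels; closed channels are removed automatically
    private final ChannelGroup clients = new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);

    // called when a browser completes the websocket handshake
    public void clientConnected(Channel channel)
    {
        this.clients.add(channel);
    }

    // called by the message queue consumer whenever a check changes state
    public void onStateChange(String stateJson)
    {
        // writeAndFlush is asynchronous, so the consumer thread never blocks on slow browsers
        this.clients.writeAndFlush(new TextWebSocketFrame(stateJson));
    }
}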

This allows for less than 200ms of latency between telling Bergamot to execute a check in the UI and Bergamot executing the check and publishing the result to the browser. I deliberately added a slow animation effect in the UI so that users could see that a check had actually updated!

A Simple Shorewall Firewall

I've built Linux / IPTables based routers / firewalls many times over the years. I figured it was probably time I documented building a simple SOHO solution.

I use Shorewall because it makes dealing with IPTables simple. As much as I like IPTables, its rule syntax is f**king awful. Shorewall offers a layer of abstraction on top of IPTables and makes common use cases trivial, and it offers more features than other solutions such as ufw.

For the sake of this example, our Linux box has the following network interfaces:

  • ppp0 - the intertubes
  • eth0 - our private internal network
  • eth1 - our public guest network
  • tun0 - our OpenVPN server

Core shorewall config

In the main shorewall configuration file (/etc/shorewall/shorewall.conf), ensure the following properties are set as follows:

STARTUP_ENABLED=Yes
IP_FORWARDING=On

Setting up zones

Shorewall's world is all about zones; a zone is merely a network that we are going to firewall between. In this example we have the following zones:

  • net - the internet
  • loc - our local network
  • gst - our guest network
  • vpn - our VPN network

The zones configuration file (/etc/shorewall/zones) will look like:

#
# Shorewall version 4 - Zones File
#
# For information about this file, type "man shorewall-zones"
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-zones.html
#
###############################################################################
#ZONE   TYPE            OPTIONS         IN                      OUT
#                                       OPTIONS                 OPTIONS
fw       firewall
net      ipv4
loc      ipv4
gst      ipv4
vpn      ipv4

Note that the fw zone means this local machine.

Now that we have zones defined, we need to assign our interfaces to our zones. The file (/etc/shorewall/interfaces) configures these assignments and will look like:

#
# Shorewall version 4 - Interfaces File
#
# For information about entries in this file, type "man shorewall-interfaces"
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-interfaces.html
#
###############################################################################
#ZONE   INTERFACE       BROADCAST       OPTIONS
net     ppp0            detect
loc     eth0            detect
gst     eth1            detect
vpn     tun0            detect

Setting up policies

Policies specify the default rule action for traffic between zones. In our example, by default we will:

  • permit traffic from the local network to the internet
  • permit traffic from the guest network to the internet
  • permit traffic from the vpn network to the local network
  • permit traffic from the firewall itself to any zone
  • drop any other traffic from the internet

The file (/etc/shorewall/policy) will look like:

#
# Shorewall version 4 - Policy File
#
# For information about entries in this file, type "man shorewall-policy"
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-policy.html
#
###############################################################################
#SOURCE DEST    POLICY          LOG     LIMIT:          CONNLIMIT:
#                               LEVEL   BURST           MASK
loc     net     ACCEPT
gst     net     ACCEPT
vpn     loc     ACCEPT
fw      all     ACCEPT
net     all     DROP
all     all     REJECT

Note that all is a pseudo-zone meaning any zone; as such, the last line means that by default traffic between zones will be rejected.

Setting up rules

Rules are exceptions to policy, defining specific traffic which will be allowed through. In this example, we are going to permit ICMP ping and SSH traffic from any network to the local machine. We will also forward ports 80 and 443 to a specific server in the local network.

The rules configuration file (/etc/shorewall/rules) will look like:

#
# Shorewall version 4 - Rules File
#
# For information on the settings in this file, type "man shorewall-rules"
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-rules.html
#
######################################################################################################################################################################################
#ACTION         SOURCE                  DEST                    PROTO   DEST          SOURCE            ORIGINAL        RATE       USER/    MARK    CONNLIMIT       TIME         HEADERS         SWITCH
#                                                                       PORT          PORT(S)           DEST            LIMIT      GROUP
#SECTION ALL
#SECTION ESTABLISHED
#SECTION RELATED
SECTION NEW
ACCEPT          all                     fw                      tcp     22            -                 -
ACCEPT          all                     fw                      icmp    0,8           -                 -
DNAT:info       net                     loc:172.30.14.187       tcp     80,443        -                 -

Setting up NAT

Due to the joys of IPv4, we need to masquerade local traffic to access the internet. The shorewall masq configuration file (/etc/shorewall/masq) will look like:

#
# Shorewall version 4 - Masq file
#
# For information about entries in this file, type "man shorewall-masq"
#
# The manpage is also online at
# http://www.shorewall.net/manpages/shorewall-masq.html
#
#############################################################################################
#INTERFACE:DEST         SOURCE          ADDRESS         PROTO   PORT(S) IPSEC   MARK    USER/
#                                                                                       GROUP
ppp0                    172.30.0.0/16

Note that the specified source network matches traffic from both our local and guest networks.

That's all folks

It really is that straightforward: simply run shorewall restart to make the new ruleset active.
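If you want to validate the configuration for errors before applying it, Shorewall can check the ruleset first:

shorewall check
shorewall restart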

HOT PostgreSQL

A design goal of my monitoring system, Bergamot Monitoring, was to ensure that the monitoring state was persisted. As a long-time PostgreSQL user, I found PostgreSQL the obvious choice, and it hasn't been a bad decision.

An interesting aspect of monitoring systems is that they are constantly busy. Even a small scale deployment is likely to be executing one check every second. This translates to around two database updates a second.

At the outset I was concerned about table bloat. A facet of the MVCC concurrency system used in PostgreSQL (and most databases) is that updating a row is essentially a delete and insert of the row. As such, for tables which are constantly updated, a large number of dead tuples will build up. In PostgreSQL cleaning up these dead tuples is the job of vacuum, which happens automatically via the autovacuum processes.

PostgreSQL has an update-specific optimisation called Heap-Only Tuples (HOT). Normally an update would leave a dead tuple in both the index and the table, which would need to be cleaned up by vacuum. However, when an update only touches columns which are not part of an index, a HOT update can be used. A HOT update attempts to place the new tuple copy within the same page and points the old tuple to the new tuple. This means that the index does not need to be updated, reducing the clean-up which needs to be performed by vacuum.

Looking at the statistics from my demo system, the check_state and check_stats tables, which get updated every time a result is processed, are almost exclusively using HOT updates. My statistics show that 99.4% of updates to these tables are HOT updates.
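You can see these figures for yourself: the per-table update counters (and the autovacuum timestamps discussed below) are exposed in the pg_stat_user_tables view, so a query along these lines shows the HOT update ratio:

SELECT relname,
       n_tup_upd,
       n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_update_pct,
       last_autovacuum
  FROM pg_stat_user_tables
 WHERE relname IN ('check_state', 'check_stats');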

Again looking at the statistics, autovacuum is being invoked roughly every two to three minutes. I suppose this is unsurprising considering that every row in the check_state and check_stats tables is being updated every five minutes.

Brew A Beer Day

Yesterday, rather than the usual daily grind, I went over to Sadler's Ales in Stourbridge for a Brew A Beer Day (it was a Christmas present). It turned out to be rather more hands-on than I expected and was immense fun. As someone who enjoys beer, it was fascinating to get involved with the process and to chat with a brewer.

Time to mash

The day kicked off by meeting up with two other people who were on the same event: Martin and Peter. We didn't know each other beforehand and had all been given the experience as a present.

All beer starts off with malt, malted barley to be precise.

Our first order of the day was to mash 300kg of malted barley with around four barrels (a barrel is 147l) of water at 71c in the mash tun. The mash is essentially a porridge, which extracts the sugar from the barley and also converts the starch in the barley to sugar. Malted barley has started to germinate, so the enzymes needed to convert starch to sugar are present.

Barley can be roasted (a bit like coffee) in different grades: caramalt, chocolate malt, etc. This roasting gives the malt a dark colour and a caramel-to-bitter taste. A pale ale is made using just plain malt; other malts are added to create amber ales, porters and stouts, so a stout would have a proportion of chocolate malt. We were brewing an amber ale which had 25kg of caramalt in it. There is also a small proportion of wheat added, which helps in forming the head of the beer.

This involved loading 300kg of malt into the mash tun, in 25kg sacks. Each sack had to be lugged up a ladder and poured into a hopper. The hopper helps to evenly mix the malt with the hot water.

The mash is left for 1 hour to steep, giving us time for breakfast and a beer.

Tapping the wort

After breakfast it was time to extract the wort (the liquid from the mash) and transfer it to the kettle, where it is boiled up with hops added for bitterness and aroma. The mash is also sparged during this process: a sparge arm rotates around the mash tun drizzling hot water over the mash, which extracts further sugar from the malt.

At this stage the wort is surprisingly sweet, an almost syrup like consistency, with malty overtones.

Whilst the pump was busy transferring the wort into the kettle, it was time to prepare the hops. After malt, hops are the other key ingredient in beer, providing all the bittering and aroma. Different varieties of hops, and hops from different climes, vary in their level of bitterness and in aroma.

The hops came vacuum packed, so we needed to flake the hops whilst weighing them out. It was surprising how sticky our hands were after this, covered in a kind of green resin; it smelt rather good though!

Emptying the mash tun

With all the wort extracted from the mash, it was time to empty the mash tun. That involved shovelling out 300kg of wet barley into sacks. Not particularly glamorous and a bit of hard work, but fun all the same.

With the mash tun empty and the wort busy getting up to boiling point, it was time to sit down and relax over lunch, and some more beer.

Adding the hops

Lunch over, it was time to add the hops to the boiling wort. Hops are added at various times during the boil: hops added early on (and boiled for a while) add bitterness to the beer, while hops added late on add aroma. We were brewing a golden beer, which had about 3kg of bittering hops added early on and 5kg of aroma hops added late on.

The brew house was filled with a wonderful aroma when the hops were added.

Fermenting

With the wort boiled and infused with the hops, it was time to transfer it to the fermentation vessel. Over three days in the fermentation vessel the yeast will turn the sugar into ethanol!

Having just been boiled, the wort is rather hot, far too hot to add the yeast to directly. The wort is pumped through a heat exchanger en route to the fermentation vessel, with cold water pumped through the other channel of the heat exchanger. This results in quickly cooled wort at 20c and hot water at 71c ready for the following day's brew.

The final and somewhat critical step is to add the yeast.

The last and by far the least enviable task is to clean out the kettle; thankfully we were spared that job. To be honest I'm not sure I'd be able to get in through the opening to clean it out.

Pulling my first pint

At this point, I can safely say, I've (helped) brew roughly 3,000 pints of beer. With the yeast bubbling away in the fermentation vessel, we retired to the bar for a farewell pint and I got the chance to pull my first pint (well, a half actually).

All in all, I had a wonderful day, it was fun all around and I learnt a fair bit too. The people at Sadler's Ales were really friendly and made it an excellent day. I'd recommend it to anyone who likes beer, or as a present to anyone who knows someone who likes beer.

openSUSE on the Odroid U3

A while back I got an Odroid U3, an ARM-based single board computer, much like the Raspberry Pi, only a lot more powerful: the Odroid U3 has a quad-core 1.7GHz ARM CPU and 2GB of RAM.

However, only Xubuntu images are provided officially, and being an openSUSE user I wanted to run openSUSE on the Odroid U3. I managed to hack together an image a few months back, but sadly didn't really document how I did it, which was rather silly of me.

The basic approach is to hack together an image from the Xubuntu Odroid U3 images and the openSUSE JeOS root file system images. This allows the use of the Odroid U3 customised kernels with an openSUSE based user space. We can then use the Odroid U3 kernel update script once our image boots to get the latest kernel images.

To build our image, we are going to need a machine with a few gigabytes of free space to assemble and then copy the image over. You will also need a USB micro-SD card reader to be able to copy the image to the card.

All steps should be executed as root!

First we need to download some stuff that we need to build our franken-image:

buildhost:~ # wget http://dn.odroid.com/4412/Linux/ubuntu-u2-u3/xubuntu-13.10-desktop-armhf_odroidu_20140211.img.xz
buildhost:~ # xz -d xubuntu-13.10-desktop-armhf_odroidu_20140211.img.xz
buildhost:~ # wget http://download.opensuse.org/ports/armv7hl/distribution/13.1/appliances/openSUSE-13.1-ARM-JeOS.armv7-rootfs.armv7l-1.12.1-Build33.1.tbz
buildhost:~ # wget http://builder.mdrjr.net/tools/boot.scr_opensuse.tar
buildhost:~ # wget http://builder.mdrjr.net/tools/kernel-update.sh
buildhost:~ # wget http://builder.mdrjr.net/kernel-3.8/00-LATEST/odroidu2.tar.xz
buildhost:~ # wget http://builder.mdrjr.net/tools/firmware.tar.xz
buildhost:~ # xz -d odroidu2.tar.xz
buildhost:~ # xz -d firmware.tar.xz

Now take a copy of the xubuntu image; this will form the basis for our custom image. By starting with a working image, we have the partitioning and bootloader already in place. As I understand it, the first 1MiB of the image is taken up by the phase one bootloader and the position of the partitions is important. The phase one bootloader will read boot.scr from the vfat partition, which then loads the zImage (compressed kernel).

buildhost:~ # cp xubuntu-13.10-desktop-armhf_odroidu_20140211.img custom_opensuse.img

Next we need to ensure that the loop device module is loaded.

buildhost:~ # modprobe loop

Now we can mount our image.

buildhost:~ # losetup /dev/loop0 custom_opensuse.img
buildhost:~ # kpartx -a /dev/loop0
buildhost:~ # mkdir tmp_boot
buildhost:~ # mkdir tmp_root
buildhost:~ # mount /dev/mapper/loop0p1 tmp_boot
buildhost:~ # mount /dev/mapper/loop0p2 tmp_root

Now empty out the entire root file system; we are going to replace it with the openSUSE root file system and then patch in the latest kernels.

buildhost:~ # cd tmp_root
buildhost:~/tmp_root # rm -rf *

Now extract the openSUSE root file system into our image.

buildhost:~/tmp_root # tar -xjf ~/openSUSE-13.1-ARM-JeOS.armv7-rootfs.armv7l-1.12.1-Build33.1.tbz

Now we can patch in the Odroid kernel.

buildhost:~/tmp_root # cd boot
buildhost:~/tmp_root/boot # cp ../../boot.scr_opensuse.tar ./
buildhost:~/tmp_root/boot # tar -xvf boot.scr_opensuse.tar
buildhost:~/tmp_root/boot # rm boot.scr_opensuse.tar
buildhost:~/tmp_root/boot # rm -rvf x
buildhost:~/tmp_root/boot # mv x2u2/* ./
buildhost:~/tmp_root/boot # rm -rvf x2u2
buildhost:~/tmp_root/boot # cp boot-hdmi-720p60hz.scr boot.scr
buildhost:~/tmp_root/boot # cd ..
buildhost:~/tmp_root # cd lib/modules
buildhost:~/tmp_root/lib/modules # rm -rf *
buildhost:~/tmp_root/lib/modules # cd ../..
buildhost:~/tmp_root # cp ../odroidu2.tar ./
buildhost:~/tmp_root # tar -xvf odroidu2.tar
buildhost:~/tmp_root # rm odroidu2.tar
buildhost:~/tmp_root # cd lib/firmware
buildhost:~/tmp_root/lib/firmware # rm -rf *
buildhost:~/tmp_root/lib/firmware # cp ../../../firmware.tar ./
buildhost:~/tmp_root/lib/firmware # tar -xvf firmware.tar
buildhost:~/tmp_root/lib/firmware # rm firmware.tar
buildhost:~/tmp_root/lib/firmware # cd ../../..
buildhost:~ # cd tmp_boot
buildhost:~/tmp_boot # rm -rvf *
buildhost:~/tmp_boot # cp ../tmp_root/boot/* ./

Now we need to set up fstab; edit ~/tmp_root/etc/fstab to have the following contents:

devpts  /dev/pts          devpts  mode=0620,gid=5 0 0
proc    /proc             proc    defaults        0 0
sysfs   /sys              sysfs   noauto          0 0
debugfs /sys/kernel/debug debugfs noauto          0 0
usbfs   /proc/bus/usb     usbfs   noauto          0 0
tmpfs   /run              tmpfs   noauto          0 0
UUID=e139ce78-9841-40fe-8823-96a304a09859 / ext4 defaults 0 0
/dev/mmcblk0p1 /boot vfat defaults 0 0

Note: use blkid to check that the UUID of the root fs (/dev/mapper/loop0p2) is e139ce78-9841-40fe-8823-96a304a09859.
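For example (if your UUID differs, update the fstab entry above to match whatever blkid reports):

buildhost:~ # blkid /dev/mapper/loop0p2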

Now let's make some changes to the systemd journal config; edit ~/tmp_root/etc/systemd/journald.conf and change the following variables:

Storage=volatile
ForwardToConsole=yes
TTYPath=/dev/tty12

This allows us to see the journal on TTY 12 (CTRL-ALT-F12), which is handy for debugging; it also doesn't store the journal on disk (as SD cards can be horrifically slow).

Now we need to make some changes to the systemd config:

buildhost:~/tmp_root # cd etc/systemd/system
buildhost:~/tmp_root/etc/systemd/system # cd default.target.wants
buildhost:~/tmp_root/etc/systemd/system/default.target.wants # rm -rvf *
buildhost:~/tmp_root/etc/systemd/system/default.target.wants # cd ..
buildhost:~/tmp_root/etc/systemd/system # cd multi-user.target.wants
buildhost:~/tmp_root/etc/systemd/system/multi-user.target.wants # rm -f wpa_supplicant.service
buildhost:~/tmp_root/etc/systemd/system/multi-user.target.wants # rm -f remote-fs.target
buildhost:~/tmp_root/etc/systemd/system/multi-user.target.wants # cd ..
buildhost:~/tmp_root/etc/systemd/system # ln -sf /usr/lib/systemd/system/multi-user.target default.target

It's handy to copy in the kernel update script:

buildhost:~/tmp_root/root # cp ../../kernel-update.sh ./
buildhost:~/tmp_root/root # chmod +x kernel-update.sh

Finally, we can unmount and flash the image:

buildhost:~/ # sync
buildhost:~/ # umount tmp_root
buildhost:~/ # umount tmp_boot
buildhost:~ # kpartx -d /dev/loop0
buildhost:~ # losetup -d /dev/loop0
buildhost:~/ # dd if=./custom_opensuse.img of=/dev/sdb bs=32M

Note: make sure /dev/sdb is the SD card you want to write to, check dmesg if need be.

Note: you're liable to get odd errors from the SD card if you fail to set bs; a 32MiB block size seemed the fastest for me.

Now you should be able to boot your Odroid U3 into openSUSE 13.1 (console). Once booted you can use zypper to update any RPMs or to install a GUI. The root password is linux. Note that SSH is enabled by default.

It is worth noting that the I/O performance of cheap SD cards is pretty terrible (or at least this is true for the SD card I have), so remember to be patient.

If you have any issues I can probably give you a hand in the #suse IRC channel, or tweet me @intrbiz.

TL;DR

Download my image: odroid_u3_opensuse_131.img.xz. The image is around 4GiB uncompressed and the SHA256 hash is 96a236f7779d08f1cba25a91ef6375f0b000eafb94e2018ec8a9ace67e995272.

Then extract and flash to your SD card:

buildhost:~/ # xz -d odroid_u3_opensuse_131.img.xz
buildhost:~/ # dd if=./odroid_u3_opensuse_131.img of=/dev/sdb bs=32M

Note: make sure /dev/sdb is the SD card you want to write to, check dmesg if need be.

Plug in, power up and enjoy openSUSE on your Odroid U3. The root password is linux, note that SSH is enabled by default.

My Blog, Reborn

This blog has been broken for a while and I've finally gotten around to fixing it. However, rather than just reviving the old blog, I thought it was time for a change. Whilst it may look the same, under the hood it's a total rewrite.

I didn't dislike the previous Wordpress incarnation, although there were some nasty hacks in it. But given I've developed Balsa, a fast and lightweight Java web application framework, I figured I should use it for my own stuff.

So, this blog is now being served from a Balsa application; the content is written in Markdown and stored in Git. It only took a day to knock up the application, as Balsa has good support for rendering Markdown content.

Creating a post is now as simple as:

  • Firing up Kate (a rather lovely text editor)
  • Attempting to extract my thoughts in a coherent manner (the hardest part for me)
  • Finally, git commit; git push origin

It's refreshingly simple to add a post now: just write, commit and push.

I've even put the code behind my blog on GitHub for people who are really interested.

Java 8

I've been using Java 8 since it was released earlier this year and have found some of the new features game changing, to the extent that I've now moved most of my projects to make use of new Java 8 features. Support for Java 8 in Eclipse Luna is good and I've not run into any major issues using Java 8.

Lambda Expressions

The single biggest feature in Java 8 is support for Lambda expressions, which further enable more functional programming styles in Java. Java has always had closures via its anonymous classes functionality, and these have often been used to implement callbacks.

Lambda expressions are extremely useful when working with collections. As a simple example, filtering a List prior to Java 8 would require something along the lines of:

// List<String> input;
List<String> filtered = new LinkedList<String>();
for (String e : input)
{
    if ("filter".equals(e))
        filtered.add(e);
}

With Java 8 we can transform this into:

// List<String> input;
List<String> filtered = input.stream()
                        .filter((e) -> { return "filter".equals(e); })
                        .collect(Collectors.toList());

Certainly a major improvement in semantics and readability. While using Lambda expressions could be slightly slower, Java 8 has taken care to implement them as efficiently as possible, by making use of InvokeDynamic and compiling each Lambda expression to a synthetic method.

Having been using Java 8 for the last few months, I can honestly say that Lambda expressions have changed how I code. The addition of Lambda expressions has made as much of an impact as adding generics in Java 5 did.

Default Methods

Default methods allow concrete method implementations to be added to interfaces.

Prior to Java 8, methods of an interface could only be abstract. Interfaces only defined how objects should be interacted with; they were Java's solution to multiple inheritance while attempting to avoid some of the issues with it.

I've always liked the simplicity of the Java object model, however at times it was a straitjacket for certain use cases.

Default methods seem like a good compromise between flexibility and simplicity.

I've found them useful for avoiding having to copy and paste trivial code.

For example:

public interface Parameterised
{
    List<Parameter> getParameters();

    default Parameter getParameter(String name)
    {
        return this.getParameters().stream()
                .filter((p) -> {return name.equals(p.getName());})
                .findFirst()
                .get();
    }
}

Repeatable Annotations

I really like annotations in Java; they allow metadata to be added to code elements. This is really handy for frameworks, which can then use this information to customise how objects are interacted with, allowing for more declarative coding.

Since annotations were added in Java 5, I've never understood why they were not repeatable; it seems obvious that they should be. It's a shame that it has taken until Java 8 to address this limitation.

I make heavy use of annotations in Balsa to declare routes (a route handles a specific HTTP request for an application). Annotations give a rather nice way to declare this routing information, making routes simple and readable to declare and allowing developers to focus upon the actual functionality of the application.

Prior to Java 8, to make annotations repeatable you needed to define another annotation to contain them. The user would then need to use both annotations on whatever they were annotating.

For example, the API developer defines the following annotations:

public @interface RequirePermission
{
    String value();
}

public @interface RequirePermissions
{
    RequirePermission[] value();
}

To consume the API, we would then do:

@RequirePermissions({
    @RequirePermission("ui.access"),
    @RequirePermission("ui.read")
})
public void myHttpRoute()
{
}

With Java 8, the API developer only needs to mark the singular annotation as repeatable:

@Repeatable(RequirePermissions.class)
public @interface RequirePermission
{
    String value();
}

This has the advantage for the API developer that it doesn't alter how they process the annotations. However for the API consumer life is a little easier, as we can now do:

@RequirePermission("ui.access")
@RequirePermission("ui.read")
public void myHttpRoute()
{
}

That makes things a fair bit easier and doesn't have any backwards compatibility problems; quite a clever solution really.
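For completeness, here is a small sketch of reading the permissions back via reflection (assuming both annotations are declared with @Retention(RetentionPolicy.RUNTIME), which the snippets above omit); Java 8's getAnnotationsByType returns the repeated annotations whether they were written singly or wrapped in the container:

import java.lang.reflect.Method;

public class PermissionScanner
{
    // prints the permissions declared on a route method, whether they were written as
    // repeated @RequirePermission annotations or wrapped in a @RequirePermissions container
    public static void printPermissions(Method route)
    {
        for (RequirePermission permission : route.getAnnotationsByType(RequirePermission.class))
        {
            System.out.println(route.getName() + " requires " + permission.value());
        }
    }
}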

Nashorn

Nashorn is a new Javascript engine for Java; it is fast and easy to work with.

It boasts performance comparable to that of Google's V8 and has the massive advantage of being able to make use of any Java APIs from Javascript, including threading. Again it makes use of InvokeDynamic for performance. It is usable via the ScriptEngine API as well as directly from the command line.

The quickest way to have a play with Nashorn is via jjs on the command line:

jjs> print("Hello World");
Hello World
jjs> exit();

It isn't that hard to execute a script from Java either:

// create the script engine
ScriptEngineManager factory = new ScriptEngineManager();
ScriptEngine script = factory.getEngineByName("nashorn");
// execute
script.eval("print(\"Hello World\");");

To pass variables into the ScriptEngine, we need to set up some bindings:

SimpleBindings bindings = new SimpleBindings();
bindings.put("message", "Hello World");
script.setBindings(bindings, ScriptContext.ENGINE_SCOPE);

Variables are contained by a ScriptEngine context and are not shared across different ScriptEngine instances. We can change the previous example to:

// create the script engine
ScriptEngineManager factory = new ScriptEngineManager();
ScriptEngine script = factory.getEngineByName("nashorn");
// bindings
SimpleBindings bindings = new SimpleBindings();
bindings.put("message", "Hello World");
script.setBindings(bindings, ScriptContext.ENGINE_SCOPE);
// execute
script.eval("print(message);");

As mentioned, Nashorn allows Javascript to interoperate with Java: Javascript can invoke Java methods and Java can invoke Javascript functions. Nashorn also automatically maps functions to single-method interfaces. For example, we can create a new thread to print Hello World twice a second:

jjs> new java.lang.Thread(function() { while (1) { print("Hello World"); java.lang.Thread.sleep(500); } }).start();

About SNMP-IB

SNMP-IB is a minimalist, non-blocking, asynchronous SNMP V1, V2c and V3 client implementation. It implements enough to be able to query information from devices and to receive traps from them.

I started writing SNMP-IB as a way to understand SNMP better; I thought: it calls itself simple, it can't be that hard. To an extent, getting version 2c implemented was simple, and I had a working implementation after an evening's work. Version 3 took a little longer, mainly getting my head around the batshit-crazy design-by-committee which is version 3. I'll post a bit more on this some time.

I wanted the library to be clean and simple to use. It makes use of Java NIO at the network level and is non-blocking, asynchronous and callback-based. One instance (and thread) is capable of efficiently communicating with many devices.

What does it support

SNMP-IB currently supports Get, GetNext, GetBulk and Set requests for V1, V2c and V3. It also supports receiving traps for V1, V2c and V3.

Only the user security model of V3 is supported. MD5 and SHA1 are supported for authentication and DES (56bit) and AES (128bit) are supported for privacy.

The core code is fairly stable, however its real-world exposure to devices is somewhat limited, having mainly been tested against 3COM switches, Aerohive access points and Cisco switches; basically whatever devices I have or have access to.

Using SNMP-IB

A key design goal was creating a simple, clean API which is easy to use. The following will fetch the system description and uptime from two devices:

// Create the transport which will be used to send our SNMP messages
SNMPTransport transport = SNMPTransport.open();

// A context represents an Agent we are going to contact, or which is going to contact us
SNMPV2Context lcAgent  = transport.openV2Context("127.0.0.1").setCommunity("public");
SNMPV2Context swAgent  = transport.openV2Context("172.30.12.1").setCommunity("public");

// Use the context to send messages
// The callback will be executed when a response to a request is received
lcAgent.get(new OnResponse.LoggingAdapter(), new OnError.LoggingAdapter(), 
            "1.3.6.1.2.1.1.1.0", "1.3.6.1.2.1.1.3.0");
swAgent.get(new OnResponse.LoggingAdapter(), new OnError.LoggingAdapter(), 
            "1.3.6.1.2.1.1.1.0", "1.3.6.1.2.1.1.3.0");

// Run our transport to send and receive messages
transport.run();

The SNMPContext is the key abstraction: use it to send requests to a device and receive a callback when the response has been received. The callback classes are designed to be usable from Java 8, without taking a dependency on Java 8.

Where can I get it

You can find the code on GitHub; it is licensed under the LGPL V3.

Children Of The Grave

This is a bit of a rehash of a post I originally wrote 18 months ago, before the revelations of Edward Snowden and the more recent DRIP débâcle.

We've seen time and time again the power of the Internet in disseminating information, especially during chaotic times, the most recent example being Ukraine.

The Internet offers us a utopia where information can be freely shared. Where it can be shared without restriction. Where people can communicate with each other. Where geography does not exist. It is by definition transnational. It offers all of us freedom.

By enabling anyone to communicate with anyone else, without prejudice and without interference, the Internet represents the single most powerful tool humanity has.

This power, these freedoms, seem even more significant given recent events.

The Internet can provide you and me directly with raw information, when it happens, where it happens. No media organisation, whatever their motives, can compete with that.

Recent events in the UK have reignited debate over the interception and retention of communications data.

My view remains unchanged, my thoughts have not been swayed. I remain firmly opposed to any capability for Government to snoop on its citizens.

To paraphrase a book I recently read:

This post is not a manifesto. There is no time for that. This is a warning.

The book in question: Cypherpunks - Freedom and the Future of the Internet. It was a book I found immense satisfaction in reading, and a book I would recommend every Internet user read.

The Internet is humanity's best chance at a transnational utopia. Yet it is a paradox: it is also a transnational surveillance dystopia.

The Internet's offer of free communication also offers total surveillance, leading to totalitarian control by an elite.

It is incumbent upon every single person who uses the Internet to realise and understand their freedom and, most importantly, to defend it.

We must not enable a minority to snoop on our activities, to ultimately control and dominate us. Let us understand our freedom, let us embrace it, let us be defiant.

We must not let our politicians react disproportionately. Existing powers are too wide-reaching and invasive, and we must fight to get them reduced. But above all we need debate; we cannot let politicians collude and pass draconian powers without any consultation or proper consideration.

The title of this post is a track by Black Sabbath. What does heavy metal have to do with politics, I hear you ask? It's merely my interpretation of the lyrics, but I feel it embodies my point. Even given that it was composed in a different era (has that much really changed?), I feel it is still relevant.

"Revolution in their minds - the children start to march 
 Against the world in which they have to live

 Show the world that love is still alive you must be brave 
 Or you children of today are 
 Children of the Grave"