Citizen Science: Getting to discovery?

For the last few weeks I have been thinking about an exchange with the ever dynamic Francois Grey about citizen science, and what it would take to get to an actual significant discovery. This is in the context of my involvement with the long running Cosmic Pi project, an attempt to produce open source cosmic ray detectors based on cutting edge technology, so I will also do my best to share the lessons learned from this endeavour. While we haven’t formally terminated the project, unfortunately none of the current team members has the time needed to continue it, so it is currently on “pause”. Added to this, there are a lot of component supply issues at the moment – so let’s just say it’s in stasis for the time being, hopefully to re-emerge at some point in the not too distant future.

Choosing your battle: Physics vs Biology vs Other things

How easy is it to discover a new force or fundamental particle, in comparison to a new species of fungus? I watched the excellent documentary “Fantastic Fungi” featuring Paul Stamets recently on Netflix. It hadn’t previously occurred to me that you could potentially discover a new type of mushroom (or other biological entity) in your back garden or local wilderness – but it seems to be quite plausible with a reasonable amount of effort. And for a few hundred dollars, you can probably even get the DNA of your new find sequenced!

However, if you wanted to discover the Higgs Boson on your own (or even with a few like minded individuals), you would need very deep pockets and a ridiculous amount of time. Forbes estimated the cost of the discovery at $13.25 billion, plus the time of over 5000 researchers, not including the efforts of all those working on the infrastructure to support the discovery (like me since 2010).

These are probably the two extremes of the science spectrum, in terms of the validity of findings and general usefulness to the wider human species. There are also doubtless many other fields of endeavour and inquiry that fall between the two extremes, with a range of cost (money, time and resources) and reward (discovery, or significant advancement in human knowledge) trade-offs.

What does it take for Particle Physics?

The standard for a discovery in Particle Physics is 5 sigma. For those of you familiar with p-values, it’s the same principle – a statistical test to determine the likelihood that the observed result could be a fluke, rather than a real discovery. 5 sigma means 5 “standard deviations”, on a traditional bell curve. It is interesting to note that lower levels of significance can still be worth publishing, with significance of 3 sigma and beyond considered “evidence” of something new, but insufficient for a discovery. The probability of a false result at 5 sigma significance is about 0.00006%. But of course it isn’t just a statistical test that determines a discovery – everything else also has to line up.
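To see where that 0.00006% figure comes from, you can compute the tail probability of a normal distribution beyond a given number of standard deviations with nothing more than the standard error function; a minimal sketch in Python:

```python
import math

def sigma_to_p(n_sigma: float) -> float:
    """Two-sided tail probability of a normal distribution beyond n_sigma."""
    return math.erfc(n_sigma / math.sqrt(2))

# 3 sigma ("evidence") and 5 sigma ("discovery") thresholds
for n in (3, 5):
    print(f"{n} sigma -> p = {sigma_to_p(n):.2e} ({sigma_to_p(n) * 100:.5f}%)")
```

Note that the quoted 0.00006% corresponds to the two-sided tail; the one-sided figure usually used in particle physics is half that.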

More practically, such a high level of confidence can only be reached with a large number of trials or observations. I’ve spent about a week thinking about ways to explain this concisely with some statistics examples, but to do the subject justice it really requires a full blog post on its own. Until I get round to writing it, I’d suggest you check out this article in Scientific American.

I started working on the CosmicPi project a few years ago now (in 2014!) with some other young, enthusiastic engineers and physicists I knew working at CERN. We all did something with particle detectors and their supporting infrastructure as part of our day jobs, but each of us had only a very small slice of the overall knowledge required. We decided to build a muon detector, using the latest technology we could find. And we knew it would be difficult…

It took several years and a lot of help before we detected our first “muons”. And then a couple more years when we figured out that these weren’t actually muons, but electronic noise and to redesign things to capture the real thing. I’ve lost count of the number of prototypes we built, it’s at least 10 different versions. In short, if you want to build distributed hardware for particle physics, you will need to build a device that can take in particles of whatever type you are interested in (I would strongly recommend muons), and emit some form of TCP/IP packet that codifies the detection and goes to a central server where someone can look at it in combination with all the other packets your devices are detecting.
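To make the “particles in, TCP/IP packets out” idea concrete, here is a minimal sketch of the software end of that pipeline. The field names, host and port are invented for this example – this is not the actual Cosmic Pi data format:

```python
import json
import socket
import time

def encode_event(detector_id: str, adc_peak: int) -> bytes:
    """Codify one candidate muon detection as a JSON line (hypothetical format)."""
    event = {
        "detector": detector_id,
        "timestamp": time.time(),  # a real detector would use GPS/NTP time
        "adc_peak": adc_peak,      # pulse height from the analog front end
    }
    return (json.dumps(event) + "\n").encode()

def send_event(payload: bytes, host: str = "example.org", port: int = 5000) -> None:
    """Ship the event to a central server over TCP for combined analysis."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)

payload = encode_event("cosmicpi-001", adc_peak=742)
print(payload.decode().strip())
```

The central server’s job is then to collect these packets from every device and make them available for combined analysis.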

Consumer electronics

The more astute readers will have already guessed that a device which detects a particle and gives out a digital signal as an output could also be described as a “product”. It is a manufactured object of non-trivial complexity, with a moderate associated cost. We were aiming to build a device 10x cheaper than the competition, and we managed this in terms of cost (but not sale price, because a) we haven’t started selling it yet, and b) some margin is required when selling anything).

The trap (perhaps it isn’t a trap) is that to scale your detectors you will either need someone with a lot of money (a lot), or to do some form of crowdfunding – where you sell your products to customers, who will host them. We’re not just talking about a box with a flashing light on it, but actually a very complicated, sensitive piece of equipment – an overall level of difficulty that puts most kickstarter campaigns to shame.

You also need to take the components and put them in a box. This is a very non-trivial activity, and since everything needs to fit in the box, and housings are either off the shelf, or custom molded ($$$ unless you have a huge volume of manufacture into the tens of thousands) it’s a good idea to choose your case appropriately. If you want to go the full distance, you will also need a manufacturer to put the components together in the boxes (and test them!). But even after nearly a decade we still haven’t got this far yet.

Lots of moving parts

There are many stages to detecting a particle, and each is very sensitive.

Building a cutting edge particle detector is not easy. You will need a detector medium, we chose plastic scintillators, as they can be extremely cheap – but are rather hard to get hold of commercially unless you are buying by the tonne. You will also need some electronics to read out the detector, which will include some specialist analog circuits, as this is what it takes to detect very small, fast moving particles that are only going to generate a few photons of light in your chosen medium. These electronics have to be designed and prototyped iteratively. Before we had even finished our first generation of prototype, the main microcontroller we were using was already obsolete! So a redesign was required before we could move to the next stage of manufacture.

There are plenty of other options, such as getting recycled or spare detector chips from full size physics experiments, or repurposing existing semiconductors which are sensitive to various forms of radiation. The former may run into availability issues and export constraints, while the latter path can massively reduce the amount of data collected by a particular detector. Ultimately data is what leads to discoveries, so the more you capture the better.

A fine balance of many skills

Building a working detector is just the smallest Matryoshka doll. Around this you also need to build an infrastructure, both in the conventional sense (team, website, means to get your detector to the customer and their money into your bank account) and in the Physics sense. To use the oil analogy, raw data is just a resource, the value only comes when it is exploited with a means to process it. There are plenty of physics data analysis frameworks which exist, with varying degrees of complexity, but they all require significant pre-processing of the raw data and the addition of constructs that constitute the physics process you are searching for. A very reductive way of viewing a modern Physics PhD is that it involves three years writing and running data analysis scripts in order to generate some new insight from the vast array of high energy physics data collected by modern large scale detectors.

Full stack software

I find job adverts for ‘full stack’ developers rather funny. Because typically they only really want a couple of layers of the stack at most and certainly nothing that touches real hardware. The development stack for a particle detector goes all the way to the bottom. If you are building a new detector, you will need to read in an analog signal via some electronics, and somehow get it all the way up the software stack so it prints out on a screen or webpage. Practically, this means there is a need for both firmware (embedded software that runs on a microcontroller) and software, which can interface the microcontroller with a computer and share your data with the world. To build a ‘product’ appliance, that can be operated without the benefit of a full PhD, you will also need to handle everything from the calibration of the device (ideally automatic!) to setting up a device hosted Wi-Fi network and building a web interface, so that users can actually connect to your magic discovery box with their PC or Phone.
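To make the firmware/software boundary concrete, here is a sketch of the software side parsing lines a microcontroller might emit over its serial port. The `EVT,<timestamp>,<adc>` line format is invented for this example, not the real Cosmic Pi protocol:

```python
def parse_event_line(line: str):
    """Parse one hypothetical firmware line like 'EVT,1628000000.123,742'.

    Returns (timestamp, adc_value) or None for lines that aren't events -
    boot messages, debug output and corrupted reads are common on real serial links.
    """
    parts = line.strip().split(",")
    if len(parts) != 3 or parts[0] != "EVT":
        return None
    try:
        return float(parts[1]), int(parts[2])
    except ValueError:
        return None

# In a real deployment these lines would come from something like pyserial, e.g.
#   serial.Serial("/dev/ttyACM0", 115200).readline()
sample = ["BOOT v1.8", "EVT,1628000000.123,742", "EVT,garbage,xx"]
events = [e for e in (parse_event_line(s) for s in sample) if e]
print(events)  # only the well-formed event survives
```

Defensive parsing like this is the unglamorous glue between the firmware layer and everything above it in the stack.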

Who has done this before?

We wasted an inordinate amount of time discussing the totally irrelevant. Could we manufacture our own scintillators with embedded wavelength shifting optical fibres? Should our device have a battery inside? Would a solar panel on the top be enough to power it? This was due to inexperience, but also a learning and sharing of knowledge, which (inconveniently) is not a linear process.

What we needed was someone who had done this before to act as a mentor and guide. Someone with experience in electronics design, prototyping, manufacture. We connected with a lot of people but there are very few at the intersection of science and consumer electronics with all the relevant experience – and fewer still with sufficient free time for a project like this. There are plenty of science experts, but very few emerging experts in DIY electronics at scale, who are mostly self-educated via the mistakes of various crowd-funding campaigns they have just about survived. It’s still a rarefied skillset, even if you happen to be located at CERN.

A personal inspiration to me has been Bunnie Huang, and I can’t recommend his book The Hardware Hacker highly enough. We have been using it, recommending it to other teams we come across attempting a similar challenge, and generally trying to learn from his mistakes when we haven’t already made them ourselves. In retrospect, we could really have used a mentor to guide us on this journey. We are still looking, and in the meantime the next best thing is to share our experience with others. While we have been on our journey, open science hardware communities have started to emerge, the most notable being GOSH – the Gathering for Open Science Hardware. There is also the Journal of Open Hardware, which started while we’ve been working on Cosmic Pi, and maybe one day we’ll even get round to publishing an article in it about our detector!

The Profit Motive

What motivated our team? It was a lot of things: the fun of working together with like minded people, learning new skills and trying new things, the potential for discovering something, and democratising access to the technology through the code and schematics we published online. The profit motive doesn’t really feature, and as a group we are missing a marketing department. Unfortunately (?), we are the type of people who would price our product based on what it cost to build, plus a markup we thought was reasonable. Typically in commercial electronics, if you aren’t making a 5-10x mark-up, you don’t stay in business for very long. In addition to sage advice from Bunnie, the EEVblog from Dave Jones is a resource beyond compare for those on this journey.
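A quick back-of-the-envelope calculation shows why cost-plus pricing fails. The retailer share and overhead fractions below are illustrative assumptions, not real Cosmic Pi numbers:

```python
def margin_breakdown(sale_price: float, bom_cost: float,
                     retailer_share: float = 0.4, overhead: float = 0.2):
    """Rough illustration of where a hardware sale price goes.

    retailer_share and overhead are illustrative fractions of the sale price.
    """
    retailer = sale_price * retailer_share
    running_costs = sale_price * overhead
    maker = sale_price - retailer - running_costs - bom_cost
    return {"bom": bom_cost, "retailer": retailer,
            "overhead": running_costs, "maker_margin": maker}

# A 1.5x cost-plus price leaves the maker under water once the channel takes its cut:
print(margin_breakdown(sale_price=150.0, bom_cost=100.0))
# A 5x mark-up leaves room to actually stay in business:
print(margin_breakdown(sale_price=500.0, bom_cost=100.0))
```

The point isn’t the exact fractions – it’s that the build cost is only one of several mouths the sale price has to feed.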

Design For Manufacture (DFM)

Our design has many weak points, which of course have been exposed by the ‘Great components shortage’ of 2021/2/3/n. If you open up two seemingly identical consumer electronics products manufactured a few months apart, there’s a fair chance you will find they have some different components and integrated circuits inside. This is because large volume manufacturers (and smart smaller volume ones), tend to design with at least one alternate part for each critical component. This allows them to continue production when something is out of stock. The alternative is to redesign the board on the fly, based on available parts – and of course you will probably want to test it again before making a lot! Or you can pay a ridiculous amount of money to someone who has some stock of the chips you need.
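The alternate-part practice can be captured in the design data itself; here is a minimal sketch of a BOM with approved fallbacks (the part numbers are invented):

```python
# Each BOM slot lists a preferred part plus approved alternates (invented part numbers).
BOM = {
    "main_mcu": ["MCU-A100", "MCU-A100-B", "MCU-C200"],
    "adc":      ["ADC-16X", "ADC-16Y"],
}

def pick_parts(bom: dict, in_stock: set) -> dict:
    """For each slot, choose the first approved part that is actually in stock."""
    chosen = {}
    for slot, candidates in bom.items():
        available = next((p for p in candidates if p in in_stock), None)
        if available is None:
            raise RuntimeError(f"No approved part in stock for {slot} - redesign time!")
        chosen[slot] = available
    return chosen

print(pick_parts(BOM, in_stock={"MCU-C200", "ADC-16X"}))
```

The expensive part, of course, is not the lookup – it’s qualifying each alternate so that any combination of them still yields a working board.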

And then there is the more mundane, “traditional” side of DFM – making sure that your circuit board complies with design rules and good practices for the production process, ensuring you have sufficient margins on your passive components and design calculations to ensure that you get a reasonable yield.

This is a hugely time consuming activity. I have some friends who spend their day jobs right now redesigning existing products to work around the chip shortage. This type of operation is far beyond the resources we have as a bunch of individuals trying to build a cosmic ray detector. While it doesn’t take 100% of the effort all over again to produce a pair of design variants, even if another 20% is needed this is a lot for a volunteer project.

Putting it all together

I’ve filled out a typical business model canvas for the Cosmic Pi project. You can download it for your own remixing via the link below. We haven’t even started down the commercial part of this adventure, so I’ll just leave this here for now.

Some Lessons Learned

I have learned many things on this journey about how to build a particle detector and the top to bottom architecture of a globally-scalable IoT class device. Most of my biggest learning points come from mistakes, though not all. Here are my top five lessons.

  1. Footprints for PCB components. The first fevered weekend of building a detector was spent painstakingly soldering tiny wires to inverted components that looked like dead spiders, all because I hadn’t verified the pad dimensions well enough on our very first prototype. Always double check your device footprints (and pin outs). Always.
  2. Humans. This project has been kind of a mash-up of science and open source, with a side helping of start-up. The most important part of the puzzle is the human element. As usual I roped in a few friends, made some new friends along the way, and we had some fallings out too! Trying to wrangle a team of very skilled, highly intelligent volunteers with divergent ideas into a project can be challenging. When conflict erupts, which it will, make sure that your friends know that any disagreements aren’t personal, and that you value your friendship independently from the project. If you see tempers rising around a particular issue, don’t wait for things to boil over before getting involved. And if you are wrong, or over the line on something, apologise as soon as you realise it. Things have been a lot of fun, but it hasn’t always been easy. I don’t think I destroyed any friendships (so far)?
  3. Ignorance. I know thing X. Thing X is obvious (to me)… but it turns out that some team members didn’t know thing X, and didn’t even know that they didn’t know it. They took on a challenge, and got into difficulties that affected the whole project because of their ignorance. We’ve all done it in different ways, with impacts that vary from expensive to time consuming. Of course, it is necessary to assume some level of common knowledge (and trust) when any two people are working together, but I find it is always worth taking the time to frame the task and go over the first principles at the start of any new collaboration.
  4. Interns are amazing. We have been fortunate enough to have a few interns working on the project, some of whom were even full time and funded. The progress they have been able to make on the project, working full time, as opposed to free evenings and weekends for the rest of the team, has been inspirational. We were able to have a good win-win relationship with all the students who worked on the project so far. The ones who were funded got paid and all of them learned valuable skills in programming, electronics and particle detector assembly, plus the lesson of how hard it all is to put together.
  5. Entropy is a problem, especially in software. Just because you have a set of awesome software for your device that’s tested on hardware platform Y.0.00.0, doesn’t mean it will work at all on hardware platform Y.0.00.1. Or even on your original platform after a version update to your OS or its major libraries. Software requires maintenance! The rules, settings, configuration requirements and dependent libraries are all shifting. To minimise your exposure to entropy I recommend:
    • Keep it simple. The less code you write, the less there is to maintain (and you should have fewer bugs too). The software problem hasn’t changed fundamentally since the 1970s; you should read The Mythical Man Month by F. Brooks Jr for wisdom and inspiration. It’s the best book I didn’t read at university.
    • Put as much of the data processing into your embedded elements, i.e. firmware, as you can (within reason), as this will be stable across software changes. Keeping our data output format stable for versions 1.6 through 1.8 saved us a lot of time.
    • Scripts not software. It’s much easier to maintain a hundred lines of Python than something compiled. If you can rely on software platforms (InfluxDB, Grafana) for the heavy lifting that is ideal, and if not then consider ‘module’ level systems such as SQLite and Python libraries. Writing your own Linux kernel drivers in C is always possible, but will require a lot of upkeep.
    • When it comes to embedded binaries, make sure you keep a copy of the latest version you have compiled for distribution (and all previous versions too..). This is especially important if you are using development environments such as Arduino, where the actual .bin/.hex file can be abstracted away under the plug and play upload button.
    • Git. Things which are put in a repository are great. Things which aren’t are usually lost over the years it takes a project to get to maturity.
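In the “scripts not software” spirit, the entire server-side storage layer can be a few lines of Python on top of SQLite. This is a generic sketch, not the actual Cosmic Pi schema:

```python
import sqlite3

def store_events(db_path: str, events):
    """Append detection events (timestamp, detector_id, adc_peak) to SQLite."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS events (
                       ts REAL, detector TEXT, adc_peak INTEGER)""")
    con.executemany("INSERT INTO events VALUES (?, ?, ?)", events)
    con.commit()
    return con

con = store_events(":memory:", [(1628000000.1, "cosmicpi-001", 742),
                                (1628000001.7, "cosmicpi-002", 513)])
count, = con.execute("SELECT COUNT(*) FROM events").fetchone()
print(f"{count} events stored")
```

A file-backed database like this is trivially inspectable years later, which is exactly the entropy resistance the lesson above is about.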

A conclusion, for now at least.

I hope to have shared insights into at least some of the ground we covered with Cosmic Pi so far. We’ve come a long way, but just like climbing a mountain we might have scaled the first and the second rise, it’s still a long way to the summit. If you are full of enthusiasm and want to get involved please drop me a line, or if you would like to chat about your own open hardware science projects feel free to get in touch with me via twitter, where you can find me as @pingu_98.


Our COVID-19 Social Rules: A new world disorder

The final frontier: the supermarket!

“We need to figure out a way to hang out with other people and not get Covid again.”

My wife

It’s been a rough two years for most people since the Covid-19 pandemic got started, and the same is true for our household. We’ve both been sick at least twice with Covid-19 symptoms, some of which have persisted for months. But now we’re both back to ‘normal’, fully vaccinated and we’d like to keep it that way for as long as possible, even when the case numbers around us are taking off again. States are tearing up their own rules, so we’ve made some of our own to keep ourselves and our friends safe.

Our Covid-Safe rules for home & away

1. Wearing Masks – When

Masks are the most effective thing you can do to protect yourselves and others. They are bearable, but not always comfortable, especially when it’s warm outside or you are doing major physical activity. We wear masks when leaving the house, in the communal areas of apartment blocks (entrance, stairways, basements, parking garage, lift, bicycle parking) and anywhere else we go indoors (shops, offices, doctor’s surgery etc.). We don’t generally wear them outside, unless we are passing people on the street or it’s really busy.

2. Wearing Masks – What type?

Our go-to mask combination is a ‘fish style’ FFP2, with a blue surgical mask on top for higher risk situations, changed daily. For indoors, it’s always an FFP2, ideally ‘double masked’ with a surgical one on top if the situation merits it (poor ventilation, lots of people, anyone we consider high risk!). Outdoors the risks are reduced, so if we wear a mask it’s normally just a surgical one, unless we only have an FFP2 handy.

We are fortunate enough to have picked up a couple of MicroClimate helmet style masks from the US. These are for grocery trips, and during mask mandates we wear a surgical mask inside, mostly so that nobody accuses us of not wearing a mask or complying with the mask rules for supermarkets. The MicroClimate is a transparent acrylic dome fitted with an impermeable fabric surround and equipped with battery powered fans and HEPA filters on both the intake and exhaust air. They look like astronaut helmets and are reasonably sound from an engineering perspective, though they don’t carry any kind of formal certification. They are also rather heavy, so we don’t tend to wear them outside the grocery store for very long.

3. HEPA filters and open windows at home

We used to live in a small apartment block. It had communal ventilation in the corridor areas, with extract vents from our hallway, bathroom, toilet and kitchen. These were driven by fans on the roof, which didn’t always run, or didn’t run at full speed. There are documented cases in South Korea where Covid-19 spread through the ventilation system of a building from apartment to apartment via shared air ducts – there is also a very well documented case from the SARS outbreak in Hong Kong (Amoy Gardens) where the virus spread through the drainage system via water traps that had dried out.

Consequently, we have a couple of large HEPA filters on wheels of which at least 1 was usually running somewhere in our apartment, particularly at night when the building ventilation is running in low power mode. Now we live in a house, but we still keep a HEPA filter on low in the living room and bedroom.

4. Antigen tests for social events

We had a couple of social events once upon a time in the not too distant past. We had real people visit us in our apartment physically instead of via zoom. We agreed our test protocol with them at the invitation stage – our guests took an antigen test just before heading over, and at home we both took tests before they arrived. We are of course very fortunate to have access to a generous supply of antigen tests, and it’s important to remember that they are not 100% reliable, but it’s still a big step towards cutting the risk that someone in the group will be spreading the disease. It is important to wait the full 15 minutes after doing the test to make sure you don’t have a faint line, which can be a telltale sign of the start of a COVID-19 infection. Likewise, we agreed beforehand that if anyone was feeling unwell, or just not quite right, we would reschedule even with a negative test – because it’s no big deal.

5. Meeting outside if possible

Covid-19 is an airborne virus. This means it spreads through the air we breathe. Sitting in a poorly ventilated room with someone who is infected is the best way to catch it. However, the virus is still fragile: it is easily dispersed by moving air, diluted, and inactivated by UV light. Based on the current medical understanding, you can catch Covid-19 in two main ways (here are two analogies I’ve wholeheartedly stolen…):

Garlic Breath – If you can smell someone’s (bad!) breath, then you are actually smelling small aerosolised particles that are coming out of their mouth and nose in a jet of exhaled air. Normally these just contain volatile organic chemicals which we perceive as (unpleasant) smells. But for someone who is shedding Covid-19 virus, it hitches a ride on the same droplets. If you are close enough to smell someone’s breath, then you are in an excellent situation to contract Covid, or any other airborne virus they are shedding. Just like meeting smelly people, it’s always more pleasant to do outside.

Cigarette Smoke – The second way you can catch Covid-19, with a slightly lower risk, is by inhaling the very small aerosolised particles which can remain airborne for a long time (several hours). The best analogy for this type of spread is cigarette smoke. Even if someone isn’t smoking right next to you, it’s easy to tell if a smoker has been in a room, or if someone is smoking in proximity to you. Of course it’s best not to smoke at all, but if you have to then doing it outside has the least consequences for those around you. It’s the same with Covid-19 spread, except of course that you can’t smell virus particles in the air.

While meeting outside doesn’t eliminate these two forms of transmission, it does substantially reduce the risks. The outside typically has at least some airflow to carry away the cigarette smoke and to mitigate the garlic breath, as well as UV which reduces the viability of the virus. Meeting outside is definitely the safest way to do things, assuming the weather cooperates.

The end.

Fight Covid-19 with software from CERN on your Raspberry Pi

Here’s a how-to guide for running the CERN CARA Covid-19 risk assessment tool at home on a Raspberry Pi.

One of the most interesting projects I’ve been involved with at work over the last year has been CARA, the Covid-19 Airborne Risk Assessment tool. It is a relatively simple tool that uses some equations to model the risk of transmitting Covid-19 in an indoor environment (for example in an office, or during a face to face meeting, or in a workshop). The tool has been developed to help CERN manage Covid-19 risk better at the lab, but fortunately it has also been released under an open source license, so anyone can use it and build upon our project (so long as you respect the license conditions).

The tool takes some inputs to describe a scenario (number of people, type of activity, room size and ventilation, geographical location, activity duration, types of mask worn), uses a mathematical model based on the latest understanding of Covid-19 and generates a risk score for the probability of spreading Covid-19 in the particular scenario. It will also indicate what measures you can take to decrease the risks, such as wearing face coverings. The model is based on aerosol transmission, and does not take into account large droplet spread, which means that physical distances (1.5-2m in most places) between individuals must be followed for a valid result.
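The general shape of such models can be illustrated with the classic Wells-Riley equation for airborne infection in a well-mixed room. To be clear, this is a sketch of the family of models, not the CARA model itself, which is considerably more detailed, and the numbers below are purely illustrative:

```python
import math

def wells_riley(infectors: int, quanta_rate: float, breathing_rate: float,
                hours: float, ventilation_rate: float) -> float:
    """Wells-Riley probability of airborne infection in a well-mixed room.

    quanta_rate: infectious doses emitted per infector per hour
    breathing_rate: m^3 of air inhaled per hour per susceptible person
    ventilation_rate: m^3 of fresh air supplied per hour
    """
    dose = infectors * quanta_rate * breathing_rate * hours / ventilation_rate
    return 1.0 - math.exp(-dose)

# Illustrative numbers: one infector, a 2-hour meeting, poor vs good ventilation
print(f"Poor ventilation: {wells_riley(1, 10, 0.5, 2.0, 50.0):.1%}")
print(f"Good ventilation: {wells_riley(1, 10, 0.5, 2.0, 500.0):.1%}")
```

Even this toy version shows the key lever the tool highlights: increasing fresh air supply directly shrinks the inhaled dose and hence the risk.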

In this post I’ll be taking you through all the stages needed to run your own instance of the CARA tool at home on a Raspberry Pi 4. I’ve not tried it on the Zero/2/3/3B+, and I would recommend using the 4 as some simulations can be quite computationally intensive – especially if you are using natural ventilation or making your instance available to more than one user. My test setup was a Pi 4 with 2GB RAM and it worked pretty well. CARA is written in Python, and you interact with it via a web interface. The whole thing should take about 40 minutes from start to finish.

Stage 0: What you need

For this tutorial, you will need either a Mac OS X or Linux computer, or a Linux VM running in Windows/OS X if you want the absolute latest Debian image, or SD card imaging software for Windows/OS X/Linux if you are happy to use the link below. You will also need an 8GB microSD card (minimum) and a Raspberry Pi 4 (I used a 2GB, but should work fine with any of them) with a wired network connection. Temporary access to a screen and keyboard (to set the root password and enable SSH) will also be needed for a couple of minutes to set things up.

Stage 1: Download an OS

I tried initially to get it up and running in Raspberry Pi OS Lite, however at the time of writing this is still 32-bit so I rapidly gave up. It’s probably possible and if you do get this running let me know and I’ll link to you. In search of a 64-bit Pi friendly OS, I went for Debian, because this is what the Raspberry OS is built on in the first place. The raspi-Debian project is still a bit of a work in progress and not all the hardware features are working yet, but there is more than enough already available for our needs in this tutorial.

Go and get the daily image (for reference I used 20210802). If you are using a Linux/OS X computer, once you have downloaded the image and its .sha256 checksum file, you can verify it and write it to your SD card as follows:

export RPI_MODEL=4
#set SD_CARD to your SD card device... not your hard disk!
#you can locate your SD card with 'sudo fdisk -l'
export SD_CARD=/dev/sdf
export DEBIAN_RELEASE=buster
#checksum verification of the downloaded image
sha256sum -c raspi_${RPI_MODEL}_${DEBIAN_RELEASE}.img.xz.sha256
#write to SD card (writing to the raw device needs root)
xzcat raspi_${RPI_MODEL}_${DEBIAN_RELEASE}.img.xz | sudo dd of=${SD_CARD} bs=64k oflag=dsync status=progress
sudo sync

This will verify the Debian image for your Raspberry Pi 4 and write it directly to the SD card. Alternatively, you can download the image via this link and use these instructions to write it to the SD card.

Stage 2: Boot and configure the Pi

Once you have copied the image to your SD card, pop it in to your Raspberry Pi and power things up. At this stage you should have power, network, a keyboard and a screen connected to your Raspberry. You will need the keyboard and screen to set a root password and enable SSH for remote access, after that you can run the Pi with only power and network – or continue to enter things manually as you prefer.

When your Pi has booted, login as root and set a new root password (don’t forget it!). You can then create a user for the project and enable SSH (remote) access. You will also need to remember your user password, here I called the user “cara”.

rpi-4-20210802 login: root
#enter your new root password, choose wisely and do not forget it
adduser cara
#specify the password for the cara user
reboot #logout and restart

Now we log in again as root, install some useful packages, and change the hostname to cara-pi.

rpi-4-20210802 login: root
apt-get update && apt-get upgrade -y
apt-get install git git-lfs sudo openssh-server -y
#set the hostname to cara-pi
echo 'cara-pi' > /etc/hostname
reboot
#you can also shutdown now if you want to unplug things gracefully

With this, we no longer need the screen or keyboard. I would recommend connecting to your Raspberry using SSH and installing Python 3.9 by compiling it from source:

ssh root@cara-pi
#install the build dependencies
sudo apt install wget build-essential checkinstall libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev libffi-dev zlib1g-dev -y
#download and unpack the Python 3.9.5 source
wget https://www.python.org/ftp/python/3.9.5/Python-3.9.5.tar.xz
tar -Jxf Python-3.9.5.tar.xz
cd Python-3.9.5
./configure --enable-optimizations
sudo make install
cd ..
sudo rm -r Python-3.9.5
rm Python-3.9.5.tar.xz
#set python 3.9 as default ...
echo "alias python='/usr/local/bin/python3.9'" >> ~/.bashrc
. ~/.bashrc
#check we've got python 3.9 working
python -V
exit #logout as root

Stage 3: Run CARA

And now we are ready to clone the CARA source code, install it and run CARA from the CERN repository:

ssh cara@cara-pi
git clone https://gitlab.cern.ch/cara/cara.git   # Clone the CERN repository (check the project page for the current URL)
cd cara
git lfs pull   # Fetch the data from LFS - weather profiles and weather stations
pip3 install -e .   # At the root of the repository, install python libraries
python3 -m cara.apps.calculator #and we're done - it's now running

At this point, we are now running CARA on our Raspberry. You can connect to it via http://cara-pi:8080/ assuming it is accessible via your local network. If you can’t find it via the hostname, use the Raspberry’s IP address instead, which you can find by entering the command below into the terminal on your Pi:

ip addr

If you want CARA to run automatically at boot on your Raspberry, we need to install it as a service. You can find out more about how to do this here.
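To give a flavour of the service route, a minimal systemd unit along these lines should work; the file path, user name and working directory below are assumptions based on the install steps above, so adjust them to your setup:

```ini
# /etc/systemd/system/cara.service (hypothetical path and names)
[Unit]
Description=CARA calculator
After=network.target

[Service]
User=cara
WorkingDirectory=/home/cara/cara
ExecStart=/usr/local/bin/python3.9 -m cara.apps.calculator
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl daemon-reload && sudo systemctl enable --now cara.service` should start it now and at every boot.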

You can even open your service up to the whole internet, but you should think about the security implications before you do this. For example, consider applying firewall rules on your Raspberry and permitting SSH login only with a key file. If you are hosting it at home, you will probably also want to modify the port (from 8080) and make sure you have an SSL certificate, a good way to do this is with an nginx proxy and the EFF’s certbot, but this is something for a follow up post.
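As a sketch of the nginx proxy approach (the server name is a placeholder, and certbot will rewrite the listen/SSL lines for you once you request a certificate):

```nginx
# /etc/nginx/sites-available/cara (hypothetical)
server {
    listen 80;
    server_name cara.example.com;

    location / {
        # Forward requests to the CARA calculator running locally on 8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Enable it with a symlink into sites-enabled and reload nginx; more on this in the follow up post.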

Some notes, questions and answers:

How accurate is this Covid transmission model? Great question! If you want a detailed look under the hood of the model, check out the paper we wrote about how it works. You can also find all the scientific references to the original published sources which we used in constructing our model.

This tutorial is just for the CARA Calculator, a simplified version of the model. If you want to run the expert app, I’d suggest using something more powerful than a Raspberry!

If you are using an x86 (Intel/AMD) machine, you can run CARA locally (and without having to go through all the above steps!) via the docker images also provided. You can find links to them in the CERN repository.

At the time of writing the CARA web pages still contain some CERN branding. The software is free (Apache License, V2), but the CERN logo is copyright. If you want to host a version of CARA for your friends and family, or at work for your organisation, please make sure you aren’t using the CERN brand.

If you have questions about running CARA on a Raspberry, drop me a line via twitter @pingu_98. If you have questions about the software itself, you can contact the development team via the repository.

What is my personal role in all this? I’m just one of the members of the CARA development team, each of us brings different skills and perspectives to the project. Covid-19 has been a real challenge for everyone, but it gives me a lot of personal satisfaction that some of the work I’m doing can be shared freely with the whole world to improve our understanding of how this disease spreads and what we can do to protect ourselves.

What about liability issues with presenting Covid-19 risks – what if someone catches it even when the risk is low? Unfortunately it’s still possible to catch Covid-19 even in low risk situations, which is why it is important to respect the health advice (masks, physical distance and hand-washing). The model also comes with a lot of fine print, which you should read.

Footnote: Thanks to Adrian Monnich for the comments on improving user privilege levels, I’ve amended the instructions!


Home Indoor Air Quality Monitor

How to build your own indoor air monitoring station. This is a work in progress and will be updated, completed and made more pretty as I get spare time! I’m thinking of turning this build into an online workshop, let me know if you are interested.

What does it do?

This project creates an indoor air quality measuring station, a little larger than a credit card, capable of measuring Temperature, Air Pressure, Relative Humidity, Volatile Organic Chemicals and also equivalent CO2. It will measure the air every minute and log the results into a database, accessible with your web browser. You can also share the data via the internet if you want to. The capabilities are impressive, but bear in mind that we’re using very cheap sensors so your mileage may vary!

I’m currently thinking about designing a badge with the same sensors, and also an external air sensor station with GPS. We’ll see how far I get with those ideas!

Parts: (The essential items should cost < 30 units of money, including shipping)

CCS811 – eCO2 and VOC sensor

BME680 – Temperature, Pressure, Humidity and VOC sensor

1602 LCD screen with I2C converter – These are blue or green, but you can also get them in red or orange from other suppliers. Make sure you get one with a PCF8574 I2C adapter, as we need to modify it a little.

Stand for the LCD screen

Raspberry Pi (minimum a Pi Zero, but this should work with all Raspberry Pis)

microSD Card for the Raspberry Pi (minimum 8GB)

Some 2.54mm pitch jumper cables, you will need female-male and female-female


You may need these if you don’t already have them lying around in your box of geeky accessories!

USB micro OTG cable

HDMI mini to HDMI converter

USB micro cable (you’ll need a USB C cable if you are using a Raspberry Pi 4).

USB to mains power adapter (for the Pi Zero 500mA is fine, for a Pi 4 3A is recommended)

USB keyboard – if you have a desktop PC, you should be able to borrow the keyboard from it.


There are three stages to assembly, firstly the I2C module needs to be modified and soldered on to the LCD screen. Then the two sensor breakouts need to be soldered to the jumper cables. Finally the screen needs to be put into the housing and everything connected to the Raspberry Pi.

I’ll add some photos to describe each step. For those who are already experienced makers, you need to remove the two pull up resistors on the LCD I2C module, in order to prevent the 5V from breaking the input pins on your Raspberry Pi. The two sensor breakouts are both 3.3V powered, whereas the LCD needs 5V.


The software is what makes it all go. There are three parts here:

  1. Python script + libraries. This reads the sensor data over I2C on the Raspberry Pi and writes it into the database.
  2. InfluxDB. Stores the local data in a really efficient and neat way.
  3. Grafana. Provides fancy visualisation and the web interface.

There is also an optional 4th element, MQTT, which allows your data to be shared with a centralised server via the internet.
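To make the logging step concrete, here is roughly what a single measurement looks like in InfluxDB 1.x line protocol; the measurement and field names below are made up for illustration:

```shell
# Build an InfluxDB line-protocol point: measurement name, then field=value pairs
point="indoor_air temp_c=21.5,rh_pct=44.2,eco2_ppm=612"
echo "$point"
# Writing it to a local InfluxDB 1.x instance would look like this (not run here):
# curl -XPOST 'http://localhost:8086/write?db=airquality' --data-binary "$point"
```

The Python script effectively emits one of these lines per minute, with InfluxDB timestamping it on arrival.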

To add – download the image, put it on the SD card, run it, connect and set up.
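In the meantime, for anyone wiring the stack up by hand, Grafana can pick up the InfluxDB datasource automatically via a provisioning file; a minimal sketch, where the file path, datasource name and database name are all assumptions:

```yaml
# /etc/grafana/provisioning/datasources/influxdb.yaml (hypothetical)
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://localhost:8086
    database: airquality
```

Restart Grafana after adding the file and the datasource should appear ready for building dashboards.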

My Summer Internships: A Retrospective

Yesterday I was thinking about all the wonderful summer internship experiences I had while at university. I felt bad about the effect the COVID-19 pandemic will have on those currently studying who won’t be able to take up their internships this year. I thought I’d write down my own experiences to share, in the hope that it will be useful for someone planning their future adventures, in summer 2021 and beyond!

My internships, 2001-2005

Summer 2001 – 10 Gresham St. Project, Bovis Lend Lease

In December 2000 I accepted an undergraduate sponsorship from Bovis Lend Lease (now Lend Lease), to do electrical engineering at university. The offer was independent of my choice of university, tied only to the subject. I had looked up undergraduate sponsorship options when I was 17 and thinking about university, as a way of reducing the costs and boosting my employment prospects. Bovis was the first company that I had applied to and I accepted without hesitation. I’d also looked at IBM for their PUE scheme, Microsoft and also some of the military/defence related options, but they had later deadlines and certainly at the time IBM didn’t commit to anything more than a gap year employment. The deal I was offered by Bovis included annual trainee conferences, some industry skills training at the CITB training centre in Bircham Newton and annual postings as summer holiday cover to construction projects in my area. In return I would receive a book allowance during term time and be paid as a junior site supervisor during my summer work. The programme still exists today, so definitely take a look if you are seeking undergraduate sponsorship for a career in construction!

Normally the programme didn’t start until the first summer of university, but I asked if it was possible to start on site directly after my A-levels and the company was very accommodating. I was sent as a junior to help out the building services team on the 10 Gresham Street construction project, which at the time was in the process of excavation. The building was to be a low rise office, designed by the renowned architects Foster & Partners. My boss was Gary Sturges (Hello if you are reading this!), who was the building services manager for the project and started my education in what it is that makes a modern building light up! The services designs were a long way from installation, so I also got to follow some of the groundworks and steelworks. Being realistic, a fresh faced 18 year old can’t necessarily contribute a whole lot to a multi-million pound construction site, but some of the more concrete things I did included:

Learning how to fold a drawing and keep the site drawing sticks updated with the latest revisions. It’s basic grunt work, but having the latest revision of a drawing is fundamentally important.

How to work in an office! For the first time I wasn’t going to school every day, I was taking the tube into London like an adult. There were all kinds of things to navigate, including finding my way to the office on day one, finding my way to site thereafter, buying lunch from a sandwich shop, using email for work, handling a photocopier and sending and receiving faxes (yes, faxes were still very much a thing back in 2001!) with cover pages.

After a month or so I had done the necessary safety training myself, called a ‘license to practice’ back then, and was authorised to give site inductions to visitors and some of the smaller or specialist groups of workers attending the construction site. I remain impressed by the commitment to safety on all the Bovis Lend Lease sites I visited and worked on. The process of giving these briefings gave me my first understanding of the statutory duties of employers to provide a safe place of work and a safe system of work for their employees. The other thing I learned is that safety works best when it’s a human to human interaction: knowing that before anyone steps out onto the potentially hazardous environment of a construction site, someone else has taken the time to explain the particular dangers of that specific site and to remind them that safety is a partnership between employer and employee.

I also attended my first work meetings, learning essential skills such as staying awake, followed by more advanced meeting skills such as taking and preparing minutes.

Finally, I had time to read the company site safety manuals and safety system. These were a couple of very large A4 ring binders with the distilled procedural knowledge of the company for how to run a construction site. They were a great resource and it was especially useful having the time to read through them – I often wish I still had a copy to refer back to even today.

 Bank of America European HQ

Summer 2002 – Bank of America fit-out, Bovis Lend Lease

After my first year of university, I went home for the summer and was soon commuting daily to Canary Wharf. At that time Bovis had a specialist fit-out division called Bovis Lend Lease Interiors who had won the contract to fit out the trading floors. Initially I was based at their office in Farringdon, then as the project moved closer to the start date we relocated to offices in Harbour Exchange to be closer to the site at 5 Canada Square. The building for the fit-out was still in the final stages of completion by another contractor, so access on site was limited. During the summer I learned a lot about the importance of tracking document issues and revisions by external consultants, as a key part of driving design towards something which can actually be built. I was given the ERs (employer’s requirements) to read, which was my first introduction to reading and understanding formal specification documents. This was the first time I had the opportunity to work on a really complex electrical services installation, with a small data centre and dealer desks for financial trading forming part of the installation brief. I also had the pleasure of working for Denis Wilson (Hello Denis!) who was running the fit-out services team, and who would later be my boss again on another highly complex and challenging Bovis project over at the BBC.

Churchill gave his VE Day speech from this balcony.

Summer 2003 – HM Treasury Phase 2 PFI, Bovis Lend Lease

Following on from a challenging second year at university, I was assigned to a mysterious sounding project called “GOGGS East“. It was the second stage of the refurbishment of the UK Government Treasury office building in Whitehall, on the corner of Parliament Square. Here is a great presentation I randomly found online showing the insides of the building, which is Grade II* listed (the highest category for UK buildings with historical and architectural value), and some of the work. The building has a rich history, including significant fortifications installed during the second world war to bomb-proof the basement areas, where the cabinet war rooms are located. One of the highlights of my internship was unlocking the Churchill Room, the majestic office used by Winston Churchill at points during the war, with its balcony at the front of the building overlooking Parliament and Whitehall used for his speech on VE Day. Another fascinating part of the building was the old treasury vaults on the sub-basement level, where some of the UK’s gold reserves were once stored. The gold was long gone and the vaults were to be filled with concrete as part of the renovation.


Summer 2004 – I travelled round the world visiting lots of construction projects, funded by the Royal Academy of Engineering.

This is another story in and of itself, which merits a blog post of its own – one day I’ll try to write it. I was very fortunate to be awarded an Engineering Leadership Award in 2003, which gave me access to funding, a mentor and fantastic opportunities to expand my knowledge and skills. The programme still operates today, and I cannot recommend it highly enough. If you are interested, check out this link for details on eligibility and how to apply.


Summer 2005 – Summer student at CERN

I visited CERN for the first time in the summer of 1999 after persuading my parents to send me on a school trip. I really liked it and applied to return as an intern via the summer student program. Again this is a fantastic opportunity; I would recommend it without reservation to eligible students. You can check out the application procedure and eligibility requirements here. This internship was rather different from the others, with morning lectures on mathematics, physics and the challenges of building particle accelerators, visits to the then under-construction LHC, and in between I worked on my project, writing a part of a detector control system for LHCb in VHDL. In addition to the huge intellectual stimulation and the career benefits, I made lifelong friends from across the world – many of whom I’m still in touch with, and some I work with on a daily basis! Back in 2015 we had a little get together to celebrate our 10 year anniversary; here’s what I wrote about it at the time.


These internships were a fantastic learning experience and set me up for my current career as an electrical and electronic engineer. I’m very grateful to all those who helped me to get them and to make them such valuable and rewarding experiences. While this year may be a wash-out due to the pandemic, there are still some fantastic schemes out there for current students and those applying to university this year. If you are a student I highly recommend seeking them out for next year!


The Brexit Post (from 2017)

I wrote this post just over 3 years ago. It seemed too pessimistic to publish at the time. Now, in the midst of the pandemic as the UK government pushes hard for a no-deal outcome, we are reaching the end of a path littered with broken promises. I thought it was time to share.

17th January 2017

My country is sinking. The parts of the world that aren’t already on fire are about to be ignited. Here’s what I think:

The question which was asked of the British people on the 23rd June 2016 was:

The referendum question will be: Should the United Kingdom remain a member of the European Union or leave the European Union?

On the basis of a 52/48 split on a vote for leaving the EU, the UK Government is now pushing for a “Hard Brexit”. This means:

  • Leaving the EU (fairly obvious)
  • No freedom of movement (not obvious)
  • Leaving the common market (also non-obvious from the text of the question)

The destructive consequences which will result from taking the actions above include:

  • Loss of access to all EU research funding, which will decimate research and innovation in UK universities.
  • Loss of access to EU markets at current tariff free rates, requiring complex negotiations to re-establish a worse deal.
  • Potential loss of access to all WTO deals until something is negotiated.
  • Subject to a deal being reached to allow them to stay, all EU nationals will have to leave the UK.
  • As part of the same deal, UK nationals may also need visas for future travel to the EU.
  • Farming will be heavily hit by the loss of EU funding, unless a timely and adequate replacement is set up.
  • Unless suitable legislation is enacted before we leave, UK citizens will lose all their current rights enshrined in EU human rights laws.

Furthermore, the government wants to do all these things without a parliamentary vote.

Failing to consult and gain approval from parliament for the above would be a brazen action, putting at risk the rights currently enjoyed by british citizens. If we are to have a significant proportion of our existing rights removed without parliamentary vote, it would be an extraordinary situation. Abrogation of the rights of the citizenry without the consent of their representatives is clearly morally wrong. Abuse of power in this way is the road which leads to tyranny.

Post Script

That was what I wrote three years ago. It seemed too depressing, too pessimistic, and yet here we are. The government has sought to shut down parliamentary scrutiny of the Brexit process, and was ultimately found to be in breach of the law. Those within the Conservative party who urged caution, real negotiation with the EU and the path of moderation have either been chased out or formally purged. In the context of the pandemic, the UK government has been using secondary legislation, without any parliamentary scrutiny, to restrict the rights of UK citizens in quarantine. Now established, I fear that these habits will be hard for the government to break.

If there is light, it comes from the devolved nations, Scotland, Wales and Northern Ireland. When the dust settles, I am sure the statistics will show they have thus far managed the pandemic far better than England. Scotland in particular has stood firmer against the prevailing winds of political change, the chilling of discourse and has remained vocal in welcoming those from foreign countries. It shows that the downward spiral of Westminster politics is not the only route available, we shall see what the next three years bring.

Building a PV Microgrid for the Druk White Lotus School, Ladakh, India

I’ve been very privileged to have worked on some great engineering projects in my career to date. This blog post is about one of the stand-out projects that I was fortunate enough to work on while at Arup in London, back in 2007 and 2008. The Druk White Lotus School is an exemplar sustainability project, located in Ladakh, India.


The start of the school day at the Druk White Lotus School

The Druk White Lotus School is a boarding school in the foothills of the Himalayas which aims to foster and preserve the unique culture of the Ladakhi way of life. The school is planned based on the Dharma wheel, with classroom buildings near the entrance, and residential blocks further back within the site. It was my role to work with the client in developing the technical specification for the microgrid installation and to support them during the tender process; then at the end of the project I had the chance to go to Ladakh and get hands-on with the final commissioning and site acceptance testing in September 2008! Ladakh has a harsh climate, with extremes of hot summers and freezing winters, which was one of the drivers of the project schedule. The treacherous road to Leh closes for winter, restricting the availability of building materials. Due to the cold winter temperatures all building work outside is very difficult, meaning that all work had to be completed before the bad weather set in.


A Sand Mandala, inspiration for the layout of classroom areas of the school. (image taken by Colonel Walden, from Wikipedia, CC BY-SA 3.0)

The Leh valley, where the school is situated, has an intermittent electricity supply, with a typical maximum of 4-6 hours of electricity per day (at the time of construction). Electricity was rationed across different districts and areas, which is not conducive to school lesson plans using computers or electrical equipment of any kind! The design of the Druk White Lotus School has been supported by Arup on a pro-bono basis since the inception of the project. The installation of an on-site micro-grid, with solar panels and battery storage, was designed to permit the site to operate autonomously, with a top-up from the grid supply when available.


The school is located in a stark and beautiful valley.

One of the main challenges of the project was to install a distributed generation and electricity storage system on top of an existing electrical infrastructure. As school buildings were constructed, each was added to the three phase low voltage electrical network via copper or aluminium cable. Each building had a local single phase distribution box (or fuseboard) to supply interior lighting and sockets. The site was also equipped with a very small back-up generator, feeding in to the local distribution via a break-before-make transfer switch. A further constraint for the design was the requirement to construct a modular, scalable system that would be able to grow with future development of the school, as new classroom buildings and accommodation blocks were added.


A block diagram for the system as installed, arrows denote energy flow.

The system architecture is shown above, with three single phase PV installations added to the three classroom buildings at the front of the mandala site layout. Sunny Boy PV and Sunny Island battery inverter systems from SMA were specified for the hardware installation. The angle and orientation of the PV installation was optimised using the freely available RETScreen software. Each was connected to a different electrical phase, providing an overall three phase balance for the site, via the existing distribution system. A new power house building was constructed to house battery storage and three single phase battery inverters with a common DC bus. The existing AC distribution system wiring was retained, with frequency modulation used for communication between the battery storage and distributed solar inverters, located approximately 400m away from the power house building.


The contractor team, one of the site supervisors and me in the battery house towards the end of the project, September 2008.

System functionality:

  • Self-contained micro-grid, capable of autonomous operation on PV supply
  • Phase-Phase energy transfer via DC bus for unbalanced loads
  • Energy storage via lead-acid solar batteries designed for deep discharge operation on a single DC bus
  • Ability to perform battery charging and operation from local generator (recommended for periodic full recharging of the battery system as a maintenance operation)
  • AC distribution frequency modulation used by SMA inverters for communication without the need for additional communications wiring, using a slight lowering of the micro-grid frequency to encourage PV supply, and a frequency rise to disconnect PV supply in the case of insufficient demand and/or a fully charged battery.
  • Potential to sell energy back to the grid, however this was disabled at commissioning due to the lack of a regulatory/legal framework in the local energy market.
  • Sunny Island inverters equipped with SD card slots providing minute by minute logging for easy remote analysis of the system performance.
  • 9kWp PV installation, in three modules of 3kWp per building.
  • Capability to add additional PV installations on future buildings.
  • Capability to add additional battery capacity as funding becomes available.
  • A new earthing point was installed for off-grid operation.



Construction of a new earthing point for the site

The site commissioning process was very challenging, as it was the first installation of its type for all those concerned (the contractor, the site foreman and myself, the design engineer). There were also significant differences, both in culture and in electrical installation safety standards, to be overcome before the system could be commissioned. It was also necessary to borrow the only three phase rotation meter in the valley from the local airport electrician in order to ensure the correct configuration of the three phase system. The only major issue with the commissioning came when the battery system was fully operational and charged, with the solar inverters failing to connect. Upon further investigation, it transpired that the solar inverters had been shipped with firmware settings for domestic installations in Germany, rather than the micro-grid firmware required in this installation. A laptop with the new software and a suitable communication interface had to be flown in to Leh in order to make the upgrade, but once completed in October 2008 the system performed as designed.


The Sunny Boy solar inverter installed within the vestibule space of the classroom buildings, complete with PV isolators, mains isolator and an energy meter to monitor power generation.


This was a wonderful project to work on back in 2008. The challenge of designing a micro-grid system for a self-contained school, building upon existing low voltage distribution infrastructure in a remote location was significant. Then the opportunity of getting hands-on for the project commissioning in such a unique environment was possibly a once in a lifetime opportunity. Having all the system performance data logged to SD card was also very helpful in supporting the installation when I returned to the UK. Receiving the call from site in October 2008 to hear that the system was working as expected after the firmware update was a real moment of both relief and excitement.

If you would like to know more about other aspects of the multi-award winning Druk White Lotus School project you can find additional details in this article in Ingenia magazine.


PV panels supported on wooden trusses which form part of the classroom building structures.

Post script: At the time I had the idea of filming various critical operations on the system and putting the videos on YouTube for future reference. This was really useful and something I would highly recommend for anyone doing this type of project! The videos are still online, you can view them here in this playlist.



Build your own CPU with RISC-V and a Lattice ICE40 FPGA

Update – It’s been a while since the workshop and unfortunately I deleted the VM, since I won’t be hosting any more workshops in the near future. I’ve updated the article with tips on building your own VM (or of course you could install it natively!), good luck!

I’m giving a workshop next week on how to build your own RISC-V CPU within a Lattice iCE40 series FPGA using the awesome Icestorm framework by Clifford Wolf. We need two toolchains here in order to create both the processor and the code to run on it, and we will build EVERYTHING here from source. Sorry – the VM I built has now expired, so I’m afraid you’ll need to rebuild one; here’s a great site to get you started.

Stage 1: Create a VM

The main issue with running up your own RISC-V cores is having the toolchain ready to go. So I created an Ubuntu VM, based on 18.04 minimal and running within Oracle VirtualBox. I chose minimal because it’s lightweight, small and will have a reasonably manageable footprint when putting the VM on a USB stick, and because Ubuntu is my native Linux distro. The choice of VirtualBox was down to its cross-platform compatibility and the fact that it’s free to use. The VM is configured for 4Gb RAM and 30Gb HDD, AMD64 CPU, with nothing fancy on top – I would recommend a similar approach when making your own VM. In order to facilitate cross platform compatibility and carrying it round on a USB drive, I’ve also set the HDD to be split across 2Gb files, since some file systems have a restriction on the maximum size of a single file. The total size of the VM comes to >17Gb, so make sure you have plenty of hard drive free!

I created the username “risc” with the password “Lattice”, but when making your own you should make wise choices! I don’t generally advocate writing your username and password on a blog, but this was a special case while I had the VM available. Evidently don’t leave this VM running, or give it open ports to the outside world when you are using it! If I do rebuild the VM (this time I’ll use Ubuntu 20.04, because it’s new.), I’ll post a link here, but unfortunately that isn’t going to be for a while – since I don’t plan on running any DIY CPU workshops during the pandemic.

Stage 2: Configure the VM

Once the VM has been setup, it’ll need the icestorm toolchain installing in order to program the FPGA. This comprises a number of things in more or less the following order:


1. FTDI drivers from here. It’s a gzipped tar archive, so you’ll need to extract it and then follow the instructions for how to copy the driver files into your system directories as a super-user.

tar xfvz libftd2xx-x86_64-1.4.8.gz
cd release/build
sudo -s
cp libftd2xx.* /usr/local/lib
chmod 0755 /usr/local/lib/libftd2xx.so.1.4.8
ln -sf /usr/local/lib/libftd2xx.so.1.4.8 /usr/local/lib/libftd2xx.so
ldconfig

2. Packages to make everything work in Ubuntu (note I’ve added libeigen3-dev, not included on Clifford Wolf’s page, since I needed it):

sudo apt-get install build-essential clang bison flex libreadline-dev \
                     gawk tcl-dev libffi-dev git mercurial graphviz   \
                     xdot pkg-config python python3 libftdi-dev \
                     qt5-default python3-dev libboost-all-dev cmake libeigen3-dev

3. The Icestorm toolchain components from here:

4. A sample program to check we’ve got the FPGA compilation working, before we move to RISC-V compilation, from here. I cloned this code into a directory called flash, compiled it and uploaded it to my device to make it flash the leds in sequence. It worked first time, after I connected the USB device to the VM.

It’s worth noting at this point that I haven’t installed Icarus Verilog, since it isn’t strictly required to compile to the target, but would be needed if we wanted to test things! If I get time I’ll add it to the VM. Thanks to Oliver for pointing out this nice FPGA toolchain installation script.

UPDATE: I just added Icarus Verilog (V10) built from source and the Icicle repo for some better Upduino support. The Icicle serial output doesn’t seem to be working when flashed to target, but it does make the LEDs light up on the iCE40HX8K and Upduino boards. I also added minicom and picocom for serial monitoring.

Stage 3: RISC-V

Now that we’ve got a working toolchain for the FPGA, we need to build a working RISC-V compiler in order to have code to run on our chip. I installed Clifford Wolf’s Picorv32 from here. This basically takes you to the RISC-V mainline toolchain and picks out a particular revision, with only the compiler required for the smaller/less capable cores. When compiling it for the first time, I was stuck for a few hours on the ../configure step, trying to ensure that the /opt/riscv32i toolset is used (the other toolsets are not compatible with the iCE40HX8K FPGA due to size restrictions), but eventually figured it out.

What we are actually building in this stage is an add-on for GCC that will enable us to compile binaries for execution on our soon-to-be-created RISC-V core. There’s no point having a CPU if we can’t also compile code for it from a high-level language.


RISC-V implementation on iCE40-HX8K, image taken from the PicoSoC presentation given by Tim Edwards, Mohamed Kassem and Clifford Wolf at the 7th RISC-V Workshop, November 2017.

I followed the instructions in the picorv32 repo as follows:

git clone picorv32

sudo apt-get install autoconf automake autotools-dev curl libmpc-dev \
        libmpfr-dev libgmp-dev gawk build-essential bison flex texinfo \
    gperf libtool patchutils bc zlib1g-dev git libexpat1-dev

sudo mkdir /opt/riscv32i
sudo chown $USER /opt/riscv32i

git clone riscv-gnu-toolchain-rv32i
cd riscv-gnu-toolchain-rv32i
git checkout 411d134
git submodule update --init --recursive

mkdir build; cd build
../configure --with-arch=rv32i --prefix=/opt/riscv32i
make -j$(nproc)

Stage 4: Hardware

Now the software is all ready to go, we just need a hardware platform to run it on.

This tutorial is designed to run on one of the following demo boards:


The Lattice iCE40-HX8K evaluation board, available from Digikey.


The UPduino, available from Gnarly Grey.


And the BlackIce II designed by a couple of awesome guys in the UK!

Stage 5: Compile and upload

Everything is very nearly finished – except it doesn’t work just yet. We also need to install the VirtualBox Extension Pack in order to access USB 2.0 devices. We can download it here and add it via the GUI.

Then we need to ensure that we can find the compiler for RISC-V, which we can do by adding it to the PATH environment variable:

export PATH="$PATH":/opt/riscv32i/bin

If you fail to do this, you’ll get a tonne of “riscv32-unknown-elf-gcc: command not found” errors until you correct it. Make sure you don’t wipe out the PATH variable in the process!
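A quick sanity check – a sketch assuming the /opt/riscv32i install prefix used above – confirms the directory really made it onto the PATH before you start a build:

```shell
# Append the toolchain directory without clobbering the existing PATH,
# then check that it is really present before trying to compile anything.
export PATH="$PATH:/opt/riscv32i/bin"
case ":$PATH:" in
  *":/opt/riscv32i/bin:"*) echo "riscv32i toolchain dir is on PATH" ;;
  *)                       echo "PATH update failed" ;;
esac
```

You can also run `command -v riscv32-unknown-elf-gcc` afterwards to confirm the shell can actually locate the cross-compiler binary itself.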

And just to make sure we can access the device, let’s add our user to the dialout group:

sudo usermod -a -G dialout risc

With all of this in the bag, we need to ensure that our VM is connected to the USB hardware, which we can do via the menu or the USB attachment icon in the bottom right of our VM window. We should enable the Lattice device, and then we can complete our build and upload with the following commands:

cd /picorv32/picosoc
make hx8kprog

If everything works as it should, you’ll see various messages about compilation and programming of the device, followed by “VERIFY OK cdone: high… Bye.”. The LEDs on your board will blink about once a second. Note that there are several alternative things to try in the picosoc directory, all without yet writing your own code or core; they are detailed in the makefile there, which is definitely worth reading.

It’s worth noting that I couldn’t get my permissions quite right, so I had to cheat a little for access to the USB device to do the final upload, calling make hx8kprog with sudo. Not the best technique, but it worked!
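A cleaner fix than sudo is usually a udev rule granting your group access to the programmer. The sketch below assumes the board’s FTDI FT2232H interface (vendor 0403, product 6010) – verify the IDs on your own board with lsusb before using it:

```conf
# /etc/udev/rules.d/53-lattice-ftdi.rules (illustrative; IDs assume an
# FTDI FT2232H programmer – check yours with lsusb)
ACTION=="add", SUBSYSTEM=="usb", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6010", MODE="0660", GROUP="dialout"
```

After adding the rule, run sudo udevadm control --reload-rules, re-plug the board, and make hx8kprog should then work without sudo (assuming your user is in the dialout group).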

Conclusion – Testing the CPU

At this point I chose to unplug my FPGA dev board from the virtual host and hook it up to a real one (with the drivers installed of course!) to check that I’d actually built the core and it was working properly. I launched my trusty Arduino IDE and fired up the serial console, baud rate 115,200bps on the correct COM port and was greeted with this:


We have now built a working RISC-V core on our FPGA board and programmed it with some compiled code. I’d like to thank the awesome Clifford Wolf for basically making it all possible (he wrote the core we used to implement RISC-V and the ICESTORM toolchain we used to generate and upload our bitstream) and RMWJones for posting some very useful scripts that helped me along the way.



MQTT and Round Robin DDNS

Last weekend I set up a round-robin DDNS system for internet connected Cosmic Pi devices. Here’s how I did it, but first let’s find out why this is useful.

For Cosmic Pi we’re trying to build a scalable, global infrastructure for cosmic ray detectors to connect to. We don’t want to reinvent anything, just to use a suitable existing technology. This is where MQTT comes in: it’s perfect for our use case. With one or two brokers, we can connect many clients and everyone can share the data. We could even use topics for remote configuration messages, but that is for the future. For reference, our MQTT journey started with RabbitMQ and we’re using Mosquitto these days.

As a figure of merit, a Mosquitto MQTT broker can reportedly cope with around 1000 publishers or subscribers. I read this on the internet, so I’ve no idea how true it is. Either way, 1000 isn’t a large number for a global cosmic ray telescope, so we need to add more capacity.

We also don’t want a single point of failure. The broker has been running on my home server for the last year, which is great for testing but not exactly high-reliability territory. So figuring out a way to have two parallel, synchronised brokers would be an ideal solution to both scalability and reliability: enter round robin.

I’ve been using DDNS (Dynamic DNS, i.e. the Dynamic Domain Name System) for a while now to get remote access to my home servers. For those unfamiliar with the service, it allows you to associate a domain name with a dynamic (IPv4) address and update the record every time your IP changes, so your chosen hostname always points to your home IP, even if your ISP changes it. Of course a local client of some type is needed to ensure changes get propagated to the DDNS servers, but fortunately many routers (especially those running DD-WRT) support this out of the box.
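If your router can’t do the updates itself, a small client like ddclient can. The snippet below is a sketch of a typical ddclient.conf for a dyndns2-style provider; the server, login, password and hostname are placeholders you would replace with your own provider’s values:

```conf
# /etc/ddclient.conf (illustrative values only)
protocol=dyndns2          # the update protocol most DDNS providers speak   # your provider's update endpoint
use=web                   # discover the public IP via a web service,
                          # necessary when the host sits behind NAT
login=your-username
password='your-update-password'    # the hostname to keep pointed at this machine
```

The client polls periodically, notices when the public IP has changed, and pushes the new address to the provider so the hostname keeps following your connection.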

Originally the Cosmic Pi MQTT service used a single DDNS hostname linked to one IP. To implement a round robin using the excellent free service provided by our DDNS host, all you have to do is create a second sub-domain with an identical name and link it to a different IP. Round robin requests are handled automatically, so the first request gets directed to IP address A, the second request to IP address B, the third to IP address A, and so on. Of course fail-over in this scenario relies on the client detecting a server failure and requesting a fresh connection, but this is not an unreasonable expectation. If we lose a few cosmic rays, it’s not the end of the world as long as the service stays available.
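The alternation is easy to picture with a toy resolver. Here two documentation IP addresses stand in for servers A and B, and successive “lookups” just cycle between them – which is all round robin really does (a real DNS server achieves it by rotating the order of the A records it returns):

```shell
# Minimal illustration of round-robin resolution: each request gets the
# next address in the rotation (A, B, A, B, ...). IPs are placeholders.
SERVER_A=""   # stand-in for broker A's public IP
SERVER_B=""   # stand-in for broker B's public IP
REQ=0
resolve() {
    # even-numbered requests go to A, odd-numbered ones to B
    if [ $((REQ % 2)) -eq 0 ]; then RESULT="$SERVER_A"; else RESULT="$SERVER_B"; fi
    REQ=$((REQ + 1))
}
resolve; echo "request 1 -> $RESULT"
resolve; echo "request 2 -> $RESULT"
resolve; echo "request 3 -> $RESULT"
```

Requests 1 and 3 land on server A and request 2 on server B, so load spreads evenly as long as both servers are up; fail-over only happens when a client gives up on a dead address and asks again.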

Once two MQTT brokers are running on each of the IP addresses, only one more issue remains. The MQTT brokers must be linked, otherwise what happens on server A stays on server A and the same for server B. Fortunately Mosquitto makes bridging easy, but we have one more hurdle to overcome first.

In order to bridge the two MQTT brokers, one must be able to refer to the other by a unique DNS name (the shared round-robin name can’t be used, as it alternately points to both servers). We could do a little bit of script magic to ask for both IP addresses and then work out which one is the remote server, but amongst other things both servers are behind NAT (network address translation), so a third service would be required to resolve the internet-facing IP for the local host. The solution I implemented was simply to have a second, unique DDNS name for each server, which is kinda handy anyway if you want SSH access for remote administration. Under this configuration, we can explicitly bridge server A to server B (or vice versa) without introducing any dependencies on additional services.

We now need to add a bridge command to server B so that messages it receives are shared with server A, and so that it also subscribes to the messages being received on server A. In this way both servers have a complete copy of the data at any time. If server A goes down, the clients will eventually (depending upon the timeout; we’re looking at milliseconds to minutes) request an update from the DNS server and end up connected to server B. It doesn’t matter too much what happens to server A and the bridging, because server B will now be getting all the messages anyway.
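In Mosquitto the bridge lives in the broker’s configuration file. A minimal sketch of what server B’s bridge section might look like, assuming illustrative hostnames (server-a.example standing in for server A’s unique DDNS name):

```conf
# Bridge section in server B's mosquitto.conf (hostnames illustrative)
connection cosmicpi-bridge
address server-a.example:1883
# Mirror every topic in both directions at QoS 0, so each broker ends
# up holding a complete copy of all messages.
topic # both 0
```

With both brokers running, a message published to either one should appear on a subscriber attached to the other, which you can check with something like mosquitto_sub -h server-b.example -t '#' -v.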

This method has limits: if all users are switched to just one server, it must be able to cope with the demand on its own, i.e. we are probably fine for 500 users on each server. I’m also not sure whether it’s possible to bridge more than two servers this way; I couldn’t find any examples. If we end up with a lot of Cosmic Pi units out in the wild, we will find out!

If you want to try the MQTT service running for Cosmic Pi, you just need to get an MQTT client, connect to the broker on port 1883, and then subscribe to # for all the messages!

Open Auto (an adventure in open source, open hardware community car sharing)


Just over a year ago, I attended a Geneva Smart City hackathon at the HP campus in Meyrin. I confess that my primary motivation for attending was to see inside the HP office, because I was curious! I brought with me a couple of Dragino LoRa expansion boards for Raspberry Pi and I pitched the project of creating an easy ‘how to’ guide for people looking to set up their own LoRa networks, mostly because it’s something I’ve wanted to do for a while and haven’t gotten round to yet – and because I think it would be very useful for many open source, open hardware, smart city projects. It turns out that The Things Network has actually made some great strides in this area since (and probably even before) – worth checking out for your LoRa projects. Nobody was interested in my pitch, so I went and joined a team of people with a problem who were looking for a solution.


At the time the project was called Open Auto, but it’s now morphed into Comobilis. The aim of the project is simple: to build an open infrastructure for community car sharing. This particular post is about a hardware platform for community-centric car sharing. It’s not a box designed to allow you to share your car with random strangers, like Airbnb for cars, because I’m not convinced that’s such a great idea.

Time for some hardware!


If you’re going to share cars, you will need hardware. The main driver is a way to open and close the doors securely, which means hardware and software. You can put the keys inside the car, you can put them behind the sun visor, you can chain them to the seat if you want. What is important is making sure that the car doesn’t need ‘undue’ modification – we’re not car hacking here (yet).

The simplest solution, which works for most modern cars, is to borrow the RF transceiver from the keyfob. With a little bit of soldering, this can be wired up to a couple of relays to simulate a human pressing the open and close buttons. This provides a near universal interface for the car, which is as physically secure as the original key transmitter. In the future, it should also be possible to open and close the doors via CANbus, however a lot of manufacturers keep this information secret, as you could use it to steal cars rather easily.

The other required hardware items are: a GPS to locate the car, and potentially track it in real time (or check if anyone is speeding – certainly possible, if not yet implemented); an RFID reader to allow the use of pre-registered tokens to trigger the door opening and closing; and an accelerometer, to allow the driver’s behaviour to be monitored (it’s a great way to see if they are Driving Miss Daisy or driving like a bat out of hell). To complete the system, we need some communications devices: a CANbus transceiver, which can be used to read and write to the car bus (if we feel the need to) via the OBD-II port that is mandatory on all modern cars sold in Europe; a way of powering everything from the car’s 12V battery – conveniently also provided by the OBD-II port; and a means of wireless communication with the outside world.

After briefly flirting with the idea of an Orange Pi Zero and a 3G dongle, we settled on the Particle IoT ecosystem, specifically the Electron module. This provides a 3G modem and an STM32 microcontroller, complete with a cellular data contract, a management platform and Over-The-Air firmware upgrade capability (the last part is very useful: it means you don’t need to plug into each car to fix bugs or add new features).

Designing a board

All this hardware could be integrated, more or less onto a single printed circuit board. Here is the first attempt, complete with some assembly comments from Seeed Studio who built it:

open auto v1

Version 1 being assembled.

The downside of carrying round a Linux box in your car is that it needs regular patches to keep it secure. These can run to tens of megabytes a month, which could be expensive on a 3G data contract. I was also concerned about the stability of the system, given the high temperatures that occur inside cars during the summer (such as today, where it’s 30 degrees outside here in Geneva and probably over 40 inside every car in the car park). The Orange Pi Zero is very cheap, however there have been some reports of thermal issues – so putting one inside a car and relying upon it to open the doors is probably not the smartest move. The PCB also had an issue with the pinout of the 5V to 3.3V level converter, which was pinned out incorrectly as I didn’t read the data sheet thoroughly enough. You can find the hardware design files (Eagle) for the first prototype here.

After some testing with this board (and also realising what a pain it was to prep OS cards for the Orange Pi Zeros), we decided to take things a stage further with a new design. The major flaw in the initial design was the use of an on-board GPS antenna, which failed to acquire any signals – making it rather useless. You will see that I included a footprint for an external GPS antenna on the V1.2 board as a reaction to this.

Re-designing a board

openauto design


Version 1.2 being assembled.

The new board was designed around the Electron and Photon modules from Particle, inspired by some work we saw online. The use of the Electron allowed us to power most of the board from its lithium-ion battery; however, in the first production run the relays (for opening the doors) were still driven by 5V. I’ve since found a 3.3V relay, and the board is equipped with a jumper to switch the supply, so future versions won’t need the car to provide +12V in order to open the doors.

The Bill Of Materials (BOM) for the PCB includes everything needed to assemble it, which is a relatively simple two-sided design. There are some optional components not included, such as the SMA jack for an external GPS antenna, which you may want to add if this is something you are looking for. In addition to the board, you will need a CR1220 lithium cell to power the Real-Time Clock and an RFID reader module (it was designed with the 5V RDM6300 or the 3.3V Seeed SKU 113990041 module in mind; several equivalent 3.3V 125kHz UART RFID readers are also available from places like Sparkfun or AliExpress).

I initially started using the board with the key fob of my own car (an old but reliable Peugeot 307), which happens to carry the key battery on the key PCB. It turns out that most other cars don’t do this, using the plastic key casing to accommodate the battery instead – so for these it’s possible to jumper the board’s 3V non-rechargeable lithium cell to supply the key.

For the next version, since everything was more contained, I designed a case to go around it. The case material is laser-cut acrylic, since it was cheap and allows you to see inside the box, which I think is cool!


An assembled case with the brown paper still attached.

All that was left was to put it all together in the box and start writing some software. Oh, and of course test the circuit board! It worked as expected, with only one small snag in the first production run – a missing connection between the I2C SDA and SCL lines from the Particle boards to the RTC and accelerometer.


The finished article! Version 1.2.

The picture above shows the completed box, fully assembled with an Electron module and the RFID card reader module plus antenna. This configuration has since been modified to rotate the loop of the RFID antenna away from the GPS, as it was causing interference. Moving the antenna out of the box entirely is on the to-do list for future versions.

The design files for the Version 1.2 unit are available here, complete with the case. The software remains a work-in-progress. The hardware is fully open, licensed under the CERN Open Hardware License V1.2. You can find a relatively recent sketch here. Of course this is only part of the solution to community car sharing: it’s necessary to have a back-end which can host reservations, billing and user information, co-ordinate with the vehicles to make sure they are in the right place at the right time, and allow a way to sign up new users. The Comobilis team are using Odoo, an open source ERP and PLM platform, to build free extensions to connect to the hardware and provide the necessary software and interfaces.

Comobilis Logo final single_with tagline

If you would like more information on this project, please drop me a line or check out the Comobilis website, where you can find out about starting your own car sharing co-operative. The Comobilis initiative is initially focused on Switzerland, but the hardware will work anywhere there is a 3G signal.

At present I’m working on adding more functionality to the firmware, completing the way reservations are handled and retrieved to allow people to actually use the vehicles, as well as things like GPS position logging, acceleration and clock functions. In the near future I hope to look at using an ESP32 + LoRa combination as an alternative to the Particle modules, taking it closer to the project that I originally pitched! I have also started a company to sell these boxes (and some other exciting open hardware electronics) with some friends, but that will have to be the subject of another post.