My friend Mike

Mike having a good time at the Foundry, circa 2010 just before it was demolished.

Last week, one of my closest childhood (or at least teenage-hood) friends sadly passed away. He’s the first close friend in my generation to die. It seems fitting to share a few memories of our good times together. It would also be remiss of me not to name-check my other close school friends: Xuan, Chirag and Nikhil, who appear throughout these memories too. We were a gang of 5, and we had some great times together. Reflecting on his life, I’m aware the Mike I knew was only a small fraction of the whole guy. Each of us who knew him has our own set of treasured memories and favourite moments. These are some of mine.

I first met Mike at the John Lyon School, circa 1996 when I moved up from prep school. However, I didn’t really get to know him until 1998, when we ended up in the same Young Enterprise company, Mineral. People don’t really start businesses to make friends for life. In fact, it seems a great way to alienate people! Our start-up was different. We didn’t make any money to speak of but found something much more valuable: lifelong friendship.

Mike modelling our Mineral Young Enterprise T-Shirts, 1999/2000

We had quite a few adventures over the last 24 years. I still vividly remember a trip to Cambridge with a couple of other friends to visit a mutual friend. I was the designated driver. We set off incredibly late (my fault as usual) from London to collect Mike in Nottingham before heading to Cambridge. Cassette tapes were still a viable format then, and I know Mike (and the others) were very unimpressed with my musical selection for our trip. I think we arrived at Mike’s student house in Notts at about midnight, or maybe even later. It was still the relatively early days of satellite navigation, meaning the unit I’d borrowed from my parents for the trip was slow and I often ended up taking wrong turns, requiring a U-turn to get us back on track. After about the fifth incident this became a running joke that lasted the entire trip. The next day we headed on to Cambridge. I had learned about a play being performed by one of our mutual school acquaintances (I can’t remember if he was acting, or directing, or both) in the city that evening. For some reason, I really really wanted to go see it, but finding a parking place took about 40 minutes. Mike and our other friends were not bothered, so they laughed a lot when I ran off into town to try and find the theatre as fast as I could, only to get lost and arrive after the show had started and the doors were closed. The guy on the door asked me to leave. Despite not running through town as fast as they could, Mike and the guys arrived just as I was being ejected from the theatre, and everyone found it even more hilarious. We crashed at our friend Nikhil’s college room that night. I think there were at least 5 or 6 of us in total. We had to be quiet so the cleaners didn’t throw us out! The trip ended prematurely for me the next day, as I had to return to university and sort out some business. But it was a memorable trip. It’s not the destination, it’s the journey and the people you travel with.

A night out in North London, circa 2005. I think this was taken at the Finnish bar that used to be just off Charlotte Street.

The Forum. Having seen each other every day at school, how are a bunch of guys at the turn of the millennium supposed to keep in touch when scattered to the four corners of the country by university? (Actually we weren’t that scattered: Mike was in Nottingham, Chirag was in Warwick and the rest of us were still in London, but…). We used to have our own message board. It was kind of like a blog but only for the (avid) consumption of our little friend group. We would write about our daily lives, misadventures and what we were watching/listening to at the time. It was fundamental to our keeping in touch, and only when everyone had returned to North West London in about 2006 did we stop using it. I still have a good chunk of the archived posts, which I very occasionally re-read when I want to remember these days. Mike was a regular poster and would enjoy sharing everything from his gig attendance schedule and preferred lunch choices in the Harrow St. Annes centre food court, to comments on the performance of Liverpool FC.

His love of music and photography. Mike loved music. This was around the time the music chain Fopp was breaking out, and a Fopp store was always high on Mike’s list of places to go in whichever city he was visiting. I’d listen with envy as Mike talked about his trips to gigs. I’ve never been into live music in the same way, but it was fun to live vicariously through his musical tastes, which were expansive and took him everywhere from Glasto to concerts in foreign parts. And then there was photography. During his time in S.E. Asia and Australia Mike explored looking at the world through a camera lens. I remember one of his best shots being a very ornate temple roof photographed upside down in a puddle with some raindrops. When I see the world upside down, reflected in a puddle, I think of Mike. At some point circa 2006/7 we started making a crew outing to the Wildlife Photographer of the Year exhibition at the Natural History Museum, a regular fixture in January/February, which Chirag reminded me of as we were reminiscing about the good times with Mike last week. One year we had adjourned to the Devonshire Arms for a few post-exhibition pints, and for some reason I started talking about my TV viewing habits, specifically how I liked to watch things (documentaries, not drama) at 1.3x speed. This, I shared, was a fantastic feature of the new set top box I’d just persuaded my parents to buy. Mike thought this was HILARIOUS, almost to the point of wetting himself. The rest of the day was spent rinsing me out for watching things at 1.3x. What are friends for? In the 18 years since, every time I’ve mentioned anything TV/media/podcast related, Mike ALWAYS inquired if I watched it at 1.3x speed and cracked up.

Gladiator. It was the summer of 2001. We had finished school and had yet to start university or our gap years. Stella Artois sponsored an open air cinema near Mile End. We all decided to go there and see Gladiator, one of the most epic movies possibly ever made (without hyperbole). Mike loved it. We all did, but it really was one of his favourite movies of all time. I think we got in a few pints of the sponsor’s beer and enjoyed the very long movie on a balmy summer’s evening before sauntering back to Liverpool Street to get the tube home. For me, this marked the start of our exploration of London as the playground of our late teens/early 20’s and beyond. It was also our first shared outdoor cinema, something we picked up again after university, when we started having movie nights in the various warehouses where Nikhil was a ‘guardian’ tenant. However, nothing ever topped the excitement of seeing Gladiator in the great outdoors.

Monopoly and Risk games. We somehow fell into the tradition of playing a board game in the gap between Christmas and New Year. I don’t know when it started exactly, probably about 2002/2003? By 2004 this was already an established tradition, with The Book – a book about the history of Monopoly, the title of which escapes me – being the trophy prize for the winner of whatever game we played. If you won, you could write your name in the book and keep it until the next year. I think the last time we actually played a board game together was in 2017. These sessions typically started late (10 pm) and finished even later (2, 3, 4 am). This was a priceless part of our shared youth. We’d occasionally play some Catan during the year, but it was all about the annual epic session. In more recent years, we also supplemented the annual game with an online Risk game called WarLight; the turn-based play would typically last at least a month and involve several thousand WhatsApp messages covering cutthroat alliances, merciless betrayals and fresh alliances.

Sitting on the sofa at Mike’s house. This was something we did a lot of in our early-mid 20’s. Mike’s sofa was – due to a variety of factors – the centre of our social world. This was a result of who could drive, who had a car, who lived where in the metropolis of Harrow, and of course whose parents were away. I can’t remember how many countless Friday and Saturday nights we spent in Mike’s lounge, sitting on his sofa watching DVD’s. Our favourites were typically action movies from the 80’s. The place of honour goes to Commando. We also watched the Big Lebowski at least once. I remember Mike enjoying football games on his PS2, but I don’t think we ever played together. We would also use it as a base for eating Jain take-away from a restaurant that might have been on Belmont Circle, I can’t remember. Mike’s sofa was also our launch pad for countless adventures into the wilds of North West London, such as late night trips to the 24/7 Krispy Kreme which opened in the mid 2000’s in either Colindale or Edmonton, and countless pubs and bars which have probably all been closed or rebranded by now. We were big fans of Dust bar on Clerkenwell road, sadly now long since converted into something else.

Squeeze! Rolling in the back of an SUV, Cape Cod 2016.

Our biggest shared adventure. I was originally planning to get married in the US in August 2016, to my then fiancée. It wasn’t going well and we decided to postpone the big day, which in the end never happened (it was for the best). Of course this wasn’t until after Mike and the guys had already bought plane tickets! We ended up on holiday for a week and a bit in Boston, and we rented an AirBnB on Cape Cod. I think this was the only time I had a ‘proper’ holiday with Mike? It was a fantastic trip. We explored historic Boston, had some less than exceptional cannoli from Mike’s Pastry, and spent a super chilled few days on Cape Cod. We watched the whales, swam in freshwater ponds and ate BBQ fish at home. It was part stag do, part personal existential crisis for me. Mike was there to share in it all. He supported me in my time of need and we enjoyed the good times we had along the way.

Having a great time. Mike enjoying the whale watching with Nikhil, Chirag and Xuan. Cape Cod, 2016.

The last time I saw Mike was Christmas time in 2019. I was back in the UK. We got the crew together for dinner and all had a good time. With the pandemic and family life here in Geneva, I never made it back to see him again. When his illness struck, we would exchange messages and voice notes about everything, from the minutiae of daily life to the feeling of icy cold rain as it drips down your face. As guys, we don’t often talk about our love for one another, but Mike never hesitated. He was my friend and I’ll miss him.


How to build a supercomputer (for <$1Bn)

Disclaimer: It’s been a while since I worked on an active data centre project (6, 7 years? Which is an eternity in such a fast moving field), so there are probably areas below where industry practice has moved on. If you spot any errors or questionable assumptions please let me know and I’ll happily update my blog!

The other morning I woke up and read some news about how the UK PM is splashing $100M on ‘UK AI Supermodels’. It’s a good article, and it talks about the previously announced plan to build a UK supercomputer (presumably to run said models). The ever impressive Ian Hogarth has been put in charge of the group developing the models, but there isn’t any detail on the hardware yet. Whether it all actually happens remains to be seen, but it got me thinking about how you could hypothetically build such a machine, and what it would be like as a project (spoiler, I’m part way through the excellent How Big Things Get Done by Bent Flyvbjerg and Dan Gardner). Anyone who’s been paying attention for the last decade or two should be well aware that IT projects can turn into giant fail whales, so there is clearly a high probability of failure from the outset.

The most interesting supercomputer I’ve seen recently is the Tesla Dojo. It is interesting in a number of ways, specifically:

  • Custom chips
  • Cheap/cost effective
  • AI focused

If the objective is to be a UK ‘homemade’ supercomputer, the main problem will be chip fabrication. Wikipedia has a long list of fabs worldwide, and unless I’m grossly mistaken there are no cutting edge ones in the UK, or even Europe. Cutting edge fabs today are <10nm, which limits you to TSMC (Taiwan), Samsung (South Korea) or Intel (USA). The Tesla Dojo was apparently produced by TSMC on their 7nm process. Could the UK use some of the money from this project to upgrade an existing fab to 7nm? No way. The typical cost of building a new fab is in the tens of billions, so even a giant supercomputer project like this doesn’t get close.
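To put that mismatch in numbers, here’s a quick back-of-envelope sketch. Both figures below are rough public ballparks I’ve assumed for illustration, not quotes:

```python
# Back-of-envelope: the announced programme budget versus the reported
# cost of one leading-edge fab. Both numbers are rough assumptions,
# used only to illustrate the scale mismatch.
programme_budget_usd = 100e6      # ~$100M announced for the UK AI effort
leading_edge_fab_usd = 20e9       # new <10nm fabs are commonly cited at $15-20B+

fraction = programme_budget_usd / leading_edge_fab_usd
print(f"The entire programme budget is {fraction:.1%} of a single fab")
```

Half a percent of one fab: whatever this project builds, it won’t be a domestic production line.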

Regular readers will know that I like to abuse the term ‘Full-Stack’, and this post will be no exception. In this case the bottom of the stack is quite literally the dirt under a data centre. The top, as always, is a bunch of software that interfaces with humans and other machines.

If the UK can’t bake its own chips, what is the next best thing?

NVIDIA is the current leader in AI hardware by some margin. They are a licensee of ARM, a UK-centric (and once UK-listed) CPU design company. One approach would be to throw money at NVIDIA for their hardware, put it in racks and call it a UK supercomputer, and if the wafers have ARM cores on them too, call it a win. It might also distract from the fact that ARM are planning to list in the US when they de-merge from SoftBank.

The approach taken by Tesla was to use (more or less) RISC-V, a relatively new architecture that is based on an open standard and increasingly the biggest competitor to ARM. This is a fundamentally different approach: more flexible, and requiring a bit more knowledge and experience on the part of the chip designer, but it looks like the way things are going. The business model for RISC-V chips is very similar to ARM’s: you can go to a company such as SiFive and get them to design you a custom core, which you then need to get fabricated. RISC-V is really easy to get started with (not that I’m going to be designing a supercomputer myself) – you can check out this workshop I ran a few years ago about building your own chip on an FPGA. I recently read a fantastic article about Tenstorrent, who are aiming to be a specialist AI hardware house using the open RISC-V architecture as a base. If successful, this approach could be the x86 of AI.

Another option would be to throw large amounts of money at one of the small number of specialist AI silicon start-ups. Cerebras in the US have recently unveiled their first commercial installation, and the UK has start-ups of its own in this space. This has some advantages (potentially domestic IP, ‘wafer scale’ deployment) and some disadvantages (custom software tooling required to run your models on it). Technically, the fundamentals are very similar to a RISC-V approach, except that the platform is entirely proprietary. There is a non-negligible risk that such a system would become obsolete in a VHS/Betamax-style format war if more players (national and corporate) go down the RISC-V route, which seems likely. Of course the VCR situation isn’t exactly applicable, as it will probably always be possible to write a software interposer layer to run whatever the latest open RISC-V platform code is on a fully proprietary architecture, but this is likely to mean layers upon layers, which is a bad engineering idea.

What should the UK do? Given the success of ARM, it would be tempting to throw money at them and/or NVIDIA in some combination. However, if the objective is to boost domestic industry and build a network of skilled individuals who can design cutting edge AI hardware, a RISC-V based approach seems like the right call today.

What else do you need for a supercomputer?

All computers need a case, and for a supercomputer this means a data centre, or part of one, filled with racks. Though I’ve never actually built a whole one from scratch myself (yet), I’ve worked on a few designs over the years. The fundamental requirements are electricity and data connectivity (fibre); beyond that it will need cooling and some logistics (to move the hardware in, and to upgrade it). A well designed data centre can last a long time, much longer than the hardware inside it.

One of the most interesting trends in the data centre world over the last few years has been the move to DC power distribution. Google have their own architecture that runs at 48V. There are significant savings to be made by not having each computer equipped with two (or more) redundant AC/DC converters, both in terms of money and energy.
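As a sketch of where those savings come from, compare the chained conversion efficiencies of the two approaches. All the percentages below are illustrative assumptions, not measured figures:

```python
# Sketch of where DC distribution savings come from: fewer conversion
# stages between the grid and the chip. All efficiencies are assumed.
ups_double_conversion = 0.92   # assumed double-conversion UPS efficiency
server_ac_dc_psu = 0.90        # assumed per-server AC/DC PSU efficiency
ac_chain = ups_double_conversion * server_ac_dc_psu

central_rectifier = 0.97       # assumed centralised AC/DC rectifier efficiency
dc_dc_stage = 0.95             # assumed 48V-to-board DC/DC efficiency
dc_chain = central_rectifier * dc_dc_stage

it_load_kw = 2000              # a Dojo-scale 2 MW IT load
saving_kw = it_load_kw / ac_chain - it_load_kw / dc_chain
print(f"AC chain {ac_chain:.1%}, DC chain {dc_chain:.1%}, "
      f"~{saving_kw:.0f} kW less input power at 2 MW")
```

With these assumed figures the DC route saves well over 200 kW of continuous input power at a 2 MW IT load, which is why the hyperscalers bothered.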

Some design questions to answer

Cooling – water cooling is going to be necessary at these energy densities. It gives the opportunity for heat rejection/recovery that could, say, heat a swimming pool. Would anyone like a new leisure centre?
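For a sense of scale on the heat recovery idea, a rough physics sketch. Every input here is an assumption chosen for illustration:

```python
# How far would 2 MW of rejected heat go towards warming a pool?
heat_rejected_kw = 2000                 # assume nearly all 2 MW ends up as heat
energy_per_day_j = heat_rejected_kw * 1000 * 24 * 3600

pool_volume_m3 = 400                    # roughly a 25 m public pool (assumed)
water_mass_kg = pool_volume_m3 * 1000   # 1000 kg per cubic metre
specific_heat = 4186                    # J/(kg*K) for water

temp_rise_per_day_k = energy_per_day_j / (water_mass_kg * specific_heat)
print(f"~{temp_rise_per_day_k:.0f} K of pool temperature rise per day")
```

Only a fraction of that heat is recoverable at useful temperatures, but the order of magnitude suggests one machine could comfortably heat an entire leisure centre, not just the pool.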

How big should the box be? The minimum size of Dojo is 10 cabinets (19″ racks). You would probably want a few extra rows of racks and cooling, as we never quite know how the insides of our data centres will evolve. Plus some office space for human operators and a logistics bay or two.

Is this a standalone facility or part of another data centre/complex? Of course the cheapest option would be to find an unused corner in an existing data centre with spare power and cooling, though finding a couple of MW of spare power will be challenging even in the largest of facilities.

Overall project – How long will it take to build? I don’t know how long custom CPU core development takes, but building a data centre from scratch takes about 2-3 years, give or take a few months and the overhead of local planning processes.

What about staffing? How much of the budget should be allocated to:

  • Building the facility
  • Designing the chips
  • Building the chips
  • Software to run the chips (infrastructure)
  • Actual research to be done on the chips
  • Maintenance and upkeep for the lifetime of the facility
  • Electricity bills

The fundamental question is what will be the Total Cost of Ownership of such a project, including end of life disposal.
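To make that concrete, here’s a toy roll-up of how a lifetime TCO might be framed. Every number below is an assumption for illustration, not a costed estimate:

```python
# Toy lifetime total-cost-of-ownership roll-up. All inputs are assumed.
capex_gbp = {
    "facility_build": 150e6,
    "chip_design_and_fabrication": 300e6,
    "networking_and_racks": 50e6,
    "software_and_integration": 50e6,
}
lifetime_years = 5
it_load_mw = 2.0
pue = 1.2                         # assumed power usage effectiveness
electricity_gbp_per_mwh = 100.0   # assumed electricity price

annual_energy_mwh = it_load_mw * 8760 * pue
annual_opex_gbp = {
    "electricity": annual_energy_mwh * electricity_gbp_per_mwh,
    "staff_and_maintenance": 10e6,
}
decommissioning_gbp = 10e6        # end-of-life disposal allowance

tco_gbp = (sum(capex_gbp.values())
           + lifetime_years * sum(annual_opex_gbp.values())
           + decommissioning_gbp)
print(f"Electricity alone: £{annual_opex_gbp['electricity'] / 1e6:.1f}M/year")
print(f"{lifetime_years}-year TCO: £{tco_gbp / 1e6:.0f}M")
```

The striking thing with even made-up numbers is that the electricity bill is small next to the hardware, but the people and upkeep lines compound every year of the machine’s life.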

And of course we aren’t just building a computer to sit idle. Will it be a facility that people (researchers) can ask to use? One example of shared computing infrastructure is the CERN WLCG. A quick look around the WLCG wiki shows the many layers of management, governance and operation for such an undertaking, though clearly this AI supercomputer would be a little simpler, having a national rather than international character. Who will manage operation and access, and how will this be done? It will clearly require an organisation to oversee it, going beyond the scope of the project to build the machine in the first place. The broader question is: should a new organisation be created, or can an existing research/academic structure be co-opted to run this new behemoth?

All fascinating questions, we may get to see the answers if HM Treasury ever green lights a project, rather than just a pot of money.

Full stack: the lower levels in review

Starting from the bottom up, your data centre needs a physical location. Yes, even cloud infrastructure still lives somewhere here on earth for the time being. Your site needs:

  • Enough space for however many racks you wish to install; you can always go multistorey (planning permission dependent)
  • Electricity (Dojo consumes up to 2MW, for reference)
  • Permission to make noise, as 2+MW of cooling fans and chillers are noisy.
  • Water (You will need a reasonable amount to fill up, much more if you employ evaporative cooling)
  • Fibres, because you will need internet connectivity.
  • Road network connectivity, because you aren’t going to deliver anything or anyone by helicopter.
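Most of those requirements scale with the power budget. A rough sizing sketch, with all the density figures assumed for illustration:

```python
# Rack-count and floor-area sketch from a 2 MW power budget.
# Per-rack densities and the ~2.5 m^2-per-rack planning figure
# (footprint plus aisle allowance) are assumptions.
site_power_kw = 2000

for label, kw_per_rack in [("conventional air-cooled racks", 20),
                           ("dense liquid-cooled cabinets", 200)]:
    racks = site_power_kw / kw_per_rack
    floor_m2 = racks * 2.5
    print(f"{label}: {racks:.0f} racks, ~{floor_m2:.0f} m2 of white space")
```

Either way the white space is modest; the plant rooms, substation and logistics areas will dominate the footprint.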

Physical infrastructure:

  • A large concrete slab for everything to go on.
  • Walls, which can be either blockwork or concrete depending on structural loads, the number of storeys etc.
  • Roof, supported on either steel or concrete.
  • Drainage and rainwater connections
  • Your own High Voltage substation
  • A decent fence, because you don’t want just anyone coming to your data centre.

Electrical and Mechanical systems: Just like your desktop or laptop computer has a really annoying fan, and an equally annoying power cord, the same goes for your data centre. The general rule here is that if you need one of something to function, you actually need at least 2 of them to function reliably. You can multiply everything in the list below by 2N, or in some cases even 2N+1. The level of reliability offered by data centres is classified in Tiers, ranging from the lowest at 1 to the highest at 4, which generally includes at least 2 of everything and physical separation of systems, so that a fire or failure in one room doesn’t bring your entire operation down. As a practical minimum I’d suggest:

  • Two complete, independent electrical high voltage connections (if you are serious about reliability). High voltage in this domain would typically be anywhere from 11kV to 66kV.
  • Transformers to go from high voltage down to 230/400V (for Europe; 110/220V for the US and other parts of the world).
  • Electrical switchboards to turn your single 3200A circuit from a 2 MVA transformer into lots of smaller circuits.
  • UPS (uninterruptible power supply), typically this is comprised of some power electronics and a lot of batteries. It is critical if you have a power glitch, or need to keep everything running and cooled when changing from one of your supplies to the other.
  • Diesel generators, because even the most reliable electrical networks still fail from time to time and you wouldn’t want to lose any precious calculations.
  • Power Distribution Units (PDU’s), which are how you distribute power to the individual racks containing computers. This may also include AC/DC conversion if you go for a DC data centre design.
  • Chillers or cooling towers, which can turn hot into cold via the addition of (electrical) energy and water.
  • Chilled water, to distribute the cooling around your computer racks – NB this is the only thing that is generally installed as a unique item. There is almost never space for the installation of two full sets of cooling pipes in a data centre.
  • Air conditioning, depending upon your computer architecture you may be able to cool your racks with air (that is in turn cooled by the chilled water). This is viable up to a few kW of heat dissipation per rack, but to really go to high power densities water to the rack is essential. Air conditioning will be provided by Air Handling Units (AHU’s) and/or CRAC units (Computer Room Air Conditioners), which are the same but smaller and typically designed to throw cold air out into a rack room false floor. In this case we are almost certainly looking at more than 20kW of load per rack, at which point water cooling for the rack is necessary and the air conditioning is only for human comfort and auxiliary systems.
  • Fire detection, normally via ‘VESDA’ (Very Early Smoke Detection Apparatus) – a system of sampling pipes that continuously draw in air and can tell if it contains smoke. The sooner you detect a fire, the sooner you can take action to stop it and the lower the probability of major damage.
  • Sprinklers or other gas based fire suppression (depends on your jurisdiction and the assessed risk of fire spreading between parts of your data centre).
  • Lighting, because you need to have humans walking round the building to build it and install/maintain your computers.
  • Comfort heating, there will probably be an office or two for humans. Computers can generate their own heat.
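The value of 2N falls out of a one-line probability calculation: with two independent units, both must fail at the same time to cause an outage. The component availability figure below is an assumption:

```python
# With 2N, unavailabilities multiply (assuming independent failures).
component_availability = 0.99      # one unit, e.g. a single UPS string (assumed)

n_config = component_availability
two_n_config = 1 - (1 - component_availability) ** 2

hours_per_year = 8760
for label, a in [("N", n_config), ("2N", two_n_config)]:
    downtime = (1 - a) * hours_per_year
    print(f"{label}: {a:.4%} available, ~{downtime:.1f} h down per year")
```

In practice failures aren’t fully independent (shared rooms, shared controls), which is why the higher Tiers also demand physical separation, not just duplication.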

Network connectivity: Here we will need more than Wi-Fi or a tethered cell phone to provide access to the internet and/or our other data centres.

  • As a sensible minimum you will need two incoming optical fibres, ideally following diverse routes (i.e. not running in the same trench down the same road… because diggers have a habit of severing both simultaneously!).
  • You will be buying network connectivity from someone, unless you happen to also be your own Internet Service Provider (ISP). This means you will need to provide them dedicated space inside your data centre to house the equipment that connects to the fibre and gives you somewhere to plug in. This space is sometimes called an MMR (Meet Me Room) or comms room, and it may be a whole room, or a subdivided space with a series of cages, one per supplier (or per supplier connection point). A design tip: don’t put your MMR’s next to each other if it can be avoided.
  • Internal data connectivity. This can be provided by copper cables (Cat 6A/Cat 7) at up to 10 Gbit/s, but increasingly fibre is also used within the data centre to provide communications in/out of racks, and also between racks. The cost of fibre connectivity can be significantly higher than copper, as the splicing and termination equipment (opto-electronic transceivers) is more costly.
  • You will need lots of cable trays to run these optical fibres and ethernet cables between racks, rooms and the MMR’s. A design tip: put power cabling under the floor and data cabling suspended from the ceiling. You are very unlikely to change the power installation after you are up and running, but data cabling is more likely to require changing as the content of the data centre evolves.

This is a lot. What if I can’t do it all myself?

There are plenty of companies that specialise in end-to-end data centre construction as a discipline, such as Equinix. For those building supercomputers there is a niche within this market generally referred to as HPC, with a few large incumbents such as HP, Fujitsu and IBM. There are also many generalists who can put them together, to name a couple: Arup (my former employer) and AECOM (who I’ve worked with in the past).


Covid-19 is the new climate change

Somehow a global pandemic becomes a wedge issue

It’s a regular day here in Geneva. As I was walking to the shop to buy my sandwich, it occurred to me that the ongoing COVID-19 pandemic is becoming the new climate change, in a lot of ways:

  • It kills people, but relatively slowly and mostly far away (for now at least).
  • As an individual, the impact of your actions is almost imperceptible.
  • We have a box full of solutions and improvements, but they are mostly still in the box.
  • The debate is ridiculously polarised, even when the science is very clear.

Most of the time it seems like I’m the only person in a mask, whether in work meetings or in the shops. I’m also the guy who is in the process of installing solar panels on my house because of climate change, in an attempt to go Carbon Zero, so being an outlier isn’t new to me.

So the big question is, as a society, how did we get here and why?

It’s partly a story of failures by our institutions and our political leaders, and partly due to the particular virus we are facing. Of course, it’s a perfect storm, because only perfect storms are worth writing about. To start with the easy part: the virus spreads very rapidly via airborne transmission (and also via fomites, but airborne is clearly the main means of spread and superspread), and is transmitted before individuals become symptomatic. This is a significant difference from SARS and MERS, where the infectious phase starts in parallel with the symptoms. I recently watched a great talk from back in 2020 in which, using only things we already knew about the virus at the time, Arijit Chakravarty was able to forecast many of the key impacts of SARS-CoV-2:

  • Symptomatic quarantine is insufficient to stop the spread of the virus
  • Natural immunity is short lived
  • A modestly effective vaccine will not be enough
  • The cost of non-compliance with public health measures will be significant

Sure, there are other elements this list misses – but if we had used this approach, we would arguably be in a better position right now.

And politics.

This hasn’t aged well… infectious diseases are known to stop at passport control.

One of the stand-out moments in the early pandemic was then UK Prime Minister Boris Johnson’s pro-freedom speech at Greenwich on the 3rd February 2020. This was followed by disastrous dithering and delays in implementing a nationwide lockdown that arguably cost tens of thousands of British lives. Add to this the scientific prejudices against airborne transmission which were seized on by politicians around the world, meaning a fatal dose of bad advice was being dished out to the public at large.

It was soon followed by the ritualistic publication of daily death counts from every advanced nation and far more press conferences than anyone should have to endure in a lifetime. At some point the penny dropped and the message about masks started to get out, only to lead to the very shortages that the initial misguided public health messaging had been trying to avoid.

The cavalry came over the hill in 2021 with the arrival of mass vaccination in rich countries that had pre-purchased doses. Not all the vaccines were created equal, however: some had undesirable side-effects and others were not particularly effective at preventing severe disease. But most significantly, none of the mass market vaccines developed to date (July 2023) are effective at preventing transmission. This was another message that took far too long to diffuse via our politicians. I remember a work meeting in late 2021 where everyone was proudly talking about their vaccination status and suggesting maybe we could all remove our masks? (We didn’t.)

The US seemed to be a fertile battleground for political disagreements on the wearing of masks. I cannot understand why anyone would reject something relatively innocuous that has major health benefits for the individual and their friends and family. But that’s humans.

A litany of missed opportunities

Negative, for today.

Another tragedy of the pandemic is the failure of the West to release vaccine related IP to the world. I’ve never been much of a fan of Bill Gates, or those who consider him to be the locus of a global conspiracy to do all kinds of terrible things. However, I’m afraid his fingerprints are all over this particular failure.

If we take a step back, we have a highly contagious, rapidly mutating virus that has spread across the world. One possible response is to build a slew of national (or even regional) mRNA vaccine factories across the world on a common platform. This would enable very agile, localised production and distribution of whatever new vaccine we come up with (and it could also be used for other mRNA vaccines, licence terms permitting). We have already seen four or five major shifts in the SARS-CoV-2 genome (from the original Wuhan variant to Alpha, Delta, Omicron and now the Omicron sub-lineages); it’s abundantly clear that the vaccine will need to evolve at a similar pace to have any value at all.

One of the downsides of mRNA vaccines that is starting to get some public attention is the fact that they are very narrow. The Pfizer and Moderna vaccines code for the production of the spike protein; however, it is precisely the spike protein that has changed in each iteration of the virus. There is also the problem of immune imprinting, where our immune systems retain the strongest response to the first version of the pathogen (or vaccine) they encounter. This means that instead of starting your vaccination course with shots for the original Wuhan variant, you may be better off jumping directly to an Omicron-targeted shot. Unfortunately we don’t have much data on this, as most people in countries where trials are easy to run have already had at least the first round of shots.

The mutability of coronaviruses was news to me, but I’m not a virologist or an infectious disease specialist. To those in the know it’s obvious, and it’s the reason why we don’t yet have a vaccine for the common cold. So it would seem obvious that a worldwide network of vaccine factories that can retool output overnight is exactly the prescription for a Covid-19 type viral pandemic. But this threatens the financial interests of Pfizer and Moderna, amongst others. It also seems that the ability to reduce transmission is higher for other types of vaccine that aren’t as easy to produce.

Where next?

I wear an FFP2 mask in public. I take a rapid antigen test if I’m feeling unwell with symptoms that could be Covid-19. I don’t mix with people unmasked unless it’s really important and we’ve all taken precautions. Thanks to these measures (and a couple of HEPA filters), the last time Covid-19 came to our household I managed to avoid infection. If you have read this far, you are probably also careful about avoiding infection.

But billions of my fellow humans either do not have the means, or the inclination, to protect themselves in this way. This means millions of infections every day/week/month, which in turn allows the virus to mutate. Some mutations are benign, others spread more easily and can be more deadly. If you think of the human genetic code as a lock, with each mutation the virus gets closer to opening it.

A difficult problem

If we are going to ever regain control of the situation, each of us needs the means to protect ourselves and our families. These need to be secured by a mixture of give and take, both materially and in terms of social permission. It’s not okay to abuse someone for wearing a mask, just the same as it isn’t acceptable to abuse someone for their race or for being disabled.

From a hardware point of view, we need to make high quality masks available and affordable for all. The more people wear FFP2/N95 masks, the fewer people are going to catch and spread Covid-19. The more of us who are vaccinated (worldwide), the lower the burden of severe disease will be.

No silver bullet

There is very seldom a single silver bullet in biology, from what I understand. Mostly because vampires and werewolves are not real. We need a new vaccine, probably something administered intra-nasally, which will provide sterilising immunity (or something very very close). This will be able to stop transmission in its tracks. We probably also need to be wearing masks, just so that the few mutant viruses which could evade our upgraded security don’t wind up in somebody’s lungs. And then life can go back to normal, until the next pandemic.

To end on a not very cheerful note, the fact that we’ve had SARS, MERS and SARS-CoV-2 all in the space of 20 years is not an accident. To some degree it is a consequence of human expansion into the natural world, increased air travel and, directly or indirectly, climate change. Ultimately all things are linked, and solving one problem makes it easier to solve others. We just have to start, and be guided by our compassion for our fellow humans – rather than greed or fear.


Citizen Science: Getting to discovery?

For the last few weeks I have been thinking about an exchange with the ever dynamic Francois Grey about citizen science, and what it would take to get to an actual significant discovery. This is in the context of my involvement with the long running Cosmic Pi project, an attempt to produce open source cosmic ray detectors based on cutting edge technology, so I will also do my best to share the lessons learned from this endeavour. While we haven’t formally terminated the project, unfortunately none of the current team members has the time needed to continue it, so it is currently on “pause”. Added to this, there are a lot of supply issues at the moment – so let’s just say it’s in stasis for the time being, hopefully to re-emerge at some point in the not too distant future.

Choosing your battle: Physics vs Biology vs Other things

How easy is it to discover a new force or fundamental particle, in comparison to a new type of fungus? I watched the excellent documentary “Fantastic Fungi” featuring Paul Stamets recently on Netflix. It hadn’t previously occurred to me that you could potentially discover a new type of mushroom (or other biological entity) in your back garden or local wilderness – but it seems to be quite plausible with a reasonable amount of effort. And for a few hundred dollars, you can probably even get the DNA of your new find sequenced!

However, if you wanted to discover the Higgs Boson on your own (or even with a few like minded individuals), you would need very deep pockets and a ridiculous amount of time. Forbes estimated the cost of the discovery at $13.25 billion, plus the time of over 5000 researchers, not including the efforts of all those working on the infrastructure to support the discovery (like me since 2010).

These are probably the two extremes of the science spectrum, in terms of the validity of findings and general usefulness to the wider human species. There are also doubtless many other fields of endeavour and inquiry that fall between the two extremes, with a range of cost (money, time and resources) and reward (discovery, or significant advancement in human knowledge) trade-offs.

What does it take for Particle Physics?

The standard for a discovery in Particle Physics is 5 sigma. For those of you familiar with p-values, it’s the same principle – a statistical test to determine the likelihood that the observed result could be a fluke, rather than a real discovery. 5 sigma means five “standard deviations” on a traditional bell curve. It is interesting to note that lower levels of significance can still be worth publishing: results at 3 sigma and beyond are considered “evidence” of something new, but insufficient for a discovery. The probability of a false result at 5 sigma significance is about 0.00006%, but of course it isn’t just a statistical test that determines a discovery – everything else also has to line up.

More practically, such a high level of confidence can only be reached with a large number of trials or observations. I’ve spent about a week thinking about ways to explain this concisely with some statistics examples, but to do the subject justice it really requires a full blog post on its own. Until I get round to writing it, I’d suggest you check out this article in Scientific American.
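In the meantime, here is a minimal sketch of the sigma-to-probability conversion using only the Python standard library. It computes the two-tailed probability (a fluctuation of at least that many standard deviations in either direction) under a normal distribution, which is what the 0.00006% figure above corresponds to:

```python
import math

def sigma_to_pvalue(n_sigma: float) -> float:
    """Two-tailed probability that a normally distributed measurement
    fluctuates at least n_sigma standard deviations from the mean."""
    return math.erfc(n_sigma / math.sqrt(2))

# 3 sigma ("evidence") vs 5 sigma ("discovery")
print(f"3 sigma: p = {sigma_to_pvalue(3):.2e}")  # about 0.27%
print(f"5 sigma: p = {sigma_to_pvalue(5):.2e}")  # about 0.00006%
```

Note this is only the statistical side; as stated above, the rest of the evidence still has to line up.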

I started working on the CosmicPi project a few years ago now (in 2014!) with some other young, enthusiastic engineers and physicists I knew working at CERN. We all did something with particle detectors and their supporting infrastructure as part of our day jobs, but each of us had only a very small slice of the overall knowledge required. We decided to build a muon detector, using the latest technology we could find. And we knew it would be difficult…

It took several years and a lot of help before we detected our first “muons”. It then took a couple more years to figure out that these weren’t actually muons but electronic noise, and to redesign things to capture the real thing. I’ve lost count of the number of prototypes we built – it’s at least 10 different versions. In short, if you want to build distributed hardware for particle physics, you will need to build a device that can take in particles of whatever type you are interested in (I would strongly recommend muons), and emit some form of TCP/IP packet that codifies the detection and goes to a central server, where someone can look at it in combination with all the other packets your devices are detecting.
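As a rough illustration of what “codifying a detection” can look like: the field names and newline-delimited JSON-over-TCP format below are assumptions for the sake of the sketch, not our actual wire protocol.

```python
import json
import socket
import time

def encode_event(device_id: str, amplitude_mv: float) -> bytes:
    """Pack one candidate muon detection as a newline-delimited JSON record."""
    event = {
        "device": device_id,            # which detector saw the pulse
        "timestamp": time.time(),       # when it was seen
        "amplitude_mv": amplitude_mv,   # pulse height from the analog front end
    }
    return (json.dumps(event) + "\n").encode("utf-8")

def send_event(payload: bytes, host: str, port: int = 5000) -> None:
    """Ship the record to a central server for correlation with other detectors."""
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(payload)

packet = encode_event("cosmicpi-001", 42.7)
# send_event(packet, "detector-hub.example.org")  # hypothetical server address
```

The interesting physics (coincidence finding, shower reconstruction) then happens server-side, across the packets from all devices.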

Consumer electronics

The more astute readers will have already guessed that a device which detects a particle and gives out a digital signal as an output could also be described as a “product”. It is a manufactured object of non-trivial complexity, with a moderate cost associated. We were aiming to build a device 10x cheaper than the competition, and we managed this in terms of cost (but not sale price, because a) we haven’t started selling it yet, and b) some margin is required when selling anything).

The trap (perhaps it isn’t a trap) is that to scale your detectors you will either need someone with a lot of money (a lot), or to do some form of crowdfunding – where you sell your products to customers, who will host them. We’re not just talking about a box with a flashing light on it, but actually a very complicated, sensitive piece of equipment – an overall level of difficulty that puts most kickstarter campaigns to shame.

You also need to take the components and put them in a box. This is a very non-trivial activity: since everything needs to fit in the box, and housings are either off the shelf or custom molded ($$$ unless you are manufacturing in huge volumes, into the tens of thousands), it’s a good idea to choose your case appropriately. If you want to go the full distance, you will also need a manufacturer to put the components together in the boxes (and test them!). But even after nearly a decade we still haven’t got this far.

Lots of moving parts

There are many stages to detecting a particle, each is very sensitive.

Building a cutting edge particle detector is not easy. You will need a detector medium; we chose plastic scintillator, as it can be extremely cheap – but it is rather hard to get hold of commercially unless you are buying by the tonne. You will also need some electronics to read out the detector, which will include some specialist analog circuits, as this is what it takes to detect very small, fast moving particles that are only going to generate a few photons of light in your chosen medium. These electronics have to be designed and prototyped iteratively. Before we had even finished our first generation of prototype, the main microcontroller we were using was already obsolete! So a redesign was required before we could move to the next stage of manufacture.

There are plenty of other options, such as getting recycled or spare detector chips from full size physics experiments, or repurposing existing semiconductors which are sensitive to various forms of radiation. The former may run into availability issues and export constraints, while the latter path can massively reduce the amount of data collected by a particular detector. Ultimately data is what leads to discoveries, so the more you capture the better.

A fine balance of many skills

Building a working detector is just the smallest Matryoshka doll. Around this you also need to build an infrastructure, both in the conventional sense (team, website, a means to get your detector to the customer and their money into your bank account) and in the Physics sense. To use the oil analogy, raw data is just a resource; the value only comes when it is exploited with a means to process it. There are plenty of existing physics data analysis frameworks, with varying degrees of complexity, but they all require significant pre-processing of the raw data and the addition of constructs that describe the physics process you are searching for. A very reductive way of viewing a modern Physics PhD is that it involves three years of writing and running data analysis scripts in order to generate some new insight from the vast array of high energy physics data collected by modern large scale detectors.

Full Stack software.

I find job adverts for ‘full stack’ developers rather funny. Because typically they only really want a couple of layers of the stack at most and certainly nothing that touches real hardware. The development stack for a particle detector goes all the way to the bottom. If you are building a new detector, you will need to read in an analog signal via some electronics, and somehow get it all the way up the software stack so it prints out on a screen or webpage. Practically, this means there is a need for both firmware (embedded software that runs on a microcontroller) and software, which can interface the microcontroller with a computer and share your data with the world. To build a ‘product’ appliance, that can be operated without the benefit of a full PhD, you will also need to handle everything from the calibration of the device (ideally automatic!) to setting up a device hosted Wi-Fi network and building a web interface, so that users can actually connect to your magic discovery box with their PC or Phone.
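To make the firmware/software boundary concrete, here is a hedged sketch of the host side of such a stack. It assumes a made-up line protocol (`EVT,<timestamp>,<adc counts>`) emitted by the microcontroller over its serial link – our real firmware output differs, but the parsing-and-filtering shape is representative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectorEvent:
    timestamp_us: int  # microcontroller clock tick, in microseconds
    adc_counts: int    # digitised pulse height

def parse_event_line(line: str) -> Optional[DetectorEvent]:
    """Parse one line of the (hypothetical) 'EVT,<timestamp>,<adc>' firmware
    protocol; return None for debug output or malformed lines."""
    parts = line.strip().split(",")
    if len(parts) != 3 or parts[0] != "EVT":
        return None
    try:
        return DetectorEvent(timestamp_us=int(parts[1]), adc_counts=int(parts[2]))
    except ValueError:
        return None

# In practice these lines arrive over a serial port (e.g. via pyserial);
# here we just parse captured samples.
print(parse_event_line("EVT,1723456789,812"))
print(parse_event_line("DBG,boot ok"))  # firmware chatter is ignored
```

Everything above this layer (web interface, calibration, Wi-Fi setup) builds on parsed events like these.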

Who has done this before?

We wasted an inordinate amount of time discussing the totally irrelevant. Could we manufacture our own scintillators with embedded wavelength shifting optical fibres? Should our device have a battery inside? Would a solar panel on the top be enough to power it? This was due to inexperience, but also a learning and sharing of knowledge, which (inconveniently) is not a linear process.

What we needed was someone who had done this before to act as a mentor and guide. Someone with experience in electronics design, prototyping, manufacture. We connected with a lot of people but there are very few at the intersection of science and consumer electronics with all the relevant experience – and fewer still with sufficient free time for a project like this. There are plenty of science experts, but very few emerging experts in DIY electronics at scale, who are mostly self-educated via the mistakes of various crowd-funding campaigns they have just about survived. It’s still a rarefied skillset, even if you happen to be located at CERN.

A personal inspiration to me has been Bunnie Huang, and I can’t recommend his book The Hardware Hacker highly enough. We have been using it, recommending it to other teams we come across attempting a similar challenge, and generally trying to learn from his mistakes when we haven’t already made them ourselves. In retrospect, we could really have used a mentor to guide us on this journey. We are still looking, and in the meantime the next best thing is to share our experience with others. While we have been on our journey, open science hardware communities have started to emerge, the most notable being GOSH – the Gathering for Open Science Hardware. There is also the Journal of Open Hardware, which started while we’ve been working on Cosmic Pi, and maybe one day we’ll even get round to publishing an article in it about our detector!

The Profit Motive

What motivated our team? It was a lot of things: the fun of working together with like minded people, learning new skills and trying new things, the potential for discovering something, and democratising access to the technology through the code and schematics we published online. The profit motive doesn’t really feature, and as a group we are missing a marketing department. Unfortunately (?), we are the type of people who would price our product based on what it cost to build, plus a markup we thought was reasonable. Typically in commercial electronics, if you aren’t making a 5-10x mark-up, you don’t stay in business for very long. In addition to sage advice from Bunnie, the EEVblog from Dave Jones is a resource beyond compare for those on this journey.

Design For Manufacture (DFM)

Our design has many weak points, which of course have been exposed by the ‘Great components shortage’ of 2021/2/3/n. If you open up two seemingly identical consumer electronics products manufactured a few months apart, there’s a fair chance you will find they have some different components and integrated circuits inside. This is because large volume manufacturers (and smart smaller volume ones), tend to design with at least one alternate part for each critical component. This allows them to continue production when something is out of stock. The alternative is to redesign the board on the fly, based on available parts – and of course you will probably want to test it again before making a lot! Or you can pay a ridiculous amount of money to someone who has some stock of the chips you need.
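The alternate-part idea fits in a few lines of code. This is a toy illustration with hypothetical part numbers and invented stock figures, not a real BOM tool:

```python
# Each BOM slot lists interchangeable parts, design-preferred first.
bom = {
    "main_mcu": ["ATSAMD21G18", "ATSAMD21G17"],  # hypothetical alternates
    "opamp":    ["OPA2365", "OPA2356"],
}

stock = {"ATSAMD21G17": 4000, "OPA2365": 12000}  # invented distributor stock

def pick_part(slot: str, needed: int) -> str:
    """Return the first listed alternate with enough stock, or raise."""
    for part in bom[slot]:
        if stock.get(part, 0) >= needed:
            return part
    raise LookupError(f"no sourceable part for {slot}")

print(pick_part("main_mcu", 1000))  # preferred part is out of stock, falls through
```

The hard work, of course, is not the lookup but qualifying each alternate on the board in the first place.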

And then there is the more mundane, “traditional” side of DFM – making sure that your circuit board complies with design rules and good practices for the production process, ensuring you have sufficient margins on your passive components and design calculations to ensure that you get a reasonable yield.

This is a hugely time consuming activity. I have some friends who spend their day jobs right now redesigning existing products to work around the chip shortage. This type of operation is far beyond the resources we have as a bunch of individuals trying to build a cosmic ray detector. While it doesn’t take 100% of the effort all over again to produce a pair of design variants, even if another 20% is needed this is a lot for a volunteer project.

Putting it all together

I’ve filled out a typical business model canvas for the Cosmic Pi project. You can download it for your own remixing via the link below. We haven’t even started down the commercial part of this adventure, so I’ll just leave this here for now.

Some Lessons Learned

I have learned many things on this journey about how to build a particle detector and the top to bottom architecture of a globally-scalable IoT class device. Most of my biggest learning points come from the mistakes, but not all. Here are my top 5 lessons.

  1. Footprints for PCB components. The first fevered weekend of building a detector was spent painstakingly soldering tiny wires to inverted components that looked like dead spiders, all because I hadn’t verified the pad dimensions well enough on our very first prototype. Always double check your device footprints (and pin outs). Always.
  2. Humans. This project has been kind of a mash-up of science and open source, with a side helping of start-up. The most important part of the puzzle is the human element. As usual I roped in a few friends, made some new friends along the way, and we had some fallings out too! Trying to wrangle a team of very skilled, highly intelligent volunteers with divergent ideas into a project can be challenging. When conflict erupts, which it will, make sure that your friends know that any disagreements aren’t personal, and that you value your friendship independently from the project. If you see tempers rising around a particular issue, don’t wait for things to boil over before getting involved. And if you are wrong, or over the line on something, apologise as soon as you realise it. Things have been a lot of fun, but it hasn’t always been easy. I don’t think I destroyed any friendships (so far)?
  3. Ignorance. I know thing X. Thing X is obvious (to me)… but it turns out that some team members didn’t know thing X, and didn’t even know that they didn’t know it. They took on a challenge, and got into difficulties that affected the whole project because of their ignorance. We’ve all done it in different ways, with impacts that vary from expensive to time consuming. Of course, it is necessary to assume some level of common knowledge (and trust) when any two people are working together, but I find it is always worth taking the time to frame the task and go over the first principles at the start of any new collaboration.
  4. Interns are amazing. We have been fortunate enough to have a few interns working on the project, some of whom were even full time and funded. The progress they have been able to make on the project, working full time, as opposed to free evenings and weekends for the rest of the team, has been inspirational. We were able to have a good win-win relationship with all the students who worked on the project so far. The ones who were funded got paid and all of them learned valuable skills in programming, electronics and particle detector assembly, plus the lesson of how hard it all is to put together.
  5. Entropy is a problem, especially in software. Just because you have a set of awesome software for your device that’s tested on hardware platform Y.0.00.0, doesn’t mean it will work at all on hardware platform Y.0.00.1. Or even on your original platform after a version update to your OS or its major libraries. Software requires maintenance! The rules, settings, configuration requirements and dependent libraries are all shifting. To minimise your exposure to entropy I recommend:
    • Keep it simple. The less code you write, the less there is to maintain (and you should have fewer bugs too). The software problem hasn’t changed fundamentally since the 1970s; you should read The Mythical Man Month by F. Brooks Jr for wisdom and inspiration. It’s the best book I didn’t read at university.
    • Put as much of the data processing into your embedded elements, i.e. firmware, as you can (within reason), as this will be stable across software changes. Keeping our data output format stable for versions 1.6 through 1.8 saved us a lot of time.
    • Scripts not software. It’s much easier to maintain a hundred lines of Python than something compiled. If you can rely on software platforms (InfluxDB, Grafana) for the heavy lifting that is ideal, and if not then consider ‘module’ level systems such as SQLite and Python libraries. Writing your own linux kernel drivers in C is always possible, but will require a lot of upkeep.
    • When it comes to embedded binaries, make sure you keep a copy of the latest version you have compiled for distribution (and all previous versions too..). This is especially important if you are using development environments such as Arduino, where the actual .bin/.hex file can be abstracted away under the plug and play upload button.
    • Git. Things which are put in a repository are great. Things which aren’t are usually lost over the years it takes a project to get to maturity.
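The “scripts not software” point above can be made concrete. A few lines of standard-library Python plus SQLite go a long way for storing detector events; the table schema here is illustrative, not our actual one:

```python
import sqlite3

# In-memory database for the demo; use a file path for a real deployment.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (device TEXT, timestamp REAL, amplitude REAL)")

def record_event(device: str, timestamp: float, amplitude: float) -> None:
    """Append one detection; the 'with' block commits the transaction."""
    with db:
        db.execute("INSERT INTO events VALUES (?, ?, ?)",
                   (device, timestamp, amplitude))

record_event("cosmicpi-001", 1000.0, 42.7)
record_event("cosmicpi-001", 1001.5, 38.1)
count = db.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(f"{count} events stored")
```

A script like this is trivial to maintain over the years compared with a compiled service or a custom kernel driver.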

A conclusion, for now at least.

I hope to have shared insights into at least some of the ground we have covered with Cosmic Pi so far. We’ve come a long way, but it’s like climbing a mountain: we might have scaled the first and second rises, but it’s still a long way to the summit. If you are full of enthusiasm and want to get involved please drop me a line, or if you would like to chat about your own open hardware science projects feel free to get in touch with me via twitter, where you can find me as @pingu_98.


Our COVID-19 Social Rules: A new world disorder

The final frontier: the supermarket!

“We need to figure out a way to hang out with other people and not get Covid again.”

My wife

It’s been a rough two years for most people since the Covid-19 pandemic got started, and the same is true for our household. We’ve both been sick at least twice with Covid-19 symptoms, some of which have persisted for months. But now we’re both back to ‘normal’, fully vaccinated and we’d like to keep it that way for as long as possible, even when the case numbers around us are taking off again. States are tearing up their own rules, so we’ve made some of our own to keep ourselves and our friends safe.

Our Covid-Safe rules for home & away

1. Wearing Masks – When

Masks are the most effective thing you can do to protect yourself and others. They are bearable, but not always comfortable, especially when it’s warm outside or you are doing strenuous physical activity. We wear masks when leaving the house, in the communal areas of apartment blocks (entrance, stairways, basements, parking garage, lift, bicycle parking) and anywhere else we go indoors (shops, offices, the doctor’s surgery etc.). We don’t generally wear them outside, unless we are passing people on the street or it’s really busy.

2. Wearing Masks – What type?

Our go-to mask combination is a ‘fish style’ FFP2, with a blue surgical mask on top for higher risk situations, changed daily. For indoors, it’s always an FFP2, ideally ‘double masked’ with a surgical one on top if the situation merits it (poor ventilation, lots of people, anyone we consider high risk!). Outdoors the risks are reduced, so if we wear a mask it’s normally just a surgical one, unless we only have an FFP2 handy.

We are fortunate enough to have picked up a couple of MicroClimate helmet style masks from the US. These are for grocery trips, and during mask mandates we wear a surgical mask inside, mostly so that nobody accuses us of not wearing a mask or complying with the mask rules for supermarkets. The MicroClimate is a transparent acrylic dome fitted with an impermeable fabric surround and equipped with battery powered fans and HEPA filters on both the intake and exhaust air. They look like astronaut helmets and are reasonably sound from an engineering perspective, though they don’t carry any kind of formal certification. They are also rather heavy, so we don’t tend to wear them outside the grocery store for very long.

3. HEPA filters and open windows at home

We used to live in a small apartment block with communal ventilation in the corridor areas: we had extract vents from our hallway, bathroom, toilet and kitchen. These were driven by fans on the roof, which don’t run (or don’t run at full speed) all the time. There are documented cases in South Korea where Covid-19 spread across a building from apartment to apartment via shared air ducts – and there is also a very well documented case from the SARS outbreak in Hong Kong (Amoy Gardens) where the virus spread through the drainage system via water traps that had dried out.

Consequently, we had a couple of large HEPA filters on wheels, of which at least one was usually running somewhere in our apartment, particularly at night when the building ventilation was in low power mode. Now we live in a house, but we still keep a HEPA filter on low in the living room and bedroom.

4. Antigen tests for social events

We had a couple of social events once upon a time in the not too distant past. We had real people visit us in our apartment physically instead of via zoom. We agreed our test protocol with them at the invitation stage – our guests took an antigen test just before heading over, and at home we both took tests before they arrived. We are of course very fortunate to have access to a generous supply of antigen tests, and it’s important to remember that they are not 100% reliable, but it’s still a big step towards cutting the risk that someone in the group will be spreading the disease. It is also important to wait the full 15 minutes after doing the test, to make sure you don’t have a faint line – a telltale sign of the start of a COVID-19 infection. Likewise, we agreed beforehand that if anyone was feeling unwell, or just not quite right, we would reschedule even with a negative test – because it’s no big deal.

5. Meeting outside if possible

Covid-19 is an airborne virus. This means it spreads through the air we breathe. Sitting in a poorly ventilated room with someone who is infected is the best way to catch it. However, the virus is still fragile: it is easily diluted, dispersed by high air velocities, and destroyed by UV light. Based on the current medical understanding, you can catch Covid-19 in two main ways (here are two analogies I’ve wholeheartedly stolen…):

Garlic Breath – If you can smell someone’s (bad!) breath, then you are actually smelling small aerosolised particles that are coming out of their mouth and nose in a jet of exhaled air. Normally these just contain volatile organic chemicals which we perceive as (unpleasant) smells. But for someone who is shedding Covid-19 virus, it hitches a ride on the same droplets. If you are close enough to smell someone’s breath, then you are in an excellent situation to contract Covid, or any other airborne virus they are shedding. Just like meeting smelly people, it’s always more pleasant to do outside.

Cigarette Smoke – The second way you can catch Covid-19, with a slightly lower risk, is by inhaling the very small aerosolised particles which can remain airborne for a long time (several hours). The best analogy for this type of spread is cigarette smoke. Even if someone isn’t smoking right next to you, it’s easy to tell if a smoker has been in a room, or if someone is smoking in proximity to you. Of course it’s best not to smoke at all, but if you have to then doing it outside has the least consequences for those around you. It’s the same with Covid-19 spread, except of course that you can’t smell virus particles in the air.

While meeting outside doesn’t eliminate these two forms of transmission, it does substantially reduce the risks. The outside typically has at least some airflow to carry away the cigarette smoke and to mitigate the garlic breath, as well as UV which reduces the viability of the virus. Meeting outside is definitely the safest way to do things, assuming the weather cooperates.

The end.

Fight Covid-19 with software from CERN on your Raspberry Pi

Here’s a how-to guide for running the CERN CARA Covid-19 risk assessment tool at home on a Raspberry Pi.

One of the most interesting projects I’ve been involved with at work over the last year has been CARA, the Covid-19 Airborne Risk Assessment tool. It is a relatively simple tool that uses some equations to model the risk of transmitting Covid-19 in an indoor environment (for example in an office, or during a face to face meeting, or in a workshop). The tool has been developed to help CERN manage Covid-19 risk better at the lab, but fortunately it has also been released under an open source license, so anyone can use it and build upon our project (so long as you respect the license conditions).

The tool takes some inputs to describe a scenario (number of people, type of activity, room size and ventilation, geographical location, activity duration, types of mask worn), uses a mathematical model based on the latest understanding of Covid-19 and generates a risk score for the probability of spreading Covid-19 in the particular scenario. It will also indicate what measures you can take to decrease the risks, such as wearing face coverings. The model is based on aerosol transmission, and does not take into account large droplet spread, which means that physical distances (1.5-2m in most places) between individuals must be followed for a valid result.
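To give a feel for what such a model computes, here is a toy Wells-Riley style calculation – the classic, much simpler ancestor of this family of airborne-transmission models. It is emphatically not CARA’s actual model, and all the numbers are illustrative:

```python
import math

def wells_riley_risk(infectors: int, quanta_per_hour: float,
                     breathing_m3_per_hour: float, hours: float,
                     ventilation_m3_per_hour: float) -> float:
    """Classic Wells-Riley estimate of airborne infection probability
    in a single well-mixed room: P = 1 - exp(-I*q*p*t/Q)."""
    dose = (infectors * quanta_per_hour * breathing_m3_per_hour * hours
            / ventilation_m3_per_hour)
    return 1.0 - math.exp(-dose)

# One infected person in a 2-hour meeting with modest ventilation
# (all parameter values are made up for the example).
risk = wells_riley_risk(infectors=1, quanta_per_hour=10,
                        breathing_m3_per_hour=0.5, hours=2,
                        ventilation_m3_per_hour=100)
print(f"infection risk = {risk:.1%}")
```

Even this toy version shows the levers CARA reports on: double the ventilation and the exponent halves, cutting the risk accordingly.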

In this post I’ll be taking you through all the stages needed to run your own instance of the CARA tool at home on a Raspberry Pi 4. I’ve not tried it on the Zero/2/3/3B+, and I would recommend using the 4 as some simulations can be quite computationally intensive – especially if you are using natural ventilation or making your instance available to more than one user. My test setup was a Pi 4 with 2GB RAM and it worked pretty well. CARA is written in python which you can interact with via a web interface. The whole thing should take about 40 minutes from start to finish.

Stage 0: What you need

For this tutorial, you will need either a Mac OS X or Linux computer, or a Linux VM running in Windows/OS X if you want the absolute latest Debian image, or SD card imaging software for Windows/OS X/Linux if you are happy to use the link below. You will also need an 8GB microSD card (minimum) and a Raspberry Pi 4 (I used a 2GB, but should work fine with any of them) with a wired network connection. Temporary access to a screen and keyboard (to set the root password and enable SSH) will also be needed for a couple of minutes to set things up.

Stage 1: Download an OS

I tried initially to get it up and running in Raspberry Pi OS Lite, however at the time of writing this is still 32-bit so I rapidly gave up. It’s probably possible and if you do get this running let me know and I’ll link to you. In search of a 64-bit Pi friendly OS, I went for Debian, because this is what the Raspberry OS is built on in the first place. The raspi-Debian project is still a bit of a work in progress and not all the hardware features are working yet, but there is more than enough already available for our needs in this tutorial.

Go and get the daily image (for reference I used 20210802). If you are using a Linux/OS X computer, you can do the following to download and image your SD card:

export RPI_MODEL=4
#set the sdf to your SD card device... not your hard disk!
#you can locate your SD card with 'sudo fdisk -l'
export SD_CARD=/dev/sdf
export DEBIAN_RELEASE=buster
wget https://raspi.debian.net/daily/raspi_${RPI_MODEL}_${DEBIAN_RELEASE}.img.xz
#checksum verification
wget https://raspi.debian.net/daily/raspi_${RPI_MODEL}_${DEBIAN_RELEASE}.xz.sha256
sha256sum -c raspi_${RPI_MODEL}_${DEBIAN_RELEASE}.xz.sha256
#write to SD card
xzcat raspi_${RPI_MODEL}_${DEBIAN_RELEASE}.img.xz | sudo dd of=${SD_CARD} bs=64k oflag=dsync status=progress
sudo sync

This will download the latest Debian image for your Raspberry Pi 4 and write it directly to the SD card. Alternatively, you can download the image via this link and use these instructions to write it to the SD card.

Stage 2: Boot and configure the Pi

Once you have copied the image to your SD card, pop it into your Raspberry Pi and power things up. At this stage you should have power, network, a keyboard and a screen connected to your Raspberry Pi. You will need the keyboard and screen to set a root password and enable SSH for remote access; after that you can run the Pi with only power and network – or continue to enter things manually, as you prefer.

When your Pi has booted, log in as root and set a new root password (don’t forget it!). You can then create a user for the project and enable SSH (remote) access. You will also need to remember your user password; here I called the user “cara”.

rpi-4-20210802 login: root
passwd
#enter your new root password, choose wisely and do not forget it
adduser cara 
#specify the password for the cara user
reboot now #logout and restart

Now we log in again as root and install some useful packages; we will also change the hostname to cara-pi.

rpi-4-20210802 login: root
apt-get update && apt-get upgrade -y
apt-get install git git-lfs sudo openssh-server -y
#set the hostname to cara-pi
echo 'cara-pi' > /etc/hostname
reboot now
#you can also shutdown now if you want to unplug things gracefully

With this, we no longer need the screen or keyboard, so I would recommend connecting to your Raspberry Pi using SSH from here on. Next, install Python 3.9 by compiling it from source:

ssh root@cara-pi
apt install wget build-essential checkinstall libreadline-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev libffi-dev zlib1g-dev -y
wget https://www.python.org/ftp/python/3.9.5/Python-3.9.5.tar.xz
tar -Jxf Python-3.9.5.tar.xz
cd Python-3.9.5
./configure --enable-optimizations
make #this takes a while on the Pi, be patient
make install
cd ..
rm -rf Python-3.9.5
rm Python-3.9.5.tar.xz
#set python 3.9 as the default for our shell...
echo "alias python='/usr/local/bin/python3.9'" >> ~/.bashrc
. ~/.bashrc
#check we've got python 3.9 working
python -V
exit #logout as root

Stage 3: Run CARA

And now we are ready to clone the CARA source code from the CERN repository, install it and run it:

ssh cara@cara-pi
git clone https://gitlab.cern.ch/cara/cara.git
cd cara
git lfs pull   #fetch the data from LFS - weather profiles and weather stations
pip3 install -e .   #install the python libraries, from the root of the repository
python3 -m cara.apps.calculator #and we're done - it's now running

At this point, we are running CARA on our Raspberry Pi. You can connect to it via http://cara-pi:8080/, assuming it is reachable on your local network. If you can’t find it via the hostname, use the Raspberry Pi’s IP address instead (probably something like 192.168.xxx.xxx), which you can find by entering the command below in the terminal on your Pi:

ip addr

If you want CARA to run automatically when your Raspberry Pi boots, you need to install it as a service. You can find out more about how to do this here.
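To sketch what such a service could look like, here is a minimal systemd unit. This is my own untested example rather than an official one – the user name and paths match the steps in this tutorial, so adjust them to your installation:

```shell
#write a minimal systemd unit for CARA (contents are a sketch, not official)
cat > cara.service <<'EOF'
[Unit]
Description=CARA Covid Airborne Risk Assessment calculator
After=network-online.target

[Service]
User=cara
WorkingDirectory=/home/cara/cara
ExecStart=/usr/local/bin/python3.9 -m cara.apps.calculator
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
#then, as root, install and enable it:
#  cp cara.service /etc/systemd/system/
#  systemctl daemon-reload && systemctl enable --now cara
```

With `Restart=on-failure`, systemd will also bring the calculator back up if it ever crashes.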

You can even open your service up to the whole internet, but think about the security implications before you do. For example, consider applying firewall rules on your Raspberry Pi and permitting SSH login only with a key file. If you are hosting it at home, you will probably also want to change the port (from 8080) and make sure you have an SSL certificate; a good way to do this is with an nginx proxy and the EFF’s certbot, but that is a topic for a follow-up post.
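For the key-only SSH part, the usual recipe looks something like the sketch below. The two sshd options are standard OpenSSH settings, but here I’m writing them to a local dummy file – only edit the real /etc/ssh/sshd_config once your key login is confirmed working, or you can lock yourself out:

```shell
#from your desktop: generate a key (if needed) and copy it to the Pi
#  ssh-keygen -t ed25519
#  ssh-copy-id cara@cara-pi
#on the Pi: disable password logins (NOT before the key login works!)
SSHD_CONFIG=sshd_config.test   #dry run; the real file is /etc/ssh/sshd_config
printf '%s\n' 'PasswordAuthentication no' 'PermitRootLogin prohibit-password' >> "${SSHD_CONFIG}"
cat "${SSHD_CONFIG}"
#when you are happy, add the same two lines to /etc/ssh/sshd_config and run:
#  systemctl restart ssh
```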

Some notes, questions and answers:

How accurate is this Covid transmission model? Great question – if you want a detailed look under the hood of the model, check out the paper we wrote about how it works. There you can also find all the scientific references to the original published sources we used in constructing the model.
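To give a flavour of this family of models (this is the classic Wells-Riley equation, a standard starting point for airborne transmission modelling – CARA’s own model is considerably more sophisticated, so see the paper for the real thing), the probability of infection for a susceptible occupant of a shared room can be written as:

```latex
P = 1 - \exp\!\left(-\frac{I q p t}{Q}\right)
```

where I is the number of infectious occupants, q the quanta generation rate, p the breathing rate of the susceptible person, t the exposure time and Q the room ventilation rate.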

This tutorial is just for the CARA Calculator, a simplified version of the model. If you want to run the expert app, I’d suggest using something more powerful than a Raspberry!

If you are using an x86 (Intel/AMD) machine, you can run CARA locally (and without having to go through all the above steps!) via the docker images also provided. You can find links to them in the CERN repository.

At the time of writing, the CARA web pages still contain some CERN branding. The software is free (Apache License, v2), but the CERN logo is copyrighted. If you want to host a version of CARA for your friends and family, or at work for your organisation, please make sure you aren’t using the CERN brand.

If you have questions about running CARA on a Raspberry, drop me a line via twitter @pingu_98. If you have questions about the software itself, you can contact the development team via the repository.

What is my personal role in all this? I’m just one of the members of the CARA development team, each of us brings different skills and perspectives to the project. Covid-19 has been a real challenge for everyone, but it gives me a lot of personal satisfaction that some of the work I’m doing can be shared freely with the whole world to improve our understanding of how this disease spreads and what we can do to protect ourselves.

What about liability issues with presenting Covid-19 risks – what if someone catches it even when the risk is low? Unfortunately it’s still possible to catch Covid-19 even in low-risk situations, which is why it is important to respect the health advice (masks, physical distance and hand-washing). The model also comes with a lot of fine print, which you should read.

Footnote: Thanks to Adrian Monnich for the comments on improving user privilege levels, I’ve amended the instructions!


Home Indoor Air Quality Monitor

How to build your own indoor air monitoring station. This is a work in progress and will be updated, completed and made prettier as I get spare time! I’m thinking of turning this build into an online workshop – let me know if you are interested.

What does it do?

This project creates an indoor air quality measuring station, a little larger than a credit card, capable of measuring temperature, air pressure, relative humidity, volatile organic compounds (VOCs) and equivalent CO2. It measures the air every minute and logs the results into a database, accessible with your web browser. You can also share the data via the internet if you want to. The capabilities are impressive, but bear in mind that we’re using very cheap sensors, so your mileage may vary!

I’m currently thinking about designing a badge with the same sensors, and also an external air sensor station with GPS. We’ll see how far I get with those ideas!

Parts: (The essential items should cost < 30 units of money, including shipping)

CCS811 – eCO2 and VOC sensor

BME680 – Temperature, Pressure, Humidity and VOC sensor

1602 LCD screen with I2C converter – These are blue or green, but you can also get them in red or orange from other suppliers. Make sure you get one with a PCF8574 I2C adapter, as we need to modify it a little.

Stand for the LCD screen

Raspberry Pi (minimum a Zero, but it should work with all Raspberry Pis)

microSD Card for the Raspberry Pi (minimum 8GB)

Some 2.54mm pitch jumper cables, you will need female-male and female-female

Extras:

You may need these if you don’t already have them lying around in your box of geeky accessories!

USB micro OTG cable

HDMI mini to HDMI converter

USB micro cable (you’ll need a USB C cable if you are using a Raspberry Pi 4).

USB to mains power adapter (for the Pi Zero 500mA is fine, for a Pi 4 3A is recommended)

USB keyboard – if you have a desktop PC, you should be able to borrow the keyboard from it.

Assembly:

There are three stages to assembly: first, the I2C module needs to be modified and soldered onto the LCD screen; then the two sensor breakouts need to be soldered to the jumper cables; finally, the screen needs to be put into the housing and everything connected to the Raspberry Pi.

I’ll add some photos to describe each step. For those who are already experienced makers: you need to remove the two pull-up resistors on the LCD I2C module, to prevent the 5V from damaging the input pins on your Raspberry Pi. The two sensor breakouts are both 3.3V powered, whereas the LCD needs 5V.

Software:

The software is what makes it all go. There are three parts here:

  1. Python script + libraries. This reads the sensor data over I2C and hands it to the database.
  2. InfluxDB. Stores the local data in a really efficient and neat way.
  3. Grafana. Provides fancy visualisation and a web interface.

There is also an optional fourth element, MQTT, which allows your data to be shared with a centralised server via the internet.
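To give a feel for how the pieces fit together, here is roughly what a single reading looks like on its way into InfluxDB, expressed in its line protocol. The measurement, tag and field names are my own invention for illustration, not the project’s actual schema:

```shell
#build one reading in InfluxDB line protocol: measurement,tags fields timestamp
#(field names here are illustrative, not the real schema)
TIMESTAMP=$(date +%s%N)   #nanoseconds since the epoch (GNU date)
LINE="air_quality,sensor=bme680,location=office temperature_c=21.4,humidity_pct=48.2,eco2_ppm=612 ${TIMESTAMP}"
echo "${LINE}" | tee reading.txt
#push it to a local InfluxDB 1.x instance over HTTP, e.g.:
#  curl -XPOST 'http://localhost:8086/write?db=airquality' --data-binary "${LINE}"
```

Grafana then only has to query InfluxDB for the air_quality series and plot it.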

To add – download the image, put it on the SD card, run it, connect and set up.

My Summer Internships: A Retrospective

Yesterday I was thinking about all the wonderful summer internship experiences I had while at university. I felt bad about the effect the COVID-19 pandemic will have on those currently studying, who won’t be able to take up their internships this year. I thought I’d write down my own experiences to share, in the hope that they will be useful for someone planning their future adventures, in summer 2021 and beyond!

My internships, 2001-2005

Summer 2001 – 10 Gresham St. Project, Bovis Lend Lease

In December 2000 I accepted an undergraduate sponsorship from Bovis Lend Lease (now Lend Lease) to do electrical engineering at university. The offer was independent of my choice of university, tied only to the subject. I had looked up undergraduate sponsorship options when I was 17 and thinking about university, as a way of reducing the costs and boosting my employment prospects. Bovis was the first company I applied to, and I accepted without hesitation. I’d also looked at IBM for their PUE scheme, Microsoft, and some of the military/defence related options, but they had later deadlines, and certainly at the time IBM didn’t commit to anything more than gap-year employment. The deal I was offered by Bovis included annual trainee conferences, some industry skills training at the CITB training centre in Bircham Newton and annual postings as summer holiday cover to construction projects in my area. In return I would receive a book allowance during term time and be paid as a junior site supervisor during my summer work. The programme still exists today, so definitely take a look if you are seeking undergraduate sponsorship for a career in construction!

Normally the programme didn’t start until the first summer of university, but I asked if it was possible to start on site directly after my A-levels, and the company was very accommodating. I was sent as a junior to help out the building services team on the 10 Gresham Street construction project, which at the time was in the process of excavation. The building was to be a low-rise office, designed by a team from the renowned architects Foster & Partners. My boss was Gary Sturges (hello if you are reading this!), who was the building services manager for the project and started my education in what it is that makes a modern building light up! The services designs were a long way from installation, so I also got to follow some of the groundworks and steelworks. Being realistic, a fresh-faced 18-year-old can’t necessarily contribute a whole lot to a multi-million pound construction site, but some of the more concrete things I did included:

Learning how to fold a drawing and keep the site drawing sticks updated with the latest revisions. It’s basic grunt work, but having the latest revision of a drawing is fundamentally important.

How to work in an office! For the first time I wasn’t going to school every day, I was taking the tube into London like an adult. There were all kinds of things to navigate: finding my way to the office on day one, finding my way to site thereafter, buying lunch from a sandwich shop, using email for work, handling a photocopier, and sending and receiving faxes (yes, faxes were still very much a thing back in 2001!) with cover pages.

After a month or so I had done the necessary safety training myself – called a ‘licence to practise’ back then – and was authorised to give site inductions to visitors and to some of the smaller or specialist groups of workers attending the construction site. I remain impressed by the commitment to safety on all the Bovis Lend Lease sites I visited and worked on. Giving these briefings gave me my first understanding of the statutory duties of employers to provide a safe place of work and a safe system of work for their employees. The other thing I learned is that safety works best as a human-to-human interaction: before anyone steps out onto the potentially hazardous environment of a construction site, someone else has taken the time to explain the particular dangers of that specific site and to remind them that safety is a partnership between employer and employee.

I also attended my first work meetings, learning essential skills such as staying awake, followed by more advanced meeting skills such as taking and preparing minutes.

Finally, I had time to read the company site safety manuals and safety system. These were a couple of very large A4 ring binders with the distilled procedural knowledge of the company for how to run a construction site. They were a great resource and it was especially useful having the time to read through them – I often wish I still had a copy to refer back to even today.

 Bank of America European HQ

Summer 2002 – Bank of America fit-out, Bovis Lend Lease

After my first year of university, I went home for the summer and was soon commuting daily to Canary Wharf. At that time Bovis had a specialist fit-out division called Bovis Lend Lease Interiors, which had won the contract to fit out the trading floors. Initially I was based at their office in Farringdon; then, as the project moved closer to the start date, we relocated to offices in Harbour Exchange to be closer to the site at 5 Canada Square. The building for the fit-out was still in the final stages of completion by another contractor, so access on site was limited. During the summer I learned a lot about the importance of tracking document issues and revisions by external consultants, a key part of driving design towards something which can actually be built. I was given the ERs (employer’s requirements) to read, which was my first introduction to reading and understanding formal specification documents. This was the first time I had had the opportunity to work on a really complex electrical services installation, with a small data centre and dealer desks for financial trading forming part of the installation brief. I also had the pleasure of working for Denis Wilson (hello Denis!), who was running the fit-out services team and who would later be my boss again on another highly complex and challenging Bovis project over at the BBC.

Churchill gave his VE Day speech from this balcony.

Summer 2003 – HM Treasury Phase 2 PFI, Bovis Lend Lease

Following on from a challenging second year at university, I was assigned to a mysterious-sounding project called “GOGGS East“. It was the second stage of the refurbishment of the UK Government Treasury office building in Whitehall, on the corner of Parliament Square. The building is Grade II* listed, a category reserved for particularly important UK buildings of historical and architectural value, and here is a great presentation I randomly found online showing the insides of the building and some of the work. The building has a rich history, including the significant fortifications installed during the second world war to bomb-proof the basement areas, where the cabinet war rooms are located. One of the highlights of my internship was unlocking the Churchill Room, the majestic office used by Winston Churchill at some points during the war, with its balcony at the front of the building overlooking Parliament and Whitehall, from which he gave his speech on VE Day. Another fascinating part of the building was the old treasury vaults on the sub-basement level, where some of the UK’s gold reserves were once stored. The gold was long gone, and the vaults were to be filled with concrete as part of the renovation.


Summer 2004 – I travelled round the world visiting lots of construction projects, funded by the Royal Academy of Engineering.

This is another story entirely, meriting a blog post of its own – one day I’ll try to write it. I was very fortunate to be awarded an Engineering Leadership Award in 2003, which gave me access to funding, a mentor and fantastic opportunities to expand my knowledge and skills. The programme still operates today, and I cannot recommend it highly enough. If you are interested, check out this link for details on eligibility and how to apply.


Summer 2005 – Summer student at CERN

I visited CERN for the first time in the summer of 1999, after persuading my parents to send me on a school trip. I really liked it and applied to return as an intern via the summer student programme. Again this is a fantastic opportunity, and I would recommend it without reservation to eligible students. You can check out the application procedure and eligibility requirements here. This internship was rather different from the others: morning lectures on mathematics, physics and the challenges of building particle accelerators; visits to the then under-construction LHC; and in between, work on my project, writing part of a detector control system for LHCb in VHDL. In addition to the huge intellectual stimulation and the career benefits, I made lifelong friends from across the world – many of whom I’m still in touch with, and some of whom I work with on a daily basis! Back in 2015 we had a little get-together to celebrate our 10-year anniversary; here’s what I wrote about it at the time.

Conclusion

These internships were a fantastic learning experience and set me up for my current career as an electrical and electronic engineer. I’m very grateful to all those who helped me to get them and to make them such valuable and rewarding experiences. While this year may be a wash-out due to the pandemic, there are still some fantastic schemes out there for current students and those applying to university this year. If you are a student I highly recommend seeking them out for next year!


The Brexit Post (from 2017)

I wrote this post just over 3 years ago. It seemed too pessimistic to publish at the time. Now, in the midst of the pandemic, as the UK government pushes hard for a no-deal outcome, we are reaching the end of a path littered with broken promises. I thought it was time to share it.

17th January 2017

My country is sinking. The parts of the world that aren’t already on fire are about to be ignited. Here’s what I think:

The question which was asked of the British people on the 23rd June 2016 was:

The referendum question will be: Should the United Kingdom remain a member of the European Union or leave the European Union?

On the basis of a 52/48 split on a vote for leaving the EU, the UK Government is now pushing for a “Hard Brexit”. This means:

  • Leaving the EU (fairly obvious)
  • No freedom of movement (not obvious)
  • Leaving the common market (also non-obvious from the text of the question)

The destructive consequences which will result from taking the actions above include:

  • Loss of access to all EU research funding, which will decimate research and innovation in UK universities.
  • Loss of access to EU markets at current tariff-free rates, requiring complex negotiations to re-establish a worse deal.
  • Potential loss of access to all WTO deals until something is negotiated.
  • Unless a deal is reached to allow them to stay, all EU nationals will have to leave the UK.
  • As part of the same deal, UK nationals may also need visas for future travel to the EU.
  • Farming will be heavily hit by the loss of EU funding, unless a timely and adequate replacement is set up.
  • Unless suitable legislation is enacted before we leave, UK citizens will lose all their current rights enshrined in EU human rights laws.

Furthermore, the government wants to do all these things without a parliamentary vote.

Failing to consult and gain approval from parliament for the above would be a brazen action, putting at risk the rights currently enjoyed by British citizens. To have a significant proportion of our existing rights removed without a parliamentary vote would be an extraordinary situation. Abrogation of the rights of the citizenry without the consent of their representatives is clearly morally wrong. Abuse of power in this way is the road which leads to tyranny.

Post Script

That was what I wrote three years ago. It seemed too depressing, too pessimistic – and yet here we are. The government has sought to shut down parliamentary scrutiny of the Brexit process, and was ultimately found to be in breach of the law. Anyone who urged caution, real negotiation with the EU or the path of moderation from within the Conservative party has either been chased out or formally purged. In the context of the pandemic, the UK government has been using secondary legislation, without any parliamentary scrutiny, to restrict the rights of UK citizens in quarantine. Now established, I fear these habits will be hard for the government to break.

If there is light, it comes from the devolved nations: Scotland, Wales and Northern Ireland. When the dust settles, I am sure the statistics will show they have thus far managed the pandemic far better than England. Scotland in particular has stood firmer against the prevailing winds of political change and the chilling of discourse, and has remained vocal in welcoming those from foreign countries. It shows that the downward spiral of Westminster politics is not the only route available; we shall see what the next three years bring.

Building a PV Microgrid for the Druk White Lotus School, Ladakh, India

I’ve been very privileged to have worked on some great engineering projects in my career to date. This blog post is about one of the stand-out projects I was fortunate enough to work on while at Arup in London, back in 2007 and 2008: the Druk White Lotus School, an exemplar sustainability project located in Ladakh, India.


The start of the school day at the Druk White Lotus School

The Druk White Lotus School is a boarding school in the foothills of the Himalayas which aims to foster and preserve the unique culture of the Ladakhi way of life. The school is planned around the Dharma wheel, with classroom buildings near the entrance and residential blocks further back within the site. My role was to work with the client to develop the technical specification for the microgrid installation and to support them during the tender process; then, at the end of the project, I had the chance to go to Ladakh and get hands-on with the final commissioning and site acceptance testing in September 2008! Ladakh has a harsh climate, with extremes of hot summers and freezing winters, which was one of the drivers of the project schedule. The treacherous road to Leh closes for winter, restricting the availability of building materials, and the cold winter temperatures make all outside building work very difficult, meaning everything had to be completed before the bad weather set in.


A Sand Mandala, inspiration for the layout of classroom areas of the school. (image taken by Colonel Walden, from Wikipedia, CC BY-SA 3.0)

The Leh valley, where the school is situated, had an intermittent electricity supply, with a typical maximum of 4-6 hours of electricity per day (at the time of construction). Electricity was rationed by district and area – not conducive to a school lesson plan using computers or electrical equipment of any kind! The design of the Druk White Lotus School has been supported by Arup on a pro-bono basis since the inception of the project. The installation of an on-site micro-grid with solar panels and battery storage was designed to permit the site to operate autonomously, with a top-up from the grid supply when available.


The school is located in a stark and beautiful valley.

One of the main challenges of the project was to install a distributed generation and electricity storage system on top of an existing electrical infrastructure. As school buildings were constructed, each was added to the three-phase low voltage electrical network via copper or aluminium cable. Each building had a local single-phase distribution box (or fuseboard) to supply interior lighting and sockets. The site was also equipped with a very small back-up generator, feeding into the local distribution via a break-before-make transfer switch. A further constraint for the design was the requirement to construct a modular, scalable system that would be able to grow with the future development of the school, as new classroom buildings and accommodation blocks were added.


A block diagram for the system as installed, arrows denote energy flow.

The system architecture is shown above, with three single-phase PV installations added to the three classroom buildings at the front of the mandala site layout. Sunny Boy PV inverters and Sunny Island battery inverter systems from SMA were specified for the hardware installation. The angle and orientation of the PV panels were optimised using the freely available RETScreen software. Each PV installation was connected to a different electrical phase, providing an overall three-phase balance for the site via the existing distribution system. A new power house building was constructed to house the battery storage and three single-phase battery inverters with a common DC bus. The existing AC distribution system wiring was retained, with frequency modulation used for communication between the battery storage and the distributed solar inverters, located approximately 400m away from the power house building.


The contractor team, one of the site supervisors and me in the battery house towards the end of the project, September 2008.

System functionality:

  • Self-contained micro-grid, capable of autonomous operation on PV supply
  • Phase-Phase energy transfer via DC bus for unbalanced loads
  • Energy storage via lead-acid solar batteries designed for deep discharge operation on a single DC bus
  • Ability to perform battery charging and operation from local generator (recommended for periodic full recharging of the battery system as a maintenance operation)
  • AC distribution frequency modulation used by SMA inverters for communication without the need for additional communications wiring, using a slight lowering of the micro-grid frequency to encourage PV supply, and a frequency rise to disconnect PV supply in the case of insufficient demand and/or a fully charged battery.
  • Potential to sell energy back to the grid, however this was disabled at commissioning due to the lack of a regulatory/legal framework in the local energy market.
  • Sunny Island inverters equipped with SD card slots providing minute by minute logging for easy remote analysis of the system performance.
  • 9kWp PV installation, in three modules of 3kWp per building.
  • Capability to add additional PV installations on future buildings.
  • Capability to add additional battery capacity as funding becomes available.
  • A new earthing point was installed for off-grid operation.
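The frequency-based signalling in the list above can be illustrated with a toy calculation. The 51–52 Hz linear derating band below is a common pattern for this kind of scheme and is my own illustrative choice, not the exact values configured on site:

```shell
#toy model: PV inverters derate linearly as the battery inverter raises the
#micro-grid frequency, reaching zero output at the top of the band
#(the 51-52 Hz band is illustrative, not the site's real configuration)
for FREQ in 50.0 51.0 51.5 52.0; do
  awk -v f="${FREQ}" 'BEGIN {
    pct = (f <= 51.0) ? 100 : (f >= 52.0) ? 0 : (52.0 - f) * 100
    printf "%.1f Hz -> PV limited to %3.0f%% of available output\n", f, pct
  }'
done
```

In other words, the battery inverter only has to nudge the micro-grid frequency to throttle the remote PV inverters – no extra communications wiring required.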

 


Construction of a new earthing point for the site

The site commissioning process was very challenging, as it was the first installation of its type for all concerned (contractor, site foreman and myself, the design engineer), and there were significant differences, both in culture and in electrical installation safety standards, to be overcome before the system could be commissioned. It was also necessary to borrow the only three-phase rotation meter in the valley from the local airport electrician in order to ensure the correct configuration of the three-phase system. The only major issue with the commissioning came when the battery system was fully operational and charged, but the solar inverters failed to connect. Upon further investigation, it transpired that the solar inverters had been shipped with firmware settings for domestic installations in Germany, rather than the micro-grid firmware required for this installation. A laptop with the new software and a suitable communication interface had to be flown in to Leh in order to make the upgrade, but once this was completed in October 2008 the system performed as designed.


The Sunny Boy solar inverter installed within the vestibule space of the classroom buildings, complete with PV isolators, mains isolator and an energy meter to monitor power generation.

Summary:

This was a wonderful project to work on back in 2008. The challenge of designing a micro-grid system for a self-contained school, building upon existing low voltage distribution infrastructure in a remote location, was significant. Getting hands-on for the project commissioning in such a unique environment was possibly a once-in-a-lifetime opportunity. Having all the system performance data logged to SD card was also very helpful in supporting the installation when I returned to the UK. Receiving the call from site in October 2008 to hear that the system was working as expected after the firmware update was a real moment of both relief and excitement.

If you would like to know more about other aspects of the multi-award winning Druk White Lotus School project you can find additional details in this article in Ingenia magazine.


PV panels supported on wooden trusses which form part of the classroom building structures.

Post script: At the time I had the idea of filming various critical operations on the system and putting the videos on YouTube for future reference. This was really useful and something I would highly recommend for anyone doing this type of project! The videos are still online, you can view them here in this playlist.

 
