About ten years ago, we designed and made an Arduino shield implementing “core memory,” a technology that was sixty years old even then. Our shield stored 32 individual 1s or 0s using magnetic fields going either clockwise or anticlockwise around 32 tiny doughnuts of magnetisable ‘ferrite’ material. This kind of memory, invented in the 1950s, became dominant in the 1960s, with some machines having over a million bits (compared to our 32!). By the 1970s, core memory had been replaced with semiconductor memories, but the name lives on in the “core dumps” produced by Unix-like systems.
We were work colleagues at the time, and while chatting one day it emerged that Ben had long been curious about how core memory works, and Oliver was looking for an electronics project. We were both interested in playing with the Arduino, and so the collaboration took shape. The project took several months, working off and on in our spare time. There was plenty of unfamiliar territory along the way, but we had some advantages over the original inventors, including knowing that what we were trying to do was possible! A key moment came when an early prototype produced the correct traces on the oscilloscope: that was when we thought “yes, we’ll be able to get this to work”.
Software and systems engineers continue to work at higher and higher levels of abstraction, letting us build more and more powerful systems. Sometimes, though, it can be a welcome change to work with something concrete and tangible. This sentiment, and maybe some nostalgia, seemed to help our project strike a chord — we’ve had coverage in places like Hacker News, Hackaday, Slashdot, Arduino forums, Thinq magazine, and the journal of the UK’s Computer Conservation Society.
After we published our write-up of the project on May 11th, 2011 — the 60th anniversary of a key core memory patent — quite a lot of people got in touch, including several with first-hand memories of working with core and related technologies. We had some interesting and quite long email conversations with some of them, which was great. A few students emailed too, keen to build their own core memory. We offered help where we could, and we know of at least one group which was successful. It was very gratifying that someone other than us had got it to work! And of course there’s Jussi Kilpelainen’s kit, inspired by our project and released in May 2016 — we’d decided we didn’t want to take on the task of packaging and shipping orders, and had chosen to release our designs under the Open Hardware License, so we’re pleased that Jussi has made a success out of his take on the shield. Our sample Arduino code was only really meant as a proof of concept, so it’s funny to think of it driving hundreds of boards now!
Although it would take 35 core memory shields to store a 140-character Tweet, getting the core memory to work reliably taught us a lot about physics, the interplay of analog and digital electronics, PCB prototyping, collaboration, persistence through setbacks and mistakes, and project management. We found working with these old technologies valuable, even beyond the enjoyment of understanding things, getting them to work, and connecting with history.
When we started the project, we decided we’d use modern ICs, PCB techniques, and so on. We could, instead, have used only transistors, say, or tried to make our own USB endpoint, but the parts that interested us were the core memory itself and working on a real project with the Arduino. This was a useful thing to clarify to ourselves, and it kept the project’s scope well focused.
As a final thought, perhaps there are real benefits to making computing tangible. Dr. Fabio Morsani, technologist at Italy’s Istituto Nazionale di Fisica Nucleare, was in touch recently to say he’s planning to use core memory as part of a program to teach high-school students how computing is built up from real physical mechanisms. We think it’s important to demystify computers like this, helping people to realize that software isn’t magic. Maybe being able to directly see and understand the workings of a computer can help give people more agency in an increasingly software-driven world.