Making sensor systems is a tough job. Although I would say that, wouldn’t I, as it’s what I allegedly spend all day doing (actual job may vary, depending on how distracted I am by the internet). Interestingly enough, it’s also not difficult for the reasons you might think.
When developing a sensor there are several stages – design, development, testing, optimisation, more testing, crying, production and finally launch. In general, the first few steps are quite quick – I have folders and folders full of sensor designs and ‘development’ is really just a fancy word for taping bits of cardboard together until they look like a prototype, and then slowly swapping out or painting the cardboard until it looks less ridiculous. The tricky bit is the optimisation…
Say you have a brand new sensor that you’ve managed to put together just about well enough to give a vaguely sensible result. Most sensors start life very simplistically – for example, if you are making a sensor for water leaks, you’d start by making one that could tell the difference between sitting on land and sitting in the middle of the English Channel.
Optimisation is about slowly improving them little by little, transitioning the sensor from detecting oceans to lakes, rivers, a small brook with a pretty wildflower meadow, a dripping tap, and eventually, a small damp patch across the room. My use of this analogy is in no way related to the hour I spent trying to find a damp smell in my house, which could have been solved if someone made one of these!
Now a new sensor is made up of any number of different components, each with their own manufacturing conditions and tolerances. Optimising the sensor doesn’t normally require anything more radical than checking that these various components are all working together as efficiently as possible. In sensor projects this often means checking reagent concentrations and conditions. For all you know, you’re actually using 5x too much reagent, the majority of which is doing bugger all – lazy reagent. So you start by testing a simple range of concentrations.
Simple. You’ve just saved yourself wasting some very expensive antibodies. Saving money makes everyone happy, and your testing might even reveal that you’ve been using too little and the optimal amount is more. But this little example assumes there is only one unconnected part of the test, which you can test on its own and be confident is optimised. What if you have more than one thing, and what if (as is very, very, very common) they interact and impact each other? Then you’re not looking for something as simple as a single concentration – you’re looking for a combination of perfect values. What if you have a reacting antibody and a protein that it sticks to?
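To see why two interacting variables multiply the work rather than add to it, here’s a quick sketch of the experiment plan. The concentration values and units are made up for illustration – the point is just that every pairing has to be tested:

```python
from itertools import product

# Hypothetical concentration ranges to screen (units arbitrary for this sketch)
antibody_conc = [0.5, 1, 2, 4, 8, 16]  # six antibody concentrations
protein_conc = [0.5, 1, 2, 4, 8, 16]   # six binding-protein concentrations

# Because the two interact, every combination is a separate experiment
experiments = list(product(antibody_conc, protein_conc))
print(len(experiments))  # 36 runs, not 6 + 6 = 12
```

Two independent variables would cost you 12 runs; two interacting ones cost you a full 6 × 6 grid.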
Adding in the extra thing to go looking for has now multiplied your work. You’ve tracked down the perfect conditions, but to do it you had to run all these extra optimisation experiments. And it doesn’t stop there – this carries on. That protein and antibody are sensitive souls and get quite uppity when they aren’t treated properly, so they need to be given a nice happy buffer to work in. In fact, they are so twitchy that they work even better if you have other conditions right too. So now you need yet another dimension to your grid.
Now if you are any good at spotting patterns, you’ll realise that this is probably going to get worse. And you’d be quite right, too! My current project is making sensors using optical fibre. I’ve talked about it many times before so I won’t do it again, but these little sensors need the same tender, loving optimisation as any other. They are, quite annoyingly, more complicated. For a start, I don’t have 3 interacting variables – I have 4 of the flipping things! (that I know of – ask again in a week and I bet I’ve found something else…). Every variable basically adds another dimension, so now you’re doing 6 × 6 × 6 × 6 experiments to find the perfect optimisation point.
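The pattern-spotting above can be put in numbers. Assuming (as in the grids earlier) six levels per variable, each new interacting variable multiplies the experiment count by six:

```python
levels_per_variable = 6  # assumed number of values screened per variable

# Each interacting variable adds a dimension to the search grid
for n_variables in range(1, 5):
    runs = levels_per_variable ** n_variables
    print(f"{n_variables} variable(s): {runs} experiments")
```

By the fourth variable you’re at 6⁴ = 1296 experiments – which is why optimisation, not design, is where the time goes.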
So now you’re doing a stack of cubes of experiments!! In theory I should have drawn a 4-dimensional hypercube to show that off, but I’m not a good enough cartoonist or mathematician to represent that. But it doesn’t end there, because the little cartoon above, like the previous ones, assumes you can simply look at a single variable to tell you which result is the best one. My sensors are more complicated. I don’t have a single measurement – I have 250 possible measurements, spread across a range, which form a complicated pattern that doesn’t really have a very clear smiley-face or sad-face state.
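One common way around the “no obvious smiley face” problem – not necessarily what I actually do, just a sketch – is to collapse each multi-point trace into a single score, such as its distance from a target response, so the grid search has one number to minimise again. Everything here (the target shape, the toy 5-point traces standing in for real 250-point ones) is invented for illustration:

```python
import math

def score(trace, target):
    """Collapse a multi-point response into one number:
    root-mean-square distance from a target response.
    Smaller score = closer to the response we want."""
    return math.sqrt(sum((t - r) ** 2 for t, r in zip(trace, target)) / len(trace))

# Toy 5-point traces standing in for the real 250-point measurements
target = [0.0, 0.2, 1.0, 0.2, 0.0]  # the (hypothetical) shape we'd like to see
run_a = [0.1, 0.3, 0.9, 0.2, 0.1]   # close to the target
run_b = [0.5, 0.5, 0.5, 0.5, 0.5]   # flat, nothing like the target

print(score(run_a, target) < score(run_b, target))  # run_a is the better sensor
```

The catch, of course, is that this only works once you know what the target looks like – which is exactly the problem.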
So I am trying to find the best possible version of the sensor even though we don’t quite know what that looks like – it’s somewhere in 250 variables of data per run, which could occur with any one of ~20 variations of 4 different variables. The title of this post is not accurate. I should rename it to “Making better sensors by playing 4D battleships while blindfolded”.