Drone Controller

This is a quick theoretical guide on controlling linear dynamic systems, with practical advice for DIY drone builders wanting to implement their own code. If you are an undergraduate engineer, a STEM graduate or a tinkerer with a basic understanding of differential equations, this should make sense and hopefully help you!

Motivation: Exquisite Control — Look at that head!

What is closed-loop control?

Suppose you want to sprint to a wall in the shortest time – do you run at full pace then come to a sudden stop? If you try, you will probably hurt yourself, and either brake too late and hit the wall, or stop short. Instead, we apply a varying level of effort to our legs. But how much nerve impulse do we send to each leg muscle, and how often?

We use our eyes to determine position and speed, but we also use our amazing inner ears to sense acceleration! During our sprint, we frequently measure how close we are to the wall, and how quickly we’re moving. We gradually adjust the demand on our muscles, and repeat that in a “closed loop” until the error (distance from the wall) is zero. Controlling vehicles has many parallels.

A hovering drone without a controller is dynamically unstable, like an upside down pendulum. If you tip it slightly, it will wobble and go all over the place. It needs a constant correction of orientation to stay upright, or move anywhere, and that needs to be automated.

Inverted Pendulum. An unstable linear(ized) system.

Control theory is one of the most beautiful things I learned in undergraduate engineering. It tries to give mathematical answers to the question: what should I do, given where I am, and where I want to be? That includes staying still! Most control systems can be simplified into the block diagram below:

Purple lines are signals. Black boxes are functions or processes (called transfer functions in the frequency domain). We can design the controller, but the dynamics of the plant (the thing to be controlled, e.g. a power plant or drone) are given to us by nature (e.g. mass, length) and are outside our control.

In this blog, we examine each of the elements 1-4 of a simplified closed loop controller for a drone design, along with practical advice on each, to help implement your own.

To illustrate the key principles, we will assume that our “Plant” is a 2D drone in a flat plane with its x-axis pointing into the page. This greatly simplifies the dynamics. Here is my own Python code for a simple 2D interactive controller where I use the assumptions below and the variables and equations in purple.

Controlling a 3D drone is very similar to a 2D one, although the dynamics module becomes more complex, and we need to transform between the drone’s frame of reference and the inertial frame. A good place to learn these for 3D is the UPenn Aerial Robotics Online Course, to which I’ll refer often. I credit the UPenn course with most of my learning in this area (thank you!). If you really want to fast-forward, you can see my code for a non-linear 3D controller here.

1. Getting the Desired State

In Theory

Remember that telling the drone to stay still is itself a desired state (and needs to be controlled against disturbances like wind gusts). Moving, meanwhile, is a series of desired states:

For a human-piloted drone, the pilot will have a route in mind. Using a joystick or some other device, we tell the drone controller what position it needs to be in – the desired state. A joystick can (in theory) communicate information on desired position, velocity and acceleration by moving the joystick knob close or far, abruptly or slowly.

A drone on autopilot will normally be given way-point coordinates from a controller based on a desired path. But does it go between each of them in a straight line? That’s the subject of route planning.

In Practice for a Drone

Suppose you want the drone to touch a number of coordinates. There are an infinite number of paths it can take between them. You can plan a path that maximises speed, minimises acceleration, or even minimises something called “jerk”, which is the rate of change of acceleration.

Minimum jerk trajectories are popular as they minimize the stress on drone components by avoiding abrupt, tight turns, keeping movement smooth.

A minimum jerk trajectory using less than 10 individual way-points

Route planning generates a desired trajectory, which is a series of desired states that vary with time. Using the desired coordinates as solution points, you can find (for example) a minimum jerk trajectory by finding a curve that smoothly fits the desired coordinates.

Here’s an example of a minimum jerk trajectory in 3D that I implemented in MATLAB in the UPenn Course, along with an explanation of why it needs to be a polynomial of the 7th power of time.
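To make this concrete, here is a minimal sketch (my own illustration, not the UPenn course's code) of fitting a single trajectory segment between two way-points: a 7th-degree polynomial pinned by position, velocity, acceleration and jerk at both ends (eight constraints for eight coefficients). The function name and the rest-to-rest boundary conditions are my assumptions.

```python
import math
import numpy as np

def poly_segment(p0, p1, T):
    """Coefficients c0..c7 of a 7th-degree polynomial p(t) joining two
    way-points, with velocity, acceleration and jerk zero at both ends
    (a rest-to-rest segment: 8 constraints, 8 unknowns)."""
    A, b = np.zeros((8, 8)), np.zeros(8)
    for k in range(4):                      # derivative orders 0..3
        A[k, k] = math.factorial(k)         # k-th derivative at t = 0
        for n in range(k, 8):               # k-th derivative at t = T
            A[4 + k, n] = math.factorial(n) / math.factorial(n - k) * T ** (n - k)
    b[0], b[4] = p0, p1                     # positions; other derivatives are 0
    return np.linalg.solve(A, b)

coeffs = poly_segment(0.0, 5.0, T=2.0)

def position(t):
    return sum(c * t ** n for n, c in enumerate(coeffs))
```

Chaining several such segments, one per pair of way-points (and matching derivatives at the joins), gives a smooth trajectory through all of them.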

2. Getting the Actual State

How do we know the current state? Namely: where is the drone, which direction is it pointing (attitude), and how quickly is it moving? This also depends on whether you’re flying a real drone or simulating one using a computer model.

If you are piloting a real drone, you should be able to see it. That’s a good estimate for position, but maybe not acceleration. If the drone is beyond visual range, it will need to communicate back to you its position, acceleration and attitude. You could even get a real-time feed from some camera mounted to it, but that can be heavy, energy intensive and slow.

Note the overlap in certain areas

You could instead get measurements from sensors like GPS (position), LIDAR or ultrasound (distance to obstacles), accelerometers (linear acceleration) and gyroscopes (rate of rotation). The latter two don’t rely on vision, so they are called Inertial Reference Units (IRU) in most vehicles. Remember: the main things we need for the state are the position and attitude, and their rates of change.

In Practice for a Drone

Unfortunately, no sensor is 100% accurate – they all have a minimum resolution. Moreover, getting a sensor reading from source to computer involves a number of electronic circuits and analogue-digital signal conversions — these can introduce noise or even errors.

The IRUs only give us acceleration, so we need to integrate it twice over time to get position — this risks compounding errors.

All of this means we are never controlling the desired state against the actual state, but rather the measured state. There are a number of ways to get around this in practice:

  • Redundancy Using a number of independent sensors to measure the same variable, both for backups and to increase confidence.
  • Statistical Noise Reduction Accounting for noise by statistical methods like Kalman Filters that compare the predicted next state with what the sensors actually read. By combining the noise distribution of both, the uncertainty is reduced … (assuming the noise is truly random). I wrote an article previously that demonstrates a simple Kalman filter for a drone with both inertial measurements and LIDAR readings against a known map. It also discusses sensor fusion more broadly.
Learn how to programme this here
  • Modelling the noise Accounting for sensor noise in the design of your control system, by adding a transfer function (black box) between the actual state and the measured state that models noise and disturbances, giving a linear map between the two.
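To give a flavour of the statistical approach, here is a minimal one-dimensional Kalman predict/update cycle. The scalar state, numbers and noise values are illustrative (a real drone filter tracks a full state vector, as in the article linked above).

```python
def kalman_1d(x, P, z, R, u=0.0, Q=0.01):
    """One predict/update cycle for a scalar state (e.g. height).
    x, P: current estimate and its variance
    z, R: sensor reading and its variance
    u, Q: predicted motion since the last step, and process noise (assumed)"""
    x, P = x + u, P + Q                 # predict: move estimate, grow doubt
    K = P / (P + R)                     # Kalman gain: how much to trust z
    x = x + K * (z - x)                 # update: blend in the measurement
    P = (1 - K) * P                     # fused uncertainty shrinks
    return x, P

x, P = kalman_1d(x=0.0, P=1.0, z=0.8, R=0.5)
```

After one cycle the variance P is smaller than either the prediction's or the sensor's, which is the whole point of the fusion.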

3. Designing the Controller

Now that we have the error between current and desired state…

In Theory

The Controller takes the error between desired and current state as an input, and outputs a control signal (e.g. electrical current) to the plant (the thing being controlled, e.g. a motor). How does the controller relate the output signal to the error? There are normally three ways to do this:

  • Proportional control relates the output signal to the error through a proportionality constant called the proportional gain Kp so that Output = Kp times the position error. This is the simplest and most commonly used control gain.
  • Derivative control uses another proportionality constant Kd, multiplying the error in the first derivative of the state (i.e. the velocity error). This can help in damping the response if we want a high proportional gain.
  • Integral control calculates the output control signal as a function of time and the total error up to that point (the integral of the error). This helps counter any offsets or persistent bias in the sensors or plant, even after the (instantaneous) error goes to zero. We won’t use integral control for this Drone.

The Proportional, Derivative and Integral elements can be combined in a variety of ways to form the family of PID controllers. A correctly designed PID control system can be mathematically proven to drive the error exponentially to zero in a way that is stable.
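For a taste of what this looks like in code, here is a sketch of a PD altitude controller in Python. The mass and gains are illustrative values I've picked for this example, not tuned ones.

```python
def pd_thrust(z_des, z, zdot_des, zdot, m=0.2, g=9.81, kp=20.0, kd=8.0):
    """PD altitude controller: commanded thrust is a weight feed-forward
    plus proportional and derivative corrections (gains are illustrative)."""
    e, e_dot = z_des - z, zdot_des - zdot
    return m * (g + kp * e + kd * e_dot)

hover = pd_thrust(1.0, 1.0, 0.0, 0.0)   # at the setpoint: thrust = weight
climb = pd_thrust(1.0, 0.8, 0.0, 0.0)   # below it: extra thrust is commanded
```

Note the `m * g` feed-forward term: the PD corrections then only have to fight the error, not gravity itself.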

Control system design, and stability, is an engineering degree in itself and too wide to describe in this blog or website. Below are some of the things to consider for a drone.

In Practice for a Drone

In the case of the drone, the motors point vertically. Vertical height has a straightforward, nearly linear relationship with motor speed, so our diagram for a simple 2-box control system would actually work. But controlling the horizontal motion is more complicated because there are no horizontally facing motors (the drone is not fully actuated).

We first need the drone to tilt just a little so that there is a component of its force in the x or y directions. Tilting introduces non-linearity (see dynamics) because of the sine and cosine functions, but it also means there is a loop within a loop in our control system, each with separate gains. First we decide how much to tilt by, then we order the motors to change speeds to generate the moment.
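A minimal sketch of that loop-within-a-loop. The gains, inertia and the sign convention (with thrust along the body z-axis, horizontal acceleration is roughly -g times the tilt at small angles) are my own illustrative assumptions.

```python
G = 9.81  # gravity, m/s^2

def outer_loop(y_des, y, y_dot, kp=4.0, kd=3.0):
    """Position loop: horizontal error -> desired tilt angle.
    Uses the small-angle relation y_ddot ~ -G * phi."""
    acc_des = kp * (y_des - y) - kd * y_dot
    return -acc_des / G

def inner_loop(phi_des, phi, phi_dot, kp=80.0, kd=15.0, ixx=1e-3):
    """Attitude loop: tilt error -> moment for the motors to generate."""
    return ixx * (kp * (phi_des - phi) - kd * phi_dot)

phi_des = outer_loop(y_des=1.0, y=0.0, y_dot=0.0)   # tilt towards the target
moment = inner_loop(phi_des, phi=0.0, phi_dot=0.0)  # then command the motors
```

The inner (attitude) loop typically runs with much higher gains than the outer (position) loop, so the tilt settles fast enough to look instantaneous to the position loop.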

The controller I have designed is a PD controller, so you will only see Kp and Kd terms above, with a further subscript for y, z, or psi (tilt) depending on the axis.

Tuning the gains (choosing their value) is also a bit of an art. There are some simple heuristics like the Ziegler-Nichols method that can help you get there without too much pain, as trial and error can seem endless.

I’m sure gradient descent, as used in machine learning, could be an automatic tool to tune controller gains, but I have not looked into it. [If we are to believe this UC Berkeley Machine Learning Researcher, few people have made the link until recently!]

During most of my control engineering degree, we assumed the systems we were controlling were Linear and Time Invariant (LTI). We assume this for our drone too, although in reality no system is truly LTI, so for the reasons below your simulation will never match reality perfectly:

Linear dynamic systems obey the superposition principle. For example, if you run twice the current through a resistor, it generates twice the voltage. For our drone, this is not really the case, because horizontal motion requires a tilt as an intermediate step – the horizontal force is actually trigonometric in the angle, not linear (and sin(A) + sin(B) ≠ sin(A+B)!).

We use the small angle approximation sin(A) ≈ A, but that limits our drone to fairly gentle maneuvers where it doesn’t tilt too much. For an example of a non-linear controller, have a look at this MATLAB example from the UPenn code — this allows a wilder drone, but stability may be more difficult to prove.
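You can check the cost of the small angle approximation in a couple of lines:

```python
import math

def small_angle_error_pct(deg):
    """Relative error (%) of approximating sin(a) by a itself."""
    a = math.radians(deg)
    return abs(math.sin(a) - a) / math.sin(a) * 100

for deg in (5, 15, 30, 60):
    print(f"{deg:>2} deg: {small_angle_error_pct(deg):.1f}% error")
```

The error is a fraction of a percent below 10 degrees or so, but grows quickly after that: gentle maneuvers only.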

Not very relevant, but very cool non-linear dynamics that you’d struggle to simulate in a computer

Linear controller gains also shouldn’t depend on the state, so twice the error should cause twice the output signal whatever the state. In reality though, our motor thrust will not be linearly proportional to the electric current, as prop blade aerodynamics vary with tilt. For small drones this is negligible at shallow angles, but it’s a different story in helicopters.

Time Invariant systems assume that the dynamics (and hence controller) functions don’t change with time: they are equations we hard-wire at the beginning that work for all possible states of the drone.

In reality, aircraft burn fuel and their center of gravity shifts as they fly, so their dynamics do change and so must the controller to ensure a smooth experience for passengers. The drone dynamics are fairly time invariant (although when the battery gets low, the responsiveness of the motors may suffer).

4. Modelling the Dynamics

After applying the forces to our Drone… what happens?

In Theory

In the Dynamics module of the control system, we take the forces acting on the body, and calculate the resulting motion based on the system’s equations of motion.

It is not essential to know or calculate the equations of motion when designing controllers for complex systems. Many times it’s impossible, so we just measure the output of the system and take that as the next current state.

For LTI systems we can reverse engineer the equations of motion (in a sense) by measuring the plant’s impulse response; that is, its vibration frequency and decay rate resulting from a tap, like a tuning fork!

Everything you need to know about the Fork’s dynamics can be learned by measuring its response to a tap

Most engineering systems can be reasonably simplified into LTI systems obeying a linear second-order differential equation (analogous to the mass–spring–damper system, or damped harmonic oscillator), and their impulse response characterizes them fully. (No real-world system is truly LTI, though.)

However, if we want to simulate the drone’s motion in our case, we need to know its equations of motion, or at least estimate them from first principles. This is especially important if we want to simulate the drone responding to unpredictable inputs like a human controller.

In Practice for a Drone

In the image below, we show the most basic representation of Newton’s F=ma equations of motion for our drone, taken instantaneously. For a 2D system, these are actually sufficient to model the dynamics.

This is a linear map between the external forces, orientation, weight of the drone, and its resultant acceleration. To simulate and plot the trajectory of the drone we have two options:
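As a sketch, those instantaneous equations for a planar (y-z) drone might look like the function below; phi is the tilt angle, F the total thrust and M the moment, and the mass and inertia values are illustrative, not taken from a real airframe.

```python
import math

def dynamics_2d(state, F, M, m=0.2, ixx=1e-3, g=9.81):
    """Instantaneous accelerations of a planar (y-z) drone from Newton's
    equations: thrust F acts along the body z-axis, moment M about x."""
    y, z, phi, y_dot, z_dot, phi_dot = state
    y_ddot = -F * math.sin(phi) / m       # horizontal component of thrust
    z_ddot = F * math.cos(phi) / m - g    # vertical thrust minus gravity
    phi_ddot = M / ixx                    # tilt response to the moment
    return y_ddot, z_ddot, phi_ddot

# thrust exactly balancing weight, zero moment: all accelerations ~ 0
hover = dynamics_2d((0, 1, 0, 0, 0, 0), F=0.2 * 9.81, M=0.0)
```

Notice the sine and cosine terms: this is exactly where the non-linearity discussed in section 3 enters the model.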

  • Real Time Simulation: solve the equations of motion instantaneously at short time intervals, then “plot as we go”, then solve again after the drone has moved through a few milliseconds, and so on.
  • Replay Simulation: solve the equations of motion for a given interval (e.g. 0-10 seconds), store the trajectory, and then replay the whole thing in “simulated” real-time.

Take a look at my python code for both here.

In both cases for a 2D drone, by “solving” the equations of motion we are evaluating the acceleration, then estimating the resultant velocity (and hence position) at small time intervals using discrete integration — effectively a first-order Taylor series approximation.
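That discrete integration step is tiny in code. Here is a minimal first-order (Euler) integrator, checked against the textbook free-fall answer of half g t squared; the step size is an illustrative choice.

```python
def euler_step(pos, vel, acc, dt):
    """One first-order (Taylor) discrete integration step."""
    return pos + vel * dt, vel + acc * dt

# free fall from rest for 1 s in 1 ms steps; the exact drop is -4.905 m
pos, vel = 0.0, 0.0
for _ in range(1000):
    pos, vel = euler_step(pos, vel, -9.81, 0.001)
```

With dt = 1 ms the position error is a few millimetres; halving dt roughly halves it, which is the first-order behaviour the Taylor approximation predicts.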

In 3D, the equations of motion become non-linear: the acceleration doesn’t come just from external forces, but also from rotating simultaneously around two or more body-fixed axes (gyroscopic effects), so we can’t quite use the method above. A numerical solver like ODE45 is used to find a solution for the acceleration and velocity at the same time, within an acceptable margin of error (tolerance) — again at discrete time steps.
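In Python, SciPy's `solve_ivp` with its default RK45 method plays the role MATLAB's ODE45 plays. Here it is integrating a planar drone model for illustration (a 3D simulation would swap in the full non-linear equations); the mass and inertia values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, g, ixx = 0.2, 9.81, 1e-3   # illustrative mass, gravity, inertia

def rhs(t, s, F, M):
    """s = [y, z, phi, y_dot, z_dot, phi_dot] for a planar drone."""
    y, z, phi, y_dot, z_dot, phi_dot = s
    return [y_dot, z_dot, phi_dot,
            -F * np.sin(phi) / m,        # horizontal thrust component
            F * np.cos(phi) / m - g,     # vertical thrust minus weight
            M / ixx]                     # angular acceleration

# hover thrust and zero moment: the drone should hold its height
sol = solve_ivp(rhs, (0.0, 2.0), [0, 1, 0, 0, 0, 0],
                args=(m * g, 0.0), rtol=1e-8, atol=1e-10)
```

The `rtol`/`atol` arguments are exactly the "acceptable margin of error" mentioned above: tighten them and the solver takes smaller internal steps.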

Replay simulations won’t work well for a user-controlled simulation or game; however, for reasons discussed in section 5 (time), they are smoother. Real-time simulation is tricky, because we want the time intervals to be small enough to accurately evaluate the velocity and position using the Taylor series, but we don’t want to perform too many complex calculations between each frame update to the screen…

5. Time … and the real world

In Theory

To bring everything together, the controller is constantly looping through these four steps (or three, if you are going for a replay at the end):

  • 1 Get desired state

  • 2 Measure current state

  • 3 Calculate control signal

  • 4 Estimate next state using Dynamics and ideally plot it (Not essential, unless you want to simulate or model the system).

In Practice for a Drone

Using timers in my code and a 5 year old i5 processor, I think the reality looks a little more like this.

  • 1 Get desired next state by regularly checking the pilot remote controls, calculating any pre-programmed trajectories (10s of microseconds)

  • 2 Estimate the current state by polling sensors at a fixed rate, converting visual information into location information, and estimating speeds by integrating accelerometer data (10s or 100s of microseconds)

  • 3 Calculate control signal (a few microseconds for a linear controller, 10s or more for a non-linear one) and issue the motor commands to the power unit (microseconds-milliseconds)

  • 4 Estimate the next state using the dynamics, or solve the complete simulation by solving the differential equations of motion (10s of microseconds per second of simulation)

    And/or wait for the control signal to turn into motor current to turn into a rotating fan (inertia) to turn into thrust (fluid dynamics).

    And/or plot or render your drone position and attitude (10s of microseconds at best in Matplotlib)

What this means is that the simulation time is not equal to the real passage of time – by the time the solver and plotter have got to the 5th second of the simulation, they may have expended (for example) 5.2 seconds of computing time – the 0.2 seconds being the computing time to run through the 4 steps above. Let’s call this “the Matrix delay”.
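A crude way to measure your own "Matrix delay" is to compare wall-clock time against simulated time, with a stub where the control, dynamics and plotting work would go:

```python
import time

t_wall = time.perf_counter()
t_sim, dt = 0.0, 0.001
while t_sim < 1.0:                # advance one second of simulated time
    pass                          # control + dynamics + plotting go here
    t_sim += dt
matrix_delay = time.perf_counter() - t_wall   # computing time per sim-second
print(f"1.0 s of simulation took {matrix_delay:.4f} s to compute")
```

With real work inside the loop, anything above zero is overhead; if it creeps towards your dt, the "real-time" simulation is no longer real-time.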

The “Matrix delay” could be reduced by a faster computer, but much depends on your CPU clock speed, and whether you can get the CPU to devote itself entirely to the above process. A Raspberry Pi, for example, runs Python within Linux, which doesn’t guarantee how much RAM or how many clock cycles your Python programme gets. To control that more tightly you could use a dedicated computer, which could even be a small one like an Arduino that loops a single programme and nothing else.

Also, compiled programming languages like C++ run faster than interpreted languages like Python, so any serious real-time application needs to be written in the former. C++ also ringfences memory, so you have a better chance of dedicated processing power.

Finally, live plotting of graphics is time-consuming, and I don’t understand it well enough just yet. What I do know is that Python’s standard plotter, Matplotlib, is not particularly fast. It took me a while to find ways of getting it to plot the output of a script while listening in parallel for keyboard and joystick inputs. Here are a few ways to do that (includes a link to a wiring diagram for a Raspberry Pi joystick controller input).

Most professional aircraft control systems and robots have dedicated computers running solely one function. For my first DIY drone though, I’m going to see how far I can get with using a Raspberry Pi. Stay tuned!

Mobility 3 – Energy

Energy versus Emissions

Is your electric car lowering emissions? Maybe not, unless it’s largely charged by solar panels or other zero-carbon sources. Most electric cars consume electricity generated in a power station far away. In the U.K. or Germany, over 40% of electricity is generated by burning fossil fuels like coal and gas. In the U.S. it’s nearer 60%. McKinsey forecasts that most of the world’s electricity will be non-renewable until at least 2035.

2018 UK Energy (left) and Electricity (right) consumption by fuel type


This blog started off cheering for flying taxis in 2018. Environmental consciousness was also swelling behind the scenes, and peaked in 2019. The U.K. made it law to have net zero emissions by 2050, Extinction Rebellion ran amok, and “flight shaming” in Scandinavia pushed demand for air travel down by 3%.

Flying comes under a lot of scrutiny for polluting the world, but many insiders feel aviation is a scapegoat. Is it simplistic to say that flying vehicles are bad for the planet? Maybe, depending on their size. Large airplanes are by far the most practical approach for some long journeys. This article looks at the numbers for smaller vehicles like flying taxis and E-VTOL concepts.


First, some physics. Getting around doesn’t just emit CO2. It consumes energy. Whether you’re burning off a cake to go for a run, or burning diesel to move a lorry, you’re using energy that came from somewhere.

Aside from nuclear energy, all energy on earth comes from the sun (and a little bit from the moon – tidal). The sun’s energy can come as an instant peppering onto solar panels, and it can heave wind through turbines. But today, 80% of global energy consumption is still from fossil fuels: these are super-concentrated stores of solar energy in the form of long-dead plants and animals (or recently deceased ones, in the form of food…).

Rather than just looking at CO2 emissions, this article will look at how much energy different modes of transportation consume. Why? It allows us to compare the cost of motion for anything before specifying its fuel.  It encourages less consumption, period. It also doesn’t let things like solar off the hook: solar panels need land and they need to be replaced every 20 years. Wind farms and biofuel can compete with agricultural land, and a lot of countries don’t have enough land to sustain their food needs.

Complex chart – suggest you see the source: Alexander et al. 2016. Basically, if the whole world wanted the French diet, there would not be enough land to feed us.

If we want to be real about sustainability, we can’t just consume less fossil fuel. We need to consume less energy in total. We also can’t ignore society needing fast ambulances, or that productive adults will want to cross the country with more urgency than backpacking teenagers. So, we need options for speed and comfort. But are flying cars an energy-efficient addition to today’s options?

Flying Cars – Revisited

In 2018 I was excited at the possibility that flying cars would solve the world’s traffic woes. We wouldn’t feel overpopulation because we could build neighborhoods sparser and further apart. Alas, it’s not that simple. Aside from the challenges I have already reviewed like capacity and air traffic management, I had not really appreciated the amount of energy (and arguably, land) it would need.

The flying car craze that swept the world in 2019 is led by companies promising to be “emission free” and “sustainable” by using electric vehicles. Chances are they won’t be. They’ll want to maximize flying time and minimize charging time to be profitable, so they’ll need to speed charge. They’ll probably need public grids for that, and we said earlier that those grids will be up to 60% fossil fuel powered for the next 15 years.

But that’s still not bad if they’re more energy efficient than the alternatives for similar journeys. The problem is, physics makes it very difficult. To move on planet Earth, there are three energy costs: you need to overcome inertia, friction and gravity. The latter is what puts flying things at such a disadvantage to trains and cars, particularly on short routes.

The Numbers

The table below shows the total energy in kilojoules per passenger per kilometer for a typical journey in each mode. I start with how much energy it would take to move (a person or helicopter), and then how efficient it is at converting its fuel (in today’s world) to motion.


For example, it’s estimated that humans need to burn around 3kJ of food to get 1kJ of motion because of entropy and heat. We need to send 115kJ of electricity from the station to get 100kJ into a train’s wheel motors, because of transmission and conversion losses.
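The bookkeeping behind such numbers is simple division. Here is a sketch using the two worked examples above; the figures are the article's illustrations, not measured data, and the function name is my own.

```python
def primary_energy(motion_kj, conversion_eff, passengers=1):
    """Primary energy (kJ) per passenger for a given amount of energy
    delivered as motion, given a fuel-to-motion conversion efficiency."""
    return motion_kj / conversion_eff / passengers

food_for_running = primary_energy(1.0, conversion_eff=1 / 3)      # ~3 kJ food per kJ of motion
grid_for_train = primary_energy(100.0, conversion_eff=100 / 115)  # ~115 kJ sent per 100 kJ at the wheels
```

Dividing the result by the journey's passenger count and distance gives the kJ-per-passenger-km figure used in the table.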

Message me and I’ll send you the full list of assumptions and calculations. I would love to have people check and validate my method. Assuming I’m generally correct, we can deduce:

  • For the same commute, a flying electric car will consume nearly 4 times more energy per passenger than a normal electric car. (Although it will be nearly 3 times faster).
  • High-density, high-efficiency electric trains are the most efficient form of mass transport, consuming between a third and a tenth of the energy per person per km of today’s flying solutions.
  • Cycling is extremely energy efficient and happens to make you healthy and happy.
  • We could shave around 50% of the domestic flying fuel bill in Europe if we banned jet aircraft on routes under 1,000km and replaced them with turboprops – people would just need to live with slightly longer journeys.
  • Helicopters perform very poorly, which is why the two best-funded electric flying car concepts are tilt-rotors, below.

What this means for Urban Air Mobility

Monstrosity or Masterpiece

In the last 18 months, nearly a billion VC dollars have poured into urban air mobility startups. Believing that VCs employ smart people, I am sure they’ve identified markets where energy efficiency won’t become a regulatory problem. There are hundreds of concept vehicles out there, but these are probably the front runners:

  • Joby Aviation (+ Uber) – tilt-wing concept with several embedded fans (revision 1); $700M+ raised from investors like JetBlue and Toyota
  • Lilium – tilt-wing concept with several embedded electric fans; $100M+ raised from investors including Skype’s ex-founder
  • Volocopter – VTOL helicopter with several electric fans instead of a turboshaft rotor; $50M+ raised from investors like Intel

Without giving too much away, Uber intends to pilot its service in Dallas and Los Angeles, and Lilium’s homepage animation shows a beautiful machine zipping around New York. The question is: will flying cars start in luxury niches then become mainstream, or stay in luxury niches forever? Here’s my guess:

Europe, Japan and California

In regulated and eco-friendly markets like Japan, Europe and California, I think physics will soon become a major challenge to anything flying, especially since high-speed rail is a well-developed form of travel in the first two, and cycling routes are proliferating within major cities.

So far, the states that have signed up to a net zero CO2 target for 2050 contain about 1 billion people, or 14% of the world’s population. Many have low population growth or are even in decline, meaning their citizens could number closer to 10% of the total in the coming decades.

However, they are also where most of the world’s wealthiest people live, and where the transportation infrastructure is generally good. As such, I can only see flying taxis becoming a luxury niche market here, replacing helicopters and maybe developing into premium routes.

In terms of mainstream flying within Europe, I am struggling to see the basis for long-term growth if net zero carbon targets are taken seriously. On routes under 1,000km, turboprops may become mandated over jets; on even shorter routes, flying may be banned altogether if there are suitable train connections.

Rest of the USA

I sat with a friend who ran a future product strategy project for a major American aerospace company. The view there is: flying is the best way to get between cities, and energy isn’t a worry. The GDP and scale of the U.S. coupled with the lack of high-speed rail options creates a necessity for flying; the U.S. is the world’s dominant private and general aviation market.

Guess Lilium’s Launch Market

Electric tiltrotors promise to be cheaper, more reliable and faster than helicopters and private aircraft, so there could be a large market here, possibly even becoming mainstream in places. Uber intends to trial its pilot air taxi service in Dallas and Los Angeles. It is also likely the U.S. will make its own equipment, from vehicles to service infrastructure.

India and China

Flying in general is in its early days in China and India – only 10% of their populations have ever flown, despite the two countries having 40% of the world’s population. They are also where most of the forecast growth in passengers is expected, with nearly half of the growth in demand to 2035. Neither country has made serious CO2 commitments, although both are net energy importers.

I’m honestly not sure what the flying car market will be in either country, but I suspect it will remain a luxury for the foreseeable future. China, for example, has the world’s longest high-speed rail network, which may make inter-urban flying unnecessary.

But both are so big that even their niches could be substantial. I think the combined market in India and China will be somewhere between Europe’s and America’s.

Bottom Line

With the world’s population and consumption growth, pressure on energy and land, increasing low-carbon policies and the many existing alternatives like high speed rail, I can’t see flying taxis becoming a mainstream transport method anywhere outside the USA. The high net-worth niche is likely to be a sizeable market in China and possibly India. But in Europe it’ll probably be an amusement ride, although emergency vehicle and VIP transport could be justified use cases.

Mobility 2 – Keeping Track

Worrying that some faces still look frustrated with the traffic… even up there.

In the Jetsons, we see thousands of flying cars zooming about in an invisible lattice in the sky, safely. This got me thinking. What is needed before we can trust something like that existing in the heavens above London?

Many of us in London bemoan the decision to expand Heathrow because of noise implications (we’ll come to that in a later blog). We are told that it is too congested. In the meeting I attended with TfL on flying cars, they claimed that London had the world’s most congested airspace, which is why helicopter flights are so difficult to approve. Yet when you look up at the sky in most parts of London, you’ll almost never see an aircraft. What’s going on?

The Challenge

The first challenge for any mobility system is for its vehicles not to crash into each other. On roads this is achieved by individual drivers using their eyes, plus speed limits – but we still get collisions. In the skies, passenger aircraft fly at anywhere between 300 and 900 kilometers per hour. The nearest collision risk is likely beyond your visual range, and that’s if it’s daytime. On top of that, aircraft fly in 3D, so the next collision could come from above, behind or below you. Picture making a wrong turn in this environment:

Visualization of UK air space done by NATS

In this challenging 3D environment, we need a way of tracking the position (including altitude and, importantly, speed and heading) of all vehicles in a space: to make sure their current paths are clear, and that there is enough separation that emergency action by any aircraft won’t mess things up for others. This also explains why London’s air space looks clear to our eyes: the clearances required for large aircraft mean they would look like sparse specks on a map. Tracking the aircraft is done by technology (GPS, radar and VHF ranging) but, believe it or not, it’s human beings at the end who oversee the management of air space using this information. They’re called air traffic controllers (ATC).
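The core calculation behind conflict detection is simple geometry: given two straight-line tracks, when are the aircraft closest? Here is a hedged sketch; real separation logic layers altitude bands, measurement uncertainty and regulations on top of this.

```python
def time_of_closest_approach(p1, v1, p2, v2):
    """Time (s, clamped to the future) at which two aircraft flying
    straight-line tracks are nearest. Positions in m, velocities in m/s."""
    dp = [a - b for a, b in zip(p1, p2)]          # relative position
    dv = [a - b for a, b in zip(v1, v2)]          # relative velocity
    dv2 = sum(c * c for c in dv)
    if dv2 == 0.0:                                # parallel, same speed:
        return 0.0                                # separation never changes
    return max(0.0, -sum(p * v for p, v in zip(dp, dv)) / dv2)

# two jets head-on at the same altitude, 10 km apart, 250 m/s each
t_star = time_of_closest_approach((0, 0, 9000), (250, 0, 0),
                                  (10000, 0, 9000), (-250, 0, 0))
```

Plugging t_star back into the tracks gives the minimum separation, which can then be compared against the required clearance.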

ATC Today

The International Civil Aviation Organization (ICAO) was convened in 1944, at the end of the war, to develop a shared set of rules for all member countries to allow safe air travel between them. This includes things like aircraft call-signs, identification protocols, and flying at even altitudes (for example 34,000 ft) when heading west, and odd ones (e.g. 35,000 ft) when heading east.

Each country then formed its own National Aviation Authority (NAA), which made the ICAO regulations (or a variation of them) effectively into national transport law. For example, the UK’s Civil Aviation Authority effectively rolls up into the Government’s Department for Transport, and sets the rules for any flying passenger vehicle over UK skies.

Europe’s NAAs are in theory able to join up and manage the whole of Europe, since their regulations are so similar and their borders adjacent – the organisation coordinating them is called Eurocontrol, and Eurocontrol is working towards a fully joined-up European sky in a programme called SESAR.

That’s for international travel. If we want to get back into the Grand Challenge in the UK, i.e. flying cars and urban air vehicles, we need to focus on the CAA, its service providers and the technology available.

(Civil Use) Radar Today

Radio Detection and Ranging, or RADAR, has been around since the early 20th century, and is still the most widely used way of tracking moving objects. An electromagnetic pulse of a specific frequency is emitted into space, and when it hits something, it bounces back to a receiver. From the direction of the beam and the time the echo takes to return, the location of the object can be determined, and by measuring the Doppler shift of the returned wave, the speed of the object (at the moment the wave hit it) can be worked out.
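To make the Doppler idea concrete, here’s a toy calculation in Python. It assumes a monostatic radar (emitter and receiver co-located), where the round trip doubles the shift: f_d = 2·v·f₀/c. The pulse frequency and shift below are invented for illustration.

```python
# Radial speed of a target from the Doppler shift of its radar echo.
# Monostatic radar: the round trip doubles the shift, f_d = 2 * v * f0 / c.
C = 299_792_458.0  # speed of light, m/s

def radial_speed(f_emitted_hz: float, doppler_shift_hz: float) -> float:
    """Speed of the target along the beam, in m/s (positive = closing)."""
    return doppler_shift_hz * C / (2.0 * f_emitted_hz)

# An S-band (3 GHz) pulse whose echo comes back shifted by +5 kHz:
print(f"{radial_speed(3e9, 5e3):.0f} m/s")  # ~250 m/s, roughly 900 km/h closing
```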

The range and resolution of the RADAR depends primarily on the power source of the emitter. If you wanted to see all around you, you’d send out a spherical beam, but since the power of your emitter would be spread over a huge surface area, the intensity at any given point would be low, meaning your resolution would go down, and you’d need a huge amount of power to compensate.
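The “spread over a huge surface area” point can be made quantitative: the pulse spreads over a sphere on the way out (intensity falling as 1/R²) and the echo spreads again on the way back, so the received echo power falls as 1/R⁴. A minimal sketch:

```python
# Received echo power vs range: 1/R^2 spreading out, 1/R^2 spreading back,
# so the echo falls off as 1/R^4 overall.
def relative_echo_power(range_m: float, ref_range_m: float = 1.0) -> float:
    """Echo power at range_m, relative to the echo at ref_range_m."""
    return (ref_range_m / range_m) ** 4

print(relative_echo_power(2.0))   # double the range -> 1/16 of the echo power
print(relative_echo_power(10.0))  # 10x the range -> 1/10,000
```

In other words, to double your detection range at the same resolution you need roughly sixteen times the transmitted power.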

You could instead focus a beam in one direction and mount it on a rotating platform, but that introduces some other limitations: depending how quickly the objects move compared to the rotational speed of the emitter, you may lose track of them. In the gif below, what we’re seeing is a snapshot of each wedge of the map, updated at the frequency of rotation.


Back when computers were rudimentary, each returned signal had to be interpreted and classified at the same rate as, or faster than, the signal swept around the circle; this again limited how fast you could sweep, and also the number of targets that could be simultaneously tracked.

Limitations to Today’s RADAR systems

Aside from the range and power constraints already mentioned, the electromagnetic frequency bands that a radar can use are assigned by the CAA. Jamming or interference works by sending noise or false readings at these frequencies to the receiver, confusing the processor into seeing things that aren’t there – this is the basis of electronic warfare.

Target classification and identification is a challenge for the processor that interprets the receiver signals: emitted signals reflect not only off aircraft, but off birds, wind farms and skyscrapers. Certain wave frequencies can get diffracted by high terrain or weather systems. And once the signals are back, the processor needs to piece together not only that something is there, but what exactly it is. Back when ICAO convened in 1944, aircraft were far fewer and slower, and this was not too much of an issue.

Today however, if we want to track thousands of tiny, zippy delivery drones and flying cars around Hong Kong, we need much more sophisticated systems.

RADAR Tomorrow

The emitter and receiver problems discussed earlier are physical limitations which can be crudely addressed with a higher power input or more efficient electronics in the emitter and receiver. The electromagnetic frequency bands available are a regulatory decision, which is becoming more of an issue these days as certain frequencies become saturated – and the bands available today for RADAR may not be the optimal ones for detecting much smaller objects.

Where technology has been able to make a difference, however, is in computing power and efficiency. Faster processors can track more targets. Intelligent signal processing coupled with machine learning can distinguish birds from wind turbines and drones.

To manage an airspace like London’s, a 3D “always on” RADAR will likely be needed – the rotating architecture of old is like spinning a blinkered animal and asking it to constantly report what it sees. This market is expected to grow nearly 20% annually, and UK players such as Aveillant (now part of Thales) have compelling products, as do SAAB and Rockwell Collins.

One last thing – TCAS

All of this has so far been about centralised control of the airspace – one central air space manager such as NATS sees all and controls all. TCAS, the Traffic Collision Avoidance System, is the decentralised complement: transponders on board each aircraft talk directly to each other and negotiate avoidance manoeuvres without anyone on the ground in the loop. As the number of vehicles spirals, and vehicles become much more zippy, how efficient can central control alone be?

Mobility 1 – Grounding Ambition

The Grand Challenge

In my about section I mention a “Grand Challenge” of mobility. What is that exactly? It starts with being dissatisfied with the way we get around cities, local towns and suburbs. Most of us within commuting distance of central London spend nearly an hour each way, door to door – depending on your source. This is 10 hours a week, or a working week per month, of travel.

What about the quality of those 40 hours every month? If you use public transport, such as the Central line, your brain is shaken out of your skull as the carriages shriek and hurtle, and the atmosphere in the carriage is suitable for studying tropical disease. Roughly 20% of public transport commuters take urban trains like the tube, according to Department for Transport statistics from 2017.

Another 20% of British public transport users are on the overcrowded and regularly delayed national rail, with the remaining 60% taking the bus. It may be possible to read or listen to a podcast on your commute, but doing anything remotely productive is out of the question. That’s a loss to your well-being and to the economy.

A lot of this blog will be focusing on solutions. If I whinge too much about the problem, caution me in the comments! It’s easy to criticize public transport, as any biased commuter would. But it is also important to remember that London’s public transport is among the world’s best. Maybe not so much in vehicle and passenger ergonomics, but in how well it works as a system. At the end of the day, it is quite intuitive, reliable, convenient, and for the most part bearable. I’ve met TfL with work a few times to discuss mobility, and their people only reinforce my view that it’s a competent, caring and progressive organisation – positive change is coming!

There has been no significant increase in total public transport travel for 40 years

Outside of public transport, point-to-point car travel still dominates as the British mode of transport. Remarkably, public transport passenger-kilometers are nearly unchanged over the last 40 years. In the same time, however, car, van and taxi passenger-kilometers have doubled – with the obvious consequences of congestion, noise and emissions in our neighbourhoods.

Source: Department for Transport 2017 Report

The Grand Challenge is a term the Department for Transport coined in its most recent call for evidence. It’s about making transportation easier, cheaper and healthier. Technology can play a huge part, from autonomous vehicles to smart ticketing and intelligent cities. There is a recognised need to design new components and an improved system to integrate them.

One of the best things I learned in our design course at engineering school was problem definition and abstraction: before we fixate on a particular solution and its engineering, it’s important to frame the problem and decide how much (or how little) of it we want to focus on. We need an abstract problem statement for the system, and some measure of a good solution.

Energy and Well-being

Most policy decisions, and most decision problems in general, can be represented as: how can we maximize a desirable outcome while minimizing its cost? We could further condense this into maximizing some score like: score = \frac{good~outcome}{cost}.

Seems simple, but many variables affect “desirable outcome”. In my mind these all roll up into well-being. For transport this could be the speed, the reliability, the safety, the cleanliness, the level of noise and vibration, but also the degree to which the mode of transport encourages a healthy posture and/or activity (e.g. cycling), so that we don’t all become sedentary and end up costing the public in mobility scooters. Making fewer decisions is also crucial to well-being, which is why it’s a lot less stressful to be a passenger than a driver, or to take a single mode for the entire journey rather than change three times (more about Mobility-as-a-Service later).

For cost, there is the direct energy consumption of the vehicle, but also the energy required to build and maintain it and its infrastructure. Its costs to the environment, and hence ultimately to us, include pollutants and also NOISE. I’m a little perplexed that noise is not mentioned in DfT’s recent call for evidence: seriously, the word “noise” is not even in the document. There is a separate article on noise coming, you can believe that. For now, believe me when I say that chronic exposure to background traffic noise can cause anxiety in children, degeneration of the brain and a weakened immune system.

We could put the energy equivalent of the economic costs of noise into the energy part of the equation, or subtract it from the well-being part.

So, the Grand Challenge can be thought of as the abstract problem of taking today’s Mobility System Score: score = \frac{wellbeing}{energy} and increasing it by a certain amount.

The Mobility Grand Challenge can be framed as taking today’s “national mobility score” and increasing it by a certain amount


Okay, so we have a Mobility System Score that we’d like to maximise. Next we need to realise that national mobility is the sum of the individual journeys that people make; there are many types of journey, and each will have an optimal local solution or mode of transport, but often we may need to rely on a combination of these modes for an end-to-end journey. Let us define segments with Segment Scores, and a system to pull them together with a Mobility System Score.
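As a toy sketch of how Segment Scores might roll up into a Mobility System Score – the modes, journey shares and numbers below are all invented purely for illustration:

```python
# Toy model: each segment gets a wellbeing/energy Segment Score, and the
# system score is the journey-share-weighted sum. All numbers are made up.
def segment_score(wellbeing: float, energy: float) -> float:
    return wellbeing / energy

def system_score(segments):
    """segments: list of (share_of_journeys, wellbeing, energy) tuples."""
    return sum(share * segment_score(w, e) for share, w, e in segments)

today = [
    (0.6, 5.0, 2.0),  # car: moderate wellbeing, high energy
    (0.3, 6.0, 1.0),  # train: better wellbeing per unit of energy
    (0.1, 8.0, 0.5),  # cycling: high wellbeing, very low energy
]
print(f"{system_score(today):.2f}")  # a baseline to improve upon
```

Improving the system score then means either improving a segment (a better vehicle) or shifting journey share towards higher-scoring segments (better integration).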

Source: Department for Transport 2017 Report

A good system will have seamless high-level integration between different modes of transport. Mobility as a Service (MaaS) can be thought of as a service where you just say you want to get from A to B, and it joins up the cycle hire, bus and train journeys you need into a single price and a seamless, connected journey. That’s the service level. The product level could be something like a cycle lane from a suburb that terminates in a local train station, where you ramp onto a dedicated cycle-carrying train carriage. The train hauls all the cyclists into central London, and you cycle the last few miles into work. That’s another one for the government or a high-level integrator like TfL. Decentralisation and private enterprise are needed to encourage innovation from the bottom up for certain journey types, but innovating in isolation can result in poor integration. A framework for how different modes fit together, like Lego, is a good idea in my books – sort of like a set of guidelines for developing an iOS app: you can do whatever you like as long as it works seamlessly on any Apple device and talks nicely to the other services on the OS.

Okay, we’ve opened up the abstraction funnel; we have an awareness of the context, the overall system and a way of scoring solutions. Now to start converging on a particular segment.

The 30 Mile, 90 Minute Trip

As an aerospace engineer and lover of aircraft I am no doubt biased. I am particularly interested in a segment that appears well suited to an aerial solution – hub-to-hub travel between major suburbs, major ports and work centres; for example Watford to the City, Gatwick to West London or Surrey to Westminster. Let’s call it the 30-mile, 90-minute trip. A common yet nasty commute: horrible to drive, with traffic and constant stopping and starting, and stressful as hell by public transport, as you will likely change trains twice, stand for most of your trip, and encounter some form of delay.

Uber calculates that for a 50-mile trip, a vertical take off and landing aircraft (VTOL) travelling at 125mph has the same motion efficiency as a car travelling at 65mph, which is a close enough segment for me to be encouraged that VTOL can be a good solution for cutting many people’s commutes by half. Imagine 20 hours a week back to you to spend on your well being, with family, pets or on pet projects.

Not the most intuitive chart, but in simple terms the further you are from the origin, the faster or more energy-efficient your vehicle. Diagonally up to the right is best. Source: Uber Elevate White Paper

Uber further estimates that the per mile cost of a VTOL would start off about 10% more expensive than taking an Uber X, dropping in the medium term to about half of an Uber X trip and in the long term comparable to using your own car for travel. Obviously Uber wants to offer their own flying taxis, so all of these figures should be taken with a pinch of salt, however they have performed a quite comprehensive and structured analysis to get to these figures in the paper and it’s worth looking through the methodology and plugging your own numbers in.

Passengers and Bystanders

“Having lots of whirring motors flying above my head… isn’t exactly an anxiety reducing situation – is that blade going to come loose and guillotine me?” – Heavily paraphrased Elon Musk Quote

Okay, so I have arrived at a possible solution after considering the bigger picture, abstracting the problem, choosing a segment and making a start at evaluating its score using direct energy costs. If we think back to our scoring system, we realise that these cold numbers are just the beginning. We also need to consider other proposed solutions to the same problem before fixating on one and defending it dogmatically. This is where it gets quite challenging, and where cities will likely diverge in their final choice because of local preferences and details.


Virgin’s Hyperloop One

Let’s consider some land-based proposals like Hyperloop, high-speed tunnels and HS2, assuming they cost the same per mile (which they most probably won’t). I’ll have to do a detailed comparison for a particular city pair one day, but here are some initial considerations:

Local Noise and Emissions: Tunnels and tubes are largely hidden from people’s senses. Elevated tubes less so. Flying vehicles inevitably noisy.

Passenger experience: Lurched into the heavens, or accelerated into a dark tunnel at mind-boggling speed? Both are outside human comfort zones, but the latter can be made comfortably gradual, especially without windows.

Infrastructure: Initial tunnelling is expensive, and raised rail is an eyesore for which optimal land is difficult to obtain. Trains and hyperloops need high-precision rails to be regularly maintained. Flying cars need nearly no infrastructure, as most concepts intend to repurpose rooftops, local airfields or service stations near highways.

System integration: Tunnels and hyperloops could allow bikes or cars to slot right on, but stations are few, far between and likely in built-up areas too far from suburbs. Flying taxi ports could be anywhere, unconstrained by the route of a rail line.

Capacity: Trains start higher but are ultimately constrained and difficult to scale without new tracks, and a single vehicle or track failure shuts down the whole line for everyone. Flying cars start lower-capacity and more expensive, but are in theory very scalable through parallel air lanes in 3D.

Failure mode: A flying car falls from the sky – parachuting to the ground if lucky, tumbling uncontrollably into a crowd if not. Not inconceivable. A train or hyperloop car gets stuck in a tunnel: claustrophobic, and catastrophic in a fire. A derailment causes extensive damage to infrastructure, but is very unlikely given significant human experience with high-speed rail.

For the same network, a system of tubes and tunnels appears much more desirable than a route network of flying vehicles, but with today’s technology the time and cost to implement the infrastructure for an ultra-high-speed train/tube network is likely crippling beyond a few key town/city pairs.

Up Next

Initial research suggests that flying cars could be a good solution for the 30-mile, 90-minute trip, giving up to 20 hours a week back to greater London commuters, reducing road traffic and helping people live more spaciously, away from the urban centre. Whilst a lot of the human factors aren’t as favourable as a hyperloop’s, the time and cost to introduce VTOLs is a fraction of a hyperloop or high-speed underground network, with a much higher ability to customise and scale.

The Segment Score of a flying car appears to be a multiple better than driving, assuming the noise can be kept at least close to highway levels and that safety is a multiple higher than driving which Uber believe is possible. The effect on the Mobility System Score will depend on the accessibility of the ports to suburban commuters, the reliability of the service and the degree of integration with local and national transport schemes. For widespread adoption many local factors would need to be overcome including public acceptance, regulation and safety demonstrations.

To get a better handle on the scores, we proceed to conceptual design and evaluation – the fun bit! Stay tuned.



Localization and Mapping

I haven’t posted for a while because I’ve been a bit stuck with this interesting, yet frustrating MOOC on Coursera. According to the marking system, I’ve passed, so I wanted to share what I learned about how statistics help drones (or any intelligent robot) navigate. Check out my code to generate the below simulation.

Using LIDAR readings from a map to help localise


If you are driving a car, the speedometer will tell you that you’re going at 50 mph. You will know that you’re going at 50 and not 55, but you probably won’t  be able to tell if you’re going at 49.96 or 50.05. This may be because the needle is too thick or because there is a tiny imbalance in some part of the machinery connecting the axle to the gauge. Fortunately, we don’t care whether we’re going at 49.96 or 50.05 mph because it makes no material difference to how we drive.

Whilst humans are usually very certain about where they are, where their nose is pointing and how many steps they need to climb, it turns out uncertainty is a massive problem for autonomous systems like robots.

Imagine a missile travelling at 2.8 km per second needing to intercept a 1 m wide asteroid on a collision course with a new space station – if the missile travels 500 km to hit its target, it needs to be controlled incredibly accurately so that all the small imperfections along the way don’t take it off track. It may be that the accelerometers on board have a manufacturing tolerance that means their readings can be out by 5%. What to do? How do we direct the missile if we’re not sure how fast it’s going?


Disclaimer: I used to hate statistics at school and in my early working days, because we went straight into the equations without talking about the essence. Since dabbling in machine learning and robotics, I now appreciate the need for them.

For example: if I know exactly what the market will do tomorrow, I can take a single action to plan for it precisely and optimally; but if I don’t, I need to consider the different scenarios and either plan for the most likely one(s), or ideally all of them.

Similarly, for a given atomic energy level, an electron “cloud” is a representation of all possible locations of an electron around its nucleus, weighted by probability. The electron is so small, fast and damn spooky that it makes more sense to represent and use all its possible positions for analysis, rather than try to keep track of the crazy particle.

An electron “cloud” is all possible locations where an electron is likely to be found. It describes the electron positions more completely, if not precisely

Statistics gives us a way to manipulate the entire distribution, rather than individual certain events (samples). If we have a process that takes a variable through a large number of steps, the variable becomes the distribution – the collection of all  outcomes weighted by probability – instead of a single discrete observation.

We typically end up with better predictions by carrying the uncertainty through the process; one way to think about this is that after a large number of random events the “noise” typically averages out.

Another key idea in statistics is inference, as in Bayes’ rule, where we can work backwards: from a series of observations we can infer the most likely probability distribution they were drawn from. It’s worth reading up on this before attempting the MOOC mentioned in the next section.
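A minimal numerical example of that backwards inference, with invented numbers: two candidate hypotheses about the world, equally likely beforehand, and an observation that is four times more likely under the first.

```python
# Bayes' rule for two competing hypotheses A and B:
# P(A | data) = P(A) P(data|A) / [ P(A) P(data|A) + P(B) P(data|B) ]
def posterior_a(prior_a, like_a, prior_b, like_b):
    evidence = prior_a * like_a + prior_b * like_b  # total prob. of the data
    return prior_a * like_a / evidence

# Equal priors; the observation is 4x more likely under A than under B:
print(posterior_a(0.5, 0.8, 0.5, 0.2))  # 0.8 -- belief shifts towards A
```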

Kalman Filter and “Sensor Fusion”

Imagine chasing a friend through a thick forest. You hear a shuffle, you turn towards it. You catch a blur, your eyes focus. You smell his fear… okay, maybe a bit weird, but you get the idea – the information you have about your friend’s location increases as you take more independent measurements. Or, statistically: your estimate of your friend’s location is less uncertain (has less variance) if you use both your eyes and ears, than if you use only one of them independently. This is sensor fusion – kinda.
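That variance-reduction claim can be written down directly. For Gaussian estimates, the optimal fusion is an inverse-variance weighted average – here is a sketch with invented numbers:

```python
# Fuse two independent Gaussian estimates (mean, variance) of the same thing.
# The fused variance is always smaller than either input variance.
def fuse(mean1, var1, mean2, var2):
    fused_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    fused_mean = fused_var * (mean1 / var1 + mean2 / var2)
    return fused_mean, fused_var

eyes = (10.0, 4.0)  # e.g. bearing to your friend: mean 10 degrees, variance 4
ears = (12.0, 1.0)  # a sharper, independent estimate from a second sense
mean, var = fuse(*eyes, *ears)
print(mean, var)  # ~11.6, 0.8: pulled towards the sharper sensor, less uncertain
```

Note the fused variance (0.8) beats even the better sensor alone (1.0) – exactly the “eyes and ears together” effect.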

Using a combination of RADAR, LIDAR, Cameras and ultrasound the vehicle computer assembles a model of its surroundings. Areas of sensor overlap are likely to have the most accurate picture.

In this MOOC you can learn about one way of implementing a sensor fusion algorithm for a robot that decides where it is using a combination of its last position, readings from its wheel transducers (how far it thinks it has moved) and inertial sensor information – all of these variables carry uncertainty (noise), which is assumed to be Gaussian, making it easy to combine them in closed form.
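The course builds this up properly, but the core predict/correct cycle can be shown in one dimension. This is a generic textbook sketch, not the MOOC’s actual code, and all the noise values are invented:

```python
# One cycle of a 1D Kalman filter: predict from commanded motion, then
# correct with a noisy position measurement, weighted by the Kalman gain.
def kalman_step(x, p, u, q, z, r):
    """x, p: state estimate and variance; u, q: motion and its noise
    variance; z, r: measurement and its noise variance."""
    x_pred, p_pred = x + u, p + q        # predict: move, uncertainty grows
    k = p_pred / (p_pred + r)            # gain: how much to trust the sensor
    x_new = x_pred + k * (z - x_pred)    # correct towards the measurement
    p_new = (1.0 - k) * p_pred           # uncertainty shrinks after fusing
    return x_new, p_new

x, p = 0.0, 1.0                          # start roughly at the origin
for z in [1.1, 2.0, 2.9]:                # noisy readings as we step forward
    x, p = kalman_step(x, p, u=1.0, q=0.25, z=z, r=0.5)
print(round(x, 2), round(p, 3))          # estimate near 3, variance well below 1
```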

Localisation Using a Particle Filter

Imagine we fly our robot into a dark cave on a rescue mission. GPS doesn’t work in the depths of the cave, yet the robot still needs to navigate: it needs to map the inside walls, avoid them and report its position. The robot needs to be light, so it carries the bare minimum of sensors: an inertial reference unit (IRU) to measure its acceleration in its own body-centred frame of reference, a light detection and ranging system (LIDAR) to measure the distance to walls and obstacles, some simple on-board logic, and a long-wave radio to communicate coordinates and acceleration back to the controller.

From its first position it returns the part of the cave it can see using LIDAR, with some uncertainty. It moves a tiny bit and returns an updated map reading from the LIDAR – the parts of the cave which agree in both LIDAR readings become more certain, starting to look like more solid walls.

The IRU should help us keep track of the robot’s acceleration, but it will not be perfect, because of its resolution and noise during transmission. This inaccuracy builds up, especially as we integrate acceleration twice to get position. Here is where we rely on the previous LIDAR map readings to help us pinpoint our location, or localise.
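A quick simulation shows how fast this builds up. Assume a hypothetical 100 Hz accelerometer with a small constant bias, on a robot that is actually stationary – all the numbers are invented:

```python
import random

# Double-integrating a biased, noisy accelerometer: position error grows
# roughly quadratically with time, even though the robot never moves.
random.seed(0)
dt, bias, noise_sd = 0.01, 0.02, 0.1  # 100 Hz IMU; bias and noise in m/s^2
v = x = 0.0
for _ in range(10_000):               # 100 seconds of "flight"
    measured = 0.0 + bias + random.gauss(0.0, noise_sd)  # true accel is zero
    v += measured * dt                # first integration: velocity drifts
    x += v * dt                       # second integration: position runs away
print(f"position error after 100 s: {x:.0f} m")  # on the order of 100 m
```

A bias of just 0.02 m/s² produces roughly ½·0.02·100² ≈ 100 m of position error in 100 seconds – which is why an external fix like the LIDAR map is essential.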

The Localisation Journey

Suppose the IRU tells us we have moved by a certain amount from our last position to position A. Not trusting it completely, we instead assume our next location is a probability distribution (like the electron cloud!) which A belongs in. This way we can use statistics to work with uncertainty and help us average out noise along the journey.

We have to use some other information to help us prove which distribution we are most likely to be in – we can use the map! It’s like when somebody gives you directions then calls you when you’re nearby and asks “what can you see?”

The localisation problem then becomes: given what we can see in the vicinity of position A, what is the most likely probability distribution of our location? This is an application of Bayes’ rule.

This is an area of active research and guesswork – which probability distributions do we assume we are in? The Gaussian is a common choice because it is easy to manipulate in closed form. To make the computation much faster, we randomly pick a few positions (particles) to test from each candidate distribution. What follows is something like this:

# You are given the latest and best-known map
# You are given a best estimate of your previous position (Distribution Z)
# You are given a new set of LIDAR readings and an IRU measurement
# Where are you?

Try Distribution A
 Project a number of particles from Z using the IRU measurement plus some perturbation
 For each particle
     Record how well the LIDAR readings correspond to the map
 Does the distribution of LIDAR scores for the particles correspond
 to Distribution A? [True or False]
 If True
     Select Distribution A as the distribution of the new position
     Update Distribution Z to Distribution A
     End and exit
 If False
     Go back to the start and try the next candidate distribution
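The loop above can be fleshed out into a runnable sketch. Below is a deliberately simplified 1D version – a robot moving along a corridor whose “map” is just a list of door positions, with a range sensor reporting the distance to the nearest door. The map, noise levels and trajectory are all invented for illustration:

```python
import math
import random

# 1D particle filter: particles are candidate positions; each step we project
# them with the commanded motion (plus perturbation), weight them by how well
# the range reading fits the map, then resample in proportion to the weights.
random.seed(1)
DOORS = [2.0, 5.0, 9.0]            # the known "map"
MOTION_SD, SENSOR_SD = 0.1, 0.2    # invented noise levels

def nearest_door(x):
    return min(abs(x - d) for d in DOORS)

def step(particles, move, reading):
    moved = [p + move + random.gauss(0, MOTION_SD) for p in particles]
    weights = [math.exp(-0.5 * ((reading - nearest_door(p)) / SENSOR_SD) ** 2)
               for p in moved]
    return random.choices(moved, weights=weights, k=len(moved))

truth = 1.0                        # the robot's actual (unknown) position
particles = [random.uniform(0, 10) for _ in range(500)]  # clueless at first
for _ in range(10):
    truth += 0.5                   # robot commands a 0.5 m move each step
    reading = nearest_door(truth) + random.gauss(0, SENSOR_SD)
    particles = step(particles, 0.5, reading)

estimate = sum(particles) / len(particles)
print(f"truth {truth:.1f}, estimate {estimate:.2f}")
```

Early on the particles cluster around every position consistent with the readings (the ambiguity); as the robot moves, only the cluster whose whole history matches the map survives the resampling.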

In reality a huge amount of tuning, optimization and tweaking is needed to get something that works given the dynamics of the vehicle and the quality of the sensors. For a land-rover buggy in a given map you can check out the MOOC sandbox example and my code here.

Headless Raspberry Pi Zero W Setup

For this guide you’ll need: Linux, an SD-card reader on your desktop or laptop, a Raspberry Pi Zero W, an 8 GB+ SD card, a micro-USB power cable and WiFi. You’ll also need some elementary Linux command-line knowledge.

Assuming you have the above, the good news is you don’t need to buy a keyboard, HDMI link or monitor to boot (or use) your Pi and so I hope this quick guide saves you time and money.

This is probably my first foray into hardware. It may be because of that, or because of the wacky-hacky open-source nature of the whole Raspberry Pi thing, but I didn’t find the set-up straightforward. Others have also had issues with the SD card partitions, booting and the initial WiFi hookup. This is a sincere attempt on my part to spare you that pain.

Get Ready – Download Raspbian, Etcher and VNC Viewer

Download the Raspbian OS zip from the Raspberry Pi website.
Download Etcher from here.
Download VNC Viewer from here.

Snag 1 solved: the Linux version of VNC Viewer doesn’t install normally using Linux’s built-in installer, which is annoying. You can get around this by using gdebi:

 sudo apt install gdebi

Then right-click the downloaded package and install it using gdebi. Also, there’s no need to unzip the Raspbian image.

Step 1 – Burn Raspbian onto SD card

Find the name of your SD device

 sudo fdisk -l

Snag 2 solved: once you find your disk (e.g. /dev/mmcblk1), I recommend you completely wipe it and remove any partitions. I had an issue when burning later where Etcher created separate partitions for /boot and /root, which confused the installer on the first Raspberry Pi boot. This can be eliminated by burning onto a completely bare SD.

 sudo fdisk /dev/mmcblk1

Then delete all partitions (e.g. mmcblk1p1) until there are none left.

 Command (m for help): d

Safely eject then re-mount your clean SD. Now use Etcher to burn the Raspbian zip onto the SD card – no need to unzip the image! Safely unmount again.

Step 2 – Configure Raspbian for Wifi and SSH

Mount the SD again. We need to change a few things in the stock Raspbian OS so that it automatically connects to your WiFi and allows you to SSH in (and later use remote desktop).

Navigate to the /boot directory on the SD. You can find where the card is mounted by:

 df -h

Once you’re in /boot, create a file called “ssh” without an extension:

 touch ssh

While in /boot, you’ll also need to create a file called wpa_supplicant.conf, which the bootloader reads and then permanently moves into the OS file system:

 touch wpa_supplicant.conf

Finally, using your WiFi network’s name (SSID) and password (PSK) edit then save the wpa_supplicant.conf file as follows:

 nano wpa_supplicant.conf

And it should look like this, assuming you’re in England (GB) – replace the SSID and PSK placeholders with your own network’s name and password:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=GB

network={
    ssid="YOUR_WIFI_NAME"
    psk="YOUR_WIFI_PASSWORD"
}


Step 3 – Boot your Raspberry Pi! (and enable VNC)

Put the SD in your Raspberry Pi, then power it up. You should be able to find it on your WiFi network by running

arp -a

If a new Raspberry Pi device is listed along with an IP, you’ve followed my instructions explicitly and we’re good.

Keep the IP address listed next to the Raspberry Pi. Note – your Raspberry Pi may have a different IP address on future boots, as the default setup uses DHCP.

SSH into your Raspberry Pi using the IP address from the previous step (the default password is raspberry):

ssh pi@<IP-address>

Once you’re in and greeted by a Raspberry Pi Linux prompt, you need to enable VNC on the Raspberry Pi by following these instructions.

Step 4 – Use VNC Viewer to control your Raspberry Pi from your laptop

Annoying bug (14/4/2020) and fix. The VNC server does not automatically start on your Raspberry Pi in the latest versions of Raspbian, which means that if you try connecting using the graphical VNC Viewer from your desktop/laptop you’ll get a “Connection Refused” error. The simple fix I have found for now is manually starting the VNC server on the Pi over SSH before using the GUI. Simply boot and SSH into your Pi as above, then type into the command line:

vncserver
Which should greet you with something like

Xvnc Free Edition 4.1.1 - built Feb 25 2015 23:02:21
Copyright (C) 2002-2005 RealVNC Ltd.
See http://www.realvnc.com for information on VNC.
Underlying X server release 40300000, The XFree86 Project, Inc

Then open RealVNC Viewer (installed in an earlier step), plug in the Raspberry Pi’s IP from the previous step, and connect!

Copic Greys

I studied engineering. We did a lot of maths. We did a lot of simulation. We even did a whole design class focused on abstraction, validation and verification – but not communication. We were never taught how to draw. We got very good at solving problems and dreaming up solutions, in our heads and in the shape of equations. For most of us, that’s where it ended – maybe that’s why engineers carry a stigma of being poor communicators. Some of us knew CAD, but we’d most often inherit the design from an architect or industrial designer. Some of us had good presentation skills, but often had nothing compelling to show non-engineers.

I graduated visually illiterate. One day I decided to change that, and I taught myself perspective drawing. I wanted to be able to sketch ideas and make them exciting, before figuring out how to make them work. Over a few years of practice I went from lines, to cubes, to rounded shapes and finally products. I also went from pencil to biro and marker. When I felt ready to render depth in colour, I looked far and wide for markers. The online design community seemed unanimous in their praise of Copic markers.

Copics have a cult-like following, and for good reason. They don’t bleed. They glide on paper with a divine combination of smoothness and just enough tactility to feel where you’re going. They come in a near infinite range of colours and saturations. You buy the marker once and refill it as many times as you want. You can even replace the nibs. So I bought the cool grey range (okay, I dreamt about them and they arrived in the form of a lovely gift from my lovely fiancée).

The featured image is a drawing of an elevator console, viewed from the top, using only Copic C3, C5, C7 and C9 (plus some white Tipp-Ex marker).