Ripple Down Rules

Ripple Down Rules (RDR) are a knowledge acquisition technique that helps extract knowledge from human experts by grounding that knowledge in the context in which the expert applies it.


RDR grew out of a major problem scientists and engineers faced when they tried to build expert systems or AI knowledge systems by asking humans to transfer their knowledge to the machine. Human experts often know how to do a skilled task. They possess the underlying knowledge about the domain and how to perform the task in that domain. They can even discuss the task with other experts easily, even though each expert may have a completely different mental model of how they approach it. But, despite their expertise, they find it incredibly difficult to communicate that knowledge in a declarative manner.

Let us take a simple example – driving a car. Many of us know how to drive. Now imagine that we had to write down everything we know about driving a car in a book. The book would have to capture complete knowledge about driving, so that someone who doesn’t know how to drive could read it and, assuming perfect memory, instantly learn how to drive. If you haven’t learnt to drive or taught someone to drive, let me tell you: no such book exists, nor is one likely to exist any time soon. We’d probably get driverless cars first.

The reason for this isn’t that we don’t know how to drive. We may have a decade of experience, driving 2–3 hours a day, in all kinds of weather and traffic conditions. It is that the knowledge is so deeply embedded in our brains, and so context-dependent, that we cannot separate it from the task itself.

So how does RDR help?

Compton and Jansen recognized that although experts find it difficult to communicate complete knowledge in a structured manner, they can easily structure their argument when defending their position on a given case, by referring to specific characteristics or properties of specific cases.

In our driving example, the expert can provide specific ‘morsels’ of knowledge in a specific context – say, when the student messes up the dreaded parallel park: “You should have angled the car by aligning x and y when reverse parallel parking.”

Through repeated, context-driven knowledge acquisition sessions, the expert articulates their tacit, hidden knowledge and the student learns how to drive. Each session captures an additional piece of knowledge and associates it with some experience supporting it.

There are a number of different ways to visualize and structure RDRs, but they all share two fundamental properties:

  1. Knowledge should be acquired in the context of specific cases, or evidence.
  2. Any change to the knowledge (incremental or batch) must maintain consistency of the previously seen cases.

RDR has been successfully applied to a wide range of applications – email classification, playing chess, VLSI chip design, network intrusion detection and my own work applying it to image processing and computer vision, to name a few.

How about a concrete example?

Let us assume we want to teach a computer to recommend activities given the weather conditions. We’ll use a nested hierarchy of rules to capture the knowledge that predicts what to do. In this particular example, the default rule A is to ‘stay-indoors‘. Rule B overrides the conclusion of rule A, provided the temperature is below 30°C and it is a cloudy day; if these conditions are met, the system will recommend ‘play-tennis‘. If the temperature is below 12°C, rule E kicks in and recommends ‘stay-indoors’ – unless it has been snowing, in which case we’d recommend ‘go-skiing‘.
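The exception hierarchy described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular RDR library: the `Rule` class and the dictionary-based cases are my own invented names, and the rule letters follow the example.

```python
class Rule:
    """A node in an RDR exception tree. If a rule's condition holds,
    its conclusion applies unless a deeper exception also fires."""
    def __init__(self, condition, conclusion, exceptions=None):
        self.condition = condition        # predicate over a case (a dict)
        self.conclusion = conclusion
        self.exceptions = exceptions or []

    def evaluate(self, case):
        if not self.condition(case):
            return None                   # rule does not apply to this case
        result = self.conclusion
        for rule in self.exceptions:      # a satisfied exception overrides us
            overridden = rule.evaluate(case)
            if overridden is not None:
                result = overridden
        return result

# Rules from the example above; the letters match the text.
rule_skiing = Rule(lambda c: c["snowing"], "go-skiing")
rule_e = Rule(lambda c: c["temp"] < 12, "stay-indoors", [rule_skiing])
rule_b = Rule(lambda c: c["temp"] < 30 and c["outlook"] == "cloudy",
              "play-tennis", [rule_e])
rule_a = Rule(lambda c: True, "stay-indoors", [rule_b])   # default rule A

print(rule_a.evaluate({"temp": 25, "outlook": "cloudy", "snowing": False}))  # play-tennis
print(rule_a.evaluate({"temp": 5, "outlook": "cloudy", "snowing": True}))    # go-skiing
print(rule_a.evaluate({"temp": 35, "outlook": "sunny", "snowing": False}))   # stay-indoors
```

Note how the evaluation “ripples down”: the default always fires, and the most specific satisfied exception wins.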


The knowledge base is used to make predictions, and each time it gets it wrong, we can correct its knowledge by adding an exception rule. The new overriding exception rule would contain:

  1. the context or condition for when it applies
  2. the actions or decision to conclude
  3. one or more examples in support of that rule, which will be checked during future revisions.

Before the rule can be added to the knowledge base, the examples from existing rules (called cornerstone cases) must be checked against the new rule to ensure they still behave correctly. Typically this means checking the neighborhood of the rule where the exception will be added – that rule and its existing exception rules.
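One way to sketch this validated-revision step in Python. Everything here is a self-contained simplification with invented names (`Rule`, `add_exception`); in particular, a full implementation would compare the conclusion each cornerstone case receives before and after the change, whereas this sketch simply rejects any new condition that fires on a neighborhood cornerstone.

```python
class Rule:
    """Minimal rule node: a condition, a conclusion, and the
    cornerstone cases that justified adding it."""
    def __init__(self, condition, conclusion, cornerstones):
        self.condition = condition
        self.conclusion = conclusion
        self.cornerstones = list(cornerstones)
        self.exceptions = []

def add_exception(parent, condition, conclusion, new_case):
    """Attach an exception rule under `parent`, first validating the
    neighborhood: the new condition must not fire on the cornerstone
    cases of the parent or its existing exceptions, or their stored
    behavior would silently change."""
    neighborhood = [parent] + parent.exceptions
    for rule in neighborhood:
        for case in rule.cornerstones:
            if condition(case):
                raise ValueError(f"new rule conflicts with cornerstone case {case}")
    new_rule = Rule(condition, conclusion, [new_case])
    parent.exceptions.append(new_rule)
    return new_rule

# Default rule, justified by a hot sunny day spent indoors.
default = Rule(lambda c: True, "stay-indoors",
               [{"temp": 35, "outlook": "sunny"}])

# This exception is accepted: its condition does not fire on the
# default rule's cornerstone case (35 is not below 30).
tennis = add_exception(default,
                       lambda c: c["temp"] < 30 and c["outlook"] == "cloudy",
                       "play-tennis",
                       {"temp": 25, "outlook": "cloudy"})

# This one is rejected: it would fire on the default's cornerstone.
try:
    add_exception(default, lambda c: c["outlook"] == "sunny",
                  "go-hiking", {"temp": 20, "outlook": "sunny"})
except ValueError as err:
    print("rejected:", err)
```

The rejection is the safety net: the expert is forced to narrow the new rule’s condition until it distinguishes the new case from every cornerstone case in the neighborhood.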

RDR vs machine learning?

Despite all the data out there and a plethora of machine learning techniques, RDR has its place. Firstly, machine learning cannot work without sufficient data. In some scenarios, you cannot wait to capture enough examples before you need a working system. RDR is a fantastic way to incrementally acquire knowledge as data trickles in. You don’t need to go through the often time-consuming and expensive process of collecting data up-front when you could ask a human to guide and train the system along the way.

Secondly, RDR and machine learning are complementary. The principle of grounding knowledge in evidence is common to both techniques. Algorithms like InductRDR and Minimum Description Length RDR are good examples of constructing rules (and exceptions) automatically from data. The advantage of using RDR’s structure and validated revision strategy is that both machines and humans can add or revise the knowledge as needed.

RDR for Image Processing and Computer Vision

During my PhD, I developed ProcessRDR to help computer vision experts quickly build image analysis and segmentation systems. I used it to build an image processing application that segments the lungs in high-resolution computed tomography scans of the chest. Multiple ProcessRDRs captured knowledge about which algorithm to apply to the image, which parameters to select for that algorithm, and how to classify structures in the image as lung vs. not-lung. The simple workflow and the validated revisions meant that even if the expert was partially guessing at the right parameter values, each successive revision got them to the right parameters very quickly.


ProcessNet generalized the concepts of RDR to other representations of knowledge – not just rules. After all, knowledge can be represented in code: input data, feature extraction, feature generation, algorithm selection and the algorithm itself.


ProcessNet allowed me to build very complex lung anatomy segmentation systems by giving me the safety net of having my changes/tweaks validated against previously seen cornerstone cases. It also allowed me to incrementally build up the complexity and sophistication of the system, while focusing on very concrete examples that helped me to articulate and contextualize my knowledge.