The previous post in this series was a quick introduction to the world of SAT and SAT solvers, including a simple example of how we can take a real-world problem and use a SAT solver to solve it. In this post, we will use SAT to solve a harder real-world problem, namely lock-chart solving, sometimes also known as master-key system (MKS) solving, and explore some of the more advanced techniques used to efficiently convert problems to SAT.

Before reading further, note that this post will go only over the basics of solving master-key systems and the approach will be to create a simple solver, rather than a production-ready one. If you are interested in all of the gory details of solving master-key systems in the real world, you should also look at:

  • Radomír Černoch's dissertation that provides a theoretical framework for talking about different kinds of lock-charts and master-key system solvers
  • My own master's thesis that goes over the nitty-gritty details of the production-ready master-key system solver developed by our research group. The solver described within is currently being used by an actual manufacturer of master-key systems.
  • An open source master-key system solving research test bed that we develop to help other people test their own approaches and improvements for solving master-key systems.

Master-key systems

A master-key system is a set of keys and locks where a key can open more than one lock (and thus a lock can be opened by more than one key). They are often found in business buildings, where the typical employee should have limited access, e.g. to their floor, the kitchen and their own office, but some employees (e.g. maintenance staff) need to have full access to most rooms on a floor (or in a building).

Before we start looking into solving a master-key system, we should talk about how plain old mechanical locks work, and how a master-key system is specified.

Mechanical locks

The idea behind mechanical locks is quite old, often dated back to ancient Egypt or even earlier, and while manufacturing has gotten better, the basic idea remains roughly the same. The idea is that the lock contains a tumbler, a movable part that prevents the lock from opening. The tumbler should be easy to move using the correct key, but impossible to move using the wrong key and hard to move using lock-picking tools. The exact design of the tumbler varies, e.g. in my country the most common design is the pin tumbler lock, but there are also other tumbler designs, such as the disc tumbler lock or the wafer tumbler lock.

Let's quickly look at a schema of the pin tumbler lock, named for the spring-loaded pins that rest against the inserted keys. The pins are separated into multiple parts by horizontal cuts, shown in this schema using blue and green colour. The right side shows a lock where the cuts in pins are aligned with the shear line, because a compatible key has been inserted, and the left side shows a lock where the cuts in pins are not aligned with the shear line, because an incompatible key has been inserted.


We will also use this schema to define some common terms:

  • (cutting) position is a position at which the key can be cut. Denoted as $p_{i}$ in the schema above.
  • cutting depth is a depth to which the key (or lock) is cut. Denoted as $d_{i}$ in the schema above.
  • (key) cutting is the actual shape of a key. Usually represented as an ordered tuple, the cutting of the key on the left is (2, 2, 1), the cutting of the key on the right is (1, 2, 0).

Specifying a master-key system

There are 2 parts to specifying a master-key system:

  • A lock-chart provided by the customer. The lock-chart specifies the number of keys and locks in the system, and the opens and is-blocked relations between keys and locks.
  • A geometry provided by the manufacturer. The geometry defines the set of possible key cuttings by describing the overall shape of the key and providing a set of constraints on the key.

My preferred depiction of a lock-chart is a simple table, where the black squares denote a (key, lock) pair where the key opens a lock, the white squares denote a (key, lock) pair where the key is-blocked (or does not open) a lock:


For geometry, we will make a simplifying assumption that all positions have the same number of possible cutting depths and that the only kind of constraint we work with is something we call gecon (general constraint). This is not that far from the real world, because most real-world manufacturing constraints can be converted into a polynomial number of gecons, and while most geometries in the real world are "jagged" (they have a different number of possible cutting depths at each position), we can use gecons to encode such a geometry inside this simplified framework.

We will represent gecons as a tuple with the same length as there are positions in the geometry, where each element can either be a number or a wildcard, marked as *. When represented in this way, gecon can be seen as a forbidden cutting pattern, e.g. gecon (*, *, 2, *) forbids all keys whose cutting depth at the 3rd position is 2.
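To make the forbidden-pattern view concrete, here is a small self-contained sketch of checking a key cutting against a gecon. The function name and the use of -1 to encode the wildcard are choices made for this illustration only, not part of the solver we build later:

```cpp
#include <cstddef>
#include <vector>

// Returns true when the gecon forbids this cutting, i.e. when every
// non-wildcard position of the gecon agrees with the cutting.
// -1 encodes the wildcard (*), which matches any depth.
bool matches_gecon(const std::vector<int>& cutting,
                   const std::vector<int>& gecon) {
    for (std::size_t pos = 0; pos < gecon.size(); ++pos) {
        if (gecon[pos] != -1 && cutting[pos] != gecon[pos]) {
            return false;  // this position disagrees, so the gecon does not apply
        }
    }
    return true;  // all concrete positions matched -> the cutting is forbidden
}
```

For example, the gecon (*, *, 2, *) matches, and thus forbids, any cutting whose 3rd position has depth 2.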

This is all we need to know about the inner workings of mechanical locks and master-key systems, so we can start working on solving them via conversion to SAT.

Conversion to SAT

Before we start converting our problem to SAT, we need to determine the properties our system should have. In production use, there can be quite a few of them[1], but luckily most can be translated into gecons, and we will skip over the rest in our simplified example. This means that we end up with 5 properties:

  1. A key must have exactly one cutting depth selected for each position
  2. A lock must have at least one cutting depth selected for each position
  3. A key's cutting must not match any gecon
  4. A key must open all locks that the lock-chart specifies it should open
  5. A key must be blocked in all locks that the lock-chart specifies it should not open

As with the Sudoku example, we will need to decide which properties of the whole system will be modelled via variables and which will be modelled via clauses binding them together. We will start by using 2 groups of variables, $key_{p, d}^{k}$ for keys, and $lock_{p, d}^{l}$ for locks respectively. The meaning of these variables is that if $key_{p, d}^{k}$ is set to "true", then the key $k$ in position $p$ has a cutting depth $d$, and analogously for the $lock$ variables.

With the variables defined, we can start encoding the properties in CNF. The first two are the same thing as we already did in the Sudoku example:

Property 1 (A key must have exactly one cutting depth at a position)

\forall (k, p) \in (keys \times positions): \operatorname{exactly-one}(key_{p, 0}^{k}, key_{p, 1}^{k}, \dots, key_{p, d}^{k})
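As a reminder of what $\operatorname{exactly-one}$ expands to, here is a standalone sketch of the pairwise encoding used in the Sudoku post, written over DIMACS-style integer literals (positive integer = variable, negative = its negation); this representation is chosen only for the illustration:

```cpp
#include <cstddef>
#include <vector>

using clause = std::vector<int>;  // DIMACS-style literals

// exactly-one = one "at least one" clause plus a pairwise
// "at most one" clause for every pair of literals.
std::vector<clause> exactly_one(const std::vector<int>& lits) {
    std::vector<clause> clauses;
    clauses.push_back(lits);  // at least one literal is true
    for (std::size_t i = 0; i < lits.size(); ++i) {
        for (std::size_t j = i + 1; j < lits.size(); ++j) {
            clauses.push_back({-lits[i], -lits[j]});  // these two cannot both be true
        }
    }
    return clauses;
}
```

Note the quadratic growth: $n$ literals produce $1 + \binom{n}{2}$ clauses.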

Property 2 (A lock must have at least one cutting depth at a position)

\forall (l, p) \in (locks \times positions): \bigvee_{d \in depths} lock_{p, d}^{l}

Property 3 (A key's cutting must not match any gecon)

Formulating this property in a set of CNF clauses is easier if we change how we think about gecons first.

A gecon is a tuple of the same length as there are positions in the geometry, and at each position, the gecon can either contain a wildcard or a specific cutting depth. Because wildcards match any depth, only the positions with specific cutting depth are relevant to reasoning about gecons. In other words, we can also think of gecon as a set of (position, depth) pairs that cannot be present in a key cutting at the same time.

Using this reformulation leads to the following simple clause, saying that at least one of the (position, depth) pairs must not be present in the key.

\forall (k, g) \in (keys \times gecons): \bigvee_{(p, d) \in g} \neg key_{p, d}^{k}

Property 4 (A key must open all locks that the lock-chart says it should open)

For a key to open a lock, the pins in the lock need to be cut so that the cuts align with the shear line when the key is inserted. In simpler terms, a key opens a lock when the lock is cut at the same (position, depth) pairs the key is. This leads to a simple translation to a set of binary clauses:

\forall k \in keys,
\forall l \in \operatorname{opened-by}(k):
\bigwedge_{\substack{p \, \in \, positions \\ d \, \in \, depths}}
\left( key_{p, d}^{k} \implies lock_{p, d}^{l} \right)

Because an implication can be converted into a disjunction as $\neg key_{p, d}^{k} \vee lock_{p, d}^{l}$, the produced clauses are trivially convertible to CNF.

Property 5 (A key is blocked in all locks the lock-chart says it shouldn't open)

For a key to be blocked in a lock, at least one of the pins in the lock must not be aligned with the shear line. In other words, a key is blocked in a lock when at least one of the key's (position, depth) cutting pairs does not have a counterpart in the lock. This can easily be converted into a set of logical formulae:

\forall k \in keys,
\forall l \in \operatorname{blocked-in}(k):
\bigvee_{\substack{p \, \in \, positions \\ d \, \in \, depths}} \left(key_{p, d}^{k} \wedge \neg \, lock_{p, d}^{l}\right)

The problem with this translation is that the produced formulae are not in CNF, but rather in DNF, and the naive conversion from DNF to CNF using the distributive law leads to an exponential explosion in the number of clauses. Specifically, given $N$ conjunctions of length $L$, the conversion produces $L^N$ clauses of length $N$.
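The explosion is easy to demonstrate with a toy implementation of the distributive-law conversion (an illustration only, not part of the solver): the resulting CNF consists of every clause that picks one literal from each DNF term.

```cpp
#include <vector>

using clause = std::vector<int>;  // DIMACS-style literals

// Naive DNF -> CNF via the distributive law: the resulting CNF contains
// one clause for every way of picking a single literal from each term,
// i.e. the Cartesian product of the terms.
std::vector<clause> dnf_to_cnf(const std::vector<clause>& dnf) {
    std::vector<clause> cnf = {{}};  // start with one empty partial clause
    for (const clause& term : dnf) {
        std::vector<clause> extended;
        for (const clause& partial : cnf) {
            for (int lit : term) {
                clause next = partial;
                next.push_back(lit);
                extended.push_back(next);
            }
        }
        cnf = std::move(extended);
    }
    return cnf;
}
```

With $N$ terms of length $L$ the result has exactly $L^N$ clauses; 3 terms of length 2 already produce 8 clauses.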

Instead, we have to move from using equivalent transformations, as shown in the previous post, to equisatisfiable transformations.

Tseytin transformation

Tseytin transformation is a simple algorithm that lets you transform an arbitrary logic formula into a CNF formula that is equisatisfiable with the original one. The size of the resulting CNF formula is linear in the size of the original formula, but it achieves this by introducing new variables.

The basic idea is that if we have a formula that explodes when being converted into CNF, such as $\left(x_{1} \wedge x_{2}\right) \vee \left(x_{3} \wedge x_{4}\right) \vee \left(x_{5} \wedge x_{6}\right)$, then, if we could replace each of the conjunctions with a new variable that would be "true" when the whole conjunction is "true" and vice versa, the conversion to CNF would become trivial: $\left(y_{1} \vee y_{2} \vee y_{3}\right)$.

Tying the new variables to their subexpressions is done using an equivalence, e.g. $y_{1} \iff \left(x_{1} \wedge x_{2} \right)$, but these new clauses also need to be converted to CNF. The first step is to split the logical equivalence into 2 implications, and then to convert those implications into disjunctions, like so:

%% Step 1 -- the original
y_{1} & \iff \left(x_{1} \wedge x_{2} \right) \\
%% Step 2 -- two implications
\left( y_{1} \implies \left(x_{1} \wedge x_{2} \right)\right)
& \wedge
\left( y_{1} \impliedby \left(x_{1} \wedge x_{2} \right) \right)
%% Step 3 -- implications to negated disjunctions
\left( \neg y_{1} \vee \left(x_{1} \wedge x_{2} \right)\right)
& \wedge
\left( y_{1} \vee \neg (x_{1} \wedge x_{2}) \right)
%% Step 4 -- LHS multiplication
\left( \left( \neg y_{1} \vee x_{1} \right) \wedge \left( \neg y_{1} \vee x_{2} \right)\right)
& \wedge
\left( y_{1} \vee \neg x_{1} \vee \neg x_{2} \right)
%% Step 5 -- Remove unneeded parentheses
\left( \neg y_{1} \vee x_{1} \right) \wedge \left( \neg y_{1} \vee x_{2} \right)
& \wedge
\left( y_{1} \vee \neg x_{1} \vee \neg x_{2} \right)
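The derivation above mechanises directly. As an illustration (again over DIMACS-style integer literals, with fresh variables simply numbered after the existing ones; the helper name is mine), a Tseytin-style conversion of a disjunction of conjunctions might look like this:

```cpp
#include <vector>

using clause = std::vector<int>;  // DIMACS-style literals

// Tseytin-style CNF for a DNF: each term gets a fresh variable y with
// y <=> (conjunction of the term), plus one final clause stating that
// at least one y is true. N terms of length L yield N*(L+1) + 1
// clauses, i.e. linear growth instead of L^N.
std::vector<clause> tseytin_dnf(const std::vector<clause>& dnf, int& next_var) {
    std::vector<clause> cnf;
    clause top;  // y_1 v y_2 v ... v y_N
    for (const clause& term : dnf) {
        int y = next_var++;  // fresh variable for this term
        top.push_back(y);
        clause back = {y};  // y v ~x_1 v ... v ~x_L  (the <= direction)
        for (int lit : term) {
            cnf.push_back({-y, lit});  // the => direction: one binary clause per literal
            back.push_back(-lit);
        }
        cnf.push_back(back);
    }
    cnf.push_back(top);
    return cnf;
}
```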

Using Tseytin transformation we can convert the DNF generated by blocking keys in locks into a much smaller set of clauses, by defining a new kind of variable, $block_{p, d}^{k, l}$:

\left( key_{p, d}^{k} \wedge \neg lock_{p, d}^{l} \right) \iff block_{p, d}^{k, l}

This definition means that $block_{p, d}^{k, l}$ is "true" when the key $k$ is blocked in lock $l$ at position $p$ and depth $d$, and lets us rewrite the formulation for property 5 in this way[2]:

\forall k \in keys,
\forall l \in \operatorname{blocked-in}(k):
\bigvee_{\substack{p \, \in \, positions \\ d \, \in \, depths}} block_{p, d}^{k, l}

Cutting corners (do we need all of the clauses?)

The model we have created above is a valid logical model for a master-key system. However, some of the clauses in it are redundant, e.g. if we assume that any lock in a lock-chart is opened by at least one key, we can remove clauses generated by property 2. This is caused by the fact that we already force keys to have exactly one cutting depth at a position, so a lock opened by a key will have at least one cutting depth for each position anyway.

We can also define the $block_{p, d}^{k, l}$ variables using a single implication,

block_{p, d}^{k, l} \implies ( key_{p, d}^{k} \wedge \neg \, lock_{p, d}^{l} )

saving one ternary clause per variable. I will skip providing proof of this fact because the proof is quite involved. There is also something much more interesting going on, namely that these optimisations might not be optimisations at all. Removing clauses from a problem, and thus "decreasing" the amount of work a SAT solver has to do, does not necessarily decrease its running time, for reasons I will talk about in another post.

There is one more thing to note about the formulation above, specifically that it does not forbid spurious cuts in locks. A spurious cut is a cut that does not correspond to a cut in any of the keys that open the lock. We want to avoid these cuts because they increase the manufacturing costs and decrease the security of the locks. There are two ways to solve this:

  1. Add a set of clauses that forbid spurious cuts in locks. Formulating them is simple enough, but it does add quite a few new clauses of low value (likely to lengthen the solver's running time).
  2. Post-process the results to remove spurious cuts. This has linear complexity in regards to the number of opening (key, lock) pairs, which is usually only a small multiple of the total number of keys in a lock-chart.

Because the post-processing option is easy and fast, in the real world, we would pick that one, but we won't use either of these two options in our toy example.
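Even though our toy example skips it, the post-processing step is simple enough to sketch in a few lines. Assuming cuttings are available as sets of (position, depth) pairs (a layout chosen purely for this illustration), a lock keeps only the cuts actually used by some key that opens it:

```cpp
#include <cstddef>
#include <set>
#include <utility>
#include <vector>

using cut = std::pair<std::size_t, std::size_t>;  // (position, depth)

// Removes spurious cuts from a lock: a cut survives only if at least
// one key that opens the lock is cut at the same (position, depth).
std::set<cut> remove_spurious_cuts(const std::set<cut>& lock_cuts,
                                   const std::vector<std::set<cut>>& opening_keys) {
    std::set<cut> kept;
    for (const auto& key : opening_keys) {
        for (const auto& c : key) {
            if (lock_cuts.count(c)) {
                kept.insert(c);  // this cut has a counterpart in an opening key
            }
        }
    }
    return kept;
}
```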

C++ implementation

Now that we know how to translate a master-key system to CNF-SAT, it is time to implement a solver for master-key systems in C++[3]. As before, the full code lives in a GitHub repository, and this post will only contain the more interesting and relevant excerpts. Also, before we start writing the solver itself, we need to define its input and output formats.

Input/Output specification

The chosen formats are mostly picked for their simplicity and the ease of hand-rolling a simple parser for them. For the lock-chart, we will pick the simplest possible textual representation, that is, translating the full lock-chart into * for black squares and . for white squares. As an example, the lock-chart shown in the "Specifying a master-key system" section would be encoded into this:


For geometry, we will use a simple, line-oriented format. As an example, a geometry with 3 positions and 6 depths at each position, where the first and the last position are not allowed to share a cutting depth, would be encoded like this:

base: 3x6
G: 0, *, 0
G: 1, *, 1
G: 2, *, 2
G: 3, *, 3
G: 4, *, 4
G: 5, *, 5
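A hand-rolled parser for this format fits in a couple dozen lines. The sketch below assumes well-formed input and uses its own `geometry` struct (the real solver's types may differ), encoding the wildcard * as -1:

```cpp
#include <cstddef>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

struct geometry {
    std::size_t positions = 0;
    std::size_t depths = 0;
    std::vector<std::vector<int>> gecons;  // -1 encodes the wildcard '*'
};

// Parses the line-oriented geometry format, e.g.
//   base: 3x6
//   G: 0, *, 0
geometry parse_geometry(std::istream& in) {
    geometry geo;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream tokens(line);
        std::string tag;
        tokens >> tag;
        if (tag == "base:") {
            char x;  // the 'x' between positions and depths
            tokens >> geo.positions >> x >> geo.depths;
        } else if (tag == "G:") {
            std::vector<int> pattern;
            std::string item;
            while (std::getline(tokens, item, ',')) {
                item = item.substr(item.find_first_not_of(' '));  // trim leading spaces
                pattern.push_back(item[0] == '*' ? -1 : std::stoi(item));
            }
            geo.gecons.push_back(pattern);
        }
    }
    return geo;
}
```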

Finally, the output format will also be line-oriented, with one key output per line. The keys will be written in the same order as they appear in the lock-chart, and each key will be output as a comma-separated list of cutting depths, sorted by position, e.g. this output:

1,1,1
1,1,2
1,1,3

specifies 3 keys, where the first key cutting has depth 1 at all three positions, the second key cutting has depth 1 at the first and second positions and depth 2 at the third position, and the third key cutting has depth 1 at the first and second positions and depth 3 at the third position.
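Producing this output is a one-liner per key; a minimal sketch of the formatting helper (the name is mine, not the solver's):

```cpp
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Formats one key cutting as a comma-separated line,
// e.g. {1, 1, 2} becomes "1,1,2".
std::string format_key(const std::vector<int>& cutting) {
    std::ostringstream out;
    for (std::size_t i = 0; i < cutting.size(); ++i) {
        if (i > 0) {
            out << ',';
        }
        out << cutting[i];
    }
    return out.str();
}
```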

Implementation details

As always, the first thing to do is to figure out how we can address the variables. Unlike with the sudoku example in the previous post, we won't be computing the variables directly[4], but rather we will keep a map from variable indices (position, depth and key/lock order) to Minisat's internal variables, and create new variables on demand. To simplify the code using our mapper, we will also cheat a bit; instead of storing the variables, we will store the appropriate literal in positive polarity:

// Inside the solver class:
using indices = std::tuple<size_t, size_t, size_t>;
std::map<indices, Minisat::Lit> m_key_vars;

// Implementation of variable (literal) accessor for _key_ variables
Minisat::Lit solver::key_lit(size_t position, size_t depth, size_t key) {
    auto indices = std::make_tuple(position, depth, key);
    auto it = m_key_vars.find(indices);
    if (it != m_key_vars.end()) {
        return it->second;
    }
    return m_key_vars[indices] = Minisat::mkLit(m_solver.newVar());
}

We will skip over the implementation for $lock$ literals, as it is essentially the same modulo some variable names. What is more interesting is that in the final version of the code, we are not saving the $block$ variables. This is done because each block variable is only used twice, once when it is defined, and the second time when it is used to enforce the fact that a key doesn't open a lock. Because both of these uses are done at the same time, we never need to return to a previously defined blocking variable and thus do not need to store them.

With the variables ready, we can once again translate the SAT formulation of a problem into C++ code. In our MKS solver, the main work is done by three helper functions, add_key, add_lock and add_gecon, that are responsible for adding the clauses related to a specific key, lock or gecon respectively, so these three functions will be the focus of our investigation.

Let's start with the simplest one, add_lock. It is responsible for enforcing that each lock must have at least one cutting depth at each position (property 2).

void solver::add_lock(size_t lock) {
    for (size_t pos = 0; pos < m_geometry.positions; ++pos) {
        Minisat::vec<Minisat::Lit> literals;
        for (size_t depth = 0; depth < m_geometry.depths; ++depth) {
            literals.push(lock_lit(pos, depth, lock));
        }
        // at least one cutting depth must be selected at this position
        add_clause(literals);
    }
}

add_gecon is similarly easy, as it also has only one simple responsibility: enforcing that no key cutting matches a specific gecon (property 3).

void solver::add_gecon(size_t gecon) {
    auto const& pattern = m_geometry.gecons[gecon].pattern;

    for (size_t key = 0; key < m_lockchart.keys(); ++key) {
        Minisat::vec<Minisat::Lit> lits;
        for (size_t pos = 0; pos < pattern.size(); ++pos) {
            // -1 is the wildcard marking
            if (pattern[pos] != -1) {
                lits.push(~key_lit(pos, pattern[pos], key));
            }
        }
        // at least one of the gecon's concrete (position, depth)
        // pairs must not be present in the key
        add_clause(lits);
    }
}

And finally, add_key is responsible for ensuring that each key has exactly 1 cutting depth at each position (property 1):

void solver::add_key(size_t key) {
    for (size_t pos = 0; pos < m_geometry.positions; ++pos) {
        Minisat::vec<Minisat::Lit> literals;
        for (size_t depth = 0; depth < m_geometry.depths; ++depth) {
            literals.push(key_lit(pos, depth, key));
        }
        // exactly-one over the collected literals, encoded as in the
        // Sudoku example (one long clause + pairwise at-most-one clauses)
        exactly_one(literals);
    }
}

This leaves 2 things unimplemented: the opens and is-blocked-in relations between keys and locks. In our toy solver, these will also be part of add_key. The reason for that is a simple implementation detail, namely that the internal lockchart implementation stores a mapping from keys to the locks they open/are blocked in.

This is the implementation of property 4 (keys can open specific locks):

void solver::add_key(size_t key) {
    // ...
    for (auto lock : m_lockchart.opens(key)) {
        for (size_t pos = 0; pos < m_geometry.positions; ++pos) {
            for (size_t depth = 0; depth < m_geometry.depths; ++depth) {
                // key_{p, d} => lock_{p, d} <---> ~key_{p, d} v lock_{p, d}
                add_clause(~key_lit(pos, depth, key), lock_lit(pos, depth, lock));
            }
        }
    }
    // ...
}

And this is the implementation of property 5 (keys are blocked in specific locks):

void solver::add_key(size_t key) {
    // ...
    for (auto lock : m_lockchart.blocked_in(key)) {
        Minisat::vec<Minisat::Lit> blocking_lits;
        for (size_t pos = 0; pos < m_geometry.positions; ++pos) {
            for (size_t depth = 0; depth < m_geometry.depths; ++depth) {
                auto block = Minisat::mkLit(m_solver.newVar());
                // block_{p, d} <=> (key_{p, d} && ~lock_{p, d})
                // 1)   block_{p, d}  => (key_{p, d} && ~lock_{p, d})
                //     ~block_{p, d}  v  (key_{p, d} && ~lock_{p, d})
                //    (~block_{p, d} v key_{p, d}) && (~block_{p, d} v ~lock_{p, d})
                add_clause(~block, key_lit(pos, depth, key));
                add_clause(~block, ~lock_lit(pos, depth, lock));

                // 2)   block_{p, d} <= (key_{p, d} && ~lock_{p, d})
                //      block_{p, d}  v ~key_{p, d}  v  lock_{p, d}
                add_clause(block, ~key_lit(pos, depth, key), lock_lit(pos, depth, lock));

                blocking_lits.push(block);
            }
        }
        // at least one (position, depth) pair must block the key in this lock
        add_clause(blocking_lits);
    }
    // ...
}

Now, with the solver done, it is time for benchmarks...

Benchmarks

Benchmarking will once more be problematic, but for entirely different reasons. Benchmarking the sudoku solver from the previous post was hard because there are example sudokus everywhere, but there is no agreed-upon set of representative sudoku puzzles. I solved this by picking a set of 95 supposedly hard (containing only 17 givens) inputs and using those as a reasonable approximation. However, benchmarking the MKS solver has the exact opposite problem: there are no non-trivial inputs publicly available.

This doesn't mean there will be no benchmarks, because I do have access to some proprietary inputs, thanks to our research partnership. It does mean, however, that I cannot publish them, or describe them in too much detail. I can also use only a subset of them, because some of them require features that are not implemented in our toy solver. After further filtering this subset to only use lock-charts that have at least 100 keys, I have 7 inputs across 2 geometries to test our solver with.

Geometry A is interesting for being very long, as it has ~30 positions, but relatively shallow, with the shallowest position having only 2 cutting depths and the deepest having ~5 cutting depths. It also contains ~100 gecons. In contrast, geometry B is much closer to being squarish, as it has ~10 positions and ~10 depths at each position, and contains ~80 gecons.

For geometry A, there are 2 lockcharts. The smaller one contains ~150 keys, and the larger one contains ~250 keys. For geometry B, there are 5 lockcharts, ranging between ~100 keys and ~500 keys. We will refer to them in order sorted by their increasing size so problem 1 will be the smallest one.

The measurements were once again taken on a stock i5-6600k @ 3.5 GHz, against binaries compiled with g++ using -O3 and -DNDEBUG flags. Each input has been run 10 times, and the median and stddev can be found in the table below.

Geometry | Problem   | Median time to solve (s) | stddev (s)
---------|-----------|--------------------------|-----------
A        | Problem 1 | 23.74                    | 0.09
A        | Problem 2 | 57.28                    | 0.17
B        | Problem 1 | 5.37                     | 0.01
B        | Problem 2 | 5.80                     | 0.02
B        | Problem 3 | 48.43                    | 0.14
B        | Problem 4 | 70.55                    | 0.13
B        | Problem 5 | 394.82                   | 9.32


Conclusion

As we saw in the previous section, our toy solver can solve non-trivial lockcharts and geometries in a reasonable amount of time. However, because there are no public solvers or inputs available, we have no point of comparison. Instead, let me tell you an anecdote from our own research into solving master-key systems.

The original approach our research group picked was to write a specialised solver for the problem, including all the manufacturer-specific constraints. This solver was in development for multiple years, and while it produced correct solutions, it didn't work quite fast enough -- only about 80% of all test inputs were solved within a specific time limit. In other words, things weren't going too well, until one of our colleagues had a fit of inspiration and suggested converting the problem to SAT.

In ~3 months the SAT-based MKS solver went from an idea to having feature parity with the specialised solver, including system integration and supporting vendor-specific constraints. It also performed much better, and the prototype was able to successfully solve ~90% of the inputs within the time limit. Because this approach proved fruitful, the SAT-based solver, along with the underlying concepts, was then developed further in our partnership with Assa Abloy (née FAB), and, as described in my thesis, the solver can now solve lockcharts with ~4k keys inside a reasonable amount of time.

I think this anecdote illustrates my point from the previous article well, in that we were able to quickly create a reasonably performing solver by translating the problem into SAT and using a modern SAT solver. However, translating MKS to SAT does have its limitations[5], and we are currently working on an open source solver that exploits the structure of the MKS domain to (hopefully) scale to even larger lockcharts.

This is all for part 2. Part 3 is out and it looks at the internals of modern SAT solvers.

Also, a small personal appeal: if you have an in with some key manufacturer, try to convince them to make obsoleted geometries public. Likewise, if you have access to large, real-world, complex lockcharts, see if you can get the rights to make them public.

  1. One of the more common constraints that cannot be translated into gecons is the requirement that 2 key cuttings differ in more than X positions. ↩︎

  2. Some manufacturers require keys to be blocked in a lock by more than one position. This can be implemented by encoding an "at least 2" constraint over the $block_{p, d}^{k, l}$ variables for a given key-lock pair. ↩︎

  3. There are good reasons why our production solver is written in C++, but the reason this toy example is written in C++ as well is just that I am used to it. ↩︎

  4. Because our toy solver always uses rectangular geometry, we could compute the variables on demand just as we did previously. However, the first step towards production-ready implementation would be to support jagged geometries directly, without gecons, which would make computing the variables directly harder to implement. Implementing variable access via lazy mapping can also be a useful technique when translating different problems into SAT, so I decided to showcase the implementation here. ↩︎

  5. You should read the links provided at the top of this post for a detailed explanation and discussion, but the short story is that the size of the translated problem is dominated by the $block$ variables, and the number of $block$ variables grows linearly with the number of positions, depths and blocked (key, lock) pairs. Because the number of positions and depths is fixed for a given geometry, and the usual lockchart is mostly empty (blocked), the size of the SAT translation can be said to be quadratic in the number of keys. ↩︎