Take a moment to examine the artworks above. What do they all have in common?
If you’re an art enthusiast, chances are you’ll immediately spot and name the commonality. But if you’re like me, you’ll notice something is off without being able to put your finger on it. In some circles, that nagging feeling has become desirable, even preferable.
Did you notice that one of the rowers in the center painting is rowing without an oar?
You’ve just witnessed non-finito artwork — artwork that’s been left incomplete.
In painting, artists plan their vision with a preparatory drawing or sketch called an underdrawing. When the work is left unfinished, as in the fascinating derelictions above, viewers are left with a sense of mystery and speculation: a yearning to fill in those outlines and blobs with the imperfect information they have, aided by their personal vision.
That’s how humans approach remarkable challenges… and how I’m unconsciously going about one of my projects.
For two months now, I’ve been racking my brain trying to boost the performance of a marketing tool that:
- Predicts the popularity of a Reddit post in a subreddit.
- Suggests possible post rewrites that optimize performance.
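To make that concrete, here’s a rough sketch of what such a two-part pipeline could look like. Everything in it, from the feature choices to the tiny training set to the scikit-learn classifier, is an illustrative assumption of mine rather than the actual implementation:

```python
# A hypothetical sketch of the two-part tool: a popularity predictor plus a
# rewrite suggester. The features, labels, and tiny training set are all
# illustrative placeholders, not the real model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: post titles from one subreddit, labeled by whether they
# performed above the subreddit's median score (1) or not (0).
titles = [
    "What's one habit that changed your life?",
    "I built a tool to track my spending, AMA",
    "Weekly discussion thread",
    "Rate my setup",
]
popular = [1, 1, 0, 0]

# Small-world model #1: predict popularity from the title alone.
predictor = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
predictor.fit(titles, popular)

def predict_popularity(title: str) -> float:
    """Return the estimated probability that a title performs well."""
    return predictor.predict_proba([title])[0][1]

def suggest_rewrites(title: str, candidates: list[str]) -> list[tuple[str, float]]:
    """Small-world model #2: rank candidate rewrites by predicted popularity."""
    scored = [(c, predict_popularity(c)) for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

print(suggest_rewrites(
    "my budgeting app",
    ["my budgeting app", "I built a budgeting app, here's what I learned"],
))
```

The real system presumably relies on richer features and far more data; the sketch is only meant to make the two-part shape of the tool tangible.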
Over a couple of weeks, I improved the machine learning model from roughly 30% accuracy to 80%. However, some nuanced issues still made the predictions useless for a typical user. I kept hitting a brick wall.
That was the case until I picked up the book Radical Uncertainty, a guide to improving your decision-making beyond numbers. The book argues that there is a fundamental mistake embedded in models: they all begin by considering how you might make a decision if you had complete and perfect knowledge of the world. They assume that outcomes are derived from a set of unchanging, underlying rules.
But real life seldom structures a satisfying denouement.
In fact, real life is more like scratching your head at the non-finito works above than gazing at a completed masterpiece. Of course, wanting the world to function on a set of discoverable rules is natural. Situations like that do exist, but they’re called games or puzzles, and the quality that defines them is known as stationarity: the rules that govern them don’t change over time.
Unfortunately for my model, humans are far from stationary.
When a project deals with human behavior, it always involves radical uncertainty: makers won’t have complete and perfect knowledge of the situation. So rather than trying to optimize the rules of a model that assumes they do possess that knowledge, makers need to rethink how they approach the problem.
Here’s the thing: when it comes to applying models, makers usually aren’t solving a legitimate puzzle or optimizing outcomes based on game mechanics; they’re reframing a mystery. Makers are filling in the gaps of a non-finito masterpiece. But what does that tangibly look like?
For my project, I began by asking myself, _What’s going on here? What are the challenge’s underdrawings that make up the overall picture?_ From there, I was able to reframe the mystery and begin filling in those gaps to my users’ benefit.
I understood that my neural network wasn’t the end product itself. Instead, what I really needed to create was a toolbox that feels like using a single tool. Each tool would individually model a “small-world problem” that illuminates part of the grander problem by adding insight to the overall question of _what’s going on here?_ Each tool in the box furnishes only one of the masterpiece’s underdrawings.
My platform’s value doesn’t come just from providing the tools, but from sequencing them so that they collectively make the grander problem’s solutions accessible, understandable, and incisive. For the user, this feels like a one-stop tool for a specific goal. For me, the maker, the execution looks like a woven-together series of smaller, specialized workshops operating to deliver a single answer.
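To show what I mean by a toolbox that feels like a single tool, here’s a minimal sketch of that sequencing idea. The individual tools, their names, and their placeholder heuristics are all hypothetical; the point is that each small-world model answers one narrow question, and only the wrapper is exposed to the user:

```python
# Hypothetical orchestration of small-world tools. Each function models one
# narrow, answerable question; the user only ever calls the wrapper.
from dataclasses import dataclass

@dataclass
class Report:
    predicted_score: float
    best_posting_hour: int
    suggested_title: str

def score_title(title: str) -> float:
    """Small-world model: how strong is this title on its own?"""
    return min(1.0, len(title.split()) / 12)  # placeholder heuristic

def best_hour(subreddit: str) -> int:
    """Small-world model: when is this subreddit most active?"""
    return 9 if subreddit == "r/personalfinance" else 18  # placeholder

def rewrite(title: str) -> str:
    """Small-world model: propose one higher-scoring variant of the title."""
    variant = title.rstrip(".") + ", here's what I learned"
    return variant if score_title(variant) > score_title(title) else title

def analyze(title: str, subreddit: str) -> Report:
    """The 'single tool' the user sees: a fixed sequence of the small tools."""
    return Report(
        predicted_score=score_title(title),
        best_posting_hour=best_hour(subreddit),
        suggested_title=rewrite(title),
    )

print(analyze("I automated my budget", "r/personalfinance"))
```

The design point is that each small tool can be tested, replaced, or improved on its own, while the user only ever sees one seam-free answer.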
While these conclusions may seem “basic” or “obvious” to some, it’s worth considering why we still make these same “basic” or “obvious” mistakes.
In my experience, makers become obsessed with the grander problem and skip the necessary step of addressing the “small-world” problems first. We dive right in, plugging away at that potentially revolutionary insight, only to fall short of our expectations. We either keep attacking the problem with similar methods and find ourselves frustrated, or we move on, disappointed in our work, likely to repeat the same cycle with our next good idea.
But we can, and should, break that cycle. This isn’t to say that we should stop addressing these Goliath-like challenges with models. However, within our battle plans, we should break down the underdrawings in our own challenges and fill them in individually, whether that means incomplete data, misunderstood “rules,” or unchecked assumptions (just like trying to understand a non-finito piece). In fact, Radical Uncertainty provides a helpful list of common abuses of models to keep makers from falling into the same traps:
1. Overgeneralizing: Combining disparate situations to provide an overall answer.
2. Data Corrosion: Filling in gaps of missing numbers by purely inventing them.
3. Assuming Stationarity: Assuming that the rules that govern the situation don’t change over time (a simple check is sketched just after this list).
4. Speculation as Fact: The model does not take uncertainty into account. Models are only useful if users understand that they do not represent the world as it is; they only highlight key relationships for further digging.
5. Overly Complex: The model carries high cost and complexity, which prevents meaningful public debate and consultation (seeking advice from others is of little use when the model is so convoluted).
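As one concrete guard against abuse #3, here’s a hedged sketch of evaluating a model on a chronological split instead of a random shuffle, so that a shift in the underlying “rules” shows up as a gap between the older and newer slices. The data and model here are placeholders:

```python
# Hypothetical stationarity check: train on older posts, test on newer ones.
# If accuracy drops sharply on the newer slice, the "rules" have likely shifted.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: 200 posts ordered by time, 3 numeric features each.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

split = 150  # chronological split, not a random one
model = LogisticRegression().fit(X[:split], y[:split])

older_acc = accuracy_score(y[:split], model.predict(X[:split]))
newer_acc = accuracy_score(y[split:], model.predict(X[split:]))
print(f"older slice: {older_acc:.2f}, newer slice: {newer_acc:.2f}")
```

If the newer slice scores noticeably worse, that’s a hint the situation isn’t stationary and the model’s “rules” need revisiting.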
With that being said, here’s your challenge: tackle your next project with the understanding that we must:
1. Separate puzzles from mysteries by asking ourselves _what’s going on here?_ This allows us to frame mysteries so that we can discern where stationarity exists and where it doesn’t.
2. Break down our grander problems into “small-world” models and check for model abuse. This way, the end product is a seamless network of specialized tools that fill in the challenge’s underdrawings to create one whole picture.
I hope that this inspires you in your next project. I know I’m looking forward to attacking future challenges with this more effective approach.