"Do something interesting"

Apr 14, 2018

Don't "do something interesting" with data, AI, and ML-do something human-centered

If you are on a team with lots of data, how often have you heard the "do something interesting" request from managers, executives, and product people? I'm guessing a lot.

When building AI, machine learning, or deep learning systems, the request might be "do something better than we humans could come up with."

In design fields, requests for clarification are often answered with "I'll know it when I see it."

But it's really hard! Where do you even start?

The reason it's so hard is that the idea of insight is reversed. Insight shouldn't come from staring at data until it tells you something interesting. It comes from understanding the world and the problems people have, and from running experiments to build confidence in that understanding.

Data alone, no matter how much of it you have, is only the "what," not the "why." By understanding the "why," we can pull together better solutions to people's problems. More data may beat better algorithms, but human purpose beats more data.

What is the problem with looking for happenstance correlations? They can be a distraction from the mission of the organization, or, in the worst case, they may not be real at all (see the "Texas sharpshooter fallacy").
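
As a toy illustration of that fallacy, here is a minimal sketch (hypothetical data, plain NumPy): every series below is pure noise by construction, yet scanning thousands of pairs almost always turns up a pair that looks meaningfully correlated.

import numpy as np

# 100 series of pure random noise -- there is nothing real to find here.
rng = np.random.default_rng(0)
n_series, n_points = 100, 50
data = rng.normal(size=(n_series, n_points))

# Scan every pair of series and keep the strongest correlation.
best = (0.0, None)
for i in range(n_series):
    for j in range(i + 1, n_series):
        r = np.corrcoef(data[i], data[j])[0, 1]
        if abs(r) > abs(best[0]):
            best = (r, (i, j))

# With ~5,000 pairs tested, a |r| around 0.4-0.5 on pure noise is typical.
print(f"strongest 'correlation' found: r={best[0]:.2f} between series {best[1]}")

Test enough hypotheses against the same data and something will always "click"; that is the trap of hunting for interesting rather than starting from a real problem.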

What is "interesting?"
Generally, "interesting" is something that is considered novel or unexpected. When people hunt for "interesting" they are trying a lot of different things until something works or "just clicks."

There is a lot to be said for novelty-based approaches. "Why Greatness Cannot Be Planned: The Myth of the Objective" by Kenneth O. Stanley and Joel Lehman gives examples of how this explore-focused approach can reach better results than a directed search would have found.

Novelty-based search addresses a "deception" problem in making progress: in a directed search toward a specific objective, it can seem like you are going the right way when the path you are on never actually reaches the objective.
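
For intuition, here is a minimal, hypothetical sketch of the novelty-search idea; the toy behavior descriptor and all parameters are illustrative assumptions, not taken from the book. Candidates are selected for being different from an archive of past behaviors, with no objective in sight.

import numpy as np

rng = np.random.default_rng(1)

def behavior(candidate):
    # Stand-in "behavior descriptor"; in practice this is domain-specific
    # (e.g. the final position a robot controller reaches).
    return np.tanh(candidate)

def novelty(b, archive, k=5):
    # Score = average distance to the k nearest previously seen behaviors.
    if not archive:
        return float("inf")
    dists = sorted(np.linalg.norm(b - a) for a in archive)
    return float(np.mean(dists[:k]))

archive, population = [], [rng.normal(size=2) for _ in range(20)]
for generation in range(50):
    # Rank by novelty, not by fitness toward any goal.
    scored = sorted(population, key=lambda c: novelty(behavior(c), archive), reverse=True)
    archive.extend(behavior(c) for c in scored[:2])   # remember the most novel
    parents = scored[: len(population) // 2]          # select for being different
    population = [p + rng.normal(scale=0.1, size=2) for p in parents for _ in range(2)]

The search keeps spreading into unexplored behavior, which is exactly how it can stumble past the dead ends that deceive a goal-directed search.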

Are objectives the problem?
While working towards something that is "interesting," are we avoiding objectives? Not in the mind of the executive, since executives tend to think in terms of outcomes on a timeframe.

Novelty-based search involves a good amount of failure, and not all organizations (or their executives) are willing to fail and wait around for success. This is where the delicate balance between the explore and exploit sides of strategy comes in.
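
One common way to reason about that balance is the multi-armed bandit framing. Below is a minimal epsilon-greedy sketch; the option names and payoff distributions are made up for illustration, not a prescription.

import random

def epsilon_greedy(rewards_by_option, epsilon=0.1, rounds=1000):
    """rewards_by_option: dict of option -> callable returning a reward."""
    options = list(rewards_by_option)
    totals = {o: 0.0 for o in options}
    counts = {o: 0 for o in options}
    for _ in range(rounds):
        if random.random() < epsilon or not any(counts.values()):
            choice = random.choice(options)  # explore: try something at random
        else:
            # exploit: pick the best average payoff seen so far
            choice = max(options, key=lambda o: totals[o] / max(counts[o], 1))
        reward = rewards_by_option[choice]()
        totals[choice] += reward
        counts[choice] += 1
    return counts, totals

# Toy example: two "strategies" with different (unknown) payoff profiles.
counts, totals = epsilon_greedy({
    "safe_bet": lambda: random.gauss(1.0, 0.5),
    "moonshot": lambda: random.gauss(1.2, 2.0),
})
print(counts)

Turning epsilon up buys more learning at the cost of more short-term failure, which is the same trade an organization makes when it funds exploratory work.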

We can learn from roadmapping and KPIs
There are two analogous situations in product work that could be helpful: roadmaps and KPIs.

Product roadmaps tend to "fail" when the wrong expectations are set. A roadmap is meant to help us understand the future, but because the world is complex, what we should do in any given moment changes. Roadmaps should build in the expectation that they will change, and that confidence gets lower the further out the time scale goes.

Thematic roadmapping focuses on the problems you are trying to solve, not the features you are trying to ship. I have found thematic roadmaps more successful than feature-based ones.

KPIs are like objectives in that they are what the organization will monitor and act on. What isn't always understood is that as the market, customers, and organization change, those KPIs should change as well. Assuming they can be set once and followed forever is just not appropriate.

For both roadmaps and KPIs, the key is to focus on problems and learn to change. Poorly written objectives can presuppose a solution before you understand the problem. Objectives are helpful when they address a problem known to be important for someone now and can be changed later.

How do we recognize when our objectives are no longer appropriate? How do we know when to change our models? To do that, we need to start with the way the world works today and learn from there.

Being human-centered
A recent HBR article, "What Happens When Data Scientists and Designers Work Together," highlights an important aspect of how data scientists (or AI/ML engineers) should work:

Instead of a version of data science that is narrowly focused on researching new statistical models or building better data visualizations, a design-thinking approach recognizes data scientists as creative problem solvers.

The way to do this is to focus on the problems people are having. Start by talking to experts in the domain you wish to help; they know what the real problems are. Insights from these conversations will help you use data to understand how the problem manifests itself.

Once you know the real problems, include many different people from the team in building solutions: technical and non-technical people, from inside and outside the industry.

Experts don't always help in this case. In fact, they may constrain the team to old ways of solving problems rather than allowing new ones.

Once you have solutions to the identified problems, test prototypes of those solutions with the people you are trying to help. Listen to them and shut the f**k up.

This kind of problem-centric thinking, talking to real users, and co-creation is the cornerstone of Design Thinking. There isn't enough of it in data science, AI, ML, deep learning, or other intelligent systems work.

In practice
At Philosophie, we have found that the foundations of human-centered design and cross-team collaboration set the right stage for finding out what really matters, especially for AI and ML projects where fully training a model could take months.

We use Empathy Mapping for the Machine, Confusion Mapping, Challenge Mapping, Crazy Eights, and other exercises to get people on the same page without getting bogged down in AI terminology. This is especially helpful for the designers, product people, executives, and customers you engage with. Co-creation (and radical collaboration) between technical and non-technical roles covers all angles of the problem and its possible solutions.

Understanding problems and rapidly testing solutions through prototypes, such as an AI-focused MVP, can also be very effective, even before you invest heavily in building the full solution.

Rather than just "doing something interesting," do something impactful that solves people's problems.

