High Praise

During a demo of our application to a prospective customer, the instant feedback was "this looks easier to use than Alteryx". We'll take that sort of compliment any day of the week.

Human Analysts Guarantee Bias

During an interview between Shane Parrish and Daniel Kahneman, one of the many interesting comments concerned how to make better decisions. Kahneman said that despite studying decision-making for many years, he is still prone to his own biases. Knowing about your biases doesn't make them any easier to overcome.

His recommendation for avoiding bias in your decision-making is to devolve as many decisions as you can to an algorithm. Translated to analytical and statistical work, this suggests that no matter how hard we try, we always approach analysis with biases that are hard to overcome. Sometimes our personal biases are exaggerated by external incentive models: whether you are evaluating your boss's latest pet idea or writing a research report for a paying client, delivering the wrong message can be costly, even if it is the right thing to do.

Knowledge Leaps has an answer. We have built two useful tools to overcome human bias in analysis. The first is a narrative builder that can be applied to any dataset to surface the objective narrative captured in the data, without introducing human biases.

The second tool removes bias by using many pairs of eyes to analyze data, averaging out any individual analytical bias. Instead of a single human (i.e. a bias-prone analyst) looking at a data set, our tool lets many people look at it simultaneously and share their individual interpretations of the data. Across many analysts, this removes bias through collaboration and transparency.

Get in touch to learn more. doug@knowledgeleaps.com.

Sign Up For Our Newsletter

Occasionally we will send out an email newsletter containing some detailed and interesting case studies based on the data we have access to. Sign up below.

Building An Agile Market Research Tool

For the past five years we have been building our app, Knowledge Leaps, an agile market research tool. We use it to power our own business, serving some of the most demanding clients on the planet.

To build an innovative market research tool, I had to leave the industry. I spent 17 years working in market research and experienced an industry that struggled to innovate. There are many reasons why innovation failed to flourish, one of which lies in the fact that it is a service industry. Service businesses are successful when they focus their human effort on revenue generation (as they should). Since the largest cost base in research is people, there is no economic incentive to invest in the long term, especially as the industry has come under economic pressure in recent years. The same could be said of many service businesses that have been disrupted by technology; taxi drivers are a good example of this effect.

This wouldn't be the first time market research innovations have come from firms outside the traditional market research category. For example, SurveyMonkey was founded by a web developer with no prior market research experience, while Qualtrics was founded by a business school professor and his son, again with no prior industry experience.

Stepping outside the industry and learning how other types of businesses manage data, use it, and extract information from it has been enlightening. It has also helped us build an abstracted solution. While we focus on market research use-cases, we have built a platform that fosters analytics collaboration and an open-data philosophy, so finding new uses for it is a frequent occurrence.

In tech-speak, what we have done is productize a service. We have taken the parts of the market research process that happen frequently and are expensive and turned them into a product: a product that delivers the story in data without bias. It does it really quickly too. Visit the site or email us at support@knowledgeleaps.com to find out more.

Science Fiction and the No-Free-Lunch Theory

In a lot of science fiction films, one or more of the following are true:

  1. Technology exists that allows you to travel through the universe at the "speed of light."
  2. Technology exists that allows autonomous vehicles to navigate complicated 2-D and 3-D worlds.
  3. Technology exists that allows robots to communicate with humans in real-time detecting nuances in language.
  4. Handheld weapons have been developed that fire bursts of lethal high energy that require little to no charging.

Yet, despite these amazing technological advances, the kill ratio of these weapons is very low. While it is fiction, I find it puzzling that this innovation inconsistency persists across so many films and stories.

This is the no-free-lunch theory in action: machines developed to be good at a specific task are not good at other tasks. This will have ramifications in many areas, especially those that require solving multiple challenges at once. Autonomous vehicles, for example, need to be good at four things:

  1. Navigating from point A to point B.
  2. Complying with road rules and regulations.
  3. Negotiating position and priority with other vehicles on the road.
  4. Not killing, or harming, humans and animals.

Of this list, 1) and 2) are low-level problems. 3) is challenging to solve because it requires some programmed personality: imagine two cars running the same autonomous software arriving at a junction at the very same time; one of them needs to give way to the other. This requires some degree of assertiveness to be built in, and I am not sure that is trivial to solve.

Finally, 4) is probably really hard to solve, since it requires something like 99.99999% success in handling incidents that occur only once every million miles or so. There may never be enough training data.
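As a rough back-of-envelope illustration of the training-data problem (all numbers below are assumptions for illustration, not measured figures), consider how many miles of driving you would need just to observe enough rare incidents to validate that success rate:

    # Back-of-envelope sketch: miles of driving data needed to observe
    # enough rare incidents to validate a very high success rate.
    # All numbers are illustrative assumptions.
    incident_rate = 1 / 1_000_000         # assume one incident per million miles
    target_success = 0.9999999            # the 99.99999% success rate cited above
    allowed_failure = 1 - target_success  # tolerated failure rate per incident

    # To estimate a ~1e-7 failure probability with any statistical confidence,
    # you need on the order of 1 / allowed_failure observed incidents.
    incidents_needed = 1 / allowed_failure           # roughly 10 million incidents
    miles_needed = incidents_needed / incident_rate  # roughly 10 trillion miles

    print(f"{incidents_needed:,.0f} incidents -> {miles_needed:,.0f} miles")

Ten trillion miles is far more driving than anyone will record any time soon, which is the point.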

AI Developer, A Job For Life.

Last year we wrote about the No Free Lunch Theory (NFLT) and how it relates to AI (among other things). Judging by this recent Wired article, it seems to be coming true. Deep Learning, the technology that helped AI make significant leaps in performance, has limitations. These limitations, as reported in the article, cannot necessarily be overcome with more compute power.

As NFLT states (paraphrased): an algorithm that is good at doing X cannot also be good at doing not-X. A Deep Learning model's success in one area is no guarantee it will succeed in other areas; in fact, the opposite tends to be true. This is the NFLT in action, and in many ways specialized instances of AI-based systems were an inevitability of it.
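For reference, the original result (Wolpert and Macready, 1997) can be stated more precisely than the X versus not-X paraphrase above. Averaged over all possible objective functions f, any two search algorithms a_1 and a_2 perform identically:

    \sum_{f} P(d_m^y \mid f, m, a_1) = \sum_{f} P(d_m^y \mid f, m, a_2)

where d_m^y is the sequence of cost values observed after m evaluations. Any advantage an algorithm gains on one class of problems is paid for, exactly, on the rest.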

This has implications for the broader adoption of AI. For example, there can be no out-of-the-box AI "system". Implementing an AI solution based on the current state of the art is much like building a railway system: it needs to adapt to the local terrain. A firm can't take a system from another firm or AI-solutions provider and hope it will be a turn-key operation. I guess it's in the name, "Deep Learning": the "Deep" refers to deep domain, i.e. a specific use-case, and not necessarily deep thinking.

This is great news if you are an AI developer or have experience building AI systems. You are the house builder of the modern age and your talents will always be in demand - unless someone automates AI-system implementation.

UPDATE: A16Z wrote this piece, which supports my thesis.

Building Persistent State Knowledge

The tools available to produce charts and visualize data are sadly lacking in a critical area. While much focus has been placed on producing interesting visualizations, one problem has yet to be solved: it is all too easy to separate the data layer from the presentation layer of a chart. The context of a chart is easily lost once it becomes separated from its source, and when that happens we lose meaning and potentially introduce bias and ambiguity.

In plain English: when you produce a chart in Excel or Google Sheets, the source data is in the same document. When you embed that chart in a PowerPoint or Google Slides deck, you lose some of the source information. When you convert that presentation into a PDF and email it to someone, you risk losing all connection to the source. Step by step, it becomes all too easy to strip context from a chart.

Yes, you can label the chart, and you can cite your source, but neither is a foolproof method. These are like luggage tags: they work while they are attached, but they are all too easy to remove.

In analytics, reproducibility and transparency are critical to building a credible story. Where did the data come from? Could someone else remake the chart following the same instructions (source, series information, filters applied, etc.)? Do the results stand up to objective scrutiny?

At Knowledge Leaps, we are building a system that ensures reproducibility and transparency by binding the context of the data and its "recipe" to the chart itself. This is built into the latest release of our application.

When charts are created we bind them to their source data (the easy part) and to their "recipe". We then make them easily searchable and discoverable, unhindered by any information silo (slide, presentation, folder, etc.).
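To make the idea concrete, here is a minimal sketch of what binding a "recipe" to a chart can look like as a data structure. The names and fields are illustrative only, not Knowledge Leaps' actual schema:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Recipe:
        """The reproducible instructions behind a chart."""
        source_uri: str     # where the raw data lives
        series: List[str]   # which columns/series were plotted
        filters: List[str] = field(default_factory=list)     # filters applied, in order
        transforms: List[str] = field(default_factory=list)  # aggregation steps, in order

    @dataclass
    class BoundChart:
        """A chart that carries its own context wherever it goes."""
        title: str
        image_uri: str  # the rendered presentation layer
        recipe: Recipe  # the data layer, permanently attached

    chart = BoundChart(
        title="Weekly sales by region",
        image_uri="charts/weekly_sales.png",
        recipe=Recipe(
            source_uri="s3://example-bucket/sales_2020.csv",
            series=["region", "weekly_sales"],
            filters=["country == 'US'"],
            transforms=["group by region", "sum of weekly_sales"],
        ),
    )

Because the recipe travels with the chart rather than with the slide or folder that happens to contain it, anyone who finds the chart can re-run the same steps against the same source and check the result.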

The end benefit: data and charts can be shared without loss of the underlying source information. People not actively involved in creating a chart can interpret and understand its content without ambiguity.

Turning Analysis On Its Head.

Today we rolled out our new charting feature. This release marks an important milestone in the development of Knowledge Leaps (KL).

Our vision has always been to build a data analysis platform that lets a firm harness the power of distributed computing and a distributed workforce.

Charts and data get siloed in organisations because they are buried in containers. Most charts are contained on a slide in a PowerPoint presentation that sits in a folder on a server somewhere in your company's data center.

We have turned this on its head in our latest release. Charts produced in KL remain open and accessible to all users. We have also built in a collaborative interpretation feature, where a group of people spread across locations can interpret data as a team rather than alone. This shares the burden of work and builds more resilient insights, since people with different perspectives can construct the best-in-class narrative.

Awareness Doesn’t Diminish Bias Effect

In an interview with Shane Parrish, the co-creator of behavioral economics, Daniel Kahneman, was asked if he was better at making decisions after studying decision-making for the past 40 years. His answer was a flat no. He then elaborated, saying that biases are hard for an individual to overcome. This dynamic is most evident in the investment community, especially among start-up investors. WeWork is a good case study in people ignoring their biases. An article in yesterday's Wall Street Journal (paywall) describes WeWork's external board and investors looking on as the firm missed projections year after year. In the run-up to the IPO, people were swayed by their biases, and despite data to the contrary, more gasoline was poured on the fire. It took public scrutiny for the real narrative to come out and for people to see their own biases at play. To be fair to those involved, the IPO process was used to deliver some unvarnished truths to WeWork's C-suite. As Kahneman said, even professional analysts of decision-making get it wrong from time to time.

What hope do the rest of us have? With the right data it is easier to at least be reminded of your biases, even if you choose to accept them. With our data and analytics platform, we have built two core components that give you and your team a better chance of not falling into a bias trap.

Narrative Builder

This component uses an algorithm that outputs human-readable insight into the relationships in your data. Using correction techniques and cross-validation to avoid computational bias, it identifies the cold facts about the relationships (the building blocks of the narrative) in your data.
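As an illustration of the cross-validation part of that claim (the general technique, not our specific algorithm), the sketch below only counts a relationship as real if it holds up on data the model was never trained on:

    # Minimal cross-validation sketch using scikit-learn and synthetic data.
    # It illustrates the general technique, not Knowledge Leaps' own algorithm.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for a real dataset.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    # Each fold is scored on held-out data, so an apparent relationship
    # only survives if it generalizes beyond the data it was fit on.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(f"Mean accuracy across 5 folds: {scores.mean():.3f}")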

Collaborative Insight Generation

The second component we have built to help diminish bias is a collaboration feature. As you analyze data and produce charts, other members of your team can provide input and hypotheses for each chart. Allowing a second, third, or even fourth pair of eyes to interpret the data helps build a resilient narrative.

Surfacing a bias-free narrative is only part of the journey; we still need to convince other humans, with their own biases, of the story discovered in the data. As we have learnt in recent years, straight facts aren't sufficient conditions for belief. At least with a collaborative approach we can help each other overcome bias traps.
