We Eat Our Own Dog Food

I first heard the phrase "do the dogs eat the dog food" on a start-up podcast. The idea is that if your firm is building a product for customers, your firm should also use it.

I then read this adaptation of the phrase and thought it applied to us. We ship features and code that help our customers and that help us do our jobs better. We make "dog food" and we eat it. So, if the UI for a new feature is clunky or an implementation doesn't quite hit the mark, we know about it because our team will tell us.

Data Knowledge Graph

When you are building data products and filtering data files, it is important to keep track of what you have combined to make a new data set and what you have removed. This feature has saved us countless hours.

From an audit perspective, we can build a complete history of a dataset: when it was added to the platform, how it was processed, and when, where and by whom it was delivered or downloaded. This removes a time-draining communication burden from our teams.

We can also add commentary and narratives to a data set. This helps us build transparency and persistent-state knowledge about data.
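As a rough sketch of the idea (with hypothetical field names, not our actual schema), a lineage record might look something like this: each derived dataset points back at its parents and carries its own ordered history and commentary.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class LineageEvent:
    """One step in a dataset's history: who did what, and when."""
    action: str             # e.g. "uploaded", "filtered", "joined", "delivered"
    actor: str              # the user who performed the action
    timestamp: datetime
    detail: str = ""        # e.g. "removed rows where country != 'US'"

@dataclass
class Dataset:
    """A dataset plus the provenance needed to audit it."""
    name: str
    parents: list = field(default_factory=list)   # datasets it was built from
    history: list = field(default_factory=list)   # ordered LineageEvents
    notes: list = field(default_factory=list)     # analyst commentary and narratives

    def full_history(self):
        """Walk back through all parents to rebuild the complete audit trail."""
        events = list(self.history)
        for parent in self.parents:
            events.extend(parent.full_history())
        return sorted(events, key=lambda e: e.timestamp)
```

Answering an audit question then becomes a single call to a method like full_history(), rather than a chain of emails.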

On Maxima: Search, Life and Software.

Until recently, I wrestled with why people I knew growing up in a small village in the UK stayed in the village when there was a whole world of opportunities awaiting discovery. I have come to realize that life is a search process: a search for purpose, contentment and security. As with most search algorithms, some are better than others. Some people's search algorithms stop when they discover a local maximum, such as village life in the UK. Other algorithms drive people to travel much further.

Software development follows similar principles to a search algorithm. While we might think that we are heading towards a peak when we start out building an application, we soon discover that the landscape we are searching is evolving. If we rush too quickly to a peak we might find that we settle on a local rather than a global maximum. Facebook is a good example of the impact of search speed. The reason Facebook prevailed is that the many social networking sites that came before it provided the company with a long list of technical and business mistakes to avoid. A major lesson was controlled growth, in other words a slow search algorithm: avoiding the strong temptation, especially where a social network is concerned, to grow very rapidly.

This is an example of a good search process and why it has to be a slow one for long-term success. A slow search allows a process to find a stable solution. The simulated annealing algorithm is a good example of this: the random perturbations applied to the search diminish over time as the solution gets closer to the optimum, while the occasional acceptance of a random move ensures the search doesn't get stuck on a poor solution.
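A minimal sketch of the annealing idea (a toy objective, not production code): the random perturbation shrinks as the temperature cools, and a worse solution is occasionally accepted so the search can escape a local maximum.

```python
import math
import random

def anneal(f, x0, temp=1.0, cooling=0.995, steps=10_000):
    """Maximize f by simulated annealing: big random jumps early,
    smaller ones as the temperature falls, with occasional
    acceptance of worse solutions to escape local maxima."""
    x, best = x0, x0
    for _ in range(steps):
        candidate = x + random.gauss(0, temp)   # perturbation shrinks with temp
        delta = f(candidate) - f(x)
        # Always accept improvements; sometimes accept a step downhill.
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
            if f(x) > f(best):
                best = x
        temp *= cooling                          # slow cooling = slow search
    return best

# A bumpy function with many local maxima; annealing finds the global one near 0.
print(anneal(lambda x: -x * x + 2 * math.cos(5 * x), x0=4.0))
```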

We have also been running our own slow search algorithm as we build Knowledge Leaps. We have been at this for a long time: app development began five years ago, but we started its design at least eight years ago. While we wanted to go faster, we have been resource-constrained. The advantage of this is that we have built a resilient and fault-tolerant application. The slow development process has also helped foster our design philosophy. When we began, we wanted to be super-technical and build a new scripting language for data engineering. Over time our beliefs have mellowed as we have seen the benefits of a No Code / Low Code solution. Incorporating this philosophy into Knowledge Leaps has made the application that much more user-friendly and stable.

Building An Agile Market Research Tool

For the past five years we have been building our app Knowledge Leaps, an agile market research tool. We use it to power our own business serving some of the most demanding clients on the planet.

To build an innovative market research tool I had to leave the industry. I spent 17 years working in market research and experienced an industry that struggled to innovate. There are many reasons why innovation failed to flourish, one of which lies in the fact that it is a service industry. Service businesses are successful when they focus their human effort on revenue generation (as they should). Since the largest cost base in research is people, there is no economic incentive to invest in the long term, especially as the industry has come under economic pressure in recent years. The same could be said of many service businesses that have been disrupted by technology; taxi drivers are a good example of this effect.

This wouldn't be the first time market research innovations have come from firms outside the traditional market research category. For example, SurveyMonkey was founded by a web developer with no prior market research experience, while Qualtrics was founded by a business school professor and his son, again with no prior market research industry experience.

Stepping outside of the industry and learning how other types of businesses are managing data, using it and extracting information from it has been enlightening. It has also helped us build an abstracted solution. While we focus on market research use-cases, we have built a platform that fosters analytics collaboration and an open-data philosophy, so finding new uses for it is a frequent occurrence.

To talk tech-speak, what we have done is productize a service. We have taken the parts of the market research process which happen frequently and are expensive and turned them into a product: a product that delivers the story in data without bias. It does it really quickly too. Visit the site or email us at support@knowledgeleaps.com to find out more.

Building Persistent State Knowledge

The tools available to produce charts and visualize data are sadly lacking in a critical area. While much focus has been placed on producing interesting visualizations, one problem has yet to be solved: it is all too easy to separate the Data layer from the Presentation layer in a chart. It is easy for the context of a chart to be lost when it becomes separated from its source. When that happens we lose meaning and we potentially introduce bias and ambiguity.

In plain English, when you produce a chart in Excel or Google Sheets, the source data is in the same document. When you embed that chart in a PowerPoint or Google Slides deck you lose some of the source information. When you convert that presentation into a PDF and email it to someone, you risk losing all connections to the source. Step by step, it becomes all too easy to strip the context from a chart.

Yes, you can label the chart, and you can cite your source, but neither is a foolproof method. They are like luggage tags: they work while they are attached, but they are all too easy to remove.

In analytics, reproducibility and transparency are critical to building a credible story. Where did the data come from? Could someone else remake the chart following these instructions (source, series information, filters applied, etc.)? Do the results stand up to objective scrutiny?

At Knowledge Leaps, we are building a system that ensures reproducibility and transparency by binding the context of the data and its "recipe" to the chart itself. This is built into the latest release of our application.

When charts are created we bind them to their source data (the easy part) and we bind the "recipe" that produced them. We then make them easily searchable and discoverable, unhindered by any information silo (slide, presentation, folder, etc.).
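To illustrate the principle (a sketch with hypothetical fields, not our internal format), a chart that carries its own context might look like this:

```python
chart = {
    "title": "Weekly sales by region",
    "source": {
        "dataset_id": "ds-1042",           # hypothetical ID of the bound dataset
        "uploaded_by": "jsmith",
        "uploaded_at": "2020-03-01T09:30:00Z",
    },
    "recipe": {                            # everything needed to remake the chart
        "series": ["sales"],
        "group_by": "region",
        "filters": [{"column": "week", "op": ">=", "value": "2020-01-01"}],
        "aggregation": "sum",
    },
    "commentary": [],                      # collaborative interpretation lives here
}
```

Because the source reference and recipe travel inside the chart object itself, they can't be lost the way a luggage tag can.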

The end benefit: data and charts can be shared without loss of the underlying source information. People not actively involved in creating a chart can interpret and understand its content without any ambiguity.

Turning Analysis On Its Head.

Today we rolled out our new charting feature. This release marks an important milestone in the development of Knowledge Leaps (KL).

Our vision has always been to build a data analysis platform that lets a firm harness the power of distributed computing and a distributed workforce.

Charts and data get siloed in organisations because they are buried in containers. Most charts are contained on a slide in a PowerPoint presentation that sits in a folder on a server somewhere in your company's data center.

We have turned this on its head in our latest release. Charts produced in KL remain open and accessible to all users. We have also built in a collaborative interpretation feature, where a group of people spread across locations can interpret data as a team rather than alone. This shares the burden of the work and builds more resilient insights, since people with different perspectives can construct a best-in-class narrative.

Awareness Doesn’t Diminish Bias Effect

In an interview with Shane Parrish, the co-creator of Behavioral Economics, Daniel Kahneman, was asked if he was better at making decisions after studying decision-making for the past 40 years. His answer was a flat no. He then elaborated, saying that biases are hard for an individual to overcome. This dynamic is most evident in the investment community, especially among start-up investors. WeWork is a good case study in people ignoring their biases. An article in yesterday's Wall Street Journal (paywall) describes WeWork's external board and investors looking on as the firm missed projections year after year. In the run-up to the IPO, people were swayed by their biases, and despite data to the contrary, more gasoline was poured on the fire. It took public scrutiny for the real narrative to come out and for people to see their own biases at play. To be fair to those involved, the IPO process was used to deliver some unvarnished truths to WeWork's C-suite. As Kahneman said, even professional analysts of decision-making get it wrong from time to time.

What hope do the rest of us have? With the right data it is easier to at least be reminded of your biases, even if you choose to accept them. In our data and analytics platform we have built two core components that give you and your team a better chance of not falling into a bias trap.

Narrative Builder

This component uses an algorithm that outputs human-readable insights into the relationships in your data. Using correction techniques and cross-validation to avoid computational bias, you can identify the cold facts when it comes to the relationships (the building blocks of the narrative) in your data.
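As a toy illustration of the kind of check this implies (using scikit-learn and made-up column names, not our patented algorithm), a relationship only earns a place in the narrative if it holds up on data the model hasn't seen:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical survey data: does "saw_ad" actually relate to "purchased"?
rng = np.random.default_rng(0)
saw_ad = rng.integers(0, 2, size=(500, 1))
purchased = (saw_ad[:, 0] & (rng.random(500) < 0.6)).astype(int)

# Cross-validation scores the relationship on held-out folds, so a
# pattern that only exists by chance in one slice of the data won't
# survive into the narrative.
scores = cross_val_score(LogisticRegression(), saw_ad, purchased, cv=5)
print(f"mean out-of-sample accuracy: {scores.mean():.2f}")
```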

Collaborative Insight Generation

The second component we have built to help diminish bias is a collaboration feature. As you analyze data and produce charts, other members of your team can provide input and hypotheses for each chart. Allowing a second, third or even fourth pair of eyes to interpret data helps build a resilient narrative.

Surfacing a bias-free narrative is only part of the journey; we still need to convince other humans, with their own biases, of the story discovered in the data. As we have learnt in recent years, straight facts aren't a sufficient condition for belief. At least with a collaborative approach we can help overcome bias traps.

One Chart Leads To Another, Guaranteed.

We have just released the charting feature in Knowledge Leaps. The ethos behind the design is this: in our experience, if you are going to make one chart using a data set you are probably going to make many charts using the data.

Specifying lots of charts one by one is painful, especially as a data set will typically have many variables that you want to plot against one specific variable, date for example. Our UI has been built with this in mind: specify multiple charts quickly and simply, then spend the time you save putting your brain to work figuring out the data narrative.
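In spirit, the specification works like this sketch (matplotlib and hypothetical column names, for illustration only): choose one x-variable and fan every other column out into its own chart.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical dataset: one date column plus many metrics.
df = pd.read_csv("sales.csv", parse_dates=["date"])

# One specification, many charts: plot every other column against the date.
for column in df.columns.drop("date"):
    fig, ax = plt.subplots()
    ax.plot(df["date"], df[column])
    ax.set_title(f"{column} over time")
    fig.savefig(f"{column}.png")
    plt.close(fig)
```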

Charts also tend to get buried deep in a silo, either as part of a workbook or a presentation. This demands contextual knowledge: you need to know where the chart is to know what story it tells. This is suboptimal, so we fixed that too. The Knowledge Leaps platform keeps all your charts searchable and shareable, and that goes for your co-workers' charts as well. This allows insight to be easily discovered and shared with a wider team, helping build persistent-state organizational intelligence, faster.

Market Research 3.0

In recent years, there has been lots of talk about incorporating Machine Learning and AI into market research. Back in 2015, I met someone at a firm who claimed to be able to scale up market research survey results from a sample of 1,000 to samples as large as 100,000 using ML and AI.

Unfortunately that firm, Philometrics, was founded by Aleksandr Kogan - the person who wrote the app for Cambridge Analytica that scraped Facebook data using quizzes. Since then, the MR world has moved pretty slowly. I have a few theories but I will save those for later posts.

Back on topic: Knowledge Leaps got a head start on this six years ago when we filed our patent for technology that automatically analyzes survey data to draw out the story. We don't eliminate human input; we just make sure computers and humans are put to their best respective uses.

We have incorporated that technology into a web-based platform: www.knowledgeleaps.com. We still think we are a little early to market but there might be enough early adopters out there now around which we can build a business. 

As well as reinventing market research, we will also reinvent the market research business model. Rather than charging a service fee for analysis, we only charge a subscription for using the platform.

Obviously you still have to pay for interviews to gather the data, but you get the idea. Our new tech-enabled service will dramatically reduce the time-to-insight and the cost-of-insight in market research. If you want to be a part of this revolution, then please get in touch: Doug@knowledgeleaps.com.

Fear-Free Databases: No Code No SQL – Use Case

We rolled out our No Code Database feature today. Just plug in a data feed and add data to a customizable database with zero lines of code, and zero knowledge of the inner workings of databases. All this in under a minute.

Setting up a database in the cloud is confusing and complex for most people. Our technology puts the power of cloud-based databases at everyone's fingertips. No need for the IT team's intervention. No need to learn remote login protocols. No need to learn any code.

We have also added in some useful aggregation and summarization tools that let you extract data from databases straight into reports and charts. Again, no code required.
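For a sense of what that saves, here is roughly the kind of aggregation code you would otherwise write by hand (a pandas sketch with hypothetical column names):

```python
import pandas as pd

# Hypothetical raw feed: one row per transaction.
df = pd.read_csv("transactions.csv", parse_dates=["date"])

# Summarize revenue by month and region, ready to drop into a report.
summary = (
    df.assign(month=df["date"].dt.to_period("M"))
      .groupby(["month", "region"], as_index=False)["revenue"]
      .sum()
)
print(summary.head())
```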