I first heard the phrase "do the dogs eat the dog food" on a start-up podcast I listened to. The idea is that if your firm is building a product for customers, your firm should also use it.
I then read this adaptation of the phrase and thought it applies to us. We ship features and code that help our customers and that help us do our jobs better. We make "dog food" and we eat it. So, if the UI for a new feature is clunky or an implementation doesn't quite hit the mark, we know about it because our team will tell us.
Benford's law is one way the IRS/HMRC can tell whether the information you submit on your tax filings is fraudulent. When people lie on their tax forms they tend to make up numbers more or less uniformly at random, when really the distribution of leading digits in genuine financial data should follow Benford's law: 1 appears as the leading digit about 30% of the time, with each larger digit progressively rarer. https://en.wikipedia.org/wiki/Benford%27s_law
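As a quick sketch of the idea (my own example, not anything from a tax authority): Benford's law predicts that the leading digit d occurs with frequency log10(1 + 1/d). The snippet below compares that prediction against the observed leading digits of the powers of 2, a classic sequence known to conform to the law.

```python
import math
from collections import Counter

def benford_expected(d):
    """Expected frequency of leading digit d (1-9) under Benford's law."""
    return math.log10(1 + 1 / d)

def leading_digit_freqs(numbers):
    """Observed frequency of each leading digit 1-9 in a dataset."""
    digits = [int(str(abs(n))[0]) for n in numbers if n != 0]
    total = len(digits)
    counts = Counter(digits)
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

# Powers of 2 are a classic Benford-conforming sequence.
data = [2 ** k for k in range(1, 1001)]
observed = leading_digit_freqs(data)

for d in range(1, 10):
    print(f"{d}: expected {benford_expected(d):.3f}, observed {observed[d]:.3f}")
```

A fraud check works in reverse: compute the observed leading-digit frequencies of the submitted figures and flag filings whose distribution departs sharply from the expected one, e.g. with a chi-squared test.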
In conversations with a friend from university I learned about the No Free Lunch Theorem and how it shapes the state of the art in machine learning and artificial intelligence development.
Put simply, the No Free Lunch (NFL) theorem shows that if an algorithm is good at solving one class of problems, it pays for that success by being less successful at other classes; averaged across all possible problems, no algorithm outperforms any other.
In this regard, algorithms and machine learning systems are like people: training to mastery in one discipline doesn't guarantee mastery in a related discipline without further training. Unlike people, however, algorithm training can be closer to a zero-sum game, with further training in one discipline likely to reduce a model's competency in an adjacent one. For example, while DeepMind's AlphaZero can be trained to beat world champions at both chess and Go, this was achieved using separate instances of the technology: a fresh model was trained for chess rather than adapting the Go-playing one. Knowing how to win at Go doesn't guarantee being able to win at chess without retraining.
What does this mean for the development of AI? In my opinion, while there are firms with early-mover advantage in the field, their viable AI solutions sit in very deep domains that tend to be closed systems, e.g. board games, video games, and making calendar appointments. As the technology develops, each new domain will require new effort, which is likely to lead to a high number of AI solutions and providers. So rather than an AI future dominated by corporate superpowers, there will be many providers, each with domain-distinct AI offerings.
Shane Parrish wrote this great piece on Power Laws and why we need to consider non-linear relationships,