Machine Learning: A Product of Moonshot Thinking

Adrian Trujillo Duron
6 min read · Oct 26, 2021
Photo by Arseny Togulev on Unsplash

“Shoot for the moon. Even if you miss, you’ll land among the stars.”

— Leslie Calvin Brown

This phrase is commonly used in motivational talks, yet it is also the core of the Moonshot Thinking ideology. This way of thinking has paved the way for great technological advances over the years, especially driven by engineers at companies such as Google, SpaceX and Microsoft. To use this framework, you pick a giant problem and propose a solution using disruptive technology. You have to abandon the idea of improving things by 10%; instead, you should aim for a solution that brings a 10x improvement or solves the problem completely. There are several examples, but one of the most notable is Machine Learning.

Machine Learning

Photo by Pietro Jeng on Unsplash

Machine Learning is a branch of Artificial Intelligence and Computer Science that explores the use of algorithms to imitate the way humans learn. This approach focuses on making a computer learn to do a task without being explicitly programmed to do so. It started with a Moonshot thought: “What if we could replicate the human mind using machines?” This led to the mathematical development of what would become neural networks, which enable Deep Learning algorithms.

It’s all about the Data. Some ML algorithms were already developed in the 70s. So… why did the Machine Learning boom take so long? The answer lies at the very core of ML: DATA. Today the world is filled with data, a lot of data, and it doesn’t look like it’s going to slow down anytime soon. Every day, approximately 2.5 million terabytes of data are generated. Machine Learning brings the promise of deriving meaning from all of that data.

The Machine Learning Process

Machine Learning Process

This is the typical workflow of a Machine Learning project. The first step is to Gather Data. Then you proceed to Cleaning and Preparing the Data: getting rid of anything that would hurt your model, for example null values, outliers, or even an entire subset of the data that is not relevant. Next comes Model Training, where you choose an ML algorithm depending on the goal and train it with some of the data. Once you achieve some results, you can continue with Model Validation: here the model is tested against data it has never seen before, so you can accurately measure whether or not it is doing a good job. The last step is to Deploy the Model and continue to improve it as you gather more data.
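
As a rough illustration, the whole loop can be sketched in a few lines of plain Python. Everything here is hypothetical: the synthetic data, the “dirty” record, and the toy nearest-centroid model stand in for a real dataset and a real algorithm.

```python
import random

# 1. Gather data: two synthetic clusters of 2-D points, labeled 0 and 1.
random.seed(42)
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(50)]
data += [((random.gauss(5, 1), random.gauss(5, 1)), 1) for _ in range(50)]
data.append((None, 0))  # a "dirty" record with a missing feature

# 2. Clean and prepare: drop records with missing values, then shuffle.
data = [(x, y) for x, y in data if x is not None]
random.shuffle(data)

# 3. Hold out data the model will never see during training.
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

# 4. Model training: compute one centroid (mean point) per class.
def fit(samples):
    centroids = {}
    for label in {y for _, y in samples}:
        pts = [x for x, y in samples if y == label]
        centroids[label] = tuple(sum(coord) / len(pts) for coord in zip(*pts))
    return centroids

def predict(centroids, point):
    # classify a point by its nearest centroid (squared distance)
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(centroids[lbl], point)))

model = fit(train)

# 5. Model validation: measure accuracy on the unseen test split.
accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

The key design point is step 3: accuracy is always reported on data the model was not trained on, which is exactly what the validation stage above is for.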

Types of Machine Learning

Supervised, Unsupervised and Reinforcement Learning

A Supervised Learning algorithm deals with labeled data. This means that for each example in the dataset, the answer or solution is given as well. This is extremely helpful when dealing with Classification and Regression problems.
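
A minimal regression sketch, using made-up labeled pairs: every input x comes with its correct answer y, and the algorithm learns the rule connecting them.

```python
# Labeled data: each x is paired with its answer y (here generated by y = 2x + 1).
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]

# Least-squares fit of the line y = w*x + b, the simplest regression model.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x
print(f"learned: y = {w:.1f}x + {b:.1f}")  # prints: learned: y = 2.0x + 1.0
```

Because the answers were supplied, the model can recover the underlying rule exactly; that supervision is what the labels buy you.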

Then we have Unsupervised Learning. This type of learning deals with data that is not labeled, meaning that no answers are given to the algorithm. Instead, this type of algorithm tries to group the data based on the patterns and insights it discovers on its own. The main application is clustering.
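
A clustering sketch on synthetic, unlabeled 1-D points: a tiny k-means loop must discover the two hidden groups with no answers given.

```python
import random

# Unlabeled points drawn from two hidden groups (around 0 and around 10).
random.seed(0)
points = ([random.gauss(0, 0.5) for _ in range(30)]
          + [random.gauss(10, 0.5) for _ in range(30)])

# k-means with k=2, initialized at the extremes for simplicity.
centers = [min(points), max(points)]
for _ in range(10):
    # assignment step: each point joins the cluster of its nearest center
    clusters = [[], []]
    for p in points:
        clusters[0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1].append(p)
    # update step: each center moves to the mean of its cluster
    centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]

print(f"discovered centers: {sorted(round(c, 1) for c in centers)}")
```

The algorithm never sees a label; the grouping emerges purely from the structure of the data, which is the essence of unsupervised learning.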

The last type of Machine Learning is Reinforcement Learning. Here, a machine interacts with its environment, performs actions and learns by trial and error, trying to maximize a reward. A typical use would be a self-driving car.
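
A minimal trial-and-error sketch, assuming a hypothetical two-armed bandit rather than a full environment like a self-driving car: the agent tries actions, observes rewards, and gradually favors the action with the higher estimated payoff (epsilon-greedy selection).

```python
import random

# Hypothetical bandit: arm 1 pays off more often than arm 0.
random.seed(1)
true_reward_prob = [0.2, 0.8]

q = [0.0, 0.0]   # estimated value of each action, learned from experience
counts = [0, 0]  # how many times each action was tried
epsilon = 0.1    # exploration rate

for step in range(2000):
    # epsilon-greedy: mostly exploit the best-known arm, sometimes explore
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = 0 if q[0] > q[1] else 1
    # the environment answers with a reward (1 or 0), never with a label
    reward = 1 if random.random() < true_reward_prob[action] else 0
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]  # incremental mean

print(f"estimated values: arm0={q[0]:.2f}, arm1={q[1]:.2f}")
```

Unlike supervised learning, nobody tells the agent which arm is correct; it discovers the better action only through the rewards its own actions produce.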


Photo by Sam Xu on Unsplash

Machine Learning sounds almost like magic, but it currently faces some limitations and challenges. First are problems with data. Some Machine Learning models need huge amounts of data before they can start making accurate predictions. And even with a huge dataset, there can be many errors, outliers and noise within the data, which can lead to a model that performs well on training data but fails to give accurate predictions for new data (a problem known as overfitting).
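
This failure mode is easy to reproduce with synthetic data. The sketch below uses a hypothetical “memorizer” (a 1-nearest-neighbour classifier) on noisy labels: it scores perfectly on the data it memorized, yet noticeably worse on fresh data.

```python
import random

# Synthetic task: label = 1 if x > 0, but 20% of labels are flipped (noise).
random.seed(7)

def make_data(n):
    out = []
    for _ in range(n):
        x = random.uniform(-1, 1)
        y = 1 if x > 0 else 0
        if random.random() < 0.2:
            y = 1 - y  # noisy label
        out.append((x, y))
    return out

train, test = make_data(200), make_data(200)

def predict(point):
    # memorize the training set: copy the label of the nearest training point
    nearest = min(train, key=lambda s: abs(s[0] - point))
    return nearest[1]

train_acc = sum(predict(x) == y for x, y in train) / len(train)
test_acc = sum(predict(x) == y for x, y in test) / len(test)
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

The model fits the noise along with the signal: training accuracy is perfect by construction, while accuracy on unseen data drops well below it.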

The value of Machine Learning is only just beginning to show itself. There is a lot of data in the world today, generated not only by people but also by computers, phones and other devices, and it will only continue to grow in the years to come. In the past, humans analyzed data and adapted systems to changes in data patterns. However, as the volume of data surpasses humans’ ability to make sense of it and to manually write those rules, we will increasingly turn to automated systems that can learn from the data, and importantly from the changes in the data, to adapt to a shifting world.

Another important limitation of Machine Learning is that it requires a lot of computational power. Without the right hardware, training a model can take many hours.

Quantum Computers as AI Accelerators

Photo by Anton Maksimov juvnsky on Unsplash

Today, the computing power behind Machine Learning training is provided by GPUs or TPUs. But in the future we may be able to use Quantum Computing, harnessing the phenomena of quantum mechanics to deliver a huge leap forward in computational power and solve this problem.

Normal computers encode information in binary bits that can be either 1 or 0. In a Quantum Computer, the basic unit is a quantum bit, or qubit. Qubits are made up of physical systems, such as the orientation of a photon. What makes quantum computing so powerful is that qubits can represent both values (1 and 0) simultaneously. In situations where there is a large number of possible combinations, a quantum computer can consider them at the same time.
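
In the standard notation of quantum computing, this "both values at once" property is called superposition, and a single qubit's state is written as:

```latex
% A single qubit: a superposition of the basis states |0> and |1>
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```

Here the amplitudes α and β determine the probability of measuring 0 or 1. A register of n qubits holds 2^n such amplitudes at once, which is where the intuition of considering many combinations simultaneously comes from.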

Harnessing the power of quantum computing, we could train complex neural networks in a matter of seconds, instead of waiting several hours for the model to finish its training.

Tech Knowledge of the Week: Unit Testing

Photo by Sigmund on Unsplash

We all make mistakes. Testing is essential precisely because even the tiniest mistake can slip through. Some mistakes are insignificant, but others can be very expensive and, in the worst-case scenario, risk lives. That’s why we must test everything we produce.

In the world of Software Engineering, unit testing refers to testing a single unit of code: the smallest piece of code that can be isolated in the system. This could be a function, routine or method, and the smaller the better. Smaller tests give you a more granular view of how your code is performing and let you pinpoint exactly where an error is coming from.
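
A small sketch using Python’s built-in unittest module. The function under test, slugify, is hypothetical; each test method exercises one behaviour of that single, isolated unit.

```python
import unittest

# Hypothetical unit under test: a tiny, isolated function (the "unit").
def slugify(title):
    """Lower-case a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# Each test method checks one small behaviour of that one unit,
# so a failure points directly at the code responsible.
class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Machine Learning Rocks"), "machine-learning-rocks")

    def test_extra_whitespace(self):
        self.assertEqual(slugify("  Hello   World "), "hello-world")

# Run the tests and report whether they all passed.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

Because each test targets one narrow behaviour, a red test immediately tells you which behaviour broke and roughly where to look.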

Unit tests should be developed by the person who wrote the code, because the programmer is the one most likely to know how to reach each part of the code easily, including the parts that can’t be accessed by regular means.

Test Driven Development

Photo by Scott Graham on Unsplash

Another approach to development is Test Driven Development. This is a code-design technique in which the software engineer writes the tests beforehand and then writes the production code that will pass those tests. The advantage of TDD is that it lets programmers take small steps when writing code, instead of the usual approach of coding in large chunks.
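
A tiny red-green sketch of that cycle (the function and test names here are hypothetical): the test is written first to specify the behaviour, and only then is just enough production code written to make it pass.

```python
# Step 1 (red): write the test first; it specifies the behaviour we want
# and fails until the production code exists.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (green): write just enough production code to pass the test.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3: run the test, then refactor freely with the test as a safety net.
test_fizzbuzz()
print("test passed")
```

Each pass through this loop is deliberately small, which is exactly the small-steps discipline TDD is meant to enforce.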