
Human-in-the-loop AI In A Nutshell

Human-in-the-loop (HITL) is a subset of artificial intelligence that utilizes human and machine intelligence to develop machine learning models.

Understanding human-in-the-loop AI

Despite the vast potential of artificial intelligence, an estimated 80% of all AI projects fail and never deliver a return on investment.

To reduce the likelihood of failure, teams are now utilizing the human-in-the-loop approach to rapidly deploy models with less data and better-quality predictions.

Part of this failure rate stems from the fact that an AI model has a statistics-based understanding of the world, which means it can never predict anything with absolute certainty.

To account for this uncertainty, some models enable humans to interact with them via direct feedback which is then used by AI to adjust its “view of the world”.

With this preamble out of the way, we can now define HITL with more clarity. In essence, it refers to an AI system that allows for direct human feedback to a model where predictions fall below a certain confidence level.

Human-in-the-loop can be thought of as greater than the sum of its parts. In other words, it strives to achieve what neither a human nor machine could achieve on their own.

When the machine cannot solve a problem, a human provides help in the form of continuous feedback which, over time, produces better results. Conversely, humans turn to machines for assistance when smart decisions need to be made from vast datasets.
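The confidence-threshold routing described above can be sketched in a few lines of Python. All names and the toy "model" below are hypothetical; a real system would plug in an actual classifier, but the control flow is the same: confident predictions pass through, uncertain ones are deferred to a human, and the human's answer is saved as new labeled training data.

```python
# Minimal human-in-the-loop routing sketch (all names hypothetical).
CONFIDENCE_THRESHOLD = 0.8

def model_predict(text):
    """Stand-in for a real model: returns (label, confidence)."""
    if "refund" in text.lower():
        return "billing", 0.95
    return "unknown", 0.40  # the toy model is unsure about everything else

def human_review(text):
    """Stand-in for a human annotator's judgment."""
    return "shipping"

def classify(text, feedback_store):
    label, confidence = model_predict(text)
    if confidence < CONFIDENCE_THRESHOLD:
        # Low confidence: defer to the human, and keep the corrected
        # label so it can be used in the next training round.
        label = human_review(text)
        feedback_store.append((text, label))
    return label

feedback = []
print(classify("I want a refund", feedback))      # model is confident
print(classify("Where is my parcel?", feedback))  # routed to a human
print(len(feedback))                              # 1 example collected
```

The key design point is that the threshold controls the trade-off: a higher threshold sends more cases to humans (better quality, higher cost), a lower one trusts the model more.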

Where is human-in-the-loop integrated?

HITL can be integrated into two machine learning algorithms:

  1. Supervised learning – where algorithms are trained on labeled data sets to produce functions that are then used to map new examples. In this way, the algorithm can subsequently assign labels to data it has not seen before. 
  2. Unsupervised learning – where algorithms take unlabeled data sets and work to find structure in the data on their own. This can be categorized as a deep learning HITL approach.
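The difference between the two settings can be illustrated with a deliberately tiny, pure-Python sketch (the data and function names are made up). In the supervised case each example carries a label; in the unsupervised case the algorithm must discover structure, here two clusters on a number line.

```python
# Supervised: labeled examples -> map new examples via nearest neighbor.
labeled = [(1.0, "low"), (1.2, "low"), (9.8, "high"), (10.1, "high")]

def nearest_label(x, labeled):
    """Return the label of the closest labeled example (1-NN)."""
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised: no labels, so find two cluster centers (a toy 2-means).
def two_means(points, iterations=10):
    a, b = min(points), max(points)
    for _ in range(iterations):
        group_a = [p for p in points if abs(p - a) <= abs(p - b)]
        group_b = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(group_a) / len(group_a)
        b = sum(group_b) / len(group_b)
    return a, b

print(nearest_label(9.5, labeled))       # "high"
print(two_means([1.0, 1.2, 9.8, 10.1]))  # centers near 1.1 and 9.95
```

In a HITL workflow, a human would then inspect either result: checking the nearest-neighbor labels in the supervised case, or naming and validating the discovered clusters in the unsupervised one.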

Either way, humans check and evaluate the results to validate the machine learning algorithm. If these results are inaccurate, humans refine the algorithm or verify the data once more before feeding it back into the algorithm. 

HITL is an iterative approach to building a model that is not unlike agile software development.

The model is initially trained on only a small amount of data, and no more.

More data is then added, and the model is continually updated by subject matter experts who build, adapt, and improve the model, or adjust tasks and requirements as needed. 

When can HITL be used?

HITL is most effective in machine learning projects characterized by a lack of available data. In this situation, people are more capable (at least initially) of making an accurate judgment compared to a machine.

Put differently, they are better able to recognize high-quality training data and feed it into the algorithm to produce better results.

With that in mind, HITL is useful in the following situations:

  • When algorithms do not understand the input or when data is interpreted incorrectly.
  • When algorithms do not know how to perform a task.
  • When costly errors need to be avoided during machine learning development.
  • When data is rare or unavailable. If an algorithm is learning to translate English into a language only a few thousand people speak, for example, it may have trouble sourcing accurate examples to learn from.

How is human-in-the-loop playing a key role in the current AI paradigm?


“This is not a race against the machines. If we race against them, we lose. This is a race with the machines. You’ll be paid in the future based on how well you work with robots. Ninety percent of your coworkers will be unseen machines.”

This is what Kevin Kelly said in The Inevitable, published in 2016. Those words seem spot-on right now!

The technological paradigm that brought us here moves along a few key concepts worth understanding, concepts that enabled AI to move from very narrow applications to much more generalized ones.

And it all starts with unsupervised learning.

Indeed, GPT-3 (Generative Pre-trained Transformer 3) is the underlying model used to build ChatGPT, with an important layer on top of it (InstructGPT, which I’ll cover in the coming days) that used a human-in-the-loop approach to smooth some of the key drawbacks of a large language model (hallucination, factuality, and bias).

For now, the premise is this: GPT-3 launched as a large language model, developed by OpenAI using the Transformer architecture, and was the precursor of ChatGPT.

As we’ll see, the turning point for the GPT models was the Transformer architecture (a type of neural network designed specifically for processing sequential data, such as text).

The interesting part of it?

A good chunk of what made ChatGPT incredibly effective is an architecture called the “Transformer,” which was developed by researchers at Google Brain and Google Research.

The key thing to understand is that the information on the web might move away from a crawl, index, rank model to a pre-train, fine-tune, prompt, and in-context learn model!

In that context, human-in-the-loop plays a key role in various parts of this whole process.

Some examples comprise:

  • Fine-tuning: the fine-tuning process is instrumental in making the AI able to perform very specific tasks. This is a supervised learning approach in the context of large language models, and it’s human-in-the-loop: humans label the data and show specific, desired outputs to the AI model to make it much better at a given task. The main takeaway here is that fine-tuning relies on a much smaller dataset and sample to make the AI model far better at specific tasks.
  • Reinforcement learning: in this context, this means reinforcement learning from human feedback (RLHF), where humans rank the model’s outputs and those rankings are used as a reward signal to steer the AI model toward answers humans judge to be better.
  • Prompt engineering: this is one of the most exciting aspects of the current AI paradigm, where AI models can be made to perform many tasks by being given the right context, in a way that makes it possible for them to be both general-purpose and specialized.
  • In-context learning: this is another human-in-the-loop approach, where examples supplied directly in the prompt make the result and output of the AI assistant far more relevant.
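In-context learning is easy to make concrete: instead of retraining the model, labeled examples are placed directly in the prompt, and the model generalizes from them at inference time. The prompt format below is purely illustrative (no particular provider's API is assumed); only the string construction is shown.

```python
# Few-shot in-context learning: build a prompt from labeled examples.
# The format and example data here are hypothetical illustrations.

examples = [
    ("The movie was fantastic", "positive"),
    ("Terrible service, never again", "negative"),
]

def build_prompt(examples, query):
    """Assemble an instruction, the labeled demos, and the open query."""
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The final entry leaves the label blank for the model to complete.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(build_prompt(examples, "I loved every minute of it"))
```

The human-in-the-loop element is exactly those curated demonstrations: a person chooses and labels the examples that steer the model, with no gradient update involved.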

Key takeaways

  • Human-in-the-loop (HITL) is a subset of artificial intelligence that utilizes human and machine intelligence to develop machine learning models.
  • HITL can be integrated into supervised or unsupervised machine learning algorithms. The iterative and collaborative nature of building a model is not unlike the process that occurs in agile software development.
  • HITL is particularly suited to instances where the data is rare or unavailable. It is also useful when costly development errors need to be avoided.

Read: AI Business Models

Connected Business Frameworks

AIOps

AIOps is the application of artificial intelligence to IT operations. It has become particularly useful for modern IT management in hybridized, distributed, and dynamic environments. AIOps has become a key operational component of modern digital-based organizations, built around software and algorithms.

Machine Learning Ops

Machine Learning Ops (MLOps) describes a suite of best practices that successfully help a business run artificial intelligence. It consists of the skills, workflows, and processes to create, run, and maintain machine learning models to help various operational processes within organizations.

Continuous Intelligence

Business intelligence models have transitioned to continuous intelligence, where a dynamic technology infrastructure is coupled with continuous deployment and delivery. In short, software offered in the cloud integrates with the company’s data, leveraging AI/ML to provide real-time answers to current issues the organization might be experiencing.

Continuous Innovation

Continuous innovation is a process that requires a continuous feedback loop to develop a valuable product and build a viable business model. It is a mindset where products and services are designed and delivered to tune them around the customers’ problems, not the technical solution of their founders.

Technological Modeling

Technological modeling is a discipline that provides the basis for companies to sustain innovation, developing incremental products while also looking at breakthrough innovative products that can pave the way for long-term success. In a sort of Barbell Strategy, technological modeling suggests a two-sided approach: on the one hand, keep sustaining continuous innovation as a core part of the business model; on the other hand, place bets on future developments that have the potential to break through and take a leap forward.

OpenAI Business Model

OpenAI has built the foundational layer of the AI industry. With large generative models like GPT-3 and DALL-E, OpenAI offers API access to businesses that want to develop applications on top of its foundational models while being able to plug these models into their products and customize them with proprietary data and additional AI features. On the other hand, OpenAI also released ChatGPT, developed around a freemium model. Microsoft also commercializes OpenAI’s products through its commercial partnership.

OpenAI/Microsoft

OpenAI and Microsoft partnered up from a commercial standpoint. The history of the partnership started in 2016 and consolidated in 2019, with Microsoft investing a billion dollars into the partnership. It’s now taking a leap forward, with Microsoft in talks to put $10 billion into this partnership. Microsoft, through OpenAI, is developing its Azure AI Supercomputer while enhancing its Azure Enterprise Platform and integrating OpenAI’s models into its business and consumer products (GitHub, Office, Bing).

Stability AI Business Model

Stability AI is the entity behind Stable Diffusion. Stability AI makes money from its AI products and from providing AI consulting services to businesses. Stability AI monetizes Stable Diffusion via DreamStudio’s APIs, while also releasing it open source for anyone to download and use. Stability AI also makes money via enterprise services, where its core development team offers enterprise customers the chance to service, scale, and customize Stable Diffusion or other large generative models to their needs.

Stability AI Ecosystem


Business Engineering


Tech Business Model Template

A tech business model is made of four main components: value model (value propositions, mission, vision), technological model (R&D management), distribution model (sales and marketing organizational structure), and financial model (revenue modeling, cost structure, profitability, and cash generation/management). Those elements coming together can serve as the basis to build a solid tech business model.
