
Pre-Training In A Nutshell

In the context of AI, pre-training describes the process of training a model on one task so that it forms parameters that can be reused for other tasks.

Pre-training, a key component of the current AI paradigm

Pre-training has turned out to be one of the most important aspects of the current AI paradigm: large language models need pre-training to become general-purpose engines.

Pre-training, combined with the transformer architecture, is therefore the stepping stone that makes an AI model versatile and able to generalize across tasks. This is the core innovation that made AI commercially viable.

Understanding pre-training

Pre-training in artificial intelligence is at least partly inspired by how humans learn. Instead of having to learn a topic from scratch, we transfer and repurpose existing knowledge to understand new ideas and navigate different tasks.

In an AI model, a similar process unfolds. The model is first trained on a task or dataset, and the resulting parameters are then used to initialize and train a model on a different task or dataset. In effect, the model can perform a new task based on prior experience.
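
To make this concrete, here is a minimal sketch in PyTorch of how parameters learned on one task can be reused for another. The layer sizes, the two task shapes, and the decision to freeze the shared trunk are illustrative assumptions, not details of any specific model.

```python
import torch
import torch.nn as nn

# Shared "trunk" whose parameters are learned during pre-training
# (layer sizes here are illustrative, not from any specific model).
trunk = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 32),
    nn.ReLU(),
)

# Stage 1: pre-train the trunk with a head for task A (e.g., 10 classes).
pretrain_model = nn.Sequential(trunk, nn.Linear(32, 10))
# ... train pretrain_model on task A here ...

# Stage 2: reuse the trunk's learned parameters for task B (e.g., 3 classes).
finetune_model = nn.Sequential(trunk, nn.Linear(32, 3))

# Optionally freeze the pre-trained parameters so only the new head trains.
for param in trunk.parameters():
    param.requires_grad = False

x = torch.randn(4, 128)          # a dummy batch of 4 inputs
print(finetune_model(x).shape)   # torch.Size([4, 3])
```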

One of the most critical aspects of pre-training is task-relatedness, or the idea that the task the model learns initially must be similar to the task it will perform in the future. For example, a model trained for object detection could not later be used to predict the weather.

Pre-training methods

Here are some of the ways pre-training is conducted in the natural language processing space.

Word2vec

Developed by Google, Word2vec is a tool that produces static word embeddings and can be trained on corpora of millions of words, capturing word-to-word similarity. Word2vec is part of a family of related models trained to reconstruct the linguistic contexts of words.

The model, released in 2013, can detect synonymous words once trained and suggest additional words for a partial sentence.
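
As an illustration, the gensim library exposes a Word2vec implementation. The toy corpus and hyperparameters below are purely illustrative; real training runs over corpora of millions of words.

```python
from gensim.models import Word2Vec

# A toy corpus; real training uses millions of words.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["a", "cat", "and", "a", "dog", "played"],
]

# Train static embeddings: vector_size is the embedding dimension,
# window is how many neighboring words count as context.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=50)

# Words used in similar contexts end up with similar vectors.
print(model.wv.most_similar("cat", topn=3))
```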

GPT

GPT is a transformer-decoder-based language model built on the core premise of self-attention: to compute a representation of a given input sequence, the model can attend to different positions of that sequence.
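
The sketch below shows the core self-attention computation in PyTorch, including the causal mask a decoder like GPT uses so that each position only attends to earlier positions. For brevity it skips the learned query, key, and value projections a real model would apply.

```python
import math
import torch

def causal_self_attention(x):
    """Scaled dot-product self-attention over a sequence x of shape
    (seq_len, d_model); the causal mask keeps each position from
    attending to later positions, as in a decoder like GPT."""
    seq_len, d_model = x.shape
    # For brevity the same tensor serves as queries, keys, and values;
    # a real model applies learned linear projections first.
    q, k, v = x, x, x
    scores = q @ k.T / math.sqrt(d_model)              # (seq_len, seq_len)
    mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()
    scores = scores.masked_fill(mask, float("-inf"))   # hide future positions
    weights = torch.softmax(scores, dim=-1)            # attention distribution
    return weights @ v                                 # weighted mix of values

out = causal_self_attention(torch.randn(5, 16))
print(out.shape)  # torch.Size([5, 16])
```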

GPT is trained over two stages. In the first stage, creator OpenAI uses a language modeling objective on unlabeled data to learn the initial parameters. Then, those parameters are adapted to a target task using labeled training examples and the corresponding supervised objective.
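
A rough sketch of the two stages using the Hugging Face transformers library, which is not mentioned in the original text: loading the published "gpt2" checkpoint stands in for stage one (its parameters were learned with a language modeling objective), and attaching a classification head stands in for stage two. The two-class target task is a hypothetical stand-in.

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel, GPT2ForSequenceClassification

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Stage 1 (already done by OpenAI): language modeling on unlabeled text.
# Loading "gpt2" retrieves the parameters learned with that objective.
lm = GPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer("Pre-training is", return_tensors="pt")
# Passing labels=input_ids makes the model return the next-token prediction loss.
lm_loss = lm(**inputs, labels=inputs["input_ids"]).loss

# Stage 2: adapt the same parameters to a supervised target task,
# here a hypothetical two-class classification problem.
clf = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
clf.config.pad_token_id = tokenizer.eos_token_id  # GPT-2 has no pad token
logits = clf(**inputs).logits  # shape (1, 2); fine-tune with labeled examples
```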

BERT

BERT is another transformer-based language model, first trained on a large volume of text such as Wikipedia.

BERT is a fine-tuning, encoder-based model that features a bidirectional language model. Instead of the left-to-right word prediction that decoder-based models like GPT use, BERT introduces two key changes.

The first is a pretraining task known as Masked Language Model (MLM), where 15% of the words are randomly masked and BERT is asked to predict them. As we noted, BERT can use context from both directions to do so.
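
A simplified sketch of the masking step in plain Python. Real BERT sometimes substitutes a random word or keeps the original instead of always inserting [MASK], a detail this sketch omits.

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Randomly replace ~15% of tokens with [MASK]; the model must
    recover the originals using context from both directions."""
    masked, targets = [], []
    for token in tokens:
        if random.random() < mask_prob:
            masked.append(mask_token)
            targets.append(token)   # what BERT is asked to predict
        else:
            masked.append(token)
            targets.append(None)    # no prediction needed here
    return masked, targets

tokens = "the model learns language by filling in blanks".split()
print(mask_tokens(tokens))
```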

The second change relates to model input. BERT does not use whole words as tokens but word pieces. For instance, the word "working" may be split into "work" and "##ing" rather than kept as a single token. The model then adds position embeddings to compensate for a weakness of self-attention, which otherwise ignores word-position information.
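
For example, the Hugging Face BertTokenizer applies this WordPiece splitting, with continuation pieces carrying a "##" prefix. The exact split depends on the learned vocabulary (common words often stay whole), so the outputs shown in the comments are indicative only.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Rare words are split into subword pieces; continuation pieces carry "##".
# The exact split depends on the vocabulary, so this output is indicative.
print(tokenizer.tokenize("snowboarding"))  # e.g., ['snow', '##board', '##ing']

# encode() adds the special [CLS] and [SEP] tokens; position embeddings
# for each slot are added inside the model, not by the tokenizer.
print(tokenizer.encode("snowboarding"))
```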

Key takeaways

  • In the context of AI, pre-training describes the process of training a model on one task so that it forms parameters that can be reused for other tasks.
  • The model is first trained on a task or dataset, and the resulting parameters are then used to train a model on a different task or dataset. In essence, the model can perform a new task based on prior experience.
  • Three pre-training methods include Word2vec, GPT, and BERT. Each model has its own way of learning the data to make predictions.

Connected AI Concepts

AGI

Generalized AI consists of devices or systems that can handle all sorts of tasks on their own. The pursuit of generalized AI eventually led to the development of machine learning. As a subset of AI, machine learning (ML) uses computer algorithms that learn from data to automate actions. Without explicitly programmed rules, systems can learn and improve from experience, exploring large sets of data to find common patterns and formulate analytical models.

Deep Learning vs. Machine Learning

Machine learning is a subset of artificial intelligence where algorithms parse data, learn from experience, and make better decisions in the future. Deep learning is a subset of machine learning where numerous algorithms are structured into layers to create artificial neural networks (ANNs). These networks can solve complex problems and allow the machine to train itself to perform a task.

DevOps

DevOps refers to a set of practices that automate software development processes. It is a portmanteau of "development" and "operations", emphasizing how functions integrate across IT teams. DevOps strategies promote seamless building, testing, and deployment of products, aiming to bridge the gap between development and operations teams and streamline development altogether.

AIOps

AIOps is the application of artificial intelligence to IT operations. It has become particularly useful for modern IT management in hybridized, distributed, and dynamic environments. AIOps has become a key operational component of modern digital-based organizations, built around software and algorithms.

Machine Learning Ops

Machine Learning Ops (MLOps) describes a suite of best practices that help a business run artificial intelligence successfully. It consists of the skills, workflows, and processes to create, run, and maintain machine learning models that support various operational processes within organizations.

OpenAI Organizational Structure

OpenAI is an artificial intelligence research laboratory that transitioned into a for-profit organization in 2019. The corporate structure is organized around two entities: OpenAI, Inc., which is a single-member Delaware LLC controlled by the OpenAI non-profit, and OpenAI LP, which is a capped for-profit organization. OpenAI LP is governed by the board of OpenAI, Inc. (the foundation), which acts as a General Partner. At the same time, Limited Partners comprise employees of the LP, some of the board members, and other investors like Reid Hoffman's charitable foundation, Khosla Ventures, and Microsoft, the leading investor in the LP.

OpenAI Business Model

OpenAI has built the foundational layer of the AI industry. With large generative models like GPT-3 and DALL-E, OpenAI offers API access to businesses that want to develop applications on top of its foundational models while being able to plug these models into their products and customize them with proprietary data and additional AI features. On the other hand, OpenAI also released ChatGPT, developed around a freemium model. Microsoft also commercializes OpenAI's products through its commercial partnership.

OpenAI/Microsoft

OpenAI and Microsoft partnered up from a commercial standpoint. The history of the partnership started in 2016 and consolidated in 2019, with Microsoft investing a billion dollars into the partnership. It’s now taking a leap forward, with Microsoft in talks to put $10 billion into this partnership. Microsoft, through OpenAI, is developing its Azure AI Supercomputer while enhancing its Azure Enterprise Platform and integrating OpenAI’s models into its business and consumer products (GitHub, Office, Bing).

Stability AI Business Model

Stability AI is the entity behind Stable Diffusion. Stability AI makes money from its AI products and from providing AI consulting services to businesses. It monetizes Stable Diffusion via DreamStudio's APIs, while also releasing it open-source for anyone to download and use. Stability AI also makes money via enterprise services, where its core development team offers enterprise customers the chance to service, scale, and customize Stable Diffusion or other large generative models to their needs.
