
How Does Microsoft Copilot Work?

Microsoft Copilot is an orchestration and safety layer that enables Microsoft to integrate OpenAI’s technology into its suite of apps.

An essential thing to understand is that you can’t simply plug an OpenAI model into a product; you need to create an intermediate layer (a middle layer) between your application and the user, which sets the context to make the AI relevant.

For example, when Microsoft pushed ChatGPT into search, it didn’t do it by simply plugging the OpenAI APIs into search, as this would have had disastrous consequences.

Instead, they built an additional layer that worked as an intermediary between ChatGPT and the Bing AI search experience.

This layer was built for safety and constraints but also to ground the model by leveraging the index Bing has for search.

Similarly, Microsoft’s team has built the Microsoft 365 Copilot System, a processing and orchestration layer that integrates ChatGPT into its applications.

Like Bing AI’s Prometheus, which sits between OpenAI’s models and search, the Copilot System is a middle layer that integrates ChatGPT into Microsoft’s apps.

The Copilot System is made of three elements:

1. Microsoft 365 apps (Word, Excel, PowerPoint, Teams and more).

2. Microsoft Graph (all content and context, like email, files, meetings, chat, and calendars). This is a critical component for grounding the model: it translates the user’s prompt into something more relevant, so the app can execute commands without hallucinating.

3. A large language model (LLM): a creative engine able to parse the text and data coming from the apps.

Let me explain this in detail.

If you’re trying to build an app that potentially reaches millions of users or a tool intended for enterprise use cases, this layer is critical to making the app valuable!

It starts with a prompt from you in an app.

For example, the prompt might be a user asking PowerPoint to generate a slide about a given topic.

Yet, that prompt doesn’t directly translate into a command for ChatGPT/OpenAI.

Instead, Microsoft’s copilot takes over, preprocessing the prompt through an approach called grounding.

Put simply (as Microsoft explains), grounding improves the quality of the prompt, so you get relevant and actionable answers.

To make grounding possible, the Microsoft Graph plays a key role: Copilot makes a call to the Microsoft Graph to retrieve your business content and context.

Take the case of prompting Excel to tell you how your company’s sales are doing relative to competitors; if ChatGPT were unleashed without the Copilot layer, it would probably just make things up.

Instead, Microsoft’s Copilot System intermediates the prompt, telling ChatGPT to use the available context (your sales data) so that the answer is relevant, accurate, and not made up.

That’s what grounding does; it calls the Microsoft Graph to retrieve your business content and context.
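To make the idea concrete, here is a minimal sketch of grounding in Python. Everything here is illustrative, not Microsoft’s actual API: the `GRAPH_STORE` dictionary stands in for the Microsoft Graph, and `fetch_graph_context` and `ground_prompt` are hypothetical helper names.

```python
# Stand-in for the Microsoft Graph: maps a user to their business content
# and context. In the real system this would be a call to the Graph API.
GRAPH_STORE = {
    "alice": {
        "sales": {"Q1": 120_000, "Q2": 135_000},
        "competitor_sales": {"Q1": 110_000, "Q2": 150_000},
    }
}

def fetch_graph_context(user_id: str) -> dict:
    """Retrieve the user's business content and context (files, data...)."""
    return GRAPH_STORE.get(user_id, {})

def ground_prompt(user_id: str, raw_prompt: str) -> str:
    """Rewrite the raw prompt so the LLM answers from real context,
    not from whatever it can invent."""
    context = fetch_graph_context(user_id)
    return (
        f"Using only the business context below, {raw_prompt}\n"
        f"Context: {context}\n"
        "State only facts found in the context; do not invent numbers."
    )

print(ground_prompt("alice", "compare our sales to our competitors."))
```

The key point is that the user’s short prompt never reaches the model as-is: the middle layer enriches it with retrieved context and explicit constraints first.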

How would this work in practice?

Let me show you how we used a similar approach to build a web app.

Take the case of a user who can talk to an Excel spreadsheet and ask it questions about the data.

The user might use a prompt like “give me a comparison of sales data between us and our competition.”

That seems a good prompt, doesn’t it?

Well, yes, but processed as is, it can still generate an inaccurate answer.

Microsoft’s Copilot System translates the prompt into something like

“Access my sales data to provide a comparison relative to competitors, but make sure you use the context in a way that is relevant and accurate, stating only facts!”

This is the prompt that is passed to ChatGPT to provide a secure and accurate answer!

The flow is similar to what you may have experienced with ChatGPT or Bing Chat, with the Copilot layer sitting in between.

Copilot takes the response from the LLM and post-processes it. This post-processing includes additional grounding calls to the graph.

Thanks to the graph, Copilot can perform checks, including security, compliance, and privacy reviews, as well as command generation.

A good chunk of the intermediation Microsoft performs to make ChatGPT usable in its apps is to “reframe” the user’s prompt, grounding it in context (through the graph) so that it becomes more relevant.

In short, the user doesn’t prompt the LLM directly; Microsoft Copilot intermediates, translating the user’s prompt into a more relevant one. That is most of the grounding Microsoft does through its Copilot.
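The full loop (pre-process, call the model, post-process) can be sketched as follows. This is a toy illustration under my own naming, with the OpenAI call stubbed out; the compliance check is a deliberately simplistic stand-in for the security, compliance, and privacy reviews described above.

```python
# Toy deny-list standing in for real compliance/privacy review rules.
BLOCKED_TERMS = {"password", "ssn"}

def pre_process(user_prompt: str, context: str) -> str:
    """Grounding step: fold retrieved context into the prompt."""
    return (
        f"Context: {context}\n"
        f"Task: {user_prompt}\n"
        "Answer using only the context above; state only facts."
    )

def call_llm(prompt: str) -> str:
    """Stub standing in for the actual OpenAI API call."""
    return f"Model answer grounded on -> {prompt.splitlines()[0]}"

def post_process(response: str) -> str:
    """Post-processing step: review the response before the app sees it."""
    lowered = response.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[response withheld by compliance check]"
    return response

def copilot_pipeline(user_prompt: str, context: str) -> str:
    """Pre-process -> LLM -> post-process, as one middle-layer round trip."""
    grounded = pre_process(user_prompt, context)
    return post_process(call_llm(grounded))

print(copilot_pipeline("compare our sales to competitors", "Q1 sales: 120k"))
```

However simplified, the structure matches the article’s description: the app never talks to the model directly, and the model’s raw output never reaches the app unreviewed.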

Sorry if I got too technical, but that is how you build a middle-layer application!

This process is critical whether you’re building consumer-level or enterprise-level applications.

Read Next: History of OpenAI, AI Business Models, AI Economy.

Connected Business Model Analyses

AI Paradigm


Pre-Training


Large Language Models

Large language models (LLMs) are AI tools that can read, summarize, and translate text. This enables them to predict words and craft sentences that reflect how humans write and speak.

Generative Models


Prompt Engineering

Prompt engineering is a natural language processing (NLP) concept that involves discovering inputs that yield desirable or useful results. Like most processes, the quality of the inputs determines the quality of the outputs in prompt engineering. Designing effective prompts increases the likelihood that the model will return a response that is both favorable and contextual. Developed by OpenAI, the CLIP (Contrastive Language-Image Pre-training) model is an example of a model that utilizes prompts to classify images and captions from over 400 million image-caption pairs.

OpenAI Business Model

OpenAI has built the foundational layer of the AI industry. With large generative models like GPT-3 and DALL-E, OpenAI offers API access to businesses that want to develop applications on top of its foundational models, plug these models into their products, and customize them with proprietary data and additional AI features. On the other hand, OpenAI also released ChatGPT, developed around a freemium model. Microsoft also commercializes OpenAI’s products through its commercial partnership.

OpenAI/Microsoft

OpenAI and Microsoft partnered up from a commercial standpoint. The history of the partnership started in 2016 and consolidated in 2019, with Microsoft investing a billion dollars into the partnership. It’s now taking a leap forward, with Microsoft in talks to put $10 billion into this partnership. Microsoft, through OpenAI, is developing its Azure AI Supercomputer while enhancing its Azure Enterprise Platform and integrating OpenAI’s models into its business and consumer products (GitHub, Office, Bing).

Stability AI Business Model

Stability AI is the entity behind Stable Diffusion. Stability makes money from its AI products and from providing AI consulting services to businesses. Stability AI monetizes Stable Diffusion via DreamStudio’s APIs, while also releasing it open source for anyone to download and use. Stability AI also makes money via enterprise services, where its core development team offers enterprise customers the chance to service, scale, and customize Stable Diffusion or other large generative models to their needs.

Stability AI Ecosystem

