Niki Parmar

Niki Parmar is a former member of the Google Brain team who studies novel techniques in deep learning. The AI researcher also works on applying machine learning (ML) and generative models to tasks such as image generation and language modeling.

Below, we’ll discuss the senior research scientist’s professional history to date.

Education and early career

Parmar earned a Bachelor of Engineering at the Pune Institute of Computer Technology in India. Whilst there, she became interested in AI and ML after taking massive open online courses (MOOCs) from Andrew Ng and Peter Norvig. 

She then completed a Master's degree in Computer Science at the University of Southern California. At USC, Parmar worked under Professor Morteza Dehghani and explored social science questions with ML and big data.

Google Research

Parmar joined Google Research in 2015 and worked as a software engineer for just over two years. Initially, she worked on end-to-end deep learning systems that offered alternative ways to solve natural language processing (NLP) problems.

As Parmar learned to harness the power of transferable embeddings and end-task optimization, she became inspired to transition into a pure research role and contribute to advances in ML.

Google Brain

Parmar joined Google Brain as a Research Software Engineer in October 2017. There, her research explored self-attention and how other inductive biases could be used to improve machine learning algorithms, and she contributed to deep learning research for language understanding and vision.

Together with Ashish Vaswani, Noam Shazeer, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin, Parmar co-authored the seminal self-attention paper "Attention Is All You Need".
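
At the core of that paper is scaled dot-product self-attention, in which each position in a sequence builds its representation as a weighted sum of every other position. As a rough illustration, here is a minimal NumPy sketch of a single attention head; the dimensions and variable names are illustrative only, and the full transformer adds multi-head projections, positional encodings, and feed-forward layers.

```python
# Minimal single-head scaled dot-product self-attention (illustrative sketch).
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # queries, keys, values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # scaled pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over key positions
    return weights @ V                               # weighted sum of values

# Toy usage: a sequence of 4 tokens with model width 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # -> (4, 8)
```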

Adept AI

Parmar left Google Brain in November 2021 and announced the move soon after via Twitter: "I'm grateful for the 6+ years I spent there, the peers and friends that are inspiring and the opportunities to push on some of the most important problems in AI."

She then co-founded Adept AI Labs with Ashish Vaswani, David Luan, and various former employees of Google Brain and DeepMind. The start-up was described as an AI research and product lab with the intention of building useful general intelligence.

Luan had worked with both Parmar and Vaswani at Google, where the trio is credited with the development of the transformer architecture. Parmar currently serves as Adept's CTO.

Key takeaways:

  • Niki Parmar is a former member of the Google Brain team who studies novel techniques in deep learning. The AI researcher also works on applying machine learning and generative models to tasks such as image generation and language modeling.
  • Whilst an engineering student at the Pune Institute of Computer Technology, Parmar became interested in AI and ML after taking massive open online courses (MOOCs) from Andrew Ng and Peter Norvig. 
  • Parmar joined Google Research in 2015 and worked as a software engineer for just over two years. She then worked at Google Brain for another four years before co-founding the general intelligence start-up Adept AI. 

Connected Business Model Analyses

AGI

Generalized AI consists of devices or systems that can handle all sorts of tasks on their own. The push toward generalized AI eventually led to the development of machine learning. As a subset of AI, machine learning (ML) uses computer algorithms to create programs that automate actions. Without actions being explicitly programmed, systems can learn from data and improve over time, exploring large data sets to find common patterns and formulate analytical models.
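
To make the "learning from data" part concrete, the hypothetical snippet below fits a standard scikit-learn classifier to a handful of invented examples: no decision rule is written by hand, the model infers it from the labeled data.

```python
# Illustrative sketch: the decision rule is learned from examples,
# not explicitly programmed. Features and labels here are invented.
from sklearn.linear_model import LogisticRegression

X = [[20, 0], [25, 1], [47, 1], [52, 0], [46, 1], [56, 1], [55, 0], [60, 1]]
y = [0, 0, 1, 1, 1, 1, 1, 1]              # hypothetical binary labels

model = LogisticRegression().fit(X, y)    # pattern is inferred from the data
print(model.predict([[30, 1], [58, 0]]))  # predictions for unseen examples
```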

Deep Learning vs. Machine Learning

Machine learning is a subset of artificial intelligence where algorithms parse data, learn from experience, and make better decisions in the future. Deep learning is a subset of machine learning where numerous algorithms are structured into layers to create artificial neural networks (ANNs). These networks can solve complex problems and allow the machine to train itself to perform a task.
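
The "layers" in that definition can be shown with a tiny forward pass: each layer applies a learned transformation to the output of the previous one. The NumPy sketch below uses random weights purely for illustration; in a real network they would be learned from data via backpropagation.

```python
# Illustrative two-layer neural network forward pass (random, untrained weights).
import numpy as np

def relu(x):
    return np.maximum(0.0, x)                        # non-linearity between layers

rng = np.random.default_rng(42)
x = rng.normal(size=3)                               # one input with 3 features
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)        # hidden layer: 3 -> 4 units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)        # output layer: 4 -> 1 unit

hidden = relu(W1 @ x + b1)                           # layer 1 transforms the raw input
output = W2 @ hidden + b2                            # layer 2 produces the prediction
print(output)
```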

DevOps

DevOps refers to a set of practices used to automate software development processes. It is a combination of the terms "development" and "operations" to emphasize how these functions integrate across IT teams. DevOps strategies promote seamless building, testing, and deployment of products, aiming to bridge the gap between development and operations teams and streamline development overall.

AIOps

AIOps is the application of artificial intelligence to IT operations. It has become particularly useful for modern IT management in hybridized, distributed, and dynamic environments. AIOps has become a key operational component of modern digital-based organizations, built around software and algorithms.

Machine Learning Ops

Machine Learning Ops (MLOps) describes a suite of best practices that help a business run artificial intelligence successfully. It consists of the skills, workflows, and processes needed to create, run, and maintain machine learning models that support various operational processes within organizations.

OpenAI Organizational Structure

OpenAI is an artificial intelligence research laboratory that transitioned into a for-profit organization in 2019. The corporate structure is organized around two entities: OpenAI, Inc., a single-member Delaware LLC controlled by the OpenAI non-profit, and OpenAI LP, a capped, for-profit organization. OpenAI LP is governed by the board of OpenAI, Inc. (the foundation), which acts as the General Partner. At the same time, the Limited Partners comprise employees of the LP, some of the board members, and other investors such as Reid Hoffman's charitable foundation, Khosla Ventures, and Microsoft, the leading investor in the LP.

OpenAI Business Model

OpenAI has built the foundational layer of the AI industry. With large generative models like GPT-3 and DALL-E, OpenAI offers API access to businesses that want to develop applications on top of its foundational models, plugging these models into their products and customizing them with proprietary data and additional AI features. OpenAI also released ChatGPT, developed around a freemium model. Microsoft also commercializes OpenAI's products through its commercial partnership.

OpenAI/Microsoft

OpenAI and Microsoft have partnered from a commercial standpoint. The partnership started in 2016 and was consolidated in 2019, with Microsoft investing a billion dollars into it. It is now taking a leap forward, with Microsoft in talks to put $10 billion into the partnership. Through OpenAI, Microsoft is developing its Azure AI Supercomputer while enhancing its Azure Enterprise Platform and integrating OpenAI's models into its business and consumer products (GitHub, Office, Bing).

Stability AI Business Model

Stability AI is the entity behind Stable Diffusion. Stability makes money from its AI products and from providing AI consulting services to businesses. Stability AI monetizes Stable Diffusion via DreamStudio's APIs, while also releasing it open source for anyone to download and use. Stability AI also makes money via enterprise services, where its core development team offers enterprise customers the chance to service, scale, and customize Stable Diffusion or other large generative models to their needs.

