- Lukasz Kaiser is a Polish machine learning researcher who co-designed neural models for machine translation, parsing, and other algorithmic and generative tasks. He also played a significant part in the development of Google’s TensorFlow System.
- Kaiser joined Google Brain as a senior software engineer in October 2013 before moving to a staff research scientist role in mid-2016. He started at a time when neural approaches to NLP were still new and made several key advancements with attention-enhanced models.
- Kaiser left Google Brain in June 2021 and joined OpenAI as a researcher soon after. There, he has been involved with the development of ChatGPT and, more recently, the GPT-4 multimodal LLM.
| Category | Details |
|---|---|
| Full Name | Łukasz Kaiser |
| Place of Birth | Poland |
| Nationality | Polish |
| Education | Ph.D. in Computer Science from RWTH Aachen University |
| Early Career | Researcher in mathematics and computer science, focusing on computational models and algorithms |
| Major Companies | Google Brain, OpenAI |
| Positions | Research Scientist at Google Brain, Research Scientist at OpenAI |
| Business Milestones | – 2013: Joined Google Brain, focusing on natural language processing and machine learning. – 2015: Contributed to TensorFlow, Google’s open-source machine learning framework, released that year and now widely used in AI research and industry. – 2017: Co-authored the seminal paper “Attention Is All You Need,” which introduced the Transformer model, revolutionizing natural language processing and enabling later advances such as BERT and GPT. – 2017: Co-released the Tensor2Tensor (T2T) repository to make deep learning research more accessible and reproducible. – 2021: Joined OpenAI as a researcher, focusing on large-scale language models. – 2022–2023: Contributed to ChatGPT and to GPT-4, including work on pretraining data and long-context capabilities. |
Education
Kaiser completed his Ph.D. in Computer Science at RWTH Aachen University in Germany in 2008. His thesis centered on algorithmic model theory and demonstrated the deep interplay between logic and computability in automatic structures.
In 2009, Kaiser won the E.W. Beth Dissertation Prize, awarded for outstanding dissertations in logic, language, and information, and spent roughly the next two years in postdoctoral research at the university.
In October 2010, Kaiser became a chargé de recherche (permanent research scientist) at the French National Centre for Scientific Research (CNRS) in Paris, where he continued his work on logic, games, and artificial intelligence.
Google Brain
Kaiser joined Google Brain as a senior software engineer in October 2013 before moving to a staff research scientist role in mid-2016.
He started at Google at a time when neural approaches to NLP were still new and full of unknowns. Kaiser later explained: “When neural networks first came out, it’s built for image recognition to process inputs with the same dimension of pixels. Sentences are not the same as images.”
While Ilya Sutskever, Oriol Vinyals, and Quoc Le proposed a solution in their 2014 paper “Sequence to Sequence Learning with Neural Networks,” the model was far from perfect: compressing an entire input sentence into a single fixed-length vector loses information, so performance degraded notably on long sentences.
Soon after, Kaiser and his colleagues proposed attention-enhanced models that let the network focus on the most relevant words in a sentence, achieving state-of-the-art results. This work later laid the foundation for Google Neural Machine Translation (GNMT) – an end-to-end learning system for automated translation now used in Google Translate.
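The core attention idea described above can be illustrated with a minimal scaled dot-product sketch in plain Python. This is a toy illustration of the general mechanism, not Kaiser's actual implementation; the vectors and names are invented for the example.

```python
import math

def softmax(xs):
    # Subtract the max before exponentiating for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query.

    Scores each key against the query, normalizes the scores with
    softmax, and returns the weighted average of the values.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim)]

# Three "words", each with a 2-d key and value; the query is most
# similar to the first and third keys, so their values dominate.
keys   = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = attention([2.0, 0.0], keys, values)
```

Because the weights are a softmax over similarity scores, every word contributes something, but the ones most relevant to the query contribute most, which is what lets such models "pay more attention to the keywords in a sentence."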
TensorFlow library
Kaiser has also been a key contributor to the development of Google’s open-source TensorFlow library for large-scale machine learning. The library is now one of the world’s most widely used ML frameworks and lowers many of the barriers developers and researchers face when building their first model.
Kaiser and his team also released the Tensor2Tensor (T2T) repository on GitHub. In addition to making deep learning more accessible, the release was intended to accelerate machine learning research.
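One of the obstacles such libraries remove is gradient bookkeeping: frameworks like TensorFlow record operations and apply the chain rule automatically. The toy sketch below, in plain Python, shows the kind of reverse-mode bookkeeping that is otherwise done by hand; it is purely illustrative and does not use TensorFlow's actual API.

```python
class Node:
    """Toy scalar autodiff node: records how a value was computed."""

    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (parent_node, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Node(self.value + other.value,
                    ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Node(self.value * other.value,
                    ((self, other.value), (other, self.value)))

    def backward(self, seed=1.0):
        # Reverse-mode sweep: accumulate chain-rule contributions
        # from this node back to every input it depends on.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Node(3.0)
y = Node(4.0)
z = x * y + x      # z = x*y + x = 15
z.backward()       # fills in x.grad = y + 1 = 5, y.grad = x = 3
```

A framework handles this recording and the backward sweep for arbitrarily large models on accelerators, which is precisely the boilerplate a newcomer would otherwise have to write before training a first model.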
OpenAI
Kaiser left Google Brain in June 2021 and joined OpenAI as a researcher soon after. There, he has been involved in the development of ChatGPT and, more recently, the GPT-4 multimodal LLM.
On OpenAI’s website, Kaiser is listed as having worked on the pretraining data for GPT-4 and as a core contributor to the model’s long-context capability, which can now handle over 25,000 words of text.
Kaiser is also part of a substantial team of researchers who worked on reinforcement learning and alignment.
Key Highlights:
- Background and Education: Lukasz Kaiser is a machine learning researcher from Poland who has contributed significantly to neural models for various tasks, including machine translation and parsing. He completed his Ph.D. in Computer Science at RWTH Aachen University in 2008, focusing on algorithmic model theory and its connections to logic and computability.
- Google Brain Contributions: Kaiser joined Google Brain in 2013 as a senior software engineer and later became a staff research scientist. He played a pivotal role in the development of neural models for natural language processing (NLP), particularly in the field of machine translation. His work on attention-enhanced models improved NLP tasks and laid the groundwork for the Google Neural Machine Translation (GNMT) system, which powers Google Translate.
- TensorFlow Development: Kaiser was also instrumental in the development of Google’s open-source machine learning library, TensorFlow. He contributed to making the library more accessible and user-friendly. His team released the Tensor2Tensor (T2T) repository on GitHub to help researchers and developers tackle machine learning challenges more effectively.
- Move to OpenAI: In June 2021, Kaiser transitioned from Google Brain to OpenAI. He became involved in the development of ChatGPT and played a crucial role in the creation of GPT-4, a multimodal language model. He focused on pretraining data and contributed to the model’s long-context capabilities, allowing it to handle extensive text input.