In 2018, when most people think about the web, they relate it to the major products and sites that have entered our everyday routine. One of those is Google. Yet they neglect the fact that Google covers only a tiny fraction of what is called the visible web.
Before we get to how Google's business evolved over the years, let's first look at how the web evolved.
- A quick history of the Internet
- When the only computers were people
- The Turing machine
- When computers were as big as a house
- From batch processing to time sharing
- From the ARPANET to the chaotic web
- The World Wide Web
- Search engines take over
- What Web are you looking for through Google?
- What is the indexable web?
- What is the visible web?
- What is the deep web?
- Enter Google
- When search seemed to be a solved problem, PageRank showed they had just scratched the surface
- When Page and Brin saw advertising as the worst business model for search
- Google revenues start to take off, yet the company would take a few years to become a “unicorn”
- Google's deal with AOL as the first traction phase
- The missing piece of Google's business model: the launch of AdSense (Google network websites)
- Google embraced the whole web with its business model
- Google gains traction, and it goes from less than a billion to over ten billion in revenues in three years
- Google has become Alphabet
- Where does Google stand today?
- Google and the rise of AI
- The rise of voice search and the battle ahead
A quick history of the Internet
Today we all take the internet for granted. That is not surprising, considering that over 3.5 billion people are connected, and that number grows each second, faster and faster. Almost like a rocket heading undisputed toward space, the web is conquering ever larger shares of the world population.
Indeed, as of today, around 46% of the world population has access to the internet. To put that into context, consider that as recently as 1995, less than 1% of the world population was connected! This means that over the last two decades the web's user base has grown at a double-digit rate each year.
And although that rocket, which we call the "web," will sooner or later stop, the question that pops into the mind of any modern human is: how long will it take for the web to be used by 100% of the world population?
Even though this question is legitimate, it is not the most important one. We make a case for the benefits the web offers to its users. We may be almost horrified when we hear that China's government censors the internet. We may also find it ridiculous when the French government states that workers have the right to be disconnected outside working hours.
Yet instead of thinking about the web from a quantitative standpoint (how many users will join?), the real, most crucial question is: how will the web work in the future? In other words, we want to make a qualitative assessment.
So far we have managed to create web technologies that, although they bettered our lives, also made us more stressed, confused, and dehumanized. Is this where we are heading? Hopefully not. In this article, I want to give you a very quick summary of the web. This is not a complete history, but just my point of view.
When the only computers were people
We all take for granted that a computer is a boxy device that sits on your desk and is able to perform a variety of tasks. From computation to coding and video editing, computers nowadays are indispensable to any human. Yet the term "computer" has been in use since the early 17th century. How is that possible?
Before the machines that sit on our desks were invented, all the tasks related to computing were delegated to humans. In fact, what might seem a trivial task actually played a crucial role in human societies for thousands of years.
From Roman numerals to the invention of logarithms (by the Scottish mathematician John Napier, see "Computing Before Computers"), computing was made easier and easier. While in medieval societies computing was mainly an accounting task, which made possible the birth and rise of empires and powerhouses (like the Medici family in Florence and the Rothschild family in Europe), it eventually became more and more important in the scientific and military fields.
The first real attempt at substituting a machine for a human to perform computational tasks was made by Charles Babbage in 1828. The seed was planted.
The Turing machine
What we want is a machine that can learn from experience. Alan Turing, London 1947
Even though the quote above was pronounced in 1947, it still sounds revolutionary! It came from Alan Turing, the father of modern computation. Even today it is hard to think of a machine as "intelligent," yet Alan Turing thought deeply about machine intelligence more than three-quarters of a century ago.
His idea was simple but extremely powerful: if we made machines able to learn from experience and to solve problems using rule-of-thumb principles fed to them, those machines could become extremely useful to humanity. In short, by making a machine work through a heuristic, a shortcut, that same machine could solve a problem far faster than would otherwise have been possible.
It was the birth of the theoretical framework behind modern computers.
When computers were as big as a house
Time is running out. In an hour or so, you have the presentation of the quarterly financials. Your boss is waiting for you in his office. You are about to panic. Yet you take a deep breath, relax, and open an Excel spreadsheet.
Within that file there are hundreds of tabs, tables, and arrays. With a plethora of Excel formulas, from IF statements to VLOOKUP, you are ready to summarize the huge amount of data in a nice and clean pivot table. You are ready to rock!
That same file, saved on your desktop computer on a storage device that weighs less than a kilogram, is something you take for granted. Yet it took decades for computers to become as useful to humans as Alan Turing had first imagined.
Early computers were not the slim and light machines we are used to, but vast and voluminous machines that occupied entire rooms.
Thanks to Turing's new theoretical framework, the first computers were created. It was 1943, in the middle of the Second World War, and two huge battles were going on. One was visible to everyone: the war on the battlefields between the Allied soldiers and Germany.
The other battle was far less visible to common people, yet of extreme importance: the competition between the Allied intelligence services and the Germans. It was not fought with weapons but through the rise of a new technological tool, the modern computer.
In that scenario, in 1943, the British engineer Tommy Flowers developed the Colossus, a computer used by the British to decrypt German messages. Not long after, Eckert and Mauchly of the University of Pennsylvania developed the ENIAC. This machine looked more like a modern supercomputer: it occupied 1,800 square feet (167 square meters), contained 18,000 vacuum tubes, and weighed almost 50 tons.
And yet, to perform the same calculations you can run on an 11-inch MacBook Air, it would probably have taken many ENIAC computers combined. In other words, imagine an entire castle occupied by those first digital computers, only to perform the calculations you can now do in a simple Excel spreadsheet.
From batch processing to time sharing
Today programmers around the world write their lines of code, press the enter key, and that's it: the computer executes the command almost instantly. Yet that is not how things worked back in the 1950s. At that time, computers performed one task at a time.
Those huge machines required a cooling system so powerful that it made sense to seal them in a dedicated room. Those computer rooms were accessible only to a few people, called "operators." As Paul Allen, co-founder of Microsoft, narrates in his book "Idea Man," programmers had to use a keypunch machine to convert code into punch cards.
Each punch card corresponded to one line of code. The punch cards were handed to the operators, who, according to their schedules and priorities, fed them into those huge machines to be eventually executed.
This meant that if a single punch card contained one error, or was bent, the programmer's whole work was invalidated. Worse, the programmer wouldn't know it for days. This system was called "batch processing," and it made programmers' lives miserable.
It is not surprising, then, that a new system came out only a few years later, in 1957. Rather than having those computers controlled only by a few operators, a remote connection was created that allowed each programmer to communicate with the computer directly. Finally, programmers could write their code without relying on punch cards and operators.
Therefore, multiple users could work on the same computer. "Time-sharing" was the revolutionary new system that allowed a leap forward in computer programming: at last, a computer could "communicate" with multiple users. Yet another step, one that would revolutionize computing and therefore humanity, was the ability of computers to communicate with each other. How?
From the ARPANET to the chaotic web
As Kevin Kelly would put it, computers didn't become interesting until they connected to the internet. The internet of today is not that of yesterday; it evolved over a few decades. Yet Kelly's point is correct.
Before the web, computers were giant, boring machines that performed clerical tasks. For computers to really make a difference in people's lives, it took the advent of the web. But why did the web take off in the first place?
The World Wide Web
In less than a decade, the world wide web exploded! Today more than a billion websites comprise it!
How did it all start? Put very shortly, Sir Tim Berners-Lee, in a side project he was working on while at CERN in Geneva, figured out he could connect web pages with what we all know today as hypertext. He then came up with a protocol, born of an extreme need to give the web a standard. As Tim Berners-Lee later said:
I just had to take the hypertext idea and connect it to the Transmission Control Protocol and domain name system ideas and—ta-da!—the World Wide Web … Creating the web was really an act of desperation, because the situation without it was very difficult when I was working at CERN later. Most of the technology involved in the web, like the hypertext, like the internet, multifont text objects, had all been designed already. I just had to put them together. It was a step of generalising, going to a higher level of abstraction, thinking about all the documentation systems out there as being possibly part of a larger imaginary documentation system.
However, surfing the web was still limited, because you could only go from one page to the next through links: the effort it took to find what you were looking for was massive.
Search engines take over
Even though this chaotic web had in some ways been tamed by Sir Berners-Lee's protocol, it was still a confusing place. That is why many ventured out to find a way to search through its pages and surface content matching specific queries.
This idea led, at the beginning of the 90s, to the creation of search engines. In short, these were websites that let users scan the web pages available at the time to find what they were looking for, using keywords.
That is, until two young fellows, Ph.D. students at Stanford University, came up with an algorithm able to rank web pages using a system similar to the one used for research papers. In short, one of the most powerful ways to assess the popularity of a research paper is to look at the citations it receives from other publications.
That citation mechanism was applied to ranking web pages, with citations represented by links. A website receiving a link from another website would receive a so-called backlink.
The more quality backlinks a website received, the higher it could rank in the SERP. Backlinks are still the backbone of the web. However, on that spine, a new web blossomed.
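The citation idea above can be sketched as a toy version of PageRank. The four-site link graph, the damping factor, and the iteration count below are invented for illustration; this is a simplified sketch of the published algorithm, not Google's actual implementation or data.

```python
# Toy PageRank: a page's rank comes from the rank of the pages linking to it,
# so a backlink from an authoritative page is worth more than one from an
# obscure page. The link graph below is made up for illustration.
links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
    "d.com": ["c.com"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # Every page keeps a small base rank, then receives a share of the
        # rank of each page that links to it.
        new_rank = {page: (1 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

ranks = pagerank(links)
# c.com ends up on top: it receives the most backlinks (from a, b, and d).
```

Running this, `c.com` ranks first even though it has only one outgoing link, which is exactly the "citation" intuition: popularity is conferred by who links to you, not by what you link to.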
What Web are you looking for through Google?
When you surf the web through a commercial search engine like Google you are only scratching the surface:
What you see on Google or other online communities is just the so-called surface web, which may comprise a small percentage of the total internet.
Therefore, if we had to proceed on layers, the following would be the layers of the web:
Surface Web > visible and indexable web
Deep web > non-indexable web and dark web
Let’s see the differences.
What is the indexable web?
Search engines use algorithms to rank web pages on the surface web. However, before ranking those pages, a commercial search engine has to build an index.
To build that index, the search engine uses software called spiders (or crawlers) that crawl web pages. Those web pages contain hyperlinks, and by following those links, the web crawlers can index the web. However, what they index is only the so-called indexable web.
In fact, some web pages are not accessible to those crawlers. Each website can publish a file called robots.txt that instructs a web crawler how to behave on the site. In short, it says something like "do not crawl this page!"
All the pages accessible according to robots.txt form the indexable web.
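As a small sketch of how this works in practice, here is a minimal robots.txt parsed with Python's standard `urllib.robotparser`; the example.com rules and URLs are invented for illustration.

```python
# A minimal robots.txt and how a well-behaved crawler checks it before
# fetching a page. The site and rules below are invented.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A polite spider consults the rules before fetching each URL.
print(parser.can_fetch("*", "https://example.com/blog/post"))  # True
print(parser.can_fetch("*", "https://example.com/private/x"))  # False
```

Pages the crawler is told to skip never make it into the index, which is one reason the indexable web is smaller than the web as a whole.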
What is the visible web?
When you type something into Google's search bar, you're not accessing the whole web but only Google's "point of view" of the internet, or if you like, of the world.
Indeed, while Google's bots crawl all the indexable pages, that doesn't mean those pages will be shown to users; quite the opposite. Google's search algorithm performs a sort of "censorship" of those pages, deciding what is relevant and what is not. That is how you get the results you want.
The results you get from Google, from any other commercial search engine, but also from online communities such as Reddit, Facebook, and so on, are only partial versions of the indexable web. That is the visible web.
In short, the main difference between the indexable and the visible web lies in the "censorship," or "selection," applied by an algorithm to decide what users can see.
What is the deep web?
Everything outside the indexable web is part of the deep web. That includes pages protected by passwords, membership pages, but also pages that, for instance, companies' websites do not show because they are not useful to their users.
There is a subset of the deep web called the dark web, where you can find anything from political activists to smugglers. That part of the web is accessible through ad hoc tools like the Tor browser.
We’re in an era of great inspiration and possibility, but with this opportunity comes the need for tremendous thoughtfulness and responsibility as technology is deeply and irrevocably interwoven into our societies.
This is what Sergey Brin said in the 2017 Alphabet founders’ letter.
When search seemed to be a solved problem, PageRank showed they had just scratched the surface
When Page and Brin managed to create a search engine that was 10x better than competing engines, it became clear that search was anything but a solved problem. At the end of the 1990s, Page came up with his algorithm, PageRank, which managed to rank the entire web based on the relevance and authoritativeness of the web pages it indexed.
It took off right away! It was at that point that many of Google's competitors understood that search was still at an embryonic stage. By then, Google had already taken over the search market, yet revenue was still far away.
When Page and Brin saw advertising as the worst business model for search
Back in the day, Brin and Page didn't hide their resentment toward the advertising business model, which was the prevalent model for search. Indeed, in the paper "The Anatomy of a Large-Scale Hypertextual Web Search Engine," where Page and Brin presented their first prototype of Google, with a full-text and hyperlink database of at least 24 million pages, they explained in a paragraph dedicated to advertising: "We expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers."
The main issue they had with advertising was that it was biased and caused a lot of spam in search results. Indeed, when they met Bill Gross, founder of GoTo (which would later become Overture), the encounter might not have been among the most cordial. That's because Bill Gross had figured out that the market for advertising had massive potential, as he introduced an auction-based system in which businesses bid for placement, priced on performance and clicks.
However, this was still back when Page and Brin were two academics completing their Ph.D.s at Stanford University. The transition to becoming businessmen would not take long to arrive. Indeed, as venture money was running out, a plan B was needed.
In addition, as Google managed to rank advertising based on relevance (for instance, by ranking higher the ads that got more clicks), advertising became a viable option. As Larry Page pointed out in the first Google letter to shareholders:
Advertising is our principal source of revenue, and the ads we provide are relevant and useful rather than intrusive and annoying.
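The relevance-based ranking described above can be sketched as a toy ad auction that orders ads by bid multiplied by clickthrough rate, so a frequently clicked ad can beat a higher raw bid. The advertisers, bids, and rates below are invented, and this is a deliberate simplification of how any real ad system prices and ranks ads.

```python
# Toy relevance-weighted ad ranking: score = bid (per click) x clickthrough
# rate, i.e. expected revenue per impression. All numbers are invented.
ads = [
    {"advertiser": "A", "bid": 2.00, "ctr": 0.01},
    {"advertiser": "B", "bid": 1.00, "ctr": 0.05},
    {"advertiser": "C", "bid": 1.50, "ctr": 0.02},
]

def rank_ads(ads):
    # Sort by expected revenue per impression, highest first.
    return sorted(ads, key=lambda ad: ad["bid"] * ad["ctr"], reverse=True)

for ad in rank_ads(ads):
    print(ad["advertiser"], round(ad["bid"] * ad["ctr"], 3))
# B wins (0.05), then C (0.03), then A (0.02): relevance beats the raw bid.
```

The design intuition is that an ad users actually click is both more useful to them and more profitable per impression, which is how relevance and revenue were aligned.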
Google revenues start to take off, yet the company would take a few years to become a “unicorn”
By 2000, Google was already a key player in the search industry. However, it wasn't yet in the safe zone financially. Indeed, in 2000 Google made $20 million in revenues. Even though it had launched its AdWords network, which would allow it to speed up growth, Google's business model was still transitioning.
Some pieces of the puzzle were still missing. However, the first massive deal soon came through the door.
Google's deal with AOL as the first traction phase
By 2002, Overture was still a valid competitor to Google, yet it was losing ground. Overture had managed to grow thanks to a series of deals. One of the leading deals was with, at the time, one of the most successful portals: AOL. However, in May 2002 the agreement between AOL and Overture was set to expire.
It was time for a battle, one that would finally give Google its chance at a second stage of massive growth, first in user acquisition, then in revenues. As reported in the book "Googled: The End of the World As We Know It," Page told his head of business development and sales, Mr. Kordestani: "I want us to bid to win." Whether or not this story is apocryphal, there is no doubt that the AOL deal played a crucial role in Google's future growth.
The missing piece of Google's business model: the launch of AdSense (Google network websites)
Back in 2003, Google acquired Applied Semantics. As reported at the time on the Google blog:
Applied Semantics’ products are based on its patented CIRCA technology, which understands, organizes, and extracts knowledge from websites and information repositories in a way that mimics human thought and enables more effective information retrieval. A key application of the CIRCA technology is Applied Semantics’ AdSense product that enables web publishers to understand the key themes on web pages to deliver highly relevant and targeted advertisements.
Google was primarily targeting a technology from Applied Semantics called AdSense. It was the missing piece of the puzzle. With AdSense, Google could finally offer targeted ads within the websites of partners that joined the program. In short, Google would give businesses the chance to show their banners on the real estate of the blogs that had become the heart of the web in the 2000s. It would also allow those blogs to go from amateur projects to making some money via advertising. It was all tracked and based on the context of the page.
The proposition was quite compelling. As pointed out in a 2004 financial report, Google would "generate revenue by delivering relevant, cost-effective online advertising. Businesses use AdWords program to promote their products and services with targeted advertising. Also, the thousands of third-party websites that comprise our Google Network use our Google AdSense program to deliver relevant ads that generate revenue and enhance the user experience."
AdSense would become a critical part of the business.
Google embraced the whole web with its business model
At that stage, Google was ready to take off. I have pointed out time and time again that when all the pieces of a business model are in place, that's when a company is ready to take off for years to come. Back in 2003, when Google had finally fine-tuned its business model, it had three primary constituencies:
- Users: Google provided users with products and services that enabled them to find any information quickly
- Advertisers: the Google AdWords program, an auction-based advertising program, allowed businesses to deliver ads both to customers on Google sites (for instance, the search page) and through the Google Network (any blog or site that is part of the AdSense program)
- Websites: Google's free products, Google AdWords, and Google AdSense embraced the whole web. Users got information for free and quickly; businesses could make money by sponsoring their products on Google and via the Google Network; publishers could quickly monetize their content
Google gains traction, and it goes from less than a billion to over ten billion in revenues in three years
Once the business model had all the pieces it needed, growth became the norm. If anything, Page and Brin had to make sure Google did not implode from hypergrowth. Thus, the hardest challenge might have been managing hypergrowth that would continue for over two decades.
Google has become Alphabet
In 2015, Google restructured the company as Alphabet, with Google as a subsidiary. Beyond search, today Alphabet offers services like YouTube, Maps, Play, Gmail, Android, and Chrome to billions of people worldwide.
Where does Google stand today?
The Google business model is far more diversified today than it was back in 2000, though in 2017 advertising still represented 86% of its revenues. Google, now Alphabet, also devotes part of its revenues to bets that might become its next cash cow. Today those bets represent just over 1% of total Google turnover.
Google and the rise of AI
As reported in the 2017 founders' letter, Google now uses AI across several of its products to:
- understand images in Google Photos;
- enable Waymo cars to recognize and distinguish objects safely;
- significantly improve sound and camera quality in hardware;
- understand and produce speech for Google Home;
- translate over 100 languages in Google Translate;
- caption over a billion videos in 10 languages on YouTube;
- improve the efficiency of data centers;
- suggest short replies to emails;
- help doctors diagnose diseases, such as diabetic retinopathy;
- discover new planetary systems;
- create better neural networks (AutoML);
… and much more.
The rise of voice search and the battle ahead
The next major battle for Google will be voice search. As Google has become smarter, it has also managed to understand more and more of users' intent. Will Google manage to be the dominant player in this rising industry?