Reading Between the Lines
Quick thoughts on the search wars' opening moves
This letter discusses the first developments in the “search wars”, and what they might mean for our concerns about generative AI in education.1
Recent news, toots, and blogs have been overflowing with Microsoft’s plans to integrate generative AI into its struggling Bing search engine, and with Google’s plans to leverage its own technology stack through a search-engine enhancement called Bard. Of course, the current explosion of new AI-based services is a much broader topic, but search is where the money is,2 and this will determine developments in the near term. The search wars have begun. We are seeing the opening moves; we cannot tell how this will end, but we can get a better idea of where the battlegrounds will be.
I watched Google’s researchers present their vision at a press event in Paris (Google 2023), and enjoyed it as an exercise in reading between the lines. To begin, let’s briefly summarize what got us to the showdown between Bing and Bard.
No doubt, the opening move was made by Microsoft. Although Google’s technologies had made the game possible – through reinforcement learning,3 transformer architectures,4 and the like – and although the enabling technology received crucial ideas, manpower, and funding from Google investments, it was OpenAI, with Microsoft’s financial backing,5 who captured the world’s imagination through its computed conversations with over a hundred million users.6 Clearly, ChatGPT was not going to make money from teaching us how to unstick a peanut butter sandwich from our VCR, KJV style,7 but search would be another matter. And indeed, after brief rumours that a ChatGPT-powered version of Bing would drop in March, that event was moved ahead by a month, to February 7, to preempt a Google response. The new Bing is now available – for selected users, who will gently take it through a phase of RLHF (Reinforcement Learning from Human Feedback) – the magic sauce that shapes our experience with the AI.
This has potential, for three reasons. (A) ChatGPT’s Achilles heel is its tendency to generate “Schrödinger Facts”: superimposed truths that collapse into untruths. Curing it of such hallucinations by intersecting its statements with sourced information would make it infinitely more useful. (B) We are usually not interested in links when we search; we are interested in information. Giving us valid information, distilled from actual sources, would be a huge incentive to drive uptake. (C) Google’s market dominance is well earned. Over the years, their algorithms have continuously improved; I can’t remember a recent search for which I had to go to the second page of results or further to find what I was looking for. Can they be unseated? Doing so would require something disruptive enough to make an actual difference – like ChatGPT.
But there is another aspect, a strategic problem that has been widely overlooked. Google may not be able to respond to Bing in kind without damaging its core business. You see, once search results are condensed into aggregated, factual responses, the whole cacophony of twitching banner ads, subliminal affiliate links, and barking popup vendors becomes obsolete. But this is the source of Google’s revenues. Interfering with this ecosystem could damage the whole business model. Microsoft does not have this problem, since it earns only a minor share of its revenue this way in the first place. By proposing a different way to search, it may have made a move that leaves Google with only two options, both of them bad: lose revenue per search, or lose users. Which would you prefer?8
Of course, when there are only two bad options, you need to find the third one, and it may be hidden in the Paris event. What was talked about, and what was not?
Perhaps most remarkable about the Paris event was that Bard, the AI search-engine enhancement built on top of Google’s LaMDA model,9 was hardly present. Bard featured for only about three minutes of the 38-minute event (12:53).10 And not in much detail. We heard in passing that a “light” version of the LaMDA language model is behind it, which requires less computing power: this makes deployment less costly and more scalable – important points for the financial bottom line. But rather than showcase the fascination of natural language interactions, what we were shown was not inspiring, and always presented with an undertone of caution: to ensure “quality, safety, and groundedness” (15:18).11 This was most apparent when the discussion moved to NORA queries (No One Right Answer; 15:30), the minefield of socially charged questions and answers that can break a whole company’s reputation – and valuation – these days. Google chose “what are the best constellations to look for when stargazing” (15:56) as its paradigm for “a diverse range of opinions and perspectives” – and I could not possibly have come up with a less controversial example than night sky constellations either. Which tells us a lot about the priorities given to expectation and risk management.
It also tells us that Google is not interested in a head-to-head race between Bard and Bing. There is a bigger picture to pursue.
A prevalent subtext of the presentations was that the mode of our interactions with the services will change. Multimodal search, built around Google Lens technology, using text, voice, images, and video as input, is one pillar of this approach. Multimodal results – sites, images, and, again and again, AR (Augmented Reality) – are the other pillar.12
This contains an unspoken premise: users are expected – and enticed – to rely ever more on their phones, always present, always on. Prabhakar Raghavan, Google’s senior VP of Search, who hosted the event, made this absolutely clear: “Your camera is the next keyboard” (6:47);13 with more than 10 billion visual search interactions per month “the age of visual search is here”, and “you won’t have to type your queries”. The problem is, phones are designed for consumers of information, not for producers. This may align perfectly with commercial interests, but for us it is a rather dystopian perspective. It does however reveal the backbone of Google’s game plan: to leverage its strengths in data and technology integration.
Various modes of service integration took centre stage in the event. Here, AI would only appear under the hood – but the emergent vision is to run our experience of the world through assistive filters: maps, searches, AR, translation – and wherever Google did not think of something, they are opening the gates for others to step in (19:05). For a fee. Indeed, since Google’s most unique assets are not technology but data, that defines a unique position from which to compete. Will this benefit us? That is not certain. Consumer-level data is not of much concern to us, and whether we can hope for data integration in the realms of research and scholarship – i.e. not waxing about shopping options or synthesizing 3D models of sneakers, but linking genotypes to phenotypes, examining architectural styles through social needs, contrasting the Scottish Enlightenment with the Neo-Confucians – that remains to be seen. Though I’ll be the first to concede that sneakers matter more to more people, we would rather hope they go toward discovering the joy of thinking instead.
Why would this vision matter to anyone? Google used two paradigms to make its case for utility. One was directed at its clients: shopping choices, like purchasing furniture and clothing in alternate colours and styles; advice for selecting a new car; looking for a coffee shop and virtually exploring its inside – all these are signals to reassure industry partners that they matter, that Bard will not cut them off to compete with Bing. The other paradigm was directed at users: a message of social and cultural responsibility (19:10). Take translation as just one example of several: zero-shot machine translation – i.e. building translation tools that do not require translation pairs14 – was showcased as a breakthrough to support under-resourced and endangered languages. And much space was given to responsibility: Bard as the search assistant that is safe to use. That would be a concern in a K-12 context, but excessive filtering will drive users away, quickly.15 Thus the message to advertisers was: we won’t abandon you. And the message to users: we won’t be evil.
On this topic, there is more. A major potential stumbling block that concerns Google and Microsoft alike, and is quite relevant for scholarship as well, is the unsolved question of copyright: is this use of the training data “fair”? And who owns the generated text? At first I wondered why the organizers spent time demonstrating a funny AI model that had coloured blobs sing in classical voice and harmony. But then we were told something significant. The model was trained by professional singers. But when it is used, “what you hear aren’t the voices of the opera singers, but instead the neural network’s interpretation of what opera singing sounds like” (32:18). According to Google, the AI output is a form of interpretation, which is distinct from a copy: it should be seen as the result of learning from the data the algorithm has seen, not of learning the data itself. If this view is considered compelling, the use of training data or its generated results would be deemed to raise even fewer issues than a search engine’s use of data does.
Google has strengths in its ecosystem, and this presentation showed how it is positioning itself to leverage them: its applications are more diverse, it has data available, it has clients, it understands search better than anyone. Therefore, it will try to redefine what the search wars will be about, on its own terms: practical ways to integrate data, viable technology solutions in the backend, nurturing relationships with paying clients, ubiquitous engagement in a million everyday ways – and not necessarily making smarter AI. Yet I did not see how this becomes compelling for the kind of social interactions and cognitive tasks we have in the classrooms of today.
I am left with the uneasy feeling that both Bing and Bard do not build on the magic – whether black magic or white – that we felt when we first met ChatGPT. Connecting those open-ended conversations with real-world resources, that would be huge.
But I actually don’t think this means we wanted AI to give us better search at all. We wanted search to give us better AI.
CASWELL, Isaac and BAPNA, Ankur (2022). “Unlocking Zero-Resource Machine Translation to Support New Languages in Google Translate”. Google Research Blog. (link)
Google (2023-02-08). “Google Presents: Live from Paris”. Youtube live stream. (link)
Hub (2023-02-09). “Where is ChatGPT taking us? And do we want to follow?” Interview with Daniel KHASHABI. Johns Hopkins University. (link)
DASTIN, Jeffrey (2023-01-23). “Microsoft to invest more in OpenAI as tech race heats up”. Reuters. (link)
HU, Krystal (2023-02-02). “ChatGPT sets record for fastest-growing user base - analyst note”. Reuters. (link)
MEHDI, Yusuf (2023-02-07). “Reinventing search with a new AI-powered Microsoft Bing and Edge, your copilot for the web”. Official Microsoft Blog. (link)
PICHAI, Sundar (2023-02-06). “An important next step on our AI journey”. The Keyword. (link)
SILVER, David; HUBERT, Thomas; SCHRITTWIESER, Julian; ANTONOGLOU, Ioannis; LAI, Matthew; GUEZ, Arthur; LANCTOT, Marc; SIFRE, Laurent; KUMARAN, Dharshan; GRAEPEL, Thore; LILLICRAP, Timothy; SIMONYAN, Karen and HASSABIS, Demis (2018). “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play.” Science 362(6419): 1140–1144. (doi)
Statista (2023). https://statista.com
TAN, Huileng (2023-02-08). “I asked Microsoft's 'new Bing' to write me a cover letter for a job. It refused, saying this would be 'unethical' and 'unfair to other applicants.'” Business Insider. (link)
VASWANI, Ashish; SHAZEER, Noam; PARMAR, Niki; USZKOREIT, Jakob; JONES, Llion; GOMEZ, Aidan N; KAISER, Lukasz, and POLOSUKHIN, Illia (2017). “Attention Is All You Need.” arXiv. (doi)
Feedback, comments, and experiences are welcome at email@example.com .
Sentient Syllabus is a public good collaborative. To receive new posts you can enter your email for a free subscription. If you find the material useful, please share the post on social media, or quote it in your own writing. If you want to do more, paid subscriptions are available. They have no additional privileges, but they help cover the costs.
Cite: STEIPE, Boris (2023) “Reading Between the Lines”. Sentient Syllabus 2023-02-09 https://sentientsyllabus.substack.com/p/reading-between-the-lines .
For a few weeks, updates may be made to this newsletter, to include corrections and to reflect thoughts from comments and other feedback. After that period, it will remain unchanged, a DOI will be obtained, and this note will be removed.
This is analogous to how Google makes its sheets / docs / slides ecosystem available. Google benefits from recognition and reputation, and from data-collection, but Microsoft loses out on license fees.
Minutes:seconds refer to the live-stream video.
I would have expected this emphasis to mean that Google is in a stronger position – but first reported experiences appear to show that “quality, safety, and groundedness” is not a solved problem for either Bing or Bard.
Daniel Khashabi thinks that such modalities are a prerequisite for the integration of AI assistants with the physical world, and that forms of “ChatGPT with eyes and ears” will drive this cultural shift (Hub 2023).
This has obvious implications for writing and thinking tasks. As we all know, hand-writing leads to different thoughts from typing, which is again different from dictating, which is different from conversing. The medium matters tremendously. We need to pay more attention to this development.
An early Bing user reported that the algorithm had refused to write them a cover letter, citing ethical concerns (Tan 2023). However, such overreaching attempts to sanitize interactions do not only raise ethical concerns of their own; they are also a typical example of a failed understanding of the user-assistant hierarchy: we need machines that think with us, not for us. The claim that such models would be more “aligned” with human values (ibid.) could not possibly be more mistaken: the human value that overrides all others is autonomy.