Working as an infotrainer for an exhibition on data security at the Chamber of Labor here in Vienna 1 offers interesting insights into how people articulate their daily interaction with technology. Positioned at the intersection of theoretical approaches and subjective practice, I am always interested in how aware visitors are of their digitalized lives and how susceptible they are to them, since a spreading pessimism seems to dominate the current discourse on our daily use of technology.

Especially when talking to visitors about the Western techno-capitalist giant Google, almost everyone admits to frequently using its search engine for any information they desire: whether it's 'googling' an address or looking up something delicate – Google seems to be omnipresent in our lives. Some even claim the search engine operates as an outsourced, second brain 2, and we turn to its network-based archive whenever we need to acquire new information.

In fact, we deal with digital archives on a daily basis and are used to navigating and being navigated through vast amounts of information. As we tend to identify ourselves through the cultural and societal constructions of our realities, these archives of information and data shape us and our views and perspectives on the world surrounding us. Data-based archives like Google therefore occupy a dominant role in what we know and, hence, in how these forms of 'knowledge' produce new 'knowledge'. Such an imprint is easily illustrated by sayings like "if it's not on the first page of Google, it might as well not exist at all." 3

As we can see, the way we access information and knowledge has changed drastically over several decades due to digital technologies. In what American media scholar William Uricchio calls an 'algorithmic turn'4, automated recommendation systems occupy a central position in determining what appears to be of importance. But contrary to the assumption that technology is an equalizer – a tool free of human-made biases – algorithms and network-based technologies reinforce societal ideology. Based on predesigned programming and data mining, the search results we get are personally curated for us.

And this curation indeed has an appeal to it: in the name of what companies like Google or Facebook call a 'positive user experience', we are now fed information and content designed to be relevant and interesting to us. By offering an invisible helping hand, the algorithms help us navigate this tornado of data, initiating a faster and deeper connection between us, the world and everything it seemingly offers us. And who would not love to explore "our heritage, the beautiful locations and the art in this world?"5, as Amit Sood, director of Google's Cultural Institute and Art Project, asks in a TED talk held in 2016, concerning the app Google Arts & Culture, launched in 2011.
Presenting itself as a 'digital' museum with themed exhibitions and a collection of over six million high-quality digitized pieces, provided by – to recall Sood – over 1,000 institutions from 68 different countries, the app is not just a collection of art and culture; it also presents itself as a tool to experience them through the lens of machine learning. Tools like the 'virtual creators table', "where all these six million objects are displayed in a way for us to look at the connections between them", as Sood explains, or 'X Degrees', which builds visual bridges between two images by connecting them step by step through other artworks from the archive, allow users to access the content on a much more personalized and intimate level.
Shortly before asking the above-cited question, Sood claims that "the world is filled with incredible objects and rich cultural heritage" but that "most of the time, the world's population is living without real access to arts and culture." What Sood stresses here is that through apps like Google Arts & Culture, we can increase the pace of spreading information and content – as he doesn't tire of repeating – from all around the globe, all over the globe. But given the curated concept that underlies archives in general, in this case the Western, Silicon Valley-based Google, we need to reevaluate which perspective on arts and culture is integrated into their collection and what it essentially contains – despite any assertions by Sood.

The archives we construct and turn to are never neutral or objective – as French philosopher Michel Foucault establishes, the archive is more than a generic term for repositories of knowledge and information. Archives not only store; they encompass every condition under which knowledge and information are created 6. Thus, the ruling ideology actively shapes what becomes valuable and what does not. So to speak: what an archive stores and preserves for remembrance always simultaneously reflects what, from a certain point of view, simply isn't worth remembering.

The Google Arts & Culture app is no exception: although Sood emphasizes in his talk that the app cooperates with a global range of institutions, the archive is still very Eurocentric and fixated on Western, mostly white, male artists, mirroring an elitist and limiting view of art and culture. Looking at Google Arts & Culture's list of places 7, one can quickly see that the vast majority of objects come from the United States (2,469,166), the United Kingdom (192,196), France (174,236) and Germany (145,000), which offers a very distinct, biased view of the world and its valued arts and cultures. In Google's terms, diversity is caged within those institutions, as when Sood talks in the video about how well-represented the Dutch painter Vincent van Gogh is on the platform: "Thanks to the diversity of the institution, we have over 211 high-definition, amazing artworks by this artist, now organized in one beautiful view." While there is diversity within what an elitist Western group already considers 'valuable art', other cultures and regions – for example, Latin America – are not as strongly recognized and integrated into the collection, as Michael Nuñez states 8.

This prejudiced view ultimately came to the foreground through a new feature the app launched in 2018 called 'Art Selfie': the feature allowed users to upload a selfie and have it matched via facial recognition against the app's huge archive of portraits. Thanks to this new tool, the app immediately shot to the top of major app stores despite having been on the market for years. The algorithm analyzed patterns in the uploader's face to find similarities across digitized photos and paintings – or, as Google simply calls it: "Meet your match!"9
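Conceptually, such a matching feature can be reduced to a nearest-neighbor search: a face is converted into a numerical vector (an 'embedding'), and the portrait whose vector lies closest to the selfie's is returned as the match. Google's actual pipeline is not public; the following Python sketch – with toy three-dimensional vectors standing in for real face embeddings – only illustrates the principle under that assumption.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(selfie_embedding, portrait_embeddings):
    # Return the name of the portrait whose embedding is most similar
    # to the selfie's embedding.
    return max(portrait_embeddings.items(),
               key=lambda item: cosine_similarity(selfie_embedding, item[1]))[0]

# Toy data: hypothetical 3-dimensional "embeddings" standing in for
# the high-dimensional face vectors a real model would produce.
portraits = {
    "Portrait A": [0.9, 0.1, 0.0],
    "Portrait B": [0.1, 0.8, 0.3],
    "Portrait C": [0.2, 0.2, 0.9],
}
selfie = [0.85, 0.15, 0.05]
print(best_match(selfie, portraits))  # -> "Portrait A"
```

Whatever the real model looks like, the sketch makes the essay's point visible in code: `best_match` can only ever return something that is already in the portrait pool, so the result is bounded by the biases of the curated archive it searches.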

Despite its popularity, the tool quickly experienced a backlash, since it was an obvious example of art history's as well as the tech industry's biases and lack of diversity. As Catherine Shu puts it: "Many people of color found that their results were limited or skewed toward subservient and exoticized figures. In other words, it pretty much captured the experience of exploring most American or European art museums as a minority."10 Or as Nuñez observes in his self-experiment: "Hispanics have been practically whitewashed from the system."11 The algorithms that guide users and curate their experience remind us that technology is, after all, made by human beings and not based on what David Beer calls 'computational objectivity'.12 Shu sums it up: "Algorithms are only as good as their benchmark datasets, and those datasets reflect their creators' biases (conscious or not)."

Joy Buolamwini, the computer scientist and digital activist who founded the Algorithmic Justice League to prevent bias from being coded into software, describes this phenomenon as 'The Coded Gaze'13 – an algorithmic practice of "embedded views that are propagated by those who have the power to code systems."14 In the case of Google Arts & Culture, it is important to reflect on who is actually programming and establishing the code, given that Silicon Valley has an ongoing problem with a lack of diversity. The Coded Gaze therefore represents a curation within a curation: the algorithms, as semi-subjective actors, curate according to the same ideology and interests by accessing an already curated pool of data. By combining human actors and technical devices, the algorithms acquire a certain individuality, sometimes leading to randomness or glitches – in most cases not in favor of minorities. 15

This ultimately raises the question of what goal Google is pursuing, what the actual benefits are and what Google gets out of the project. Since we already know that user-generated data is used on the search engine to show individual users personalized advertisements for profit, one would assume that Google benefits from the project because the people interacting with the app generate an enormous amount of data. With this data, Google can improve its machine learning and facial recognition software, using it as training material. This circumstance already raised data-security concerns regarding the Art Selfie function:16 What happens to the thousands and thousands of selfies that get uploaded to the app? Don't they seem like perfect material for Google's facial recognition system? While Google denies storing the pictures in a database or using them for other purposes – something we will never actually know for sure, since a huge lack of transparency in late techno-capitalism prevents such insights – the data produced can still serve as training material for the machine learning software. Every click, every interaction with the archive can be analyzed and used as empirical evidence of behavioral patterns of taste, preference and inquiry – creating a huge database on its users that only Google has access to.

Eventually, this demonstrates Google's influence on how we experience, see and construct our world. Surely, one could argue that Google is not responsible for actually representing a diverse view of the world (and not only diverse views of Vincent van Gogh), but by establishing itself as 'curator' and ultimately 'creator' of arts and culture, Google has to take on responsibility and reflect on its own construction and role in our society. As stated before, the archives we construct mirror what an ideological lens deems valuable, just as they shape what we perceive and understand as arts and culture – forming a basis from which new kinds of arts and culture can emerge, while limiting us to a biased view of the world and our cultural heritage. One may visualize it as the Ouroboros of value and meaning-making. Technologies like digital archives and algorithms are human-made – they mirror the people creating and interacting with them more than the actual content they provide. To look at arts and culture through the app doesn't simply mean to objectively access content but to see it from a 'googled' perspective – with the app looking back at you, or as Friedrich Nietzsche puts it: "And when you gaze long into an abyss, the abyss also gazes into you."17

  1. In the name of self-promotion: https://wien.arbeiterkammer.at/outofcontrol

  2. For example, American professor of marketing Scott Galloway associates Google with the brain in: The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google, Plassen Press 2017.

  3. As seen frequently in pop-cultural discourse on the search engine, and most famously quoted by the American Internet entrepreneur Jimmy Wales.

  4. Uricchio, William, "The algorithmic turn: Photosynth, augmented reality and the changing implications of the image", in: Visual Studies 26, p. 25-35.

  5. "Every piece of art you've ever wanted to see – up close and searchable | Amit Sood", TED, YouTube, https://www.youtube.com/watch?v=CjB6DQGalU0, published 29th of June 2016, accessed 28th of August 2019.

  6. Foucault, Michel, The Archaeology of Knowledge, Éditions Gallimard

  7. https://artsandculture.google.com/category/place?tab=pop, accessed 28th of August 2019.

  8. Nuñez, Michael, "The Google Arts and Culture app has a race problem", Mashable, https://mashable.com/2018/01/16/google-arts-culture-app-race-problem-racist/?europe=true, published 16th of January 2018, accessed 28th of August 2019.

  9. “Google Arts & Culture – For the Culturally Curious”, Google Arts & Culture, YouTube, https://www.youtube.com/watch?v=jERGXnQT9W0, published 4th of June 2019, accessed 28th of August 2019.

  10. Shu, Catherine, "Why inclusion in the Google Arts & Culture selfie feature matters", TechCrunch, https://techcrunch.com/2018/01/21/why-inclusion-in-the-google-arts-culture-selfie-feature-matters/, published 21st of January 2018, accessed 28th of August 2019.

  11. Nuñez, Michael, "The Google Arts and Culture app has a race problem".

  12. Beer, David, “Power through the algorithm? Participatory web cultures and the technological unconscious”, New Media & Society 11, p. 985-1002.

  13. "The Coded Gaze: Unmasking Algorithmic Bias", Joy Buolamwini, YouTube, https://www.youtube.com/watch?v=162VzSzzoPs, published 6th of November 2016, accessed 28th of August 2019.

  14. Ibid.

  15. As Loren Grush, among others, addresses in her article "Google engineer apologizes after Photos app tags two black people as gorillas", The Verge, https://www.theverge.com/2015/7/1/8880363/google-apologizes-photos-app-tags-two-black-people-gorillas, published 1st of July 2015, accessed 28th of August 2019: the machine learning system identified black people as gorillas, mirroring yet again 'The Coded Gaze'.

  16. As Ann-Marie Alcántara states in her article "Google's Art Selfies Are Fun but Stir Up Potential Privacy Concerns", Adweek, https://www.adweek.com/digital/googles-art-selfies-are-fun-but-stir-up-potential-privacy-concerns/, published 22nd of January 2018, accessed 28th of August 2019.

  17. Nietzsche, Friedrich, Beyond Good and Evil, Aphorism 146.