Image by Gerd Altmann from Pixabay
They’re all around us, yet most of us remain largely unaware of their presence. What’s behind algorithms—and what’s ahead for us?
Have you ever had to learn a ballroom dance, executing each of several steps in the correct sequence for a waltz or a samba or a quickstep? In the modern world we’ve surrounded ourselves with computers that also follow sets of sequential steps. They’re called algorithms, and they form the basis of most computer programs. It’s a fairly simple thing in principle: a list of instructions to follow in order to solve a problem through a process that transforms information or energy. The essential thing is that the steps be followed in the correct order.
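As a simple illustration, consider the short Python sketch below, which spells out one such ordered list of steps for a small problem: finding the largest number in a list. The function name and sample values are purely illustrative.

```python
def largest(numbers):
    """Follow a fixed sequence of steps to find the largest value in a list."""
    # Step 1: start by assuming the first number is the largest.
    biggest = numbers[0]
    # Step 2: compare every remaining number against the current best.
    for n in numbers[1:]:
        # Step 3: whenever a number is bigger, remember it instead.
        if n > biggest:
            biggest = n
    # Step 4: once every number has been examined, report the result.
    return biggest

print(largest([3, 41, 7, 19]))  # prints 41
```

Follow the steps out of order, or skip one, and the answer can no longer be trusted; that is the whole discipline of the thing.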
Our world is increasingly centered on algorithms. Whether we’re shopping online, being hit by advertising, or traveling across the globe, algorithms work silently behind the scenes. In the case of advertising, for example, players such as Facebook, Twitter and Google leverage them to follow and analyze our activity, our searches and our locations. Whether we’re using a desktop computer or a mobile device, the data may include things like page likes, app use, which websites we visit (and for how long), who we communicate with, and even the content of that communication. Companies use the data for various purposes, including to predict our preferences and show us relevant advertising from their paying clients.
Algorithms tell us what to watch, buy, share and wear. They can accurately predict our dating and voting preferences, our sexual orientation, and our habits—both good and bad. It’s quite a thought: little mathematical routines at countless points in our day, running unseen like clockwork, helping to form our experience of the world. As they run in repetition, night and day, they can begin to take on quite sinister connotations, a perpetual if silent drumbeat that has us unwittingly dancing to the tune of their creators.
What’s the origin of algorithms? And what might their nature tell us about our own nature, and about what that implies for the future?
“What I expect people will find is that the algorithms are primarily exposing the desires of humanity itself, for better or worse.”
Andrew Bosworth, Facebook vice president (Facebook post, January 7, 2020, based on internal memo titled “Thoughts for 2020”)
Origin of the Algorithm
Because we can’t see algorithms, their increasingly influential role in decision-making processes may have gone somewhat unnoticed. Further, we probably think of them as a modern phenomenon.
The basic idea is quite ancient, however. Around 300 BCE, the Greek mathematician Euclid worked out what we now call the “Euclidean algorithm” or “Euclid’s algorithm.” The term itself, though, traces back to eighth- and ninth-century CE Baghdad. Algorithm is actually an eponym, a word named after a person—in this case a noted Persian polymath by the name of Muhammad ibn Musa al-Khwarizmi (ca. 780–850). As director of Baghdad’s House of Wisdom, he was interested in mathematics, astronomy, astrology, geography and cartography.
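For readers curious what the Euclidean algorithm mentioned above actually does, here is a minimal Python sketch (our own illustration, not part of the historical record): it finds the greatest common divisor of two whole numbers by repeated division, exactly the kind of fixed, ordered procedure the word algorithm now denotes.

```python
def euclid_gcd(a, b):
    """Euclid's algorithm: the greatest common divisor of a and b."""
    # Repeat: replace the pair (a, b) with (b, a mod b)
    # until the remainder is zero; the last nonzero value is the GCD.
    while b != 0:
        a, b = b, a % b
    return a

print(euclid_gcd(1071, 462))  # prints 21
```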
Al-Khwarizmi wrote a book called Concerning the Hindu Art of Reckoning, in which he advocated Western adoption of the Hindu-Arabic number system (including positional decimal notation and the numeral zero as a placeholder) to replace Roman numerals. Not only the West but the whole world uses that system today.
About 900 years ago, when rendering the book from Arabic into Latin, a translator converted al-Khwarizmi’s name to the more Latin-sounding Algoritmi, from which we derive algorithm. Al-Khwarizmi also gave the world the word algebra, from the title of another of his works.
We know little about al-Khwarizmi himself, despite his lasting influence. In at least one of his books he presents himself as a pious Muslim, yet he cast horoscopes, which would have been more in keeping with Zoroastrianism. It may be that he was raised a Zoroastrian; certainly he lived in a period when conversions from that religion to Islam were common.
The possibility of Zoroastrian influence is perhaps bolstered by an account by medieval Iranian historian Muhammad ibn Jarir al-Tabari. He gives al-Khwarizmi the title “al-Majusi,” or magus, from which we get the word magic. Magi were interested in astronomy and astrology. The title was specifically used as an ethnic designation for a practicing priest of the Zoroastrian religion. Scholars remain divided on al-Tabari’s lone reference, with some taking it at face value while others are adamant that it’s a textual error. In any case, there is no doubt that, like the Zoroastrian magi, al-Khwarizmi practiced astrology.
Also notable is that Ahura Mazda (“Wise Lord,” Zoroastrianism’s supreme, transcendent creator deity) was viewed as working through emanations as the first cause of a specifically sequential creation. As world-religions scholar S.A. Nigosian puts it, “everything follows the sequence he has ordained from the beginning.”
All of this is interesting in that any such Zoroastrian influence could have played into al-Khwarizmi’s formulation of the mathematical sequences of algorithms.
The Road to Ubiquity
Considering the seismic effect of al-Khwarizmi’s mathematics on the West, it’s surprising that he hasn’t been more widely spoken of, though the word to which he lent his name did gradually come into use. In medieval Latin, algorismus meant the decimal number system. The word appeared again in the 13th and 14th centuries, including in the writings of Chaucer.
By the late 19th century, algorithm had come to mean a set of successive rules for solving a problem. In 1936, pioneering British mathematician Alan Turing theorized that a machine could follow such step-by-step instructions to solve highly complicated mathematical problems. Known today as the father of computer science, Turing went on to design the Bombe, an algorithm-based machine that helped crack the Nazi Enigma codes in World War II. Since then, algorithms have become primarily linked with computers.
Marcus du Sautoy, professor of mathematics at Oxford University, points out that one of algorithms’ primary functions, in computing terms, is to sort and reassemble data. Their use is much broader than that, however. Du Sautoy calls them nothing less than “the secret to our digital world,” explaining that “the essence of a really good algorithm, its magic if you like, is mathematics.” He describes them as “strangely beautiful, tapping into the mathematical order that underpins how the universe works.”
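Du Sautoy’s own examples lean on sorting, and a classic case of sorting by splitting data apart and reassembling it is merge sort. The Python sketch below is a minimal, illustrative version, not taken from his program.

```python
def merge_sort(items):
    """Sort a list by splitting it apart and reassembling it in order."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # sort each half separately...
    right = merge_sort(items[mid:])
    merged = []
    i = j = 0
    # ...then reassemble the two sorted halves into one ordered list.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # prints [1, 2, 5, 7, 9]
```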
“Algorithms are everywhere. These bite-size chunks of maths have become central to our daily lives. But because they’re invisible, we tend to take them for granted—even misunderstand them.”
Marcus du Sautoy, “Algorithms: The Secret Rules of Modern Living”
Trading on Unstable Ground
The potential commercial benefit of computer algorithms did not go unnoticed in the worlds of business and finance. It’s here that we see their critical value to retail and social media advertising. Facebook reportedly makes about 98 percent of its revenue from advertising. This equated to $69.7 billion in 2019, as compared to $134.8 billion for Google and more than $3 billion for Twitter. Amazon’s net sales in its “other” category, which consists primarily of advertising revenue, were $14.1 billion in 2019. Algorithms are critical to the immense profits of these and an ever-growing list of other companies, large and small.
On the advertisers’ side of the fence, algorithms have revolutionized the targeting of a potential audience via such platforms. The technology has radically transformed how they acquire ad time and space; “programmatic marketing” uses algorithms to purchase online ads or ad time on social media and retail sites and on platforms such as YouTube. So where people used to buy ad space, in the digital domain computers can now use data not only to buy the ads but to determine how much they’re worth, often in real time. The targeting can draw on a huge amount of highly detailed information, from a person’s age, personal tastes and likely medical issues to his or her local weather conditions and pollen count. In 2018, 83 percent ($49.2 billion) of US digital display advertising was programmatic. It isn’t just an American phenomenon, of course; China, for example, reportedly spent US$65.4 billion on digital ads in 2018.
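To make the idea concrete, here is a deliberately simplified, hypothetical Python sketch of how such a system might weigh a few targeting signals and put a price on a single ad impression in real time. The signal names, multipliers and dollar figures are invented for illustration and bear no relation to any actual platform.

```python
def bid_for_impression(profile, base_bid_usd=0.50):
    """Adjust a base bid up or down according to targeting signals."""
    bid = base_bid_usd
    if profile.get("age_in_target_range"):
        bid *= 1.4   # the demographic the advertiser is after
    if profile.get("recently_viewed_product"):
        bid *= 2.0   # strong purchase intent
    if profile.get("high_pollen_count_today"):
        bid *= 1.2   # relevant to, say, an allergy-remedy campaign
    return round(bid, 2)

# Hypothetical user profile assembled from tracking data.
profile = {"age_in_target_range": True, "recently_viewed_product": True}
print(bid_for_impression(profile))  # prints 1.4
```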
As might be expected, programmatic advertising is not a flawless technology; the FBI has been looking into media-buying practices, citing a lack of transparency as a major concern. This automated approach, where human judgment takes a back seat, can also lead to wasted ad dollars, fraud, or the positioning of an ad where a brand would not wish to appear. Two of the largest spenders on advertising, Unilever and Procter & Gamble, have been vocal in their concern about brand safety on such algorithm-driven platforms as Instagram and YouTube. Despite improvements on the part of social media giants, some big-ticket advertisers still refuse to spend on certain sites.
Another algorithm-based tool that has marketers’ attention is biometrics—“the science of tracking and analyzing people’s unique biological characteristics,” including the growing field of facial recognition. An eMarketer report notes that “biometric technology may soon give marketers the opportunity to learn more about their customers and deliver personalized messaging.” It goes on to warn, “While this could be a potential boon for business, it also has major privacy implications.”
Algorithms play a central and expanding role on Wall Street and in other financial centers around the world as well. In what’s known as black-box trading, or algo trading, computers are programmed to make fast, emotion-free but fairly opaque buying and selling decisions. One of their built-in strategies is to break a huge transaction into small chunks, thereby avoiding notice and diminishing the risk of a sudden price drop. Of course, competing traders use their computers and algorithms to track down such transactions in order to make their own instant buy-sell decisions. As Kevin Slavin put it in a highly popular 2011 TED talk, “the same math that you use to break up the big thing into a million little things can be used to find a million little things and sew them back together and figure out what’s actually happening in the market. . . . What you can picture is a bunch of algorithms that are basically programmed to hide, and a bunch of algorithms that are programmed to go find them and act.”
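The order-slicing strategy Slavin describes can be sketched in a few lines of Python. The chunk sizes and share counts below are arbitrary, and real trading systems are vastly more sophisticated about timing and venue.

```python
import random

def slice_order(total_shares, min_chunk=100, max_chunk=500):
    """Split one large order into many small, irregularly sized child orders."""
    # Irregular sizes make the parent order harder for rivals to detect
    # and reduce the price impact of any single trade.
    chunks = []
    remaining = total_shares
    while remaining > 0:
        chunk = min(remaining, random.randint(min_chunk, max_chunk))
        chunks.append(chunk)
        remaining -= chunk
    return chunks

orders = slice_order(10_000)
print(len(orders), sum(orders))  # dozens of small orders that sum to 10,000
```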
Within these combative mathematical clouds, things sometimes go very wrong. Slavin cites the example of “the Flash Crash of 2:45” when, on May 6, 2010, the Dow Jones Industrial Average suddenly dropped by 9 percent. The plunge lasted mere minutes, briefly wiping out roughly a trillion dollars in market value before prices rebounded almost as quickly.
At the time nobody even knew what had caused the crash. It wasn’t until April 21, 2015, close to five years after the incident, that the US Department of Justice brought 22 criminal counts, including fraud and market manipulation, against Navinder Singh Sarao, a trader operating out of his family home near Heathrow Airport in London. Among the charges was the use of so-called spoofing algorithms to outpace other market participants and to manipulate markets by feigning interest.
To clear up the opacity that characterizes algorithms, some have sought to visualize the technology’s movements and impacts. One such company is Nanex. They track the market—the algorithms at work—and depict the results graphically so financiers can get a better look at what’s happening. Using terminology reminiscent of du Sautoy’s, Slavin quips that to extract these visuals, Nanex uses “math and magic.”
“We’re writing things that we can no longer read. We’ve rendered something illegible, and we’ve lost the sense of what’s actually happening in this world that we’ve made.”
Kevin Slavin, “How Algorithms Shape Our World”
Human and Near-Human Algorithms
Chains of algorithms form the basis of artificial intelligence (AI). Via machine learning, the algorithms continue to absorb data and thus build on themselves, which only increases that sense of magic. Some theorists are even talking about an Internet-based “Global Webmind,” a self-aware worldwide web able to monitor its own system for optimal functioning and trading.
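What “building on themselves” looks like in the simplest case is an algorithm that nudges its own parameters every time it processes another data point. The Python sketch below, with made-up numbers, fits a straight line this way; it is a toy illustration of machine learning, not a description of any particular system.

```python
def train_line(points, passes=200, rate=0.01):
    """Fit y = w*x + b by nudging w and b after every example seen."""
    w, b = 0.0, 0.0
    for _ in range(passes):
        for x, y in points:
            error = (w * x + b) - y   # how far off the current guess is
            w -= rate * error * x     # adjust the slope in proportion to the error
            b -= rate * error         # adjust the intercept likewise
    return w, b

# Made-up data roughly following y = 2x + 1; the fit improves with every pass.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
print(train_line(data))  # approximately (2.0, 1.0)
```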
Yet for all of their hidden, seemingly metaphysical influence, algorithms aren’t magic; they’re mathematical and therefore inert. That doesn’t render them harmless, however, because human nature is an integral part of the equation.
It’s true that as mere consumers we’ve been given little if any say in the secreting of algorithms behind our world’s digital interface. It was a relatively tiny cohort in business, finance and government who determined that these mathematical programs should silently direct many of our individual and collective choices, responses and actions. To suggest that those few people were largely motivated by the possibility of political or commercial gain wouldn’t seem a stretch.
It gets more complicated, however, in that some algorithms are ostensibly designed to help us by taking away our need to think or waste time sorting through irrelevant information. Yet we often go to great lengths to stop them by blocking ads, preventing tracking, turning off cookies, etc. We crave the convenience of a personalized experience, but we’ve learned to distrust the way others use the massive amounts of data they’re collecting, viewing it as predatory. And yet it’s possible for tech companies to deliver a personalized experience without spying or preying on their users. Will it happen? The reality is that how our data is used will always come down to a human choice on someone’s part.
In terms of regulation, then, who can we trust to think on our behalf about the “thinking” that algorithms do on our behalf? Can we prevent decisions being made for selfish or exploitative reasons? Is current legislation sufficient to ring-fence the commercial machinations of big tech—or even, in the case of claims made against China’s Huawei, potential spying by state governments?
In certain respects, the human brain operates much like a series of algorithms, leading some to suggest that a human being is a hackable animal. In fact, attempts are underway to simulate the human brain via computer algorithms. The aim is to give humanity what some believe is the only gateway to higher forms of intelligence able to save us from our own faulty nature. A critical question arises, however: Can a digital brain ever be programmed with such traits as morality and conscience?
“We must resist the idea, pushed on us by a Silicon Valley techno-elite and their ideologues, that there is no difference in kind between us and the algorithms they want to use to bend our thinking and behaviour to their advantage.”
David Mattin, “You Are Not an Algorithm”
Perhaps the main thing to fear from AI is the degree to which it reflects human nature: a digital version of highly fallible people. Indeed, the issues and challenges we face seem to be bound up with our own inner algorithms, the repeated behaviors and habits that determine what we do.
Few experts would deny that problems inherent in human nature are central to any discussion of AI. Specifically, if AI can even partially replicate functions of the human brain, then our flawed nature will necessarily be a part of it. This is no small concern, because efforts to simulate the 86-plus billion interconnected neurons and trillions of synapses packed into the human brain are, as already noted, well underway. Despite current limitations in the needed computing power, the algorithms themselves are becoming increasingly sophisticated. Fears of a dystopian future are therefore not without foundation. Computer scientists are very familiar with the principle of “garbage in, garbage out,” and it certainly applies here. Algorithms are, after all, a product of their creators, so the creators’ tendencies, biases and attitudes will inevitably affect the final product.
Sadly, examples are not difficult to find. A May 2016 report claimed that a computer program used by many US courts for predicting recidivism was racist, demonstrably biased against black prisoners. According to the investigative journalism organization ProPublica, the program, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was prone to flagging black defendants as more likely to commit further crimes after their release from prison. In fact, the error rate (those whom the software labeled as higher risk but who didn’t reoffend) was almost twice as high for black as for white defendants: 45 percent vs. 23 percent.
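The figure ProPublica highlighted is a false positive rate: among people who did not reoffend, the share who had nonetheless been labeled higher risk. A tiny Python sketch with invented counts (chosen only to reproduce the 45 and 23 percent figures, not ProPublica’s actual sample sizes) shows how such a rate is computed for each group.

```python
def false_positive_rate(wrongly_flagged, non_reoffenders):
    """Share of people who did not reoffend but were labeled higher risk."""
    return wrongly_flagged / non_reoffenders

# Invented counts for illustration: 90 of 200 non-reoffenders flagged in one
# group versus 46 of 200 in another gives error rates of 45% and 23%.
print(false_positive_rate(90, 200))   # 0.45
print(false_positive_rate(46, 200))   # 0.23
```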
This further validates the concern that in creating a technology to which we relegate some of our decision-making, we will pass on some of humanity’s worst traits. It seems clear that AI is not going to solve the fundamental issues at the heart of our own nature, so changing ourselves is our only hope for a true solution.
Dancing to a Different Tune
A profit-at-any-cost mindset drives many businesses, and we as individuals are unlikely to change that. But we are active participants in our own life. So what can we do? How can we cut the strings that enable others to manipulate our every step?
There’s no doubt that algorithms are useful, but they are not a panacea, and people do exploit them. We simply can’t control all the ways in which our personal data might be used or misused, but we can’t afford to be complacent. We must take some responsibility for the data others are able to collect. Checking the privacy settings on our digital gadgets is a good place to start.
In the face of seemingly countless reasons to distrust others, we can go further by striving to be the kind of person that others can trust. Honesty, integrity, kindness, fairness and discretion are universal aspects of good character that never go out of date, even if they don’t appear to be widely practiced. We can’t change the character of others, nor, perhaps, how they do business. We can, however, work hard to improve ourselves—to treat others the way we’d like to be treated and to become the kind of people we’d like others to be.
Changing ourselves may not be easy, but it is possible—one step at a time.
Daniel Tompsett
https://www.vision.org/algorithms-history-future-9080
Selected References
Julia Angwin et al., “Machine Bias” (ProPublica, 2016).
BBC, “Why Algorithms Are Called Algorithms” (2019).
Corona Brezina, Al-Khwarizmi: The Inventor of Algebra (2006).
Marcus du Sautoy, “Algorithms: The Secret Rules of Modern Living” (BBC, 2017).
Jonathan Lyons, The House of Wisdom: How the Arabs Transformed Western Civilization (2009).
David Mattin, “You Are Not an Algorithm” (March 2019).
Bahman Mehri, “From Al-Khwarizmi to Algorithm,” Olympiads in Informatics (2017).
Mohaini Mohamed, Great Muslim Mathematicians (2000).
S.A. Nigosian, The Zoroastrian Faith (1993).
Kevin Slavin, “How Algorithms Shape Our World” (TEDGlobal 2011).