on minds

Jordan Lei
The Startup

--

This is the sixth chapter in a series of pieces about our modern relationship with time and the future. Titled Hourglass, it’s an exploration of how our abstract view of time has changed in modernity, how it has met (or has yet to meet) the needs of the present, and what we can do to better prepare ourselves for what’s to come.

First Chapter / Previous Chapter / Next Chapter

If the human brain were so simple
That we could understand it,
We would be so simple
That we couldn’t.

— Emerson M. Pugh

Inside that head of yours is something magnificent. A processing unit so complex, so mysterious, that even those who have dedicated decades to its study have barely scratched the surface. An organ so dynamic and interconnected, some have called it the most complex system in the known universe. I’m talking, of course, about the human brain.

The study of the brain, and that which it confers onto humankind — the gifts of intelligence and consciousness — has captured the human imagination since times of old. In some sense, replicating these gifts outside the body has been a deep-rooted goal of humankind for a long time. A mind made from a machine, not from flesh. Before modernism, before romanticism, before all the isms that have come to define our modern age, there was the will to understand who we are, and what mad force of the universe compelled it to create, within itself, a small and insignificant being capable of observing it. Some sought the stars for answers; others sailed across the world. But it wasn’t until we started looking inward that we finally began to uncover the intricacies of the mind.

An Artful Mind

If you’ve ever seen the pages of Leonardo da Vinci’s sketchbooks¹, you’ll realize that the illustrations before you are well ahead of their time. Mechanical inventions, anatomical studies, and drawings of flora and fauna explode out of the pages, a sublime marriage of art and science. It is here that you will find sketches of the mind that are nothing short of extraordinary.

In 1487, Leonardo drew what he called the senso comune (literally, “common sense”), the linkage of the senses. His early sketches hypothesized how visual information was transferred to the brain via the optic nerves² ³ and how sensory input would be conveyed, via the spinal cord, to the muscles of the body⁴. By 1509, Leonardo had used his knowledge of sculpting and molding to map out the structure of the cerebral ventricles, the hollow cavities in the human brain⁴. His work in anatomy had led him to some of the greatest breakthroughs of his time in understanding the inner workings of the mind. Sadly, it did little to influence neuroscience, as he rarely published any of his findings³ ⁴. It would not be until over four centuries later that Walter E. Dandy would develop similar techniques for mapping the brain⁵.

In 1887, another artist-scientist would make a significant discovery about the inner workings of the mind. Using a silver-staining technique (“Golgi staining”), Santiago Ramón y Cajal was able to visualize individual brain cells for the first time. The stain marked only a small subset of cells at a time, turning what used to be a convoluted mess of grey matter into a clear, pristine view of the underlying neural circuitry. Take one look at the sketches he made from his observations and you’ll once again see the interconnected nature of science and art. It’s almost like looking at an old, hand-drawn map: each neuron is drawn in exquisite detail, its dendrites, axon, and cell body clearly outlined within an interconnected network⁶.

His work, rendered beautifully on paper by his own artistic talent, demonstrated a number of things: first, that the brain’s computing power arose from a vast array of diverse and differentiated neurons; and second, that these neurons operated in a network-like fashion — propagating information from one end of the cell to the other, onward and outward along spindly fibers to other neurons⁷.

Later contributions would compound the evidence for this “network” style of correspondence in the brain, particularly in the visual cortex, the region responsible for making sense of visual information. The visual cortex consists of many distinct layers of neurons that propagate information deeper and deeper into the network, much like sports fans doing “the wave” in an arena. Information is passed from V1 to V2 to V4⁸, and later to the inferotemporal cortex, where images are integrated and processed⁹. The question, however, remained: how might we, as humans, model these intricate neural networks?

The Mind and the Machine

The modern computer has come a long way since the mid-twentieth century, when monstrous behemoths covered in switches and dials were used to calculate missile trajectories and send people into space. What used to be stacks and stacks of metal shelves and tubing can now fit neatly in the palm of your hand. Modern digital computers use algorithms, sets of step-by-step rules, to make calculations. These calculations take inputs, such as a tap on your screen or a number, and translate them into outputs, like a Facebook post or a bank account statement. The calculations may seem simple individually, but they can grow ever more complex as more layers are added to existing modules. Take Google, for example — it consists of a vast index of web pages, a massive infrastructure to connect and rank search results, and a distributed system for handling many requests at once, just to name a few components. With systems this complex, it might even be possible to attribute some level of intelligence to them — not biological, of course, but artificial.

One modern development in artificial intelligence is known as machine learning. The basic idea behind machine learning is that we (humans) can write algorithms (code) that learn to adapt, getting better with every example they see. Suppose I give you a very simple game, like Tic-Tac-Toe. If I play a game once, I can have the computer store it in memory. If it wins and I play the game again in the same fashion, it will replay its old strategy. If it loses, or if I play a different strategy, it’ll try to change things up — and once that new game is done, it’ll store that one in memory too. This is a very rudimentary example of adaptive learning (sketched below), and it’s pretty clear that it won’t get you very far. For a game of Tic-Tac-Toe, we might make some headway, but we aren’t going to drive cars with this thing anytime soon. To do that, we’re going to need to try something a bit more complex.
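
To make that concrete, here’s a minimal sketch of such a memory-based player in Python. It’s purely illustrative and not from the original piece: the class and method names (MemoryPlayer, record_result) are hypothetical, strategies are reduced to abstract IDs rather than actual board moves, and the “learning” amounts to nothing more than remembering which past strategies won.

```python
import random

class MemoryPlayer:
    """A toy Tic-Tac-Toe 'learner' that simply remembers what worked.

    Strategies are abstract IDs here; a real player would map each ID
    to a concrete sequence of moves on the board.
    """

    def __init__(self, n_strategies=9):
        self.outcomes = {}                           # strategy id -> "win" or "loss"
        self.strategies = list(range(n_strategies))
        self.current = random.choice(self.strategies)

    def choose_strategy(self):
        # Keep replaying the current strategy if it won last time;
        # otherwise switch to something untried (or pick at random).
        if self.outcomes.get(self.current) != "win":
            untried = [s for s in self.strategies if s not in self.outcomes]
            self.current = random.choice(untried or self.strategies)
        return self.current

    def record_result(self, strategy, result):
        # Store the game's outcome so it can inform the next game.
        self.outcomes[strategy] = result

# One round of "learning": the player abandons a strategy once it loses.
player = MemoryPlayer()
first = player.choose_strategy()
player.record_result(first, "loss")
print(first, "->", player.choose_strategy())   # switches to a new strategy
```

Every game just adds another entry to the table; nothing generalizes, which is exactly why this approach runs out of steam beyond toy games.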

Many modern machine learning algorithms use a learning method called gradient descent¹⁰. It sounds fancy, but it really isn’t. Here’s how it works. Imagine you’re tuning an instrument, say, a guitar. You pluck one string, and it sounds a little bit off. So you turn a tuning peg a little, and it sounds worse. No good. So you turn the peg in the other direction, and it sounds better. Great. So you keep turning it ever so slightly in that direction, plucking as you go, until the string plays the right note. Do that for all the strings and you’ve got an instrument that is completely in tune. What you’ve done is a manual version of gradient descent. You’ve got an objective function — something that defines what you’re trying to achieve. In this case, it might be a tuning fork or an iPhone tuning app. And every time you “sample” the input by plucking a string, you know which direction to turn the peg, based on whether your objective function tells you you’re getting closer or farther away. Do this for multiple strings, all at once, and you’ve got a system of learning modules that gets progressively better over time.
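
For the mathematically inclined, here is that tuning loop written out as plain gradient descent in Python. The specifics are my own stand-ins rather than anything from the text above: the “objective function” is simply the squared distance between the string’s pitch and a 440 Hz target, and its derivative tells us which way (and how far) to turn the peg.

```python
target_pitch = 440.0              # the note the tuning app wants to hear (A4, in Hz)

def objective(pitch):
    # How far off we are: smaller is better, zero means perfectly in tune.
    return (pitch - target_pitch) ** 2

def gradient(pitch):
    # Derivative of the objective with respect to the pitch (the "knob").
    return 2.0 * (pitch - target_pitch)

pitch = 425.0                     # the string starts flat
learning_rate = 0.1               # how far we turn the peg on each pluck

for step in range(50):
    pitch -= learning_rate * gradient(pitch)   # turn against the gradient

print(f"pitch = {pitch:.2f} Hz, objective = {objective(pitch):.6f}")
```

Each pass through the loop nudges the pitch a little closer to the target, just as each pluck-and-turn nudges the string closer to being in tune.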

Inspired by the architecture of the brain, computer scientists realized that they could stack these computational modules on top of one another, mimicking the brain’s layered behavior. Suddenly, you’ve got an algorithm — several “layers” deep — that can model all sorts of different functions, from images of cats to whale sounds to stock market prices. This is the essence of most modern-day machine learning algorithms and the basis of neural networks and so-called “deep learning”. These represent some of the most powerful computational tools at our disposal today, and we’re just barely scratching the surface of what’s possible. So, the million-dollar question is: what is possible? Let’s find out, shall we?
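
To give a flavor of what “stacking modules” looks like in code, here is a tiny two-layer neural network written from scratch with NumPy and trained by the same gradient descent idea. Everything about it (the layer sizes, the tanh activation, the toy task of fitting a sine wave) is an illustrative choice of mine, not a description of any system mentioned in this piece.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

# Two stacked "modules": input -> hidden layer (tanh) -> output (linear).
W1 = rng.normal(scale=0.5, size=(1, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

learning_rate = 0.05
for step in range(2000):
    # Forward pass: information flows through the layers, like "the wave".
    hidden = np.tanh(X @ W1 + b1)
    pred = hidden @ W2 + b2
    loss = np.mean((pred - y) ** 2)

    # Backward pass: gradients of the loss, layer by layer in reverse.
    d_pred = 2 * (pred - y) / len(X)
    dW2 = hidden.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_hidden = (d_pred @ W2.T) * (1 - hidden ** 2)
    dW1 = X.T @ d_hidden
    db1 = d_hidden.sum(axis=0)

    # Gradient descent on every parameter at once.
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

print(f"final mean-squared error: {loss:.4f}")
```

Modern deep networks differ mainly in scale (many more layers and parameters, with automatic differentiation instead of hand-written gradients), but the layered forward pass and gradient-based updates are the same.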

Redefining Impossible

What’s so special about us? What’s one task so hard that it would be difficult — no, impossible — for another animal or machine to beat us at it? For a long time, the answer was chess. In the 1980s, the idea that a computer could win a chess match against a human grandmaster was laughable. But in February 1996, IBM’s Deep Blue became the first computer to defeat a reigning world chess champion, Garry Kasparov, in a game played under tournament conditions¹¹; the following year it won an entire match against him. Okay, fine. So computers could beat us at chess. No big deal; it wasn’t that interesting anyway. Why not try your hand at Go, a game so complex that the number of possible board configurations exceeds the number of atoms in the observable universe? Surely, surely, this was a uniquely human task. In 2015, Google DeepMind unveiled AlphaGo, a program trained with neural networks to play the game. It won 5–0 against the reigning three-time European champion, Fan Hui¹². And that was just the beginning.

In late 2017, we introduced AlphaZero, a single system that taught itself from scratch how to master the games of chess, shogi, and Go, beating a world-champion program in each case.

— Google DeepMind, on AlphaZero¹²

The past decade has seen a veritable Cambrian explosion of neural network architectures, each specializing in a different domain. By pitting two neural networks against each other in a rivalrous fashion, you get Generative Adversarial Networks¹³, capable of rendering faces so realistic that a dedicated website now serves entirely artificial photos of people who don’t exist. These networks can turn photos into artwork¹⁴, turn doodles into photorealistic landscapes¹⁵, and fabricate videos of celebrities and world leaders (so-called DeepFakes)¹⁶. Other “species” of network specialize in processing language and language-like data. By using selective “attention”, these networks, called Transformers, are capable of generating text so similar to human writing that the developers initially opted to withhold the full model from the public¹⁷. Other Transformer variants can generate music that mimics the style of famous composers¹⁸. The list goes on and on — bots that can detect heart disease, play video games, drive cars. These are not the musings of a futurist — these are things happening right now, as we speak. We’d do well to tread carefully before declaring the next thing “impossible”, because based on our track record, there’s a good chance we’d be wrong.
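
For the curious, here is a bare-bones sketch of that adversarial setup in Python using PyTorch. It is a toy of my own construction: the “real” data are samples from a one-dimensional Gaussian rather than photographs of faces, and both networks are tiny, but the back-and-forth between generator and discriminator is the same game the large face-generating models play.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "real" data: samples from a Gaussian centered at 4 (not faces!).
def real_samples(n):
    return torch.randn(n, 1) + 4.0

# Generator: turns random noise into fake samples.
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: tries to tell real samples from fakes.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # --- Train the discriminator: reward it for spotting fakes. ---
    real = real_samples(64)
    fake = G(torch.randn(64, 1)).detach()   # detach: don't update G on this step
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # --- Train the generator: reward it for fooling the discriminator. ---
    fake = G(torch.randn(64, 1))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()

# The generator's samples should drift toward the real data's mean (~4).
print(G(torch.randn(1000, 1)).mean().item())
```

The rivalry is the whole trick: the discriminator keeps raising the bar, and the generator keeps learning to clear it.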

Contrary to the hypotheses of most science fiction writers (and, sadly, many pop-science journalists), the risk we run with artificial intelligence isn’t likely to come from generalized intelligence anytime soon. That is to say, it’s unlikely that computers will soon develop to the point where they are conscious, intelligent, and able to outperform humans at virtually every task. The fear of AI overlords isn’t totally out of the realm of possibility, but it’s a slim one, at least for now. That’s the good news: there isn’t going to be a robot that can be you better than you can be you, at least for now. The bad news is that it doesn’t have to be.

Already, humans are finding that machine learning models and AI agents are capable of outperforming them at a wide variety of tasks. This isn’t a problem in principle; technology of the past has typically enabled the creation of new jobs, new opportunities, and new ways of life. The train and the telegraph led to massive improvements in science and innovation, fueling the boom in industry that followed the Industrial Revolution. Old industries die, and new industries emerge, like phoenixes from the Schumpeterian flame (if you got that reference, we can be friends). So the story goes. Unfortunately, the scale and pace at which industries may now be replaced could far outstrip our ability to move workers into new positions. In simple terms, not every truck driver can become a computer scientist in a few years. And it’s not just truck drivers. Clerks, deliverymen, sales representatives, and even lawyers might go out the window in a few decades¹⁹. The thing is, AI doesn’t have to be better than you at everything; it just has to be better than you at one thing. That one thing? Your job.

It’s important to note that, as of 2020, artificial intelligence doesn’t have any markedly sinister intent. Machine learning models don’t want to take over your job, nor do the engineers coding them want to replace you. But it doesn’t really matter, now does it? That’s effectively the same thing as saying “it’s nothing personal” as you send an employee out the door. So what do we do about it?

En Passant

First, we need to decouple science fiction from science fact. While fears of artificial general intelligence (AGI) run high, the real threat isn’t what’s happening in the future; it’s what’s happening right now. Redirecting the national and international focus onto the problems of today (job loss, data privacy, data ethics) is paramount to ensuring that careful steps are taken in the right direction. That requires a coordinated effort between computer scientists and journalists to dispel myths about artificial intelligence and focus on the issues that actually pose significant risks. For example, algorithms that calculate your credit score or decide whether you should be hired shouldn’t be able to discriminate against you based on race²⁰.

Second, we need companies on board. Many companies have stated outright that there needs to be better alignment on AI ethics. They’re right. And they can’t do it alone. In a previous chapter I mentioned Google’s Code of AI Ethics²¹, pointing out that it’s incredibly vague. Unfortunately, at the company level, it doesn’t really matter how vague or specific your policies are if you can’t be reasonably convinced that all the major players will follow them. The future of ethical artificial intelligence requires alignment across the vast array of institutions that dominate the field — a “UN General Assembly” on artificial intelligence, if you will. As ineffective as that may sound, at the very least it would signal some alignment among the major players about the state, direction, and concerns of the technology at hand.

Third, the government. I know, I know. Groan. I’m not thrilled about it either. The government hasn’t exactly had the best track record for dealing with these kinds of problems, and this one is proving no different. But, like it or not, the government must play a role — in regulation, yes, but also in dealing with the potential wave of mass unemployment that may follow. Plans for universal basic income (UBI) have been rolled out somewhat effectively on medium-to-large scales, especially as a response to the coronavirus outbreak (unfortunately, these successful rollouts exclude the United States, whose performance in this regard hasn’t been as stellar as its stars and stripes would have you believe). Analyzing the impact of UBI in these contexts may give us valuable data on how to move forward. As for regulation, AI policy in the US predictably lags behind. In 2017, the FUTURE of AI Act²² was introduced in Congress; among other things, it would establish a federal advisory committee, which in layman’s terms amounts to practically nothing. Congress sets up advisory committees for everything. Newer measures introduced in the House are a bit more promising, with H.Res.153²³ targeting some of the societal implications of artificial intelligence mentioned earlier.

Finally, research. While the idea of ethical algorithms might sound like a whole lot of frou-frou to most, the field itself is incredibly rigorous and analytical. We have already made strides in data privacy with the development of a concept called differential privacy, which captures, in a precise mathematical way, many of the ideas about privacy we want to instill in our algorithms. There’s a fantastic book about it by Michael Kearns and Aaron Roth, professors at the University of Pennsylvania, which you should read if you’re interested²⁴. Investing in research on designing ethical algorithms could be the difference between feeling around in the dark and having a flashlight in our hands when it comes to building the algorithms of the future.
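
As a small taste of that rigor, here is the Laplace mechanism, one of the basic building blocks of differential privacy, sketched in Python. The scenario (a simple count query over ages, sensitivity 1, a privacy budget of epsilon = 0.5) is my own illustrative choice, not an example drawn from Kearns and Roth’s book.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_count(data, predicate, epsilon):
    """Answer 'how many records satisfy predicate?' with differential privacy.

    Adding or removing one person changes a count by at most 1 (sensitivity = 1),
    so Laplace noise with scale 1/epsilon masks any individual's contribution.
    """
    true_count = sum(predicate(x) for x in data)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 62, 58, 33, 47]
# Smaller epsilon means more noise: stronger privacy, less accuracy.
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))
```

The guarantee is probabilistic: any single person’s presence or absence changes the distribution of the released answer by at most a factor of e^epsilon, which is the formal sense in which the output “hides” individuals.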

These are just a few things that will be necessary for us as we move into the new age of artificial intelligence. These tools will come to define our relationship with technology as well as our place in society, and we had better come to grips with reality quickly.

What About Love?

There might be something that has been bothering you all this time about my deep dive into minds, both biological and artificial. Earlier I mentioned the gifts that our brain gave us — intelligence and consciousness. What about consciousness? The ability to recognize that we are feeling, thinking beings?

Unfortunately, we haven’t a clue. For millennia, philosophers thought that the two were one and the same. It appears, at least for now, that they’re not²⁵. To date, we’ve successfully created agents that can play chess, recognize faces, and compose music, but all without a lick of self-awareness or emotion. No remorse for the pawns, no glee at a smiling face, no sublime feeling at the flow of a melody. These machines can’t experience the most basic emotions, let alone complex ones like regret, embarrassment, or love.

It might be the case that consciousness, and the emotions that arise from self-containment, are hard — computationally, mathematically, possibly even biologically. It’s a well-known finding of behavioral psychology that we are more emotional than rational — that our emotions rule over our reasoning. At the end of the day, we are feeling beings. Our rationality arises from the emotions we feel, not the other way around²⁶. So perhaps our future lies in unlocking some part of our emotional being, allowing us to differentiate ourselves from the intelligent agents we have created. But this is all, of course, speculation.

In the next few chapters we will discuss, among other things, what happens when the systems we build care more about intelligence than consciousness, and more about accuracy than emotion. We will discuss that all-important motivator of progress in the technological age: data, and what it means to live in a data-driven economy. We will discuss what happens when enough becomes too much, and how we can live in a world with both too many resources and not enough to share. All this in the near future, whether you read it or not.

But I’ll leave you with this. Our minds are some of the most magnificent systems in the universe, and we aren’t even close to cracking what makes them work. It is, in my opinion, the noblest of causes to try to model and understand them — to know thyself, if you will. But along the way, we must be vigilant about what we give up when we create systems that can outperform us intellectually. We’ve come a long way from the days of World War II, when we used computers to crack encrypted German messages. We’ve come a long way since the days of the Space Race, when computers were used to calculate the trajectories of rockets (and the people inside them) into space. We’ve even come a long way since the introduction of the personal computer into our homes. Today’s algorithms are faster, smarter, and more adaptable than the algorithms of old. When we order from Amazon’s Echo, we do so not because some AI overlord is forcing us to; we do so because it’s convenient. The same goes for watching free videos on YouTube and searching the internet on Google. We listen to music on Spotify, and are pleasantly surprised when its algorithms pick out similar music for us. We like having more and more content to view on Instagram, hand-tailored to our preferences. In short, the AI revolution won’t be loud. It won’t come with the sound of robots knocking down our doors. No, no, no. Let’s be very clear: the AI revolution will happen because we wanted it to. Because at the end of the day, it was just so much easier to leave the door wide open.

[1] Reuteler, David. The Drawings of Leonardo Da Vinci, www.drawingsofleonardo.org/.

[2] Philo, Ron, and Kevin Petti. “Leonardo da Vinci: Images of the Brain.” The FASEB Journal 32 (2018): 781–8. https://www.fasebj.org/doi/abs/10.1096/fasebj.2018.32.1_supplement.781.8

[3] Penttila, Nicky. “The Hidden Neuroscience of Leonardo Da Vinci.” Dana Foundation, Dana Foundation, 30 Apr. 2020, dana.org/article/the-hidden-neuroscience-of-leonardo-da-vinci/.

[4] Isaacson, Walter. Leonardo Da Vinci. Simon & Schuster, 2017. Print.

[5] da Mota Gomes, M. “From the wax cast of brain ventricles (1508–9) by Leonardo da Vinci to air cast ventriculography (1918) by Walter E. Dandy.” Revue neurologique (2020).

[6] Bentivoglio, Marina. “The Nobel Prize in Physiology or Medicine 1906.” NobelPrize.org, Nobel Media AB 2020, 20 Apr. 1998, www.nobelprize.org/prizes/medicine/1906/cajal/article/.

[7] Bergland, Christopher. “The Father of Modern Neuroscience Was an Athlete and Artist.” Psychology Today, Sussex Publishers, 27 Jan. 2017, www.psychologytoday.com/us/blog/the-athletes-way/201701/the-father-modern-neuroscience-was-athlete-and-artist.

[8] Huff, Trevor, Navid Mahabadi, and Prasanna Tadi. “Neuroanatomy, visual cortex.” StatPearls [Internet]. StatPearls Publishing, 2019.

[9] Tanaka K. Inferotemporal cortex and object vision. Annu Rev Neurosci. 1996;19:109–139. doi:10.1146/annurev.ne.19.030196.000545

[10] Pandey, Parul. “Understanding the Mathematics behind Gradient Descent.” Medium, Towards Data Science, 17 Jan. 2020, towardsdatascience.com/understanding-the-mathematics-behind-gradient-descent-dde5dc9be06e.

[11] Eschner, Kat. “Computers Are Great at Chess, But That Doesn’t Mean the Game Is ‘Solved’.” Smithsonian.com, Smithsonian Institution, 10 Feb. 2017, www.smithsonianmag.com/smart-news/what-first-man-lose-computer-said-about-chess-21st-century-180962046/.

[12] “AlphaGo: The Story so Far.” Deepmind, deepmind.com/research/case-studies/alphago-the-story-so-far.

[13] Goodfellow, Ian, et al. “Generative adversarial nets.” Advances in neural information processing systems. 2014.

[14] Zhu, Jun-Yan, et al. “Unpaired image-to-image translation using cycle-consistent adversarial networks.” Proceedings of the IEEE international conference on computer vision. 2017.

[15] Salian, Isha. “Stroke of Genius: GauGAN Turns Doodles into Stunning, Photorealistic Landscapes.” The Official NVIDIA Blog, 18 Mar. 2019, blogs.nvidia.com/blog/2019/03/18/gaugan-photorealistic-landscapes-nvidia-research/.

[16] Roose, Kevin. “Here Come the Fake Videos, Too.” The New York Times, The New York Times, 5 Mar. 2018, www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html.

[17] Radford, Alec. “Better Language Models and Their Implications.” OpenAI, OpenAI, 11 May 2020, openai.com/blog/better-language-models/.

[18] Payne, Christine McLeavey. “MuseNet.” OpenAI, OpenAI, 29 June 2020, openai.com/blog/musenet/.

[19] Garbade, Michael J. “10 First Jobs That Will Be Eliminated by AI.” Medium, Medium, 30 Aug. 2019, medium.com/@ledumjg/10-first-jobs-that-will-be-eliminated-by-ai-c5c481b5ecbb.

[20] Smith, Allen. “AI: Discriminatory Data In, Discrimination Out.” SHRM, SHRM, 28 Feb. 2020, www.shrm.org/resourcesandtools/legal-and-compliance/employment-law/pages/artificial-intelligence-discriminatory-data.aspx.

[21] Pichai, Sundar. “AI at Google: Our Principles.” Google, Google, 7 June 2018, www.blog.google/technology/ai/ai-principles/.

[22] “AI Policy — United States.” Future of Life Institute, futureoflife.org/ai-policy-united-states/?cn-reloaded=1.

[23] Lawrence, Brenda L. “Text — H.Res.153–116th Congress (2019–2020): Supporting the Development of Guidelines for Ethical Development of Artificial Intelligence.” Congress.gov, 27 Feb. 2019, www.congress.gov/bill/116th-congress/house-resolution/153/text.

[24] Kearns, Michael J., and Aaron Roth. The Ethical Algorithm: the Science of Socially Aware Algorithm Design. Oxford University Press, 2020.

[25] Harari, Yuval N. Homo Deus: A Brief History of Tomorrow. [Toronto, Ontario]: Signal, 2015.

[26] Haidt, Jonathan. The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Pantheon Books, 2012. Print.
