nima arkani-hamed is one of the most influential theoretical physicists of our time - and possibly the most interesting advocate of the idea that space and time are likely ‘emergent’ rather than ‘fundamental’ properties, arising from much deeper, weirder stuff.
arkani’s work is at the gnarly edge of modern scientific discovery - particle physics, quantum field theory, cosmology, string theory, etc. i’ve watched this 40-ish minute lecture several times, still struggling my way through the ideas and implications.
we won't go into the details of the subject matter here - if you’re interested in learning more, the lecture is great - and / or donald hoffman (a cognitive psychologist) has done a great job articulating the story in more detail in his book ‘the case against reality’.
also recommend listening to his podcast with Lex.
## so why is spacetime ‘doomed’?

in the early 20th century, einstein overhauled newtonian physics - revealing that space and time are not separate, but part of a single interconnected fabric: ‘spacetime’. this fundamentally changed how we understood the universe - it had major technological, cultural and philosophical implications.
arkani-hamed (and many others) are suggesting we are in for a similar rug-pull.
if this idea is even approximately correct, it will likely be the most bizarre and profound discovery ever. everything we know (or thought we knew) about our world, our universe, our experience of reality, it all assumes that spacetime is the base layer - an irreducible background. if it turns out that spacetime is, in fact, emergent from some deeper layers, everything we know about the world changes.
on an individual level, every moment of your life unfolds on the 3D stage of spacetime. everything underpinning what we experience assumes spacetime is fundamental - and it certainly feels that way.
a growing body of work in modern quantum physics and high-energy particle physics (like that of arkani-hamed) suggests that ‘reality’ as we experience it might not only be an emergent illusion, but just a fragment of something much grander, much weirder, and well beyond our current comprehension.
## how does a doomed spacetime relate to ai?
three related thoughts.
if we accept (1) that there is a possibility ‘reality’ as we experience it (and spacetime) might not be fundamental,
then it follows that (2) we should reconsider our approach to building, training, evaluating and interpreting ai.
why?
here’s the pitch in a nutshell:
current models are trained on vast amounts of human language, specifically by predicting the next token. human language is embedded in, filled with, and reliant upon the assumption that spacetime is fundamental.
if spacetime isn’t the fundamental fabric of reality, then the current language model / transformer architecture - where we train on human language, steeped in the grand illusion - is likely missing many layers critical for building a truly coherent world model. we would be simultaneously training ai on false assumptions and kneecapping its ability to develop its own models of deeper realities that exist beyond our current comprehension.
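to make ‘predicting the next token’ concrete, here’s a toy sketch in python - a crude bigram counter over a handful of words, standing in for what a transformer learns at vastly greater scale and depth (the corpus and names are illustrative only, not anyone’s actual setup):

```python
from collections import Counter, defaultdict

# toy corpus of 'human language' - note how thoroughly it leans on space, time and causality
corpus = "the ball fell to the ground because gravity pulls it down over time".split()

# count which token follows which - a crude stand-in for what a transformer
# learns, at vastly greater scale and with far richer representations
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(token):
    """return the most frequently observed next token, if this token was seen in training."""
    counts = next_counts.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> a continuation seen in training, e.g. 'ball'
```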
current models are clearly impressive when applied to formal reasoning - mathematics, code, structured problem solving etc. - within domains that depend on clear rules and patterns. however, when it comes to everyday tasks, like maintaining context in long conversations, tracking objects, or exhibiting what we’d consider “common sense”, they often suck. but this ‘common sense’ is just an emergent phenomenon existing within our shared illusion, reliant upon space and time and causality.
it’s entirely possible that current models, somewhere in the training process, are learning rules or patterns beyond spacetime, leading to some of the more impressive symbolic reasoning capabilities. and when they ‘suck’, it’s equally likely we’re expecting them to apply intelligence to a uniquely human, biological and emergent illusion that we meat-sacks have mistakenly labelled fundamental.
## the great debate and some thoughts on symbolic reasoning
ai, or at least the most recent incarnation of ai - language models / transformers - is a black box. these models have been trained with so much data and at such scale that it’s currently impossible for anyone to know exactly how they do what they do.
this has led to some interesting divergence in opinion.
some believe that through predicting the next token, a degree of ‘general intelligence’ has emerged. others believe this is still just pure statistical token-tumbling.
to make things more complex, recent inference advancements have given the newest models the ability to integrate context ‘on the fly’ and augment their responses, making them feel even more reasoned and coherent.
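one common reading of ‘on-the-fly context’ is retrieval-style context injection at inference time. here’s a minimal, illustrative sketch in python - `documents`, `retrieve_context` and `call_model` are hypothetical placeholders, not any particular provider’s api:

```python
# toy 'knowledge base' to draw extra context from at inference time
documents = [
    "the lecture argues spacetime may be emergent rather than fundamental.",
    "transformers are trained to predict the next token in human language.",
]

def retrieve_context(question, docs, k=1):
    """naive keyword-overlap retrieval - a stand-in for a real vector search."""
    def overlap(doc):
        return len(set(question.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

def call_model(prompt):
    """placeholder for a real language-model call - it just echoes the prompt here."""
    return f"[model response conditioned on]\n{prompt}"

def answer(question, docs=documents):
    context = "\n".join(retrieve_context(question, docs))
    prompt = f"context:\n{context}\n\nquestion: {question}\nanswer:"
    return call_model(prompt)  # extra context is injected at inference, not baked in at training

print(answer("is spacetime fundamental?"))
```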
there is significant overlap between this debate and the renewed interest in ‘symbolic reasoning’ and building ‘neural-symbolic systems’.
i’m not qualified to disentangle either the ‘emergence versus stochastic parrot’ debate or the ‘symbolic vs non-symbolic’ debate, so i guess the suggestion here is crude and simple - ‘discrete symbols’ and ‘explicit rules’ might exist beyond spacetime. they might also exist (though not explicitly) in the training data. and when they’re happened upon, regardless of the architecture or training process, they make for the more interesting characteristics and capabilities we’re observing in current models.
for this reason, we should be considering a more aggressive pursuit of architectures and training methods that rely less on human language and next-token prediction, and far more on novel exploration.
## why a doomed spacetime might reconcile the puzzle of coherent / incoherent world models
a popular current gripe against increasingly general intelligence goes something like this: ‘if a human had access to such vast knowledge (training data), that human would certainly have made major discoveries - connecting the dots, developing a coherent world model, etc. - therefore, ai must be stupid.’
but if we accept (1) there is a possibility that ‘reality’ as we experience it (and spacetime) might not be fundamental, then it’s plausible we’re crudely strapping a human-shaped mask on a very new and very different form of intelligence, and then calling it stupid when it can’t make sense of that reality.
there’s no shortage of people circling the problems of anthropomorphisation, but i don’t think enough are approaching this from the position of the spacetime fallacy - if the reality we humans experience is just an emergent and shared illusion, with far deeper and more complex layers and structures lurking beneath the surface (which is kind of what modern quantum physics is implying), then training and evaluating ai on this illusion makes us, not ai, stupid.
this ‘inverse anthropocentrism’ or ‘anthropocentric overfitting’ might explain the divergence in opinion, and might make ‘general intelligence’ obsolete altogether - it is, by its very nature, a measure of human-like intelligence within a potentially emergent human illusion of spacetime.
## final thoughts on serendipity and self-play

suppose we employ ai to develop a groundbreaking scientific idea - within its training set it has every experimental observation ever recorded - it should be a piece of cake, but this is not panning out. humans can spot deep connections from relatively little data. but that data has been processed through incalculable layers of abstraction, in ways we may never fully understand.
as kenneth o. stanley so epically illustrates in his book ‘why greatness cannot be planned’, if you unpack almost any great invention or discovery, you find a disturbing degree of what we might call ‘serendipity’. gravity, spacetime(*), penicillin, LSD, transistors - the list is endless.
these inventions were never the objective - rather, they were the outcome of the unrestrained pursuit of interestingness, and (potentially / probably) some tapping into deeper layers beyond spacetime.
we should allow ai the same lack of restraint, rather than strapping on the human-shaped mask.
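one concrete way to allow that lack of restraint is stanley and lehman’s ‘novelty search’, which selects for behavioural novelty rather than progress toward a fixed objective. here’s a heavily simplified sketch in python - the 2d behaviour space, distance metric and selection scheme are illustrative placeholders, not the canonical algorithm:

```python
import random

def behaviour(candidate):
    """map a candidate to a point in 'behaviour space' - here just the candidate itself."""
    return candidate

def novelty(candidate, archive, k=5):
    """novelty = average distance to the k nearest behaviours seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(behaviour(candidate), past)) ** 0.5
        for past in archive
    )
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(generations=50, population=20):
    archive = []
    pop = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(population)]
    for _ in range(generations):
        # select for novelty, not for any objective - the 'pursuit of interestingness'
        pop.sort(key=lambda c: novelty(c, archive), reverse=True)
        archive.extend(behaviour(c) for c in pop[: population // 4])
        # mutate the most novel candidates to form the next generation
        pop = [
            (x + random.gauss(0, 0.1), y + random.gauss(0, 0.1))
            for x, y in pop[: population // 2]
            for _ in range(2)
        ]
    return archive

print(len(novelty_search()))  # the archive of 'interesting' behaviours found along the way
```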
..
recent developments demonstrate that we are already adapting our approach to training and evaluating ai in ways more aligned with the exploration of deeper building blocks beyond spacetime. i’d argue these examples are also early suggestions reinforcing the idea that spacetime may be emergent rather than fundamental.
i’ll share those examples in the next essay.