Creating Shared Language is Hard
on naming, metaphor, and whether language creates reality.
when new and transformative things emerge (new technologies, for example), we often don't have words to describe them.
developing shared language helps people think and reason about these things more productively.
successful buzzwords are not invented out of thin air. instead, they identify common experiences we all have and can relate to. giving those experiences a name crystallizes them and makes them communal. that's the power of a buzzword. and when these ideas snap into focus, they create community, change culture, and influence our work.
things are moving and changing real fast right now.
AI is a technology with far-reaching implications. it's obvious (and normal) that we lack sufficient language to reason about those implications.
and if we don't have language for something, we can't think clearly about it.
Drew Breunig has been writing about this exact thing — take "context engineering" for example, a term to describe how people structure information for AI systems.
before the term existed, people were already exploring, tinkering with, and thinking about context engineering. but without a shared term, it was hard for a community to really converge.
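for anyone new to the term, here's a minimal sketch of the kind of thing 'context engineering' describes: deliberately assembling what a model sees, rather than just forwarding a raw question. the function names and structure below are mine, purely illustrative, not Drew's definition:

```python
# a toy illustration of context engineering: deliberately assembling
# the text an LLM sees (instructions, retrieved material, the question)
# instead of passing the user's message through raw.
# everything here is illustrative; no particular framework is assumed.

def build_context(instructions: str, documents: list[str], question: str) -> str:
    """Assemble a structured prompt: instructions first, then source
    material, then the actual question."""
    doc_block = "\n\n".join(
        f"[document {i + 1}]\n{doc}" for i, doc in enumerate(documents)
    )
    return (
        f"{instructions}\n\n"
        f"Answer using only the material below.\n\n"
        f"{doc_block}\n\n"
        f"question: {question}"
    )

if __name__ == "__main__":
    prompt = build_context(
        instructions="you are a careful research assistant.",
        documents=["the Pirahã language is spoken by a few hundred people in the Amazon."],
        question="who speaks Pirahã?",
    )
    print(prompt)
```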
…
initially, i just wanted to write about the importance of buzzwords.
i think Drew has captured this well with his context engineering example.
but i think it's worth calling out that 'creating shared language' is a deeply complicated affair, especially given the nature of this new technology we're all grappling with.
on metaphor and analogy
Drew presented at a LatentSpace dev writers meetup last week.
in one of the slides he was essentially unpacking the different ways to 'coin a term' - how to identify a buzzword to capture a new phenomenon.

i don't like this slide.
or, more specifically - i don't like that 'metaphor' was included.
it's quite possible i'm misunderstanding, and Drew was referring to the lazy application of metaphor, i.e. grabbing the first mapping that comes to mind.
but isn't everything metaphor? even 'context engineering' is metaphorical in some sense.
All meaning is mapping-mediated, which is to say, all meaning comes from analogies.
— Douglas R. Hofstadter, I Am a Strange Loop
Hofstadter also said this:
Since I believe that metaphor and analogy are the same phenomenon, it would follow that I believe that all communication is via analogy.
i'm not even going to attempt to disentangle buzzword, metaphor and analogy.
it's a big old fucking mess.
go and ponder the statement - "time is money". is time actually money?
it's metaphor and analogy and abstraction, all the way down.
entrenched mappings and unknown unknowns
most of the words we use to explain new technologies are borrowed from somewhere else. "attention" is borrowed. "agent" is borrowed. "memory" is borrowed.
so the interesting distinction isn't metaphor vs. analogy vs. buzzword. it's whether the mapping is helpful - or a tax. often, it's both.
the first challenge is entrenchment. we forget maps are maps. "the brain is a computer" started as a useful analogy. now people treat it as literal truth. the explanation becomes the thing. Professor Mazviita Chirimuuta has a great discussion of this in her book: how abstraction leads us to mistake the model for reality.
the second challenge is specific to AI. we don't fully understand human cognition, memory, attention, or consciousness. and we don't fully understand what's happening inside a 3-trillion-parameter model. so when we borrow terms from human experience (and we will, because what else would we borrow from?), we're mapping from one unknown to another.
take "hallucination".
hallucination is the term we use for AI systems generating false information. but that word implies something abnormal, something to eliminate entirely. and now every conversation about AI accuracy is colored by that framing. the metaphor is helpful because it gives us a shared frame of reference, but it's also a tax.
but the deeper problem might be that we don't fully understand hallucination in humans either. so we're mapping from one thing we don't understand to another thing we don't understand.

a linguistics tangent
i don't have the qualifications to make strong claims here - but there's a debate in linguistics that feels very relevant to this discussion.
Chomsky vs Everett.
Chomsky says language is innate. we're born with "universal grammar" — a biological capacity for language. the structure is already there.
Everett spent years with the Pirahã people of the Amazon and described a language without recursion, numbers, or color terms. his conclusion: language is culturally determined, not biologically hardwired.
Chomsky called Everett "an irrelevant, mistaken charlatan."
if language is culturally shaped (Everett) rather than biologically hardwired (Chomsky), the development and design of shared language becomes much more weird and wonderful.
either way, creating shared language for AI isn't just complicated, it's consequential in ways we can't fully appreciate ahead of time.
maybe that's a stretch.
what do you think?