🍣 Your intelligence is one form of intelligence
Read to the end for brand chocolate cake drama (yes, really).
Welcome to the second paid edition of Salmon Theory, where basically i grab a conceptual playground and chuck in toys like my all-consuming internet reading habits, a Notion database of links, some pithy notes that kinda capture what i’m going on about at any given time, and try and turn it into some sort of ~narrative~.
This week we got to slicin’ and dicin’ on things like:
Why there isn’t a single form of intelligence, and this is a good thing
Why being a parent is making me rethink reply guys, consent and emojis
Why i am ever so slightly geeking out about Looney Tunes and Aldi cakes
You can read about half(ish) of what follows, but after that it truncates, so you know…
🧠 Artificial intelligence(s)
I’m not trying to flex some weird hipster ‘i was into it before it was cool’ energy, but the reality is at some point in 2017 i read what is still probably one of my favourite foundational pieces on how to think about AI and intelligence. It’s from Kevin Kelly (of course), and although it is now paywalled on Wired, it argues that thinking of AI as ‘general intelligence’ is naive because frankly there is no general intelligence even in humans, just general types of intelligence.
Well, basically, my read on it was that saying AI will reach a point of general intelligence (which is to say, human-like intelligence) creates a dangerous precedent of thinking our types of intelligence are the superior ones versus, say, a dolphin’s. And that feels pretty much like human hubris to me, and not a very respectful way of thinking about the brain. Are we more intelligent than a dog? On some things like learning how to do maths, sure (and even there, i wouldn’t test myself against a dog). On other things like smelling that someone’s stranded in the snow from five miles away, maybe the dogs have the superior types of intelligence.
Though then you might go: is that intelligence or just a biological trait? I dunno, but the point is that we can’t just say there’s a threshold after which something has ‘general intelligence’; the spectrum feels far wider and more varied than might be comfortable. Which is perhaps why we try to create rules. But maybe we shouldn’t.
Then, at some point in the last two weeks, i read this other piece, which offers a different way of seeing things:
“When we get cheaper energy, we simply demand more and better energy-consuming products. When we get more lanes, we simply demand more driving.
I suspect the same will be true for intelligence.
Will AI take lawyers’ jobs, or will we simply sue each other more?
Will it take designers’ jobs, or will we demand better design?
Will it take doctors’ jobs, or will we all demand our own concierge doctors?
Will it take engineers’ jobs, or will we demand more personalized and higher-quality software?
Will it replace startups, or will we demand more diverse and better instantiations of founders’ visions, faster?
Will it take writers’ jobs, or will readers just demand higher-quality, more well-researched, more original work?”
The point is different – more intelligent output will raise the bar for intelligence – but for me it speaks to the same point: there isn’t a single endpoint where what we call intelligence ends (because our expectations continually improve). And perhaps it’s also quite messy to determine where it begins (as with dogs and dolphins, which just have diverse, perhaps divergent, types of intelligence compared to human brainz; and even there, of course, we haven’t even begun talking about neurodivergent brains).
So, there are different forms of intelligence, and as those forms of intelligence increase in variety and output, so do our expectations for what good and interesting and useful mean. So what does that mean in practice?
Well, there is the human layer to all this, which for me suggests we need to take a far more humble approach to what we think is intelligent, because it also varies wildly across worldviews. As per this piece on Collab Fund (a fab resource for all things ‘how we want to think vs how we should think vs how we really think’), there are many stories suggesting many forms of intelligence and decision making, therefore we need to stop assuming our way of processing intelligence is the way of processing intelligence. Whether that’s compared to a non-human animal, or compared to other humans who may be neurodivergent from us (and trust you me, there are far more neurodivergent folks out there than any of us care to admit, and this is ~a very good thing for us all~).
Then there is this other part of it, which is not how intelligence works in the human sense, but rather how it works at an organisational level. And of course at that level, intelligence is partly how we make decisions, but also partly how we organise information for those decisions in the first place. Enter the new corporate analogy du jour which i kinda like all things considered: a business as a searchable database.
Or, look, let me just quote the author on it, as he’s more eloquent than i:
“There's an interesting conceptual model here, one that I think will come to define organization structure over the next five years. The modern enterprise will be defined as a data set and system of business rules with AI at the center as orchestrator. AI will function as a dedicated and infinitely knowledgeable employee positioned alongside human collaborators inside of internal communication infrastructure like Slack, Discord or Teams. The best and most nimble companies will make generative tools part of every dimension of the organization, sometimes as human replacement but more often augmenting human creation and judgment in a way that makes people significantly more informed and productive.”
1. Really nice and useful
2. Really hard and possibly idealistic
3. All of the above, so let’s try and build it, shall we?
Yeah, i’m gonna go with 3. here. The world feels far richer, intellectually, that way. And look, will it feel clunky sometimes, and might the promise of fully automated systems never come to fruition? Perhaps. But maybe that’s another way of thinking about it: it’s not meant to be fully automated but rather fully assisted by AI in some capacity, even if the very human strings that tie it all together show up every now and then. I dunno, maybe we should embrace the fact that imperfections are beauty too? And that aspiring to improve things as we go is more realistic than static end states where finally everything can be perfectly in balance and we can go plant tomatoes?
😕 Content, consent and confidence
Look, parenting is hard. And beautiful. And tiresome. And all of these things, but as the tech brahs like to say, 10x’d. I love it. It gives my life extra meaning to see how my daughter observes everything i do, and uses that as part of a template for what she might want to do too. Which… whoa, right? Modelling behaviour is intense.
Anyway, part of that modelling point goes beyond what we do to what else we observe in the wider culture that may affect what our kids decide is ok, or not ok. First off, there’s this interesting newsletter article about what happens when kids discover their parents’ old social media, and i quote:
“Gen Z have more awareness, if not understanding, of their parents’ earlier lives than any generation prior, and it’s all thanks to social media. This will only be more true for Gen Alpha: Vlogs, Twitter rants, Instagram posts, and more will be available for hours and hours of perusing, and easy to come back to at any given moment, prompting some awkward conversations. Oh, I can’t start drinking until I’m 21? The 80 blurry Instagram pictures of you in college say otherwise.”
It’s quite wild to think about this stuff, right? But on the flip side, it does make me wonder about something which has bothered me for a while: how we come across online, especially when we show less, uh, flattering sides of our personalities. To put it bluntly: when your public record shows you were, by and large, an argumentative prick. Which look, i’m not pointing fingers or anything, but you know who they are. And of course, that may not be all of who they are, but perception is reality, yadayadayada. And i’m not sure ‘i argued a lot on the internet in my 20s and 30s’ is a thing that deserves to be on our tombstone. Just saying, thinking in decades is healthy.
The other side of this is not how kids see their parents’ social media, but rather how kids see themselves on social media before they even had enough awareness to consent to it. Which look, i’m not here to judge other parents and it’s not fair, but our household’s policy is no public images of our daughter’s face until she decides she’s ok with that, and even there, well, of course it will depend on what the photo is about.
Because alternatively, you can think of what the glossy magazine photoshoot vibes did to a whole generation of young women (and men), but amped up massively, because now there’s not just the pressure of how other people look, there’s the pressure of how you used to look. Which also creates this weird sense that everything you ever did was somehow captured as a performance by your own parents, and i can only imagine the damage that does to your sense of self-worth and need for external acceptance.
It’s not, however, all doom and gloom.