🍣 Leadership, AI training humans, fame by doing vs saying, bonkers benefits, and ++
It's all happening. I'm a bit nervous. But it's ok.
This is the first paid edition of Salmon Theory, and i'm a bit nervous. Partly because i never asked for money around this thing, and partly because once i do, i gotta commit, because people are more invested in it. But that's ok. For five years i've consistently dedicated 5+ hours a week to reading, organising or writing stuff for it, so it's unlikely my brain will just decide to go brrrr. One can dream, at the very least.
So what does this thing include? As i'm still working out the format, here's what i got:
Why leaders don’t care, what to do about it, how training AI is training ourselves
The importance of doing fame by doing stuff, vs just saying stuff that feels fame-y
Obsessing over value beyond price, and embracing big bad bonkers benefits
Let’s get to it. You can read a fair bit of it, but at some point it truncates, so y’know…
🥱 Why leaders don’t care
Some three, maybe four, years ago, Bruce “gentleman thief of thought” McTague introduced me to the work of Zach Mercurio, and i was immediately hooked on his thing. So what is his thing? Well, leadership, purpose, meaning, all that good stuff, but without the Silicon Valley energy of “FOLLOW YER PURPOSE OR PERISH”. More like a thoughtful conversation about doing things that mean something to you, and helping others do the same.
Anyway, Zach has a recent piece on why leaders don’t care. And i loved it, because:
How can you ignore a headline like that, and
He actually offers productive and practical advice on making it easier to care
You’d assume one of the main reasons leaders don’t care is personal. But the bigger point is that a lot of it is environmental, because an environment that doesn’t give you space to care will probably make you, over time, simply care less. No matter who you are. Culture has a funky way of creeping up on you like that.
This matters because it means understanding people’s environments is as important as understanding the individuals themselves. From experience, this will take you a long way in building better relationships with clients, creatives, colleagues and – dare i even say it? – consumers. Yes, sure, people not consumers, but it sounds nicer with four Cs.
The other part of this tricky equation between environment and caring: as we train more AI models to do human things, i wonder if AI models are training us to value humans less. Sounds intriguing, huh? Basically, it’s because i saw this other piece on how Levi’s will begin testing AI-generated models, and my reaction is roughly:
Yay! Neat! Innovation!, but also
Er, what about the real human models who kinda need a job?, and also
Does this mean we will devalue other sorts of humans sooner rather than later?, but also
AI doesn’t age, so does this mean we will perpetuate our idolatry of youth?
Maybe. Maybe not. Maybe Levi’s is thinking about all this. But it’s worth reflecting on nonetheless, because it’s simple plays like this that eventually lead to wider societal – and hey, environmental, see the link back to the first piece? – consequences.
A bit like how some random uni-focused social face comparison hack by some dude called Marty Zuckerborg, or whatever his face was, led to shitty election practices and endemic misinformation, little AI plays might lead to [insert worst case scenarios].
On top of this, i saw this other piece – well, it’s more of a link to a GitHub repository, but it has text in it, so it’s now officially ‘a piece’ – about how someone’s trying to model social behaviour through machine learning in the hopes of, uh, creating artificial social interactions or something? I’m not a technical brain, but i think that’s the gist (Git?) of it. Point is: i’m conflicted about all this.
Yes, all the above on devaluing humans feels true and important to reflect on and discuss. But i’m also increasingly aware that AI is another way of saying ‘other forms of intelligence’, and this last link certainly suggests that the types of social relationships we feed it might actually help. Say you’re autistic and struggle to intuitively read contextual cues: could a model help train and support you as you navigate them? Is this useful? Is it even ok? Or does it lead to us trying to converge the neurodivergent a bit too much into a uniform way of thinking about social relationships? What are the questions we haven’t even begun to ponder?
I don’t know. But holy hell, does this feel like an exciting time to think about it. What do you think? Will AI make us value each other less? The other way around? Is it going to be a bit of both and no one really knows anything? You can reply to this email or comment on the piece itself on Substack; let’s start a ~conversation~ about it. Unless you’re an AI guy with growth hacks about how to 10x your content with ChatGPT and Midjourney, in which case please just. Don’t. You’re not welcome.
🤩 Doing fame by doing stuff
Alright friends, let’s switch gears and get going with some gaming stuff, shall we? It’s always incredibly amusing when people go “we should do something in gaming”, and often we go “but why, just to make sure we’re super rigorous and strategic about it”, and often the reply is “gaming is yuuuuuuuuuuuuuuuuuuge let’s get famous”, and we go “yes but like is that your customer profile, does your product have a genuine role, do you even have any cultural clout in the space or stuff worth talking about?”, to which the answer is often “yeah but let’s like do a reactive tweet to the Diablo 4 beta codes”, to which we go “oh do you have budget to do stuff beyond that” and the answer is, “well if the ideas are good enough we can make it happen”, and we go “lol”.
It’s… fun. It gets weird.
Then, there’s the proper way of doing it.