🍣 Leadership, AI training humans, fame by doing vs saying, bonkers benefits, and ++
It's all happening. I'm a bit nervous. But it's ok.
Yo,
This is the first paid edition of Salmon Theory and i'm a bit nervous. Partly because i never asked for money around this thing, and partly because once i do, i gotta commit, because people are more invested in it. But that's ok. For five years i've consistently dedicated 5+ hours a week to reading, organising or writing stuff for it, so it's unlikely my brain will just decide to go brrrr. One can dream, at the very least.
So what does this thing include? As i still try to work out the format, here's what i got to:
Why leaders don't care, what to do about it, and how training AI is training ourselves
The importance of doing fame by doing stuff, vs just saying stuff that feels fame-y
Obsessing over value beyond price, and embracing big bad bonkers benefits
Let's get to it. You can read a fair bit of it, but at some point it truncates, so y'know…
🥱 Why leaders don't care
Some three, maybe four, years ago, Bruce "gentleman thief of thought" McTague introduced me to the work of Zach Mercurio, and i was immediately hooked on his thing. So what is his thing? Well, leadership, purpose, meaning, all that good stuff, but without the Silicon Valley energy of "FOLLOW YER PURPOSE OR PERISH". More like a thoughtful conversation about doing things that mean something to you, and helping others do it too.
Anyway, Zach has a recent piece on why leaders don't care. And i loved it, because:
How can you ignore a headline like that, and
He actually offers productive and practical advice on making it easier to care
You'd assume the main reason leaders don't care is personal. But the bigger point is that a lot of it is environmental: an environment that doesn't give you space to care will probably, over time, make you simply care less. No matter who you are. Culture has a funky way of creeping up on you like that.
This matters because understanding people's environments is as important as understanding the individuals themselves. From experience, this will take you a long way in building better relationships with clients, creatives, colleagues and (dare i even say it?) consumers. Yes, sure, people not consumers, but it sounds nicer with four Cs.
The other part of this tricky equation between environment and caring: as we train more AI models to do human things, i wonder if AI models are training us to value humans less. Sounds intriguing, huh? Basically, it's because i saw this other piece on how Levi's will begin testing AI-generated models, to which i go:
Yay! Neat! Innovation!, but also
Er, what about the real human models who kinda need a job?, and also
Does this mean we will devalue other sorts of humans sooner rather than later?, but also
AI doesn't age, so does this mean we'll perpetuate our idolatry of youth?
Maybe. Maybe not. Maybe Levi's is thinking about all this. But it's worth reflecting on nonetheless, because it's simple plays like this that eventually lead to wider societal (and hey, environmental, see the link i did with the first piece?) consequences.
A bit like how some random uni-focused face-comparison hack by some dude called Marty Zuckerborg, or whatever his face was, led to shitty election practices and endemic misinformation, little AI plays might lead to [insert worst case scenarios].
On top of this, i saw this other piece (well, it's more of a link to a GitHub repository, but it has text in it so it's now officially "a piece") about how someone's trying to model social behaviour through a machine learning model in the hopes of, uh, creating artificial social interactions or something? I'm not a technical brain, but i think that's the gist (Git?) of it. Point is: i'm conflicted about all this.
Yes, all the above on devaluing humans feels true and important to reflect on and discuss. But i am also increasingly aware that AI is another way of saying "other forms of intelligence", and this last link certainly suggests that the types of social relationships we feed it might actually help? Let's say you're autistic and struggle to intuitively read contextual cues; could a model help train and support you as you go about it? Is this useful? Is it even ok? Or does it lead to us trying to converge the neurodivergent a bit too much into a uniform way of thinking about social relationships? What are the questions we haven't even begun to ponder?
I don't know. But holy hell, does this feel like an exciting time to think about it. What do you think? Will AI make us value each other less? The other way around? Is it going to be a bit of both, and no one really knows anything? You can reply to this email or in the comments on the piece itself on Substack; let's start a ~conversation~ about it. Unless you're an AI guy with growth hacks about how to 10x your content with ChatGPT and Midjourney, in which case please just. Don't. You're not welcome.
🤩 Doing fame by doing stuff
Alright friends, let's switch gears and get going with some gaming stuff, shall we? It's always incredibly amusing when people go "we should do something in gaming", and often we go "but why, just to make sure we're super rigorous and strategic about it", and often the reply is "gaming is yuuuuuuuuuuuuuuuuuuge, let's get famous", and we go "yes, but like, is that your customer profile, does your product have a genuine role, do you even have any cultural clout in the space or stuff worth talking about?", to which the answer is often "yeah, but let's like do a reactive tweet to the Diablo 4 beta codes", to which we go "oh, do you have budget to do stuff beyond that", and the answer is "well, if the ideas are good enough we can make it happen", and we go "lol".
It's… fun. It gets weird.
Then, thereās the proper way of doing it.