Interstellar Pt. 2: Are Emotions More Important Than Logic?

All right, welcome back to Lead Wisely by
Wonder Tour.

We are gonna talk about another tough
question about leadership by examining the

movie Interstellar by Christopher Nolan.

So what does Interstellar show us about
how humans should behave in the presence

of huge, intractable outside forces?

If you're a leader and you've got
something happening in your environment

that you don't have control over that's
gonna influence you, how do you approach

that challenge?

How do you think about it?

And so in this movie,

the challenges that we're presented with
are the biggest ones you can imagine,

right?

The intractable forces of gravity and
time, which are maybe implied to be the

same thing somewhere under the hood.

But let's go into this, Drew.

So tell us about a scene in this movie
where we're grappling with this challenge

of external forces that we maybe don't
have control over, but they're obviously

controlling us.

Okay, so in Interstellar, you get this
mirror across the movie.

Right at the middle of the mirror is what we talked about in Wonder Tour Episode 1.

So you have them coming back off of
Miller's planet and the perspective shifts

from Cooper to Murph.

So if you look back at the movie, you'll
see that there are images, there are

scenes that are mirrored on opposite sides
of the movie from that point.

And so.

I think what we want to talk about here is
like we teased gravity and time.

So the scene that we're looking at here
that's mirrored is the first interaction

early on with time and gravity where you
have Murph being like, dad, there's a

ghost and Cooper, this scientist engineer
is like, no, there's no ghosts.

Murph, there's no ghosts.

And then slowly coming to the realization,
like there's something to this.

And this weird intractable force of gravity that we thought we understood and that, you know, was consistent is acting inconsistently.

And then you get the flip side parallel
version of that at the end when you have

Murph setting the farm on fire and finally
going back into the house that she's been

unwilling to go back into basically for
the entire movie.

She says she can't bear to go back into it.

Finally, she's like, I have to go back to
that spot.

Why?

Because of gravity, the equation that she and Dr. Brand had been trying to solve, the one she had believed up until that point was solvable. Brand has that dying moment with her where he tells her, I solved that equation a long time ago.

I was lying to you to try to keep the
mission going.

I was lying to try to keep humanity going.

At which point she basically realizes we
have to do something new.

So to me,

it's how does she respond to this weird phenomenon that's happening with Murph's ghost, versus how does Cooper respond, and how does Dr. Brand respond, to those sort of...

it's almost supernatural, but what Interstellar tries to cast it as, like you said, Brian, is natural but hard to understand.

Right.

Yeah, so let's take a moment to just
sympathize with the sort of the leadership

situation that Murph finds herself in,
right?

So she spent her career becoming a
scientist and studying this huge,

intractable problem of how do we invert gravity so we can lift all the humans off

the planet and save the species, right?

And she's just been saddled with the...

The person that she trusted, the person
that trained her, the person that she's

been working with for her whole adult life
is like, oh yeah, this was all BS.

This was never gonna work.

I figured that out a long time ago.

This was just a charade to distract us
while we're gonna go solve the real

problem with the other half of the movie.

And that emotional weight of like, this is
all pointless and it can't possibly be

solved because it's just too intractable.

There's nothing that can be done about it,
right?

many of us would respond to that quite
appropriately with despair.

This stinks.

I'm going to go have a drink somewhere and
not worry about it anymore.

I'm done.

I'm giving up.

Right.

That would be, you know... and like we talked about in the first episode, they've still got the giant wave, you know, bearing down on them.

They still have human extinction, the planet falling apart, you know, this whole we're-all-doomed situation.

They still have that bearing down on her.

And so Murph being one of our archetype
leaders in this movie, right.

She doesn't respond that way.

She doesn't respond, well, rats, I guess
we're doomed.

She responds, there's still a thing in the
world that doesn't make sense to me,

right?

There's an unexplained little corner over
here that doesn't work with our

understanding of the world.

And maybe there's something there, right?

If the stuff that I do understand says
it's impossible, then I'm gonna go play

with the thing that I don't understand.

I'm gonna open myself up to the
possibility that there's some nuance in

the world that I don't get.

I'm just gonna go be open to that and
we'll see what happens.

Right?

Because she doesn't really have a better
alternative.

Like she has to be basically despairing to
get there.

But she responds to that with, okay, I'm
gonna open my mind a little farther.

I'm gonna be open to new possible
solutions and I'm gonna be, I'm gonna go

sit there and look at this and like, okay,
well, this is a gravity thing.

And she, you know, in the context of the
movie, she makes the connection like, oh,

time and gravity are connected in this
mysterious way.

And...

You know, my father's been reaching out to me to try to explain it to me the whole time, so to speak.

But that's...

In the movie, we're shown, you know,
gravity and time as these two, as the

intractable forces, but the emotional
content of like, I'm going to be open to

the solution is a thing that we see from
her that I think we can aspire to, that is

very recognizable.

Like, you know, maybe my understanding of the universe is imperfect, and we're not actually...

Yeah, I think that's really good: just because you can see a constraint, just because you maybe have experienced a constraint, don't let it take too deep-seated a hold on you, because that seems to be what happens to Dr. Brand, right?

He has solved the gravity equation and he can't reconcile it with relativity.

And so he just freezes up and moves to
Plan B.

That's his response.

He's like, nope, there's no impacting this
force.

It's too big for me to touch it.

But what we see Murph and Cooper do is
take a non-obvious path to a solution.

They start looking for the things that
don't make sense.

They keep stoking their curiosity.

They understand that they might not have control of this. Who knows how Cooper got the coordinates of NASA, right?

At that point in the movie, he has no
idea.

You know, how does Murph as a child
understand that she has a ghost?

She doesn't understand, but she does come
back to that again and say, that didn't

make sense to me.

Maybe I'll never understand it, but I'm not going to give up on it just because it didn't make sense to me.

And I think there are things that we experience in our lives that don't necessarily make sense to us. You experience something that you felt shouldn't have happened the way that it happened, and it leaves an impact on you, but maybe you don't understand it.

You experience something that felt, you
know, too tough, too much like it was just

like, why did this happen to me?

You know, the universe did this to me or
whatever.

But how do I progress from here?

And as we've talked about on Wonder Tour in the past, Murph goes back to the house,

which is generally what you have to do in
order to move past these huge moments

internally.

But the external thing that happens with
the intractable force is...

she recognizes that maybe there's some
consistency to it.

Maybe the anomaly is actually what
underlies reality in the end.

And maybe the rest of things could perhaps
be the anomaly.

And it also, we have the emotional lesson
of she has to go face the painful thing,

right?

She's had to swallow the idea that her
father's probably actually gone and wasn't

ever intended to come back, right?

And she's just got to go back to the house
and go look at his watch.

So there's that emotional lesson.

We can contrast this.

This is good; this will get us to our mountaintop moment, right?

We can contrast that with how other characters in the movie behave in the face of this possible, even likely, despair, right?

So...

Dr. Brand, the mentor character, responds to despair by, like, I'm just not gonna tell anybody about it, right?

We're gonna do our best this one way, and we're just not gonna tell everybody else that that's what's happening, right?

We're just gonna have the charade.

Murph responds to it by digging deeper
into science, which is her thing and

facing her anxiety.

And then we have Dr.

Mann.

We have the best of us, the greatest of the explorers, the leader of the program, the guy who put everything together, the greatest mind of the century, who is, of course,

played by Matt Damon. Like, you know, he's out on some far-flung planet.

So what do we see from him in the face of these intractable forces, in the face of despair?

Yeah, Dr.

Mann takes the, and this is something that
we see in this movie.

We see these contrasted versions of
similar stories.

And this is where we can talk about
technology.

This is where we can talk about how you
approach something determines the types of

solutions that can come out of it.

You know, the same type of inputs can
result in different outputs depending on

how we approach it essentially.

So Dr. Mann sees this intractable force, and he tells them that.

He's like, yeah, I knew.

I knew all along that he'd already solved the gravity equation and that Plan B was the only plan.

But eventually that mindset that Dr. Brand kind of instills in him, of just be quiet and do what we have to do. You'll be the bad guy, it's fine, but we'll keep humans going. I'm willing to take the moral plunge so that humans can keep going.

That mindset seems to almost take hold in Mann and evolve to the point where Mann has become so prideful (stop me if you've heard "man becomes prideful" before) that he can't let his own life go. He has inextricably tied himself to the future of humanity, instead of being able to separate himself from the future of humanity, which is what Cooper and Murph do.

Yeah, and so we see, so Cooper has already
had to make the one sacrifice, right?

He already had to leave his daughter
behind, he already had to leave his family

behind to be able to go do this thing,
right?

So he's already investing in the future of
humanity at some personal expense.

And then we see, yeah, the, you know, the
Dr.

Mann character calls them to his barren
planet, even though there's no way that

they could possibly, you know, colonize
it, just because he wants to talk to some

humans and get off the planet, right?

He's, you know, he's incredibly lonely and
incredibly selfish.

So that's his first moment of like, you
know, kind of his character sort of comes

to the surface and we have a, you know,
hand-to-hand combat on the barren planet

and he takes off and tries to get back to their ship so that he can save at least himself and go do the mission himself, right?

He's still paying lip service to the
bigger mission, but he's not willing to

give up his own aspirations.

And so then we get this, again, wonderfully symmetrical series of scenes about how, you know, the Dr. Mann character tries to solve his problem and then how Cooper is forced to survive the aftermath.

So talk us through this, Drew, because I think this kind of gets to the real heart of several of the elements we've been talking about.

All right, so this is winding our way up
to the mountaintop.

We like to talk about going on a wonder
tour as a journey.

We're going on a journey, we're going on a
hike.

Maybe we've been on this hike before, but
we're gonna find something new.

Maybe this is a new hike for us.

Maybe we're going on this hike with new
friends, more to come on that in the

future.

But when we go on these hikes, there's
always something to see on the hike.

And for us, it's the mountaintop.

And for me, the mountaintop in all of Interstellar is the docking procedure scene with Dr. Mann, followed by Cooper's docking procedure scene.

So if you're an old Wonder Tour fan, you'll know that we've talked about this scene before in an episode about Interstellar in our Curious Explorers series, in our audio-only format.

But we bring it up in a different light
here.

So this scene where Dr.

Mann is

essentially, in his hubris, trying to escape, trying to manually do a thing that he should not be able to do, which is dock his lander with the main spaceship.

It's a thing that we see at the beginning
of the movie, again, these paralleled

moments, we see Doyle sweating as he's
trying to perform the docking procedure

there.

So we know it's a hard thing to do,
obviously, because if you don't get it

perfectly right, if there's any...

uh, gap in between the two craft, you know, you're going to cause an explosion or whatever, right?

I'm not an astrophysicist, though.

But he goes in there, and they're trying to tell him over the comms like, Mann, don't do this. If you fail, you're going to doom humanity. And Mann, in his hubris, turns off the audio and just keeps going.

and just keeps thinking like, no, I'm the
guy.

I'm the guy who's gonna save it.

I always was the guy who was going to save
us.

You know, that's why they came to save me.

He has this himself-at-the-center-of-the-universe mindset that is always the problem for humans.

And what does he do?

He blows out the whole side of the ship
basically, and the ship goes careening off

towards the planet.

I mean, Brian, this is like a totally devastating moment.

You've made it through so much.

If you're Cooper and Amelia at this point, you've just lost Romilly down on the planet.

You're chasing Mann, who you thought had this suitable planet for you, and then he blows up part of the spaceship.

So he's totally betrayed them.

And so now they are in this debris field, and their one link to both succeeding as a mission and personally surviving is falling into the planet, and they have not a lot of options, right?

So again, we see Cooper go into like,
okay, well, there's only one possible

thing to do mode.

So we're gonna go do the thing, right?

And so they have this very dramatic
cinematic sequence where they're adjusting

the spin of their spaceship to match the
spin of the other spaceship and then

trying to dock it in the middle of all
this chaos so that they can then pull it

out in time.

But we see a couple of the elements we
talked about, right?

We've got this moment of decision.

We've got this moment of, you know, of
like, we're just going to have to go do

the thing.

And we've also got Cooper giving up personal control, right?

Like not having the idea that he's going
to be able to do it.

He's got to trust the technology.

He ends up using TARS to fly the ship.

Like you're going to have to do it for us.

I'll tell you what my goal is.

You're going to do the super fast
processing calculations to like make it

happen.

So, and then we get Drew's favorite line out of the whole sequence.

You know, what does TARS say?

What's his complaint?

What does TARS say?

TARS...

Oh, I don't have the quote directly in front of me, but either way, yeah, TARS is like, that's not possible. Like, we can't do that.

And Cooper turns around and he's like, no,
that's the only option.

We are doing that.

It may not be possible, but it's
necessary.

So we're doing it.

Right, so we've got the human-machine leveraging, right?

He's using machines for the things that they're good at.

He's using his own, like, this is the only possible path, so we're gonna try it, even though it's a thread-the-needle situation.

But we've also got, in classic cinematic
style, we're being shown that they had

a...

kind of a purity of purpose, right?

Their intention is not for just
themselves, right?

Their intention all along has been trying to maximize the big picture, trying to make humanity's survival as likely as possible.

Whereas you said Dr.

Mann was all centered around himself, like
humanity should only survive if I get to

do it.

And that's clearly not one of the
leadership lessons we wanna talk about.

So what do we...

One thing that we've taken away from so many movies, Brian, is that sacrifice is critical to growth for humans.

The thing that we sacrifice will determine
the direction that we grow.

If we're willing to sacrifice from our own, if we're willing to sacrifice things that we have to pay for emotionally or cognitively or physically, then generally the results can be positive for the world if it's done with the right approach.

But if we choose instead to sacrifice things that are easier for us, like Mann sacrificing the other team members in order to be able to escape himself, then humans become the worst version of humanity.

And we see the power struggles and the
loss of freedom and equality and all of

these things all start to ramp up and it
all happens very quickly.

So I think one thing that we can take away from this mountaintop scene here, as we see the kind of docking-scene failure and the inverted docking-scene success, is that what we sacrifice will determine the direction of the result. Essentially, we cannot be willing to sacrifice our character. That's the one thing that we say on Wonder Tour above all else: sacrificing our character basically ensures that in the long term things will be worse for us and worse for the things around us.

And sacrificing character is something that Dr. Mann has clearly made a habit of, which also, to me, seems like, and we could have a whole argument about Dr. Brand here, but the more I've watched this
film, the more I have concerns with Dr.

Brand's character, because at first you're
like, oh, he's well-intentioned.

He hid it from them because it was the
only way.

You're like, but the more you think about
it, you're like, did he really hide it

from them because it was the...

the only way or did he hide it from them
because he had a fixed mindset and he

thought that was the only way?

Did he really give everyone else a fair
shake or did he just grab for the controls

just like Dr.

Mann and say, I've got this one, you trust
me.

And how is that contrasted then, Brian, with what we see on the opposite side coming out of our mountaintop here, with Cooper, you know, taking his ship into Gargantua?

Yeah, so then in the following scenes here we see a real long-shot trial by Cooper,

just the same as Murph is sort of opening
herself up to the possibility that the

universe might have more going on than she
understood and that she's going to have to

get through her own emotional hangups and
her own kind of preconceived mindset to

solve the problem.

Cooper's got sort of the same thing, where he has a sacrifice moment and he's, you know, exploring some new element of the universe that he was, you know, open to.

But what I want to call back to is kind of our original, our original thesis here,

right?

About these intractable forces, about what
do you do in the, in the face of huge

risks or huge, you know, things that you
can't do anything about.

and about how do you balance the emotion
and the rationality of your response to

that.

Right?

Because you can argue that Dr. Brand's failure mode is that he's too rational about it.

Like, we're going to solve this problem by sending the human eggs into space, and we're going to pretend to solve the problem with the equations, but that was never going to work. Like, I'm hyper-rational, I just don't think it's going to work, so I'm not going to keep trying.

Whereas Dr.

Mann is too emotional about it when he
gets over it.

Now I personally have to solve this
problem.

I have to have ownership of it.

Characters that freeze in a situation or
freak out in a situation or who say, oh,

we're just going to stay here and farm
until we run out of food.

Like that's too far in the emotional
direction.

But this balance of I use the emotion to
motivate me.

I use the emotion to clarify why I'm doing
this.

But then I use the rationality to

navigate a path however narrow it might
be.

Right, I use the emotion to say, it's
necessary for us to survive, so we're

gonna try.

And I use the rationality of, but I can't
pilot the ship that precisely, so I'm

gonna have to have the robot do it.

Right, that's the blend that we're looking
for.

Right, and Murph's emotional intelligence
of like, you know, I desperately want

humanity to survive, I desperately want,
you know, my father to have meant

something, right?

I'm gonna use that to motivate me.

But then I'm gonna sit here and I'm gonna work out the Morse code, like I'm gonna

sit here and stare at this problem until
I've got another way that it can go.

That's a really difficult balance to walk
when you get presented with something.

I think, and this will maybe be a good
time to bring in some practical

applications, like what are situations
where you might feel like this, even past

the giant wave situation, now we're into
the existential threat situation or the

all is lost, everything I've ever believed
is bogus situation, right?

Yeah, so I think bringing in now like a
business example is helpful here because

as we look at that balancing of rationality and emotions, I think you can also look at it this way: machines are wholly rational.

Not rational in the sense of, like, cold dead truth, this is the way things are, but rational in terms of their inputs.

They can only be rational based on their
inputs, that's it.

They're just purely logical.

Even...

Even our current form of generative AI is
purely logical, essentially, right?

It is a model driving everything.

So the output can only be a redistributed form of the inputs in some way.

Um, as we then look at humans, though, humans have gut feel.

Humans have the ability, like Cooper did in that moment, to say, chase down that ship right now, otherwise we're doomed.

I don't care if you don't think there's a chance that we can catch it.

Like, I have this emotional, this underlying whatever-love-is that transcends time and space that is dragging me to complete this mission, that won't let me let this mission go, and it's pulling on my emotions in a positive way. You know, he's taking the stress in a positive way to pull him toward his objectives.

Now, this is a movie, so obviously it's like really big and fantastic and stuff, but I

think we can see this in an everyday
situation with just the way that we

interface with technology.

There's a lot of leaders that see
automation, that see data science, that

see, you know, new applications and
services and all these different things as

what we need to do, right?

That is the what.

But if that's only the what, what is the purpose of it?

Again, going back to what we talked about
in episode one, how are we going to do it?

So let's say in this example, you want to
have a better way to run your business.

So you want to have more control, you're
gonna start pulling all of this data from

all of these different systems and inputs
and maybe external data sources that tell

you things about the environment and all
these other things.

And you're going to pull them all in and
put them in some sort of a dashboard to

give you a better way to run your
business.
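
As a rough sketch of the kind of data pull being described here (the source names and numbers below are hypothetical stand-ins for illustration, not any particular product), it might look something like this in Python:

```python
# A minimal sketch of pulling metrics from several systems into one
# dashboard summary. The fetch_* functions are hypothetical stand-ins
# for whatever internal systems and external feeds a real business has.

def fetch_sales_totals():
    # Stand-in for a call to an internal sales system.
    return {"units_sold": 1250, "revenue": 98_400.0}

def fetch_support_tickets():
    # Stand-in for a call to a ticketing system.
    return {"open_tickets": 42, "avg_hours_to_close": 18.5}

def fetch_market_index():
    # Stand-in for an external data source about the wider environment.
    return {"regional_demand_index": 0.93}

def build_dashboard():
    """Pull from each source and combine into one snapshot for a human to read."""
    snapshot = {}
    for source in (fetch_sales_totals, fetch_support_tickets, fetch_market_index):
        snapshot.update(source())
    return snapshot

if __name__ == "__main__":
    print(build_dashboard())
```

Everything in that snapshot is still just input for a person to interpret, which is where the rest of this conversation goes.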

So I think Brian, you and I have both had
experiences doing things like that before.

And you can end up on the very rational
side of that spectrum where you're like, I

just want the machine to tell me what to
do and I'll just pull the trigger

basically.

Just have the machine tell me what to do,
have it process as much data as we have

access to.

And then the other side of that is like,
maybe considered to be a more old school

or gut feel mindset that's like, but I
don't trust machines.

Maybe I should just.

Maybe we should just whiteboard this up or
like maybe, you know, just, just have it

give me this information I'm already
getting, but I just want it automated now.

And really, isn't the answer somewhere in the middle, where there's still room for those emotions to influence the way that we respond to things, but we leverage machines to their best extent?

We leverage rationality, machines as almost like a rationalizer in this situation, to be able to tell us, okay, well, TARS is telling me what the rational solution is.

My brain is telling me what the emotional
solution is.

As the human who actually understands the
purpose and mission, how do I find the

line?

Right.

And this is, you know, in this movie, we
see the decision-making is inherently

residing with the humans, right?

We don't... you know, TARS is AI to the extent that he can have a conversation with you and can solve problems really fast, but he's not AI in the sense that he's the captain and everybody else is doing his bidding, right?

You know, we're still putting humans in charge in this conception of it.

And that's not, like you said, that's not
necessarily the way all businesses are

looking at it, but you can.

You can use technology in a couple of
ways, right?

You can use like, oh, I'm hearing the
voice from my team that says there's a

problem.

Let's get some data and find out if it's a
real problem.

Let's find out how big the problem is.

Let's see if we can detect, does it look
like another problem that we've had

before?

Right, so that's emotion triggered.

Then let's add a rational layer and see if
we can verify what's going on.

Or if it's just like, oh, this actually
looks like the last six times this

happened, it's fine, right?

There's the other way around, right?

Which is, there's far more data than we could ever possibly digest, so we should use the machine to sift through all that and find the outliers or find the

things that look like they're about to get
bad.

And then we'll dispatch some humans to go
look at it for real and do a much more

nuanced and fine grained and intuitive
understanding of it.

And then we'll decide what those problems
are.

Those are both valid methods.
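
To make that second pattern concrete, here's a minimal, hypothetical sketch of letting the machine sift the data and queue only the outliers for a person to look at; the threshold and the example numbers are assumptions for illustration, not a prescription:

```python
# A minimal sketch of "machine sifts, humans decide": score each reading
# against the rest of the batch and queue only the outliers for review.
from statistics import mean, stdev

def flag_outliers(readings, threshold=2.0):
    """Return readings more than `threshold` sample standard deviations from the mean."""
    if len(readings) < 2:
        return []
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [x for x in readings if abs(x - mu) / sigma > threshold]

def review_queue(readings):
    # The machine does the brute-force sifting; humans only see the anomalies.
    return [{"value": x, "status": "needs human review"} for x in flag_outliers(readings)]

if __name__ == "__main__":
    daily_order_counts = [101, 98, 104, 97, 103, 99, 250]  # the 250 is the odd one out
    print(review_queue(daily_order_counts))
```

The flagged items still land in front of a human for the nuanced, intuitive read; the machine just narrows down where to look.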

But in the presence of a complicated
world, in the presence of infinite amounts

of data, in the presence of so much stuff
happening, and it's hard to tell which

things are which, just doing it with humans running around brute-forcing it is maybe not going to be your most effective technique.

And just like do whatever the machine
learning model tells you to do is also

highly risky.

So we have one example we talked about in
the pre-show, right?

Famously, the real estate company Zillow got into this a couple of years ago.

They had incredible amounts of data about
real estate transactions, and they were

able to make forecast models of what they
thought houses were going to be worth.

So they started this thing called Zillow
Offers where they were algorithmically

identifying and making offers on houses, so that they could buy them and then flip them in six weeks.

They were like, oh, we forecast this house is worth more money than it's currently asking.

We forecast that it will be worth more money in three months, so we should jump on it and then we'll turn around some money.

And this worked really great for like two or three years until there was a global pandemic and, you know, a complete upending of the real estate market.

And hey, guess what?

The machine learning model was not trained
on something like that because they didn't

have any data that looked like that.

And they ended up with hundreds of millions of dollars in losses in a single quarter.

We're going to wind this business down and
we have 7,000 houses we're sitting on that

we have to sell in a hurry.

This was an inappropriate deployment of
machine learning, not just for

forecasting, but actually for real-time decision making at a huge scale, without sort of appropriate evaluation of the risk, without appropriate evaluation

of like, well, how much trouble could we
possibly get in if we're wrong?
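
As a hedged illustration of what that kind of risk evaluation might look like in code (the limits, ranges, and function names below are invented for the sketch, not anything Zillow actually used), you could imagine wrapping the model's forecast in a few guardrails before it is allowed to trigger a real offer:

```python
# A minimal sketch of putting guardrails around a forecasting model before it
# is allowed to drive real purchases. All limits here are illustrative
# assumptions, not a real underwriting policy.

TRAINING_PRICE_RANGE = (50_000, 900_000)  # roughly the prices the model has seen
MAX_AUTO_OFFER = 500_000                  # cap on what automation may commit to alone
MAX_EXPECTED_GAIN = 0.15                  # forecast gains above +15% look too good to trust

def should_auto_offer(list_price, forecast_price):
    """Return (decision, reason); anything suspicious goes to a human instead."""
    lo, hi = TRAINING_PRICE_RANGE
    if not lo <= list_price <= hi:
        return False, "outside the data the model was trained on; send to a human"
    if forecast_price > list_price * (1 + MAX_EXPECTED_GAIN):
        return False, "forecast gain implausibly large; send to a human"
    if list_price > MAX_AUTO_OFFER:
        return False, "exposure too large for automation alone; send to a human"
    return forecast_price > list_price, "within guardrails"

if __name__ == "__main__":
    print(should_auto_offer(list_price=350_000, forecast_price=380_000))
    print(should_auto_offer(list_price=350_000, forecast_price=600_000))
```

The point, as with TARS, is that the model does the fast calculation while a person still owns the decision whenever the situation falls outside what the model has seen.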

So what was the purpose?

The purpose there was, you know, make a bunch of money in the real estate market, but it was only valid in one set of conditions.

And as we see in this movie, right, you
know, sometimes the conditions look

terrible or sometimes the conditions are
outside of your experience and you have to

be able to navigate that.

And we aren't.

We aren't at a point yet with machine
learning where that's a thing that we know

how to do.

Yeah, I mean, overall, AI and machines in
general are constrained by what they have

seen before.

Specifically, when we're talking about
like ML AI, it is constrained by its

inputs.

If your Zillow Offers model does not have a model for what happened during COVID, it will not know how to respond to that situation.

Similarly, look at what we talked about at the beginning, how Murph responded to an intractable force.

And we can kind of weave all of this
together a little bit.

It's not gonna be a nice tight package.

Is it usually like that on Wonder Tour?

But you can see what Murph does.

And again, Murph has the ability as a
human who has this great sense of purpose

to look past the previous inputs.

The previous inputs, a.k.a., if we were to totally simplify it down, Dr. Brand's equation, would suggest we cannot reconcile what we know about gravity with relativity.

That's the case, so it's Plan B.

She didn't take that for an answer, though, because she looked at it and she was able to look past it, and she said, but there are things I don't understand that I can still learn.

I'm gonna keep learning, I'm gonna keep
understanding.

And that's where it's like, you wanna partner with machines.

Now again, you have a use for machines. I have a thing I need to learn, so let's go work with machines to learn that thing.

Let's see what they can teach me.

Let's constantly validate if it actually
applies to our purpose and if it's wise to

implement it.

And that's what we're gonna get into here
more.

It's not all gonna be just AI and machine
learning just because it's 2023 and these

are like things that we talk about
currently, but that's what we're gonna get

into more in our future episodes here.

Yeah.

All right.

So let's get started on some key takeaways here, because as we've been talking through, I think we've got a couple of recurring themes, right?

And we, you know, we started with intractable, you know, external forces, and we ended up with emotions and rationality inside our own little brains.

Um, but a couple of things that I've been
thinking about as we're talking, right?

You know, one is that, you know, in the
face of these, you know, these massive,

whatever it is, you know, the wave of
social change or the wave of the COVID

disruptions or the wave of, you know,

economic upheaval or gravity and time in
this movie's terminology, right?

You know, the...

We talked about anchoring to purpose.

We talked about anchoring to what is your
purpose and using emotion as a way to make

sure that you're anchored to that human
part of the purpose.

But then we also talked about using that
emotion as a motivator to get moving,

using that emotion as a motivator to
examine the world.

But then in the face of these intractable
forces, like look for the anomalies.

Look for the thing that doesn't make sense
about the story.

Look for the thing that you haven't yet
tried that might be an out.

Look for the one possible way this could
succeed and maybe you gotta go try that

one.

Those are things we can do.

Those are things we can, in the face of a
big change, in the face of a problem

that's happening in our external world, we
can look for, all right, well, is my

purpose still valid?

Like, how do I feel about that?

How does that motivate me to get moving?

How do I not make it about me?

And then, what are the weaknesses in the thing that everybody says, like, oh, we're all doomed because X, Y, and Z?

All right.

Where's the weak point in that story or
where is the piece of that I can use?

Okay, great.

I'm going to ride that wave right now.

I'm going to, I'm going to, you know,
great.

The spaceship is spinning.

Congratulations.

Spinning is the new plan, right?

You know, right.

You know, I'm going to take advantage of
that motion.

I'm just going to synchronize to that and
then we'll try to move forward.

Right.

So those are all things that we... those are all strategies, again, analogies that we can use in our head when we're in one of those situations: okay, how do I move forward from this crisis?

And those non-obvious solutions, because they're non-obvious, take time to develop, they take time to expose, they take diverse experiences to tease out, oftentimes.

And that's what we see here, right?

We can have multiple scientists bashing
their head against this problem for

decades and not see a solution versus, I
mean, Murph had the solution right in

front of her the entire time and she
couldn't see it.

And oftentimes I think that we do see
that.

In movies, things are too obvious, but
that actually does happen a lot of the

time in real life, right?

Like the non-obvious solution was sitting
in front of us the entire time, we just

didn't see it because we were biased by
our past experiences, just like AI gets

biased by its past data that it's looking
at.

We just can't see it because it falls outside of our mental model for how that

thing should operate currently.

So to be able to get past it, it just
requires diverse perspectives, it requires

diverse experiences, it requires many,
many iterations.

And...

in the case of Murph, to not give up, never to give up on it, just because she'd tried a million obvious and non-obvious solutions to no avail.

Well, and if you zoom out far enough, it's obvious how all these learnings stack on each other.

There are so many things that we
understand about the universe and the

world right now that we did not understand
20 years ago, or 50 years ago, or 500

years ago, or 10,000 years ago.

If you zoom out far enough, well, of
course it's possible to learn something

new.

Of course it's possible to break through
these things.

Of course it's possible to split an atom and break the sound barrier and have a machine that plays chess.

Right?

Right.

So who knows what else is possible?

You don't know what specific thing is out
there, but all of those things are just

layers and layers of new understanding in
different directions.

And so just keeping that in mind keeps you
out of that constrained static mindset.

Right.

Keeps you out of the scarcity mindset, because, like, well, you know, if people figured all these things out, then other things must still be available to learn.

Right.

We are not at the end of human knowledge.

We're never at the end of human knowledge.

Great, Brian.

Man, another Wonder Tour in the books.

It's my pleasure to lead wisely with you
here.

Look forward to learning more next week.

Yes.

So thanks everyone for joining us once
again.

This was episode 102 and our second pivot-to-video episode for this Lead Wisely initiative by Wonder Tour.

We're really looking forward to our next
couple of episodes where we'll be

examining the same core question about how
should humans leverage and interact with

technology and how should we think about
that challenge as leaders?

We're going to be exploring that through a
couple other media properties.

As we go over the next couple episodes.

So we really hope you'll join us for
those.

We hope you enjoyed this conversation.

I'm looking forward to seeing you then.

And in the meantime, just remember, as
always, character is destiny.

Creators and Guests

Brian Nutwell, Host
Brian Nutwell is an experienced product, process, and analysis leader. He loves connecting with other people and their passions, taking absolutely everything back to first principles, and waking up each day with the hope of learning something new. He is delighted to join Wonder Tour, to help discover pragmatic leadership lessons in our favorite mythic stories.

Drew Paroz, Host
Drew Paroz leads at the intersection point of people, data, and strategy. For Drew, nothing is better than breaking down problems and systems into building blocks of thought except using those blocks to synthesize fresh models. Drew is on a lifelong Wonder Tour to help take those building blocks into life change in himself and others.