What Am I Supposed to DO

January 8, 2024 | 1:25 AM ET

hi. this is a weird post. i had a coherent idea going into it, and i was very passionately writing it in the moment, but i very quickly spiraled into a sort of emotional mess as i went on. at some point i stepped away to do something else, and i came to a nice realization that changed my mind about what i was writing about. i felt bad when i was initially writing ("before" section), but i feel good now ("after" section). i don't want to proofread this (you can see the [TODO] portions where i say something incomplete or otherwise non-representative of my true ideas, but i don't really want to return to them), so enjoy my weird unfinished thoughts on a topic that gave me lots of distress. read the end ("after") to see a clearer picture of my feelings! :+)

Before

i've been having a problem over the past few weeks (well, longer than that, but it's been an especially large problem lately) and i'm not sure how to resolve it. so i'm making this post to sort of talk things through. to preface: if you're struggling with things mentally, then you probably shouldn't read through this post.

the issue is: how do you justify ... anything. so i finished reading this book on string theory (named "the elegant universe") and it was very fun! super good book for anyone looking to learn more about developments in physics (especially those occurring around the 1980s, which is when string theory was becoming more popular). though it emphasized a few talking points that have got me thinking.

two things you may learn in elementary school are "everything in the world is made up of atoms" and "the sun is eventually going to explode in a billion years". if you think deeply about these things, you'll realize they're secretly saying, from the perspective of the universe, "our problems exist on a level of abstraction that is ultimately insignificant" and "none of the things you care about now truly matters." (note the importance of the "from the perspective of the universe" part, which i'll talk about more in a second.)

you probably don't particularly care about these two statements since you learn them in elementary school, and i didn't care about them either... until i read this book. those same two statements kind of got rephrased into "each type of subatomic particle is derived from the same basic stuff called 'strings'" and "one theory of the universe is that the big bang occurred in the center of a black hole, so we can imagine the universe is simply an optimization program designed to maximize the likelihood of black holes forming because it leads to more universes (and thus more black holes) forming." for whatever reason, those statements hit me a little harder now than they did a few weeks ago.

so they got me thinking even more: i care a lot about things that really don't matter in the grand scheme of things, right? i mean, from the perspective of the universe, no problems of mine hold any significance. furthermore, the most significant people in all of human existence (e.g., jesus, julius caesar, george washington, albert einstein, etc.) AT MOST have a symbolic longevity in the "human experience" of a few thousand years. but we're noticing that the further we go back in time, the more these humans are degraded from "people" into "concepts", which--i feel--kind of undermines the individuality of these people. all of this got me thinking about what i want to do with my life; what motivates me, and whether my motivations are really significant.

although, i can feel you rolling your eyes when you read this post. "but sophie", you may say, "your arguments don't make a lot of sense for any number of reasons!" let's go through the basic counterarguments.

first: "your actions are significant because they impact other people around you." so from the perspective of the universe, there is nothing that intrinsically separates a human from any other object (in fact, the notion of "object" is itself too blurry to hold any importance). the complexity of a human is negligible compared to the complexity of a galaxy. should a human be considered more important than a star? a star consists of a simpler collection of atomic constituents, but the movement of a star on an atomic level is arguably just as complex as the movements of signals in a human brain (and furthermore, with string theory, we can no longer say "the star consists of simpler atoms" since "all atoms are fundamentally the same"). if i have a lever that either kills a human or destroys a star, there is nothing intrinsically more important about the human than the star, so why do i personally care about the human more than i do the star?

in an ethics class i took last year, i wrote an essay about a similar topic to the above, and i noted that "the one factor that is fundamentally important about humans is their emotions. we can reduce the emotional state of a human to a number (e.g., +1 for happy, -1 for sad), and thus we can internalize the 'goal' of a human to 'maximize this value' and to increase the net happiness of the world". i went on to argue how difficult it would be to convince an AI of this argument (which is something Dungeon Meshi explores a bit, if you've read it!): the AI would simply respond "in order to maximize happiness, i must maximize the chemical that leads to happiness, which requires forcibly restraining every human (initially, a large decrease in happiness, bad) and then pumping dopamine/oxytocin/etc. into everybody (a huge gain in happiness over an indefinite time period)! by forcing births of more humans, i can create MORE dopamine generators, which further maximizes this value!" this argument appears nonsensical, but how else can you convince an AI to make humans as happy as possible? it's difficult to communicate ideas like this on a fundamental level because humans operate on such weird zones of abstraction where this idea seems "bad", for some unexplainable reason. (in this case, you could have the AI maximize both short-term and long-term happiness rather than just one or the other, but it would still regardless need a more concrete way to interpret happiness rather than just the existence of chemicals.)
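(if a tiny toy example helps: here's a sketch of what i mean by the AI picking the degenerate option when its objective is just "maximize the chemical". the policy names and numbers below are completely made up by me for illustration -- they're not from the essay or from any real system.)

    # toy "happiness maximizer": happiness is (naively) modeled as chemical level.
    # all policy names and numbers are invented purely for illustration.
    POLICIES = {
        # (short-term happiness change, eventual chemical level per person)
        "be kind and improve living conditions": (+5, 10),
        "restrain everyone and pump dopamine":   (-50, 100),
    }

    def naive_score(policy):
        """score = eventual chemical level only (ignores how we got there)."""
        _short_term, chemical_level = POLICIES[policy]
        return chemical_level

    best = max(POLICIES, key=naive_score)
    print(best)  # -> "restrain everyone and pump dopamine"
    # because the objective was literally "maximize the chemical", the degenerate
    # policy wins, even though any human would say it misses the point of "happiness".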

second: "your actions may not be significant in the super-duper-long-term, but aspiring that is meaningless. you should instead aspire to be a good person in the short-term (e.g., your individual lifespan)." this is a nice idea, and its also incredibly manageable! everyone on earth can aim to be a good person, and work towards making themselves loved and cared about by others and mourned when they die. the problem of the "i wanna be like einstein" idea that i said above is that if everyone on earth has this same idea, then it will quickly lose its meaning. we can't view everybody as equally significant! i probably cant fit the identity and accomplishments of more than a few hundred people in my head, let alone a few MILLION or BILLION people! but this collides with another idea i mentioned earlier, how significant people are "summarized" in this fashion and reduced to "symbols" (e.g., how einstein is reduced to a symbol representing a "scientist", rather than that of an actualized, living person). so this counterargument--that you should try to be significant only within your lifetime, only to a few small handfulls of people--is nice.

but this runs into a similar issue: why does it matter? [TODO]

another problem is one that lots of people joke about: how would the world be different if 9/11 didn't happen? 9/11 led to the deaths of several thousand people (i mean, not to mention the undoubtedly millions of others that were killed or otherwise personally affected by the aftermath), but america would have been at least slightly different had it not occurred. ignoring the many, many cultural, political, and sociological effects and instead focusing solely on airplane security as a tiny example, we can see direct changes to security practices that were implemented as a consequence of the event occurring. although it's known that lots of these practices are instances of "security theatre" (i.e., they are not meant to stop violence, but merely to be a deterrent and only appear as if they would work in stopping threats), it's not difficult to argue that at least some airplane-related attacks have been prevented as a result. surely genuine terrorists could find a way to slip through the cracks, but realistically at least one threat was prevented due to these measures. as such, we can argue that there is at least some possibility that the world has improved as a result of 9/11 taking place. if it helps, you can imagine some butterfly-effect sequence of events not taking place due to some potential terrorist being stopped at the pre-check gates.

this exercise is meant to show that "if something bad can lead to good things happening (e.g., a small dip in 'global happiness' can lead to a large increase at a later time), then the inverse must be the case as well." this has been talked about many times by any number of politicians on every hot-button political topic, like "we can't give a universal basic income because it will prevent people from working", or "we can't implement harm reduction approaches like syringe access programs because they will cause more people to start doing harmful activities when they wouldn't have otherwise". (note that lots of these are a matter of personal opinion, which is a different discussion.) these conversations can boil down to one topic: "a person can work very very hard to improve the world, and they may be able to improve one small part of it, but there's always the possibility that they inadvertently harm the world much more than they help it."

now this is a bleak way of viewing the world! "we shouldn't try to be good because we could accidentally do bad!" you may simply respond to this argument with "why drive a car if you could get into a crash at any moment?" of course, the definitive answer to this lies in calculating the risk exposure, i.e., weighing the benefit of an activity against the probability and severity of something going wrong, which quantitatively shows that driving a car gives far more benefit than the likelihood of a crash takes away. regardless, the problem we're running into is not "how do we maximize benefit"; it's why we want to maximize "benefit" in the first place, or even more abstractly, what even IS benefit?
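(for concreteness, here's a tiny made-up version of that risk calculation -- every number below is invented by me just to show the shape of the math, not actual crash statistics.)

    # toy expected-value calculation for "should i drive?"
    # all numbers are assumptions made up for illustration.
    p_crash = 0.0001          # assumed chance of a crash on a given trip
    benefit_of_trip = 10.0    # assumed "happiness units" gained from the trip
    cost_of_crash = 10000.0   # assumed "happiness units" lost in a crash

    expected_value = benefit_of_trip - p_crash * cost_of_crash
    print(expected_value)     # 10.0 - 1.0 = 9.0, so driving still comes out ahead
    # the point: the arithmetic is easy once you pick numbers; the hard part is
    # deciding what "benefit" even means, which is the actual question i'm stuck on.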

these are the problems i am encountering. i think these arguments so far have been way too abstract for any normal person to follow, so i'm going to relate them back to the real, concrete issues i have been thinking about and the thought processes i have been working through. if you had simple answers to the statements above, then please consider the following (and give me solutions, if you have any!).

i've been wondering "what do i want to do with my time?" in last week's post i mentioned my desire to be a super good scientist; i wanted to change the world for the better, to discover facts about the universe that describe how it operates, to discover new paradigms for problem-solving that benefit everybody in the world, but this takes a lot of work to do. so there's two problems with this perspective: the first is that i could dedicate myself to this goal, working endless nights on reading papers, researching textbooks and attending lectures and seminars of every class i can find, and talking to every professor, politician, businessperson, and scientist i possibly can in order to accelerate my status in society to better position myself to reach this goal, and this would almost certainly lead to me becomming among the world's most successful scientists. it would take a level of dedication, motivation, willpower, and effort unbeknownst to any person on the planet, but i could physically perform each of these actions. the problem i would encounter is how miserable i would be if i definitely decided to follow through with this. this leads to the second problem: even if i did accomplish these goals, why would it even matter? i would have no friends, i would be a completely different person than i am right now, i would never socialize outside of work or do any fun activities, and i would be under so much stress that it would make every waking moment a nightmare. and despite all of this effort, it could mean that the world becomes a worse place than if i had never lived in it at all, despite my absolute best efforts in doing otherwise.

but my issue with the above is "if not that, then what?" what do i do with my time? it's difficult not to see most actions in life as big time-wasters. social media is a huge example of this: i have been able to justify social media usage as "interacting with some internet friends", but my usage as of late has been a big detriment to my mental health. i've been scrolling tumblr and twitter for hours, knowingly throwing away my time in order to avoid doing "productive" tasks like reading papers and textbooks, but it's been very difficult to tell myself to stop. my desire to see drawings of anime girls kissing or my need to bark like a dog online are both intrinsic aspects of my personality, and i'm scared i will be a different person if i give up these parts of me completely. from the context of the "scientist" viewpoint i shared previously, the most beneficial task i can do is to dedicate myself to reading papers and not indulging in social media use at all, but would i be the same person then? (why do i care about being the same person at all?)

the counterargument to this is probably "you should do what makes you happy", but this neglects the brain's ability to change as a result of my dedicated actions. i have complete control over what makes me happy. [TODO]

After

ok, so bad news guys. after i wrote this far i went outside to throw some knives, and then i ate some yummy salmon and a donut with some coffee and watched a movie with my parents. and while i was doing that i was thinking "what is my justification for doing this? isn't this irrational? why am i not working right now? isn't that also irrational? how can i justify anything i am doing?" and the explanation i came up with was "i'm doing this because it makes me happy."

this is echoed in that last paragraph, and really in the last section kind of as a whole, where i was circling back to this idea. there are ways you can worm your way out of this interpretation, but the core truth that allows any of life to be worth living is that it ultimately makes you feel some abstract feeling that you enjoy. the book i am a strange loop talks about the brain's existence on the level of symbols, and how causation and explanation can exist on a symbol level despite the fact that symbols are built out of smaller substrates like neurons and synapses, and i think that's a nice explanation of the justification for doing anything. "happiness" may be ultimately meaningless on the concrete level of atoms and strings, but it remains meaningful on the symbolic level of emotions.

while i was doing all those things today, the thought echoed throughout my brain "i'm doing this because it makes me happy", over and over again. earlier today i was on my phone looking at social media, and i was doing it because it made me happy. if, in the moment, i could have looked to the future, i may have realized that prolonged usage of my phone would have made me unhappy, but completely removing my phone would also have had the opposite of the effect i was looking for. my brain unfortunately exists on the irrational level of abstract symbols, and i am a slave to their operation in a space devoid of the rigorous logic i may want to live in, but i must continue to live among them regardless! all actions i take can gain meaning when i internalize the phrase "i'm doing this because it makes me happy"; this expression does not need any further explanation or deconstruction into more basic ideas. if i decide i want to live life as an academic monk, abstaining from social interaction to further the pursuit of knowledge, then i can remember that i'm doing it because it makes me happy. i think that's a nice way of viewing things.

unfortunately for the rest of the world, i am the only one that i am ultimately certain about. it is impossible for me to know if my actions will truly benefit the rest of the world, but i can focus on the things i can control instead of the things i can't. i want to have a partner because i want to hold someone else's hand, and i want to have lots of friends so they can think i'm smart and funny, and i want to work hard in my job so i get lots of recognition and awards, and i'm doing all of this despite the utter meaninglessness of it all because it makes me happy. thanks for reading; i hope you do things that make you happy as well.

- Sophie