March 2020 Retrospective
For the first time in this series, things very much did not go how I wanted this month. A couple of unexpected roadblocks popped up and threw my plans off. At some level this points at a fragility in my current system, which I definitely need to address (and may be the basis for my April goals). But even more fundamentally, this represents the first major failure in this monthly goal system, of which there will inevitably be more. If I try to imagine a world in which this initiative fails, the most likely reason is a failure to gracefully recover after being derailed. Thus this retrospective and next month’s followups feel particularly critical in determining the fate of this project.
Outcome of Goal 1 (On-the-Clock Exploration)
My initial plan for the month was to get started on a quick security-related project, and then spend a couple of weeks working on a machine learning project. Instead, some coworkers and I flagged a security concern at the beginning of the month, and I ended up handling the response for a couple of weeks. So I definitely hit my security-related target for the month, but since that pushed my initial project back, I didn’t have time to work on machine learning at all. In addition, I way underestimated the size of the “quick” project (curse you, planning fallacy!), so I wouldn’t have had time for both even without the unexpected task. Thus my 60% prediction on spending 40 hours on security resolves to true, and my 70% prediction on spending 40 hours on ML resolves to false. I further predicted 80% likelihood that I would spend at least 5 hours on ML this month, and that didn’t happen either.
So now I need to figure out how to learn from this failure. There are two primary ways my predictions went wrong, either of which is enough by itself to throw off my planning: an urgent and unexpected project suddenly came up, and my original project took way longer than expected. If I had initially broken down my 80% prediction that I would get to the ML project this month, I think I would have given something roughly like a 10% chance of some new urgent thing coming up and a 10% chance of my current project going way over its time estimates, with a few extra percent for something outside those (rounded together into a predicted 20% chance of failure). These did turn out to be the relevant categories, but I didn’t have the foresight to call them out ahead of time, so it’s hard to avoid hindsight bias. Regardless, I still think a notable portion of the 20% would have come from the prospect of something unexpected coming up, like a sudden priority shift due to the introduction of a new urgent project. Looking back over previous months, 10% seems like a pretty reasonable estimate of the empirical frequency of such shifts. I don’t have reason to believe that this particular shift was predictably more likely to happen this month than in all those other months, so I don’t actually think I should update that heavily off the fact that it happened now.
By contrast, I think my second failure - significantly underestimating the time required to complete my original project - is an area where I should have done much better. In retrospect, the empirical frequency of my projects taking longer than I initially planned is something close to 100%. From that perspective, the fact that I was 90% confident in my existing timeline is laughable. Clearly I am knee-deep in planning fallacy. The three key questions on my mind right now, in increasing order of importance and generalization, are:
- How could I have come to a more realistic prediction for how long the project would take, given what I already knew?
- How can I improve my general ability to make accurate time estimates?
- How can I notice I’m making such a predictable error before it bites me?
The first is the simplest to answer. There are already some known techniques for sidestepping the planning fallacy and getting more reasonable predictions. Whenever I need to estimate how long a project will take, I should have a habit of re-making my predictions using a couple of such techniques to get a sense for whether my initial time estimate is off. I can additionally take the outside view that nearly 100% of my time estimates end up being way under what the project actually takes, and so I should be much more careful and conservative. I expect that, had I actually used these techniques in my initial estimate, I would have been much more uncertain about my ability to complete the project in the given time, probably giving an estimate of at most 50% (which would still be too high, though it is way better than 90%). It’s one thing to say “I should do [x]”, and another to actually build the habit of regularly doing [x]. The details of how to do that seem better suited to a future month’s goals, but for now I will add it to my short-list of “important future goal candidates” and try to keep it in mind.
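To make the outside-view adjustment concrete, here's a rough sketch of the kind of arithmetic I have in mind: take the inside-view estimate and scale it by how badly past projects overran their estimates. The function name and all the numbers below are hypothetical placeholders, not real project data.

```python
# A rough sketch of an outside-view adjustment to a time estimate: scale the
# inside-view guess by how badly past projects overran their estimates.
# All numbers here are hypothetical placeholders, not real project data.
import statistics

def outside_view_estimate(inside_view_hours, past_overrun_ratios, percentile=0.8):
    """Adjust a raw time estimate using the distribution of past overruns
    (each ratio is actual hours / originally estimated hours)."""
    ratios = sorted(past_overrun_ratios)
    # Conservative ratio: the overrun at the requested percentile of history.
    index = min(int(percentile * len(ratios)), len(ratios) - 1)
    return {
        "typical": inside_view_hours * statistics.median(ratios),
        "conservative": inside_view_hours * ratios[index],
    }

# Example: a 40-hour inside-view estimate, when past projects overran by 1.3x-3x.
print(outside_view_estimate(40, [1.3, 1.5, 1.8, 2.2, 3.0]))
# -> {'typical': 72.0, 'conservative': 120.0}
```

The point isn't the specific multiplier, just that the adjustment should come from my own track record rather than from how I feel about the current project.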
I have some ideas for how to address the second problem. One is to explicitly describe a probability distribution over project completion timelines before starting work on a project. This would give me direct feedback on miscalibration after every project, since I could observe how well my predictions match reality. Projects tend to have fairly long timelines, so I could get faster feedback by doing a similar exercise for tactical subtasks within a project. Given that these piggyback off of actually doing time estimation, I think I should group them in with my response to problem 1 and consider them in a future monthly goal. There may be other possibilities here which aren’t immediately coming to mind, so spending some time on additional brainstorming would also be helpful.
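As a sketch of what that exercise might look like (the structure, field names, and numbers here are made up for illustration, not an existing tool): record a few percentile guesses before starting a task, record the actual time afterwards, and check how often reality lands inside the predicted range.

```python
# A sketch of the calibration-feedback idea: before starting a task, record a
# predicted range of hours; afterwards record the actual time, then check how
# often reality fell inside the predicted 10-90 range. Field names and numbers
# are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TimePrediction:
    task: str
    p10_hours: float                     # "10% chance it takes less than this"
    p50_hours: float                     # median guess
    p90_hours: float                     # "90% chance it takes less than this"
    actual_hours: Optional[float] = None

def calibration_report(predictions):
    """Fraction of finished tasks whose actual time landed inside the 10-90 range."""
    finished = [p for p in predictions if p.actual_hours is not None]
    if not finished:
        return None
    hits = sum(1 for p in finished if p.p10_hours <= p.actual_hours <= p.p90_hours)
    return hits / len(finished)

log = [
    TimePrediction("security project", p10_hours=20, p50_hours=40, p90_hours=60, actual_hours=90),
    TimePrediction("write retrospective", p10_hours=1, p50_hours=2, p90_hours=4, actual_hours=3),
]
print(calibration_report(log))  # 0.5 - only half the 10-90 intervals contained reality
```

If I were well calibrated, roughly 80% of actuals would land inside the 10-90 range; a number much lower than that would be direct evidence of the planning fallacy at work.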
The third problem is by far the most vague and difficult, extending beyond just project planning into any predictable error. Generalizing off of one datapoint is difficult, so I think my best response right now is to write down exactly what happened and why (conveniently done here) and keep an eye out for future instances.
Outcome of Goal 2 (Off-the-Clock Exploration)
This goal ran into a fun new failure, namely that COVID-19 shut down buses and offices this month. Since my productivity habits are centered around working on the bus, this threw a huge wrench into my schedule. In general I’ve always found it difficult to be productive from home, since home is the place I go to relax and have fun. Historically I’ve gotten around this by just finding other places to be productive, like buses or offices or libraries, but COVID-19 isn’t giving me that luxury. It’s home or nothing.
In addition to the reduced capacity, I also ended up spending a lot of my time doing research and trying to figure out how I should respond to the disease. This is time I endorse spending, but it left me with heavy competition for my even-more-limited-than-usual free time. I was able to keep up with the reading group for the first week; by the second week I was only just able to keep up with the reading. After that it became apparent that I just didn’t have enough time to keep up with everything on my plate, and I made the decision to deprioritize the FRAP reading group.
I’m not really sure how to update off of this. On the one hand, a failure is a failure, and I did not meet my 85% prediction that I would spend at least 3 hours per week on this goal. On the other hand, I don’t think I should be too hard on myself for failing to predict the spread of COVID-19 this month. To be fair, I knew it existed and was concerned about it in February, but I didn’t expect it to become so prevalent in my country (like a chump). I was mostly relying on information from national and international authorities (like a chump), who were generally unanimous in saying this wouldn’t be a problem. I think I should add a bit of predictive emphasis on “maybe something really crazy will happen”, but other than that I’m not too confident that 85% was the wrong number.
Outcome of Goal 3 (Adding Time)
The outcome here is pretty similar to the goals above. I was successfully able to use the time on the bus home from work in a “productive meandering” manner, but then the bus stopped happening. During the time I had a bus to work with, I think this went quite well. I was able to look into whatever felt interesting in the moment, and it didn’t feel particularly draining or aversive. I was able to remember what the time was supposed to be used for and actually use it in that manner. But once I became stuck at home, this went out the window and I completely lost this habit. It was looking quite promising, so I want to emphasize re-establishing this in my new schedule, probably in my April goals.
The extended 2.5-hour weekend block is surprisingly hard to evaluate. I had a lot of weekend commitments this month, and ended up spending most of that time either as part of a much longer commitment (such as EA Global) or in a variety of small blocks doing things like researching COVID-19. This is something I want to re-establish along with the rest of my broken habits, but I don’t think there’s that much to say about it this month.
Other Takeaways
The main talking point here is that the anchor points for all my habits vanished and now my adherence is shot. This applies to habits from previous months as well, such as before-work productivity blocks. At the beginning of the month I predicted 85% likelihood that I would continue to perform those habits in an endorsed manner. That’s actually surprisingly low given my historical adherence, which means the prediction already left some room for an unexpected failure mode, so I don’t feel like I need to update too heavily on this one.
Re-establishing everything with a new routine is top-of-mind right now, and will almost certainly be the primary emphasis of April.