Day 2 at Recurse Center

Stuff I thought

I’ve got to find the balance between coding and planning, and also figure out how to learn from other people as well as help them learn.

I think, at the moment, I don’t have enough concrete problems to need anyone’s help with pairing. I need to focus on a project and get it to the point where I do meet challenges that call for someone’s help, or just another pair of eyes to help me work through it.

Talked to Yuri, who’s built this generative art app called Silk. Found out during dinner that the app actually has a widespread following — both school kids and drug users have emailed him about the app. It’s really pretty awesome.

Other than that, I didn’t really interact with anyone during the day. I realized that I feel somewhat intimidated — I see other people going around pair programming like four times a day, doing seemingly productive things.

But it’s okay to not be in the thick of things — it’s been 2 days! These people have been here for 6 weeks or more. I’m still figuring out what I want to work on for the rest of the week — no need to worry about not pair programming when I don’t have any challenges to pair for yet. These things will come with time. Like I’ve said to myself already — just gotta get in the groove of things.

Stuff I did

Random Errands: I sort of went all over the place today. Spent some time reconciling my budget in YouNeedABudget. Ran off to a doctor's appointment that didn't pan out, since I couldn't figure out what my vision insurance was.

Pair Programming: Attended a pair programming workshop in the morning, where Alicia and Robert finagled us into (a) coming up with solutions to some programming problems, and then (b) ref*cktoring the code (or, in better-sounding terms, obfuscating it).

I paired with the wonderful Marianne Corvellec on the Hamming Distance problem in Python. Honestly, I’m not very good at pair programming: when I’m the driver, I don’t have a good sense of how to let the navigator direct our progress, and when I’m the navigator, I’m too eager to jump in and fix some small problem when I feel it would be simpler to just do it myself. Note to self: don’t touch the damn keyboard when you’re not the driver. Don’t even look at it.

We played around with list comprehensions and lambda functions and byte conversions, and even presented our obfuscated code to the rest of the workshop. Very good experience. Left me pretty tired, though 😂.
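For reference, the straightforward (pre-obfuscation) shape of the Hamming distance solution is something like this (not our exact workshop code, just the idea we started from):

```python
def hamming_distance(a: str, b: str) -> int:
    # Hamming distance: the number of positions at which two
    # equal-length strings differ.
    if len(a) != len(b):
        raise ValueError("strings must be the same length")
    return sum(1 for x, y in zip(a, b) if x != y)
```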

Some LeetCode problems: Bryan Chu also posted two LeetCode problems (383: Ransom Note and 43: Multiply Strings), so I spent 30 minutes hacking out solutions to both.

Ransom Note is very simple — just use a hash table and you can do it in linear time and space.
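Roughly what I mean (a sketch, not my exact submission, assuming the usual problem statement where each magazine letter can be used at most once):

```python
from collections import Counter

def can_construct(ransom_note: str, magazine: str) -> bool:
    # Count the letters available in the magazine (the hash table),
    # then check the note's letter counts against it. O(n) time and space.
    available = Counter(magazine)
    needed = Counter(ransom_note)
    return all(available[ch] >= n for ch, n in needed.items())
```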

Multiply Strings is basically implementing a BigInt multiplication function. The simplest way to do it in a reasonable amount of time is to multiply the way you would in 3rd grade: multiply one number by each digit of the other, pad out the powers of ten, then sum up all the products.
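Something along these lines (a sketch of the approach rather than my exact code; digit lists are stored least-significant-digit-first, and the helper names are mine):

```python
def multiply(num1: str, num2: str) -> str:
    # Grade-school multiplication without relying on Python's big ints:
    # multiply num1 by each digit of num2, pad for the power of ten,
    # then sum all the partial products digit by digit.
    if num1 == "0" or num2 == "0":
        return "0"

    def times_digit(num: str, d: int, shift: int) -> list[int]:
        # One row of the grade-school method, least-significant digit first,
        # with `shift` zeros prepended for the place value.
        digits = [0] * shift
        carry = 0
        for c in reversed(num):
            prod = int(c) * d + carry
            digits.append(prod % 10)
            carry = prod // 10
        if carry:
            digits.append(carry)
        return digits

    def add(a: list[int], b: list[int]) -> list[int]:
        # Digit-by-digit addition of two least-significant-first digit lists.
        out, carry = [], 0
        for i in range(max(len(a), len(b))):
            s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
            out.append(s % 10)
            carry = s // 10
        if carry:
            out.append(carry)
        return out

    # Store every partial product, then sum them at the end.
    partials = [times_digit(num1, int(d), shift)
                for shift, d in enumerate(reversed(num2))]
    total = [0]
    for p in partials:
        total = add(total, p)
    while len(total) > 1 and total[-1] == 0:
        total.pop()
    return "".join(map(str, reversed(total)))
```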

I didn’t have time to optimize it because I had to leave for a doctor’s appointment, but on the way back I read a blog post by Matthew Crumley about multiplying BigInts. He showed an optimization over my solution: you can forgo storing the intermediate products (to sum later) and instead accumulate them into the result directly.
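As I understand the optimization, the accumulate-directly version looks roughly like this (again a sketch, using the standard digit-position bookkeeping):

```python
def multiply_optimized(num1: str, num2: str) -> str:
    # Same grade-school idea, but every digit product is added straight
    # into one result array instead of building partial products to sum later.
    if num1 == "0" or num2 == "0":
        return "0"
    n, m = len(num1), len(num2)
    result = [0] * (n + m)  # an n-digit times an m-digit number fits in n+m digits
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            # Digits i and j contribute to positions i+j and i+j+1
            # (index 0 is the most significant digit).
            total = int(num1[i]) * int(num2[j]) + result[i + j + 1]
            result[i + j + 1] = total % 10
            result[i + j] += total // 10
    return "".join(map(str, result)).lstrip("0")
```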

Deep Learning: For my subreddit sentiment classifier, which I haven’t yet described but will once I get to a workable stage, I thought I’d finally bite the bullet and go through a deep learning course.

I certainly won’t pay $200 a month to take Udacity’s, as much as I love Udacity and what they’ve done for me. Fortunately, Andrew Ng has released deeplearning.ai, which right now is not much more than a fancy pointer to the actual Coursera courses.

Anyways, apparently textual analysis is the 5th (and therefore last) course of the 5-course specialization, so it looks like I’ve got quite a bit of work ahead of me. But the lectures are concise and incredibly clear, and I’ve actually been flying through them. You can see some of my notes here.

So far, I’ve learned some of the use cases for deep learning and why it’s seen such a renaissance in the past several years. Massive amounts of data and increased computational power have allowed neural networks to perform tasks where older machine learning methods would have plateaued in performance.

We learned how to define a logistic regression function, and how to then define a loss function so we can create the cost function for a set of training data. And, as of a few hours ago, I learned (for what seems like the 5th time in my 3 years of programming) what gradient descent is and how we use the cost function to do it.
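In code, my rough mental model of that part of the course looks like this (a NumPy sketch, assuming the course’s convention of stacking the m training examples as columns of X, with w of shape (n, 1)):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def predict(w, b, X):
    # Logistic regression: a linear function of the inputs squashed into (0, 1).
    # X is (n_features, m_examples), w is (n_features, 1), b is a scalar.
    return sigmoid(w.T @ X + b)

def cost(w, b, X, y):
    # Cross-entropy loss for each example, averaged over the training set.
    y_hat = predict(w, b, X)
    losses = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    return np.mean(losses)
```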

Basically, the goal of the neural network is to create a mapping from input data to output labels — the simplest case I’ve learned so far is binary classification. Given an image of something, is it a cat? Or not a cat? That’s the label — cat or no cat. And the input is the image.

And this mapping is exactly what a function is, with a set of parameters to tune. The point of gradient descent is to find the optimal set of parameters for that function, so that our neural network is as good as it can be at classifying cat images.
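And gradient descent for that logistic regression model is basically this loop (a sketch continuing from the previous snippet; the learning rate and iteration count are arbitrary placeholders):

```python
def gradient_descent(w, b, X, y, learning_rate=0.01, iterations=1000):
    # Repeatedly nudge the parameters in the direction that decreases the cost.
    m = X.shape[1]
    for _ in range(iterations):
        y_hat = predict(w, b, X)        # forward pass: current predictions
        dw = (X @ (y_hat - y).T) / m    # gradient of the cost with respect to w
        db = np.mean(y_hat - y)         # gradient of the cost with respect to b
        w = w - learning_rate * dw
        b = b - learning_rate * db
    return w, b
```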

Hanging out: Got dinner at Soho Thai with Yuri, Ben, and Janice. Was a ton of fun getting to know everyone, and we got to listen to some of Ben’s war stories from his work at Cloudflare. May there be many more nights like this.