TensorFlow, Kaggle, and Diving Too Deep
One step forward, two steps back.
Last week I finished up Andrew Ng's famous introductory class on Machine Learning. As I mentioned in my last post, I thought I would move on to Google's Machine Learning Crash Course, which I did this week. To my surprise, I already knew most of the material in that course, so I moved through it quickly.
Having finished that, I still felt the need for more practical experience, so I went to Kaggle to see what I could get done. To get my feet wet, I decided to start with the introductory competition, Titanic: Machine Learning from Disaster. This is a fairly basic case: you are given a passenger manifest with associated data and asked to train a model that predicts which passengers will ultimately perish.
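A minimal baseline for this kind of task might look something like the sketch below, assuming pandas and scikit-learn. The small inline DataFrame here is made-up sample data standing in for Kaggle's `train.csv` (the column names match the real competition data, but the rows are illustrative only):

```python
# Minimal baseline sketch for a Titanic-style survival prediction task.
# The DataFrame below is made-up sample data standing in for
# pd.read_csv("train.csv"); column names match the Kaggle dataset.
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.DataFrame({
    "Pclass":   [3, 1, 3, 1, 2, 3, 1, 2],
    "Sex":      ["male", "female", "female", "female",
                 "male", "male", "male", "female"],
    "Age":      [22, 38, 26, 35, 35, 54, 2, 27],
    "Fare":     [7.25, 71.28, 7.92, 53.10, 8.05, 51.86, 21.08, 11.13],
    "Survived": [0, 1, 1, 1, 0, 0, 0, 1],
})

# One-hot encode the categorical feature, then fit a simple classifier.
X = pd.get_dummies(train[["Pclass", "Sex", "Age", "Fare"]], columns=["Sex"])
y = train["Survived"]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Predict on the training rows just to show the pipeline end to end;
# a real submission would predict on Kaggle's test.csv instead.
preds = model.predict(X)
print(preds)
```

This is roughly where most tutorials start: a couple of obvious features, a linear model, and no feature engineering. The gap between this and a competitive solution is exactly what the guides mentioned below illustrate.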
I opened up an editor and found myself completely at a loss. I simply had no idea how to start from scratch. I eventually found a tutorial and followed it, but although I understood each step it took, I had no concept of how I would have come up with them myself. Then I came across this guide, which just blew me away. There was simply no way I could have come up with half of those possible solutions, let alone the optimal one.
I realized I needed to temper my expectations a bit. I still plan on continuing my learning, albeit with a significantly higher emphasis on practical implementation so that I can retain the theory I have already learned. As such, I will be looking to implement a few classics of Machine Learning on this very website! If I'm lucky, I'll have one up by the end of this week. I'm also starting to look at ML.NET so I can leverage what I have learned against my existing .NET background.
Learning any of the above too? Message me on LinkedIn, I'd love to connect and share stories!