This past weekend was a holiday in the U.S., but that didn’t stop Harpreet and the rest of the group from showing up and chatting about some really interesting topics in the world of data science and machine learning.
While I wasn’t able to attend (I actually decided to schedule a move into a new apartment on Sunday), I really enjoyed reviewing the session, and I’ve pulled out a couple of highlights that struck me: an interesting applied use case, some advice for DS interviewing, and some technical advice on selecting and tuning hyperparameters (with the help of Comet).
As always, there’s a lot more in the full session (which you can find on Harpreet’s YouTube channel), so be sure to check it out, alongside all of Harpreet’s other excellent content.
Applied ML for small-scale, precision agriculture
While I love all the philosophical and epistemological conversations we tend to have during these Office Hours, I was excited to hear this week’s conversation about a really interesting, applied use case for machine learning: precision agriculture for small-scale farming.
Guest Reema Gill was kind enough to introduce and summarize a project she and her team are working on. Essentially, it’s an autonomous, sensor-based irrigation system that also relies on satellite data to optimize irrigation patterns.
I’ll let Reema and the group take it from there (see the clip below), but it was cool to learn about one of the ways ML systems can be applied to help solve big social and economic challenges.
Tips and resources for data science “coding interviews”
In software engineering, job interview processes almost always include some sort of hands-on coding challenge—whether live, take-home, or a hybrid.
The same is true for data science roles. In this clip, the group helps differentiate DS coding interviews from what software engineers experience. Additionally, they highlighted some really helpful resources for preparing for these technical interviews, which I’ve listed below:
Selecting and tuning hyperparameters with Comet
In addition to the applied use case and the job seeker conversations, we also had a bit of technical discussion in this past week’s session, centered specifically on how to approach hyperparameter selection and tuning.
While the problem at hand was tuning an object detection model for a Kaggle competition, some of the advice given rings true for a general approach to hyperparameter tuning:
Start with a pre-trained model architecture when possible, and adjust from there (in this case, YOLO with a DenseNet backbone)
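On top of a good starting point like that, it often helps to run a structured sweep over the remaining hyperparameters rather than tweaking them by hand. Below is a minimal sketch of what that could look like with Comet’s Optimizer; the search space, the val_mAP metric, the project name, and the train_and_evaluate stub are all hypothetical placeholders, not the exact setup discussed in the session.

```python
from comet_ml import Optimizer

# Hypothetical search space -- swap in the hyperparameters that matter
# for your own model (learning rate, batch size, augmentation, etc.).
config = {
    "algorithm": "bayes",  # Comet also supports "grid" and "random"
    "parameters": {
        "learning_rate": {"type": "float", "min": 1e-5, "max": 1e-2},
        "batch_size": {"type": "discrete", "values": [8, 16, 32]},
    },
    "spec": {
        "metric": "val_mAP",    # metric the optimizer tries to improve
        "objective": "maximize",
        "maxCombo": 20,         # cap on the number of trials
    },
}


def train_and_evaluate(learning_rate, batch_size):
    """Placeholder: fine-tune your pre-trained detector here and
    return a validation score (e.g., mAP)."""
    raise NotImplementedError


opt = Optimizer(config)

# Each iteration yields a Comet Experiment with one sampled set of hyperparameters.
for experiment in opt.get_experiments(project_name="object-detection-tuning"):
    lr = experiment.get_parameter("learning_rate")
    bs = experiment.get_parameter("batch_size")

    score = train_and_evaluate(learning_rate=lr, batch_size=bs)

    # Log the objective metric so the Bayesian search can guide the next trial.
    experiment.log_metric("val_mAP", score)
    experiment.end()
```

Because each trial runs as its own Comet experiment, you can compare runs side by side afterward and see which hyperparameters actually moved the needle.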
Check out the clip below for more tips and discussion on this essential part of building performant ML models.
Enjoy the Conversations Above? Join Us!
We run these virtual Office Hours every Sunday at 12pm ET (New York, NY). It’s completely free to attend and participate, and we’d love to see any and all of you there, help answer any questions you might have, and just hang out and talk all things data science and machine learning!
We recently launched The Comet Newsletter, which offers a weekly inside look at all things data science and ML, featuring expert takes and perspective from our team. We have big things planned for both Office Hours and the newsletter, so be sure to subscribe if you haven’t already!