Learning has been a little slower since the last update. I spent a lot of time with my family celebrating the life of my grandmother, who passed away. She was the nicest and kindest woman I've ever known. In her 99 years I can't recall a single instance where she said something unkind about someone else. I'm happy I was able to say goodbye at the end. I brought P with me for our last visit, and my grandmother got so much joy from seeing her in those final days. That's a memory I'll treasure.
Now that I'm back to work, the amount of time I can realistically set aside each day is limited, so I need to maximize the effectiveness of that time. I'm still progressing down the AI engineer path, but I need to be a little more deliberate about how I learn.
I've decided to narrow my current focus to using LLMs like ChatGPT, Claude, and Bard effectively to build applications, then branch out into PyTorch, Hugging Face, and building models on my own.
Most of my dedicated learning has been working through the deeplearning.ai catalog on LLMs. Being able to work with LLMs seems like it's going to be a foundational skill of the future, so before I get into the deeper concepts I want to make sure I have the basics covered. Specifically, I've worked through the ChatGPT Prompt Engineering for Developers and Building Systems with the ChatGPT API courses. I'm also in the middle of the LangChain for LLM Application Development course.
For me, learning isn't just rushing through courses to say they're completed. I learn by applying the course material to different use cases. After the first two deeplearning.ai courses I built an AI personal trainer that connects to my Peloton account to suggest a workout-of-the-day; a rough sketch of the core idea is below. I've been tracking the progression in the GitHub repo, and the project evolves a little bit as I learn. I'm already planning to incorporate LangChain into the project when I finish the course.
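To give a sense of the approach (this is a minimal sketch, not the actual code in the repo), the core loop is: summarize recent workout history into a prompt and ask the model for today's suggestion. The workout data below is made up, and the real project pulls it from Peloton.

```python
import openai  # pip install openai (0.x SDK, as used in the deeplearning.ai courses)

# Hypothetical recent-workout history; the real project fetches this from Peloton.
recent_workouts = [
    {"date": "2023-09-18", "type": "cycling", "minutes": 30, "difficulty": "hard"},
    {"date": "2023-09-19", "type": "strength", "minutes": 20, "difficulty": "moderate"},
    {"date": "2023-09-20", "type": "rest"},
]

history = "\n".join(str(w) for w in recent_workouts)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "You are a personal trainer. Suggest one workout "
                       "for today, with a short reason.",
        },
        {
            "role": "user",
            "content": f"My recent workouts:\n{history}\nWhat should I do today?",
        },
    ],
    temperature=0.7,
)

print(response.choices[0].message["content"])
```

Most of the interesting work ends up being in how the history gets summarized into the prompt, which is exactly what the prompt engineering course drills.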
Aside from the courses I've discussed, I've also been thinking about embeddings and their use cases. I saw this talk from Simon Willison about embeddings, which reminded me of a post from Amelia Wattenberger about getting creative with embeddings. LLM calls are expensive, but you can generate embeddings for massive amounts of text for pennies, so I've been thinking about how I can leverage embeddings in a project.
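As a minimal example of why that's cheap (the texts here are placeholders), you can embed a whole batch of strings in a single API call and then compare them locally with cosine similarity, with no further API calls needed:

```python
import numpy as np
import openai  # same 0.x SDK as the sketch above

texts = [
    "30 minute HIIT ride",
    "20 minute low impact ride",
    "guided meditation for recovery",
]

# One call embeds the whole batch; text-embedding-ada-002 was the go-to
# model at the time, priced at fractions of a cent per thousand tokens.
resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
vectors = np.array([item["embedding"] for item in resp["data"]])

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Everything after the embedding call runs locally and is effectively free.
for text, vec in zip(texts[1:], vectors[1:]):
    print(f"{text!r}: {cosine_similarity(vectors[0], vec):.3f}")
```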
That's the update. I'm looking forward to building more of the Peloton trainer and getting LangChain added into the mix. With the limited time available, I'm not putting timelines on when this will be done. Instead I'm relying on good note-taking and issue tracking to make steady progress.