Thinking out loud about latest AI developments


Hi Reader,

How's it going for you?

This newsletter is not meant to deliver AI news. But today, let me share with you some recent developments that made me pause a bit.

If you’ve been keeping up with AI, you might’ve seen what Google DeepMind's new model, Veo 3, can now do with video. It can generate video with synchronized sound effects, ambient noise, and even dialogue.

It's amazing, exciting, and honestly, somewhat disturbing...

I can’t help but wonder how long until AI can generate long-form, in-depth videos like the ones I create on YouTube, and replace me forever 🥲.

Is Google using YouTube videos to train its text-to-video models? I don't know for sure, but my guess is yes. High-quality, rich data like video is a treasure trove for AI model training.

It's already known that big AI companies have used transcripts from YouTube videos to train models. I wouldn't rule out some videos being used too (hopefully with creators' consent).

One good thing to mention: Google says that all videos generated by Veo 3 will be watermarked and pass through safety checks. Let's hope this is done rigorously and responsibly 🧐.

... otherwise it's going to cause a lot of problems.

Meanwhile, there have been other worrisome developments around safety. You may have heard of Anthropic's newest model "achieving" a new level of safety concern, along with a newfound ability to blackmail (in test scenarios, thankfully).

Although I'm not a tech pessimist, it is hard not to wonder what might happen if things spiral just a bit too far.

The AI-news "fast-food diet" 🍟

I was talking to a friend recently. She told me she felt totally overwhelmed by AI news on social media. It seems like everyone is going full speed with AI, sharing the latest models, capabilities, and demos, while she feels like she's standing still and falling behind (even as a "techy person").

As a content creator, I completely understand the FOMO feeling.

Many tech creators are trying to maximize their publishing volume - faster, louder, more frequent. No wonder the audience starts going crazy and feeling anxious: "Am I missing something today?"

Honestly, I feel the pressure to keep up too. I feel the pressure to bring you the latest AI tools, use cases and roadmaps with my videos. But it's like running on a treadmill that keeps speeding up. Eventually, it will probably burn me out (and you out too!).

Of course, you don't want to ignore AI today. Doing that is like Kodak ignoring digital cameras, or BlockBuster brushing off online video streaming.

But I've noticed something: most of the flashy news fades into the background within weeks. Only what really matters sticks around.

So I'd say it's safe (and smart!) to focus on the good old stuff. If it's still relevant today, it's probably important too.

Cleaning up my AI content "diet" 🧘‍♀️

So a while ago, I decided to change my AI consumption habits. Here's what I did:

  • Unsubscribed from all the AI-news newsletters: Honestly, I don't care which AI startup Microsoft partnered with last week or how much money Apple is pouring into its new AI glasses.
  • Stopped scrolling LinkedIn and social media for updates: Sure, some posts are really useful. But too much of anything, even a good thing, is never good. I found myself jumping from post to post, hoarding info without really digesting it.

Instead, I'm going back in time and turning back to:

  • Long-form essays and blogs: Like this blog from Ben Todd, which I find very well-written and informative.
  • Books: Still the pinnacle of timelessness. And they don't need to cost much: I got a membership at my local library and found some true gems there.
  • Hand-written notes and mindmaps: After trying many AI tools, nothing beats good old pen and paper for organizing my thoughts.
  • Hand-crafting: Not AI- or learning-related, but working with my hands helps me unwind and regain a more balanced perspective - the digital world is only one side of the reality we live in.

What I love doing is experimenting, building, and solving problems with AI. So I'll keep sharing the interesting use cases and what I learn through my videos. But I don't see myself covering AI at any speed other than the speed of my own interest.

Slowing down helps me think more clearly and go deeper. And I think eventually, everyone will be better off that way.

By the way—I’m currently working on a new video about building knowledge graphs using LLMs. It’s been super fun to explore, and I can’t wait to share it with you.

Until then, take care and have a great week ahead 🤗.

Thu

P.S.: Work with me:

If you want to dive deeper with Python and build real-world AI projects this year, check out what I have for you👇.

You'll join a community of 250+ learners who are building their projects while getting direct access to me and supporting each other along the way.

🔗 Learn More


Thu Vu

Say hi 🙌 on YouTube, LinkedIn, or Medium

