Thinking out loud about latest AI developments


Hi Reader,

How's it going for you?

This newsletter is not meant to deliver AI news. But today, let me share with you some recent developments that made me pause a bit.

If you’ve been keeping up with AI, you might’ve seen what Google DeepMind's new model Veo 3 can do with video. It can generate video with synchronized sound effects, ambient noise, and even dialogue.

It's amazing, exciting, and honestly, somewhat disturbing...

I can’t help but wonder how long until AI can generate long-form, in-depth videos like the ones I create on YouTube, and replace me forever 🥲.

Is Google using YouTube videos to train their text-to-video models? I don't know for sure, but my guess is yes. High-quality, rich data like video is a treasure trove for AI model training.

It's already known that big AI companies have used transcripts from YouTube videos to train models. I wouldn't rule out some videos being used too (hopefully with creators' consent).

Good thing to mention: Google says that all videos generated by Veo 3 will be watermarked and pass through safety checks. Let's hope this is done rigorously and responsibly 🧐.

... otherwise it's going to cause a lot of problems.

Meanwhile, there have been other worrisome developments regarding security. You may have heard of Anthropic's newest model "achieving" a new level of security concern, along with a newfound ability to blackmail.

Although I'm not a tech pessimist, it is hard not to wonder what might happen if things spiral just a bit too far.

The AI-news "fastfood diet" 🍟

I was talking to a friend recently. She told me she felt totally overwhelmed by AI news on social media. It seems like everyone is going full-speed with AI developments, sharing the latest models, capabilities, demos - while she seems to be standing still and falling behind (as a "techy person").

As a content creator, I completely understand the FOMO feeling.

Many tech creators are trying to maximize their publishing volume - publishing faster, louder, more frequently. No wonder the audience starts going crazy and feeling anxious - "Am I missing something today?".

Honestly, I feel the pressure to keep up too. I feel the pressure to bring you the latest AI tools, use cases and roadmaps with my videos. But it's like running on a treadmill that keeps speeding up. Eventually, it will probably burn me out (and you out too!).

Of course, you don't want to ignore AI today. Doing that is like Kodak ignoring digital cameras, or Blockbuster brushing off online video streaming.

But I've noticed something: most of the flashy news fades into the background within weeks. Only what really matters tends to stick around.

So I'd say it's safe (and smart!) to focus on the good old stuff. If it's still relevant today, it's probably important too.

Cleaning up my AI content "diet" 🧘‍♀️

So a while ago, I decided to change my AI consumption habits. Here's what I did:

  • Unsubscribed from all the AI-news newsletters: Honestly, I don't care which AI startup Microsoft partnered with last week or how much money Apple is pouring into its new AI glasses.
  • Stopped scrolling LinkedIn and social media for updates: Sure, some posts are really useful. But too much of anything, even good things, is never good. I found myself jumping from post to post, hoarding info without really digesting it.

Instead, I'm going back in time and returning to:

  • Long-form essays and blogs: Like this blog from Ben Todd, which I find very well-written and informative.
  • Books: Still the pinnacle of timelessness. And doesn't need to cost much. I got a membership at my local library and found some true gems there.
  • Hand-written notes and mind maps: After trying many AI tools, nothing beats good old pen and paper for organizing my thoughts.
  • Hand-crafting: Not AI or learning-related, but working with my hands helps me unwind and get a more balanced perspective - the digital world is not the only side of the reality we live in.

What I love doing is experimenting, building, and solving problems with AI. So I'll keep sharing the interesting use cases and what I learn through my videos. But I don't see myself covering AI at any speed other than the speed of my own interest.

Slowing down helps me think more clearly and go deeper. And I think eventually, everyone will be better off that way.

By the way—I’m currently working on a new video about building knowledge graphs using LLMs. It’s been super fun to explore, and I can’t wait to share it with you.

Until then, take care and have a great week ahead 🤗.

Thu

P.S.: Work with me:

If you want to dive deeper with Python and build real-world AI projects this year, check out what I have for you👇.

You'll join a community of 250+ learners who are building their projects while getting direct access to me and supporting each other along the way.

🔗 Learn More


Thu Vu

Say hi 🙌 on Youtube, LinkedIn, or Medium
