It’s strange to think that Google NotebookLM is only three years old.
But it did originally come out in May of 2023, with a more formal launch near the end of that year. Since then, it has been wowing the crowds, my friends included: they were astounded when I generated one of the platform’s signature “live podcasts” from random materials out of our shared past.
NotebookLM’s rise has been meteoric. I don’t think it’s hyperbole to say that. So what are people doing with this technology three years out?
A Collection of Use Cases
When I researched the top reported user purposes for Notebook, I came up with a “top 5.” I’ll just briefly go over each one:
Research and summarizing – getting clear summaries, key points, and explanations for whatever you’re working on (I’ll come back to this later).
Writing and content creation – turning sources into blog posts, reports, or outlines grounded in your own material.
Learning and studying – asking questions about your notes, creating study guides, and understanding tough topics faster, with Notebook as your indispensable “study buddy” (in addition to any human ones who may be around).
Meeting and doc analysis – letting Notebook take a look at meeting docs to extract action items, insights, and decisions.
Idea generation and synthesis – this one kind of speaks for itself. Like vibecoding, it involves “letting the machine be creative,” which has garnered its share of controversy.
The Summary Product
Going back to that first point, as I’d promised, I came across this article on Medium, where Clare Spencer talks about putting her own articles into NotebookLM, essentially using AI to critique the use of AI, since the articles themselves report on GenAI tools.
The upshot, it seems, is that Notebook took the topic, GenAI in journalism, and produced a set of infographics that, while rather targeted to the subject at hand, contained severe typos that compromised the quality of the results. You can read all about it here, and let me know your thoughts in the comment section.
The eventual summary, fed by Spencer’s observations, included this:
“Successful implementations rely heavily on robust human oversight and structured training, designed to teach journalists the technical limitations of LLMs and to prioritize active fact-checking of all AI output. Ultimately, the industry is grappling with how to strategically apply this technology while enforcing stringent editorial standards to safeguard accuracy and public trust.”
In other words, the platform isn’t quite ready for prime time.
Take Your Time, Notebook!
Here’s an interesting testimony from a Redditor who figured out how prompt engineering can help with the quality of what Notebook produces.
Able_Orchid_3818 writes this:
“Hey everyone, I’ve been experimenting heavily with NotebookLM and found a workflow that drastically improves the quality of the outputs. If you just dump your files and ask for a summary, you are losing a massive amount of valuable information. Here is my step-by-step method to get deep, comprehensive, and highly structured knowledge out of NotebookLM.”
What’s in this multi-step strategy? First, there’s the “index” method, described thusly:
When you upload your sources, do not start asking questions right away. Instead, give NotebookLM a comprehensive prompt asking it to index your sources into main topics, outputting only the topic titles.
The rest of it goes like this: you feed the index back into Notebook and ask it to explain, not summarize. Then comes the optional “One-by-One” Deep Dive, and finally the step that Able_Orchid sums up in a single word: “patience.”
“Go into the Custom settings and add a prompt like this: ‘Take your time researching. Dive deep, do not rush, and be patient in your analysis and reading,’” the poster writes. “It might sound weird to tell an AI to ‘take its time,’ but giving it this instruction grants the model the conceptual leeway to generate much longer, highly detailed, and meticulously analyzed responses. Try this workflow next time you have a messy batch of notes or audio files.”
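For readers who want to see the shape of this workflow, here is a minimal sketch in Python. Note the assumptions: NotebookLM has no public API, so `ask` is a hypothetical stand-in for whatever chat interface you use, and the prompt strings are paraphrases of the Redditor’s advice, not exact NotebookLM settings.

```python
# Sketch of the "index first, explain later" workflow.
# `ask` is any callable taking a prompt string and returning the model's
# reply as a string -- a hypothetical stand-in, since NotebookLM itself
# has no public API.

# Step 1: ask for an index of topics, titles only.
INDEX_PROMPT = (
    "Index the uploaded sources into their main topics. "
    "Output only the topic titles, one per line, nothing else."
)

# Step 3: the "patience" instruction, meant for the Custom settings.
PATIENCE_PROMPT = (
    "Take your time researching. Dive deep, do not rush, "
    "and be patient in your analysis and reading."
)

def explain_prompt(topic: str) -> str:
    # Step 2: feed a topic from the index back in, asking for an
    # explanation rather than a summary.
    return (
        f"Using the sources, explain the topic '{topic}' in depth. "
        "Explain, do not summarize."
    )

def run_workflow(ask):
    """Drive the index-then-deep-dive loop with any chat callable."""
    index = ask(INDEX_PROMPT)  # one topic title per line
    topics = [line.strip() for line in index.splitlines() if line.strip()]
    # Optional "One-by-One" Deep Dive: one focused question per topic.
    return {topic: ask(explain_prompt(topic)) for topic in topics}
```

The point of the structure is that each call stays narrow: the model first commits to a table of contents, then answers one focused question per entry, instead of compressing everything into a single lossy summary.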
It’s an interesting take: the idea that by telling the model to slow down, you command a deeper, finer-grained result. It seems strange to those who don’t understand the power and complexity of these systems. To me, it makes perfect sense.
That’s a bit about how people are using NotebookLM in 2026, a year that is unfolding rapidly. Stay tuned for more.