How the AI Wave Has Changed Me
January 18, 2026 · 4602 words · 22 min read · #Random Thoughts
Tonight (this piece was written on the evening of January 15th) I had dinner with a friend at a spring pancake restaurant across from the office — haven’t been there in a while. Last time was after a project closed, eating with colleagues. That project’s been gone almost eight months now. One of my teammates said: “In 2025, you and I were in contact for at least eight months.” 2025 brought a significant shift in my work — more project-based, more centered on people.
The work itself doesn’t challenge me. I tend toward introversion, a little socially awkward with strangers — but I open up, especially once I know people. And I’ve found that I genuinely love talking with people: across departments, across roles, across business lines. Swapping notes on how work gets done. Understanding different business models. Talking about ordinary life.
After dinner, I noticed a mural on a nearby wall — a starry sky — and photographed it for our “Four Seasons” group chat. I titled it “Starry Sky by the Road.” What struck me wasn’t that someone had painted the sky on a wall. It was more that someone had hidden it inside the wall — the edges look like concrete being peeled back, torn open, and instead of rebar inside, there’s deep blue cosmos. Behind the hard shell of the city, a wider world. I hope that’s true for all of us.
Probably the photo I’ve liked most recently.
That same day, @Qiangu messaged me: “Wow, you’re such a luminous and soulful blogger — don’t let your readers down.” He’d shared @J.sky’s post, which described me that way. Honestly, a surprise. “Luminous and soulful” isn’t a phrase I’d ever heard applied to a blog — and J.sky writes tech posts, which makes it even more unexpected. I went and left a comment: “As a so-called ‘luminous and soulful blogger,’ I’m a little embarrassed — 2025 was a busy year, barely updated. This year I’ll try to maintain a better rhythm. Happy New Year!” Got a reply: “You need to post something. It’s been a while.”
Fair. I really had gone a long time — over a year — without sitting down to write properly. I don’t know if time just moves too fast, or if everything feels too urgent. AI is everywhere in our lives. Work pulls at my attention from every direction. There are always family things to handle. Who am I? What am I doing?
On March 28, 2025, I had a conversation with @Dahua and pulled together a small group of people who were still writing regularly — a chat called “Between the Lines.” I’m lucky to have found that group. Even luckier that I get to read their honest shares every day. I’m not particularly good at managing a community, but what I’ve learned from “Between the Lines” is that you don’t have to. Everyone grows at their own pace. When you feel like sharing, you show up. If there’s ever a second group, I’d call it “Letters Like Paintings.”
A lot has changed this past year. Some things haven’t. Let me try to sort it out.
AI Has Become Genuinely Useful — Even Good
I’ve long thought there’s a meaningful gap between a tool that’s “useful” and one that’s “good.” Useful means it can do things. Good means it’s woven into your life, your workflow, maybe even your inner life. Like that wall mural — I don’t need to look at the cosmos every day. I just need to occasionally be reminded that behind the city’s hard shell, there should still be a crack, a little breathing room.
AI is the same way.
I got a ChatGPT account on the third day it launched, in 2022. I was amazed — it could produce content that matched what I had in mind, and fast. No search capability then, no file uploads, no image generation or coding. Still, it felt remarkable. Now, three years on, it’s a completely different thing. Its upgrades have made it genuinely useful — and for me, even good.
I’m not an AI specialist. I can’t fully parse the three pillars — algorithms, compute, and data. The application layer is already more than enough to keep me occupied. As @Yuyi once put it: “You need to top up your technical vocabulary every now and then, just to keep the mind sharp.” My own relationship with AI centers on three things: Chat, Knowledge Bases, and Agents — plus Coding as a kind of infrastructure. The focus has shifted: from Chat, to Knowledge Bases, to Agents. Coding is the foundation (and my current obsession).
To put it simply:
- Chat solves “question and answer” — it’s task-based
- Knowledge Bases solve “what’s mine” — it’s personalization
- Agents solve “do it for me” — it’s project-level
- Coding solves “I can string all of this together”
Chat Is Not Giving Commands — It’s Building Context Together
Chat means using a large model’s conversation interface to get work done or find answers — Q&A, document processing, code generation, image generation, deep research, and so on. Two things have made a real difference for me here: understanding context and letting the LLM store memory about me.
Talking to AI requires input before output. I keep marveling at the elegance of tokens — they make it explicit that both input and output have a cost. Even a simple greeting costs something.
I’ve come to accept a fact: input is a form of responsibility. If you want high-quality output, you have to frame the question clearly. “Clearly” doesn’t mean “long” — it means appropriately granular. Too coarse, and the AI can only respond in generalities. Too fine, and you trap yourself in infinite detail, like recursing forever inside your own prompt.
And context matters in AI conversations the way it matters in human ones. Here’s a frame that helps me: don’t treat AI as an employee. Treat it as a collaborator. With an employee, you give instructions — commands. With a collaborator, you need to be on the same wavelength. That’s how you get genuinely useful advice in return. And since AI is the product of pre-training, the more focused your input, the more aligned the output will be with what you actually need.
My most common mistake: I think I’ve explained something clearly, but I haven’t. Especially at work, where my brain is already fragmented by group chats and notifications, it’s hard to tell a complete story in the moment of asking. So I gave myself a small rule: write three lines before you ask. Not to look polished — just to pull out one thread from the tangle. Once I’ve done that, I often find I’ve already half-convinced myself, and the AI fills in the rest.
My general prompt structure:
- What’s my current situation (background)?
- What outcome do I want (goal)?
- What can I provide (constraints/inputs)?
- What else might I need to add?
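The four bullets above can be sketched as a tiny prompt-builder. The function and section names here are my own illustration, not any official API; the point is only that the structure is mechanical enough to template.

```python
# A tiny prompt-builder mirroring the four-part structure above.
# Function and section names are illustrative, not any official API.

def build_prompt(background: str, goal: str, inputs: str, open_questions: str = "") -> str:
    """Assemble a prompt from: background, goal, constraints/inputs,
    and anything still missing. Empty sections are dropped."""
    sections = [
        ("Background", background),
        ("Goal", goal),
        ("What I can provide", inputs),
        ("What else might be needed", open_questions),
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections if body)

print(build_prompt(
    background="A cross-team project just closed; leadership wants a summary.",
    goal="Draft a one-page closure report.",
    inputs="Meeting notes and the original project charter.",
))
```

Writing three lines before asking is essentially filling in the first three fields by hand.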
Let AI Remember Me — But Don’t Let It Trap Me In My Own Memory
The second thing: build a consistent voice in your AI conversations so it can remember who you are. Most major models now offer memory features — Gemini, ChatGPT, and others. When used well, this dramatically reduces the time you spend re-explaining yourself at the start of every session. That said, I have a real tension with this.
On one hand, I love being understood without having to repeat myself. The more an AI “gets” me, the more convenient things become. On the other hand — is this just another filter bubble?
My current workaround: use two model windows simultaneously for anything important. One has memory of me; one doesn’t.
- The window with memory is like a long-term partner — knows who you are, what you love and hate. High efficiency.
- The window without memory is like a stranger consultant — doesn’t accommodate you, more likely to give you advice you don’t want to hear but probably should.
I’ve added a small principle: for any important decision, do at least one “de-memorized” pass. Otherwise, you’ll keep becoming more and more like yourself. That sounds fine — but it can be dangerous. Because you’re not always right. I need the occasional neutral answer.
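The two-window habit can be sketched in code: the same question framed twice, once with a memory preamble (the long-term partner) and once without (the stranger consultant). `MEMORY` and the message shape below are placeholders in the style of common chat APIs; nothing here calls a real model.

```python
# A sketch of the "two windows" habit: the same question framed twice,
# once with a memory preamble and once without. MEMORY and the message
# shape are placeholders in the style of common chat APIs; this does
# not call any real model.

MEMORY = "Prefers concise answers; coordinates cross-team projects; risk-averse."

def build_messages(question: str, with_memory: bool) -> list:
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    if with_memory:
        # The partner window: the model is told who I am.
        messages.append({"role": "system", "content": f"About the user: {MEMORY}"})
    # The stranger window simply omits that block.
    messages.append({"role": "user", "content": question})
    return messages

question = "Should I take on another cross-team project this quarter?"
partner = build_messages(question, with_memory=True)
stranger = build_messages(question, with_memory=False)
```

The "de-memorized pass" is just sending the `stranger` version and comparing answers.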
Pick the Model That Fits the Scene
Even different versions of the same model will produce noticeably different results. Some models are like engineers — rigorous logic, but not particularly attuned to your emotional state. Some are like humanities majors — beautiful expression, but you’ll need to impose structure. Some are like senior product managers — they’ll give you a plan that sounds reasonable, until you push them: What are the boundaries? What’s the cost? What are the risks?
The model doesn’t make the result. The match between model and task does.
Since the second half of 2025, Gemini has become my primary model — both for its capabilities and for the Google ecosystem. I can ask it about specific files in Google Drive or specific notebooks in NotebookLM directly in the conversation. Very efficient.
I still use ChatGPT, Claude, and Doubao. It depends on the situation.
The Secret to Making AI Good: Keep Coding
AI has become good to use, and Coding deserves a lot of the credit — especially Claude Code. Over the past six months, I’ve used AI Coding to build all kinds of small tools. Most of them haven’t been published anywhere; they mostly just optimize my own workflow. In that sense, Coding has become infrastructure for me. A foundation. Down the line, I think everyone’s workspace will be built on top of Coding, and each person’s workflow will look different from everyone else’s.
I like the word “infrastructure.” Infrastructure isn’t for show. It’s what makes the road ahead smoother.
My experience: don’t think of Coding as a programming tool or something for developers. Think of it as a way of translating your own problems — whatever you’re currently struggling with — into a format the tool can work with. You describe the situation, the constraint, the goal. The tool proposes a solution that can actually run and actually solve the problem. If it doesn’t, you iterate. You keep talking until it’s solved.
Or more bluntly: Coding is how I negotiate with the world. When the world doesn’t give me the tool I need, I make one. Even an ugly little script has value as long as it runs. Programming is writing: writing is a project, and so is programming. I recently came across a line about programming I loved: “Programming should be a fluid form of expression, just like writing.”
This made me think about the small things I’ve been building lately — a plugin for syncing Feishu multi-dimensional tables, a batch converter for ePub to PDF or Markdown. The point was never to become a professional developer. It was to express my needs through AI, and solve my problems.
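As a rough reconstruction of the ePub batch converter idea: assuming the pandoc CLI is installed (it reads ePub and can write Markdown or PDF), a short script can walk a folder and convert each book. This is my own sketch under that assumption, not the author's actual tool.

```python
# A rough reconstruction of the batch ePub converter idea, assuming the
# pandoc CLI is installed. This is a sketch, not the author's actual tool.
import subprocess
from pathlib import Path

def target_path(epub: Path, out_dir: Path, fmt: str) -> Path:
    """Map books/foo.epub to out_dir/foo.<fmt>."""
    return out_dir / epub.with_suffix(f".{fmt}").name

def convert_epubs(src_dir: str, out_dir: str, fmt: str = "md") -> list:
    """Convert every .epub under src_dir via pandoc; return output paths."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    results = []
    for epub in sorted(Path(src_dir).glob("*.epub")):
        target = target_path(epub, out, fmt)
        subprocess.run(["pandoc", str(epub), "-o", str(target)], check=True)
        results.append(target)
    return results
```

Note that PDF output from pandoc additionally requires a PDF engine (such as a LaTeX installation) on the system.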
Syntax Is Punctuation; Projects Are the Story
When we learn to write, we don’t fixate on spelling and grammar. Knowing every character doesn’t mean you can write a good essay. Knowing programming syntax doesn’t mean you can build a good product. Writing is about telling stories and communicating ideas. Programming is about expressing logic and creating value.
If you’re grinding through isolated algorithm problems just to learn syntax, you’re memorizing a dictionary to learn how to write — tedious, and almost impossible to reach flow. This is why many people (including past-me) give up on programming early: we’re so focused on whether the punctuation is right that we forget what story we’re supposed to be telling.
The thing writing and programming have most in common: you have to let yourself produce a bad first version. Get it running, then optimize. Get it written, then polish. Close the loop, then refine.
Deadline / Goal / Value: The Three Elements of a Project
What is a “project”? A project is a unique, temporary undertaking with three essential elements:
- A clear deadline. Give yourself a finish line.
- A concrete goal. Not “I want to learn Python” but “I want to build a bot that auto-scrapes news.”
- Value created. It has to be useful — even if only to yourself.
You can also build in a feedback mechanism. Projects without feedback are hard to sustain. Learning without feedback is hard to grow. Life without feedback is hard to improve. When I build small tools, I deliberately design feedback points:
- How many minutes did this save me today?
- Did this help me earn anything?
Small feedback, but it accumulates. It creates a sense of agency. And that sense of agency is what keeps my anxiety at bay.
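Those feedback points can be as simple as an append-only log. The sketch below is hypothetical; the file name and columns are my own invention, but it shows how little machinery the habit needs.

```python
# A minimal sketch of the feedback habit: an append-only CSV log of what
# each tool run saved. File name and columns are my own invention.
import csv
from datetime import date
from pathlib import Path

LOG = Path("tool_feedback.csv")

def log_feedback(tool: str, minutes_saved: float, note: str = "") -> None:
    """Append one row: date, tool, minutes saved, free-form note."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "tool", "minutes_saved", "note"])
        writer.writerow([date.today().isoformat(), tool, minutes_saved, note])

def total_minutes_saved() -> float:
    """Sum the minutes_saved column across all logged runs."""
    if not LOG.exists():
        return 0.0
    with LOG.open(newline="") as f:
        return sum(float(row["minutes_saved"]) for row in csv.DictReader(f))
```

Checking `total_minutes_saved()` at the end of a week is one concrete form of the feedback loop.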
The Painful Part Is Discovering the Need
I shared on Jike a while back: to solve the problem of “turning web pages into e-books,” I explored an Epubkit plugin together with @Dahua. That’s a classic micro-project. We didn’t need to learn e-book encoding standards from scratch. I just needed to understand how the tool worked and how to run the workflow. The problems you encounter while doing a real project — those are the knowledge that truly belongs to you. That’s the project-based approach to building infrastructure.
Now we have Cursor and Gemini — essentially all-knowing programmers who can correct any syntax error or logic gap. This actually makes project-oriented thinking more important, not less. Once AI removes the syntax barrier, “knowing what you actually want to build” and “stringing requirements together with logic” become the real competitive edge.
I came across an article that put it well: “The era of ‘everyone is a product manager’ is coming — and once everyone can implement their ideas, they’ll quickly realize that most ideas aren’t that good.” I agree completely. When execution becomes easy, what becomes scarce is: What problem are you actually solving? What are you willing to give up for it? Does this problem matter to you?
AI has flattened the barrier to entry. But it’s also lowered the cost of self-deception. It’s easy to produce a pile of things that look impressive but don’t close any real loop. So I hold myself to a stricter standard: every small project has to land somewhere in my actual life or work.
From Getting Answers on the Internet to Finding Them in Myself
To use knowledge bases well, you first need to understand your data. There are roughly three types: publicly available internet data; data produced through interaction with organizations or communities; and data that’s entirely private, like a personal journal.
When talking to AI, only content that comes from you — or relates to you — will actually help you solve your own problems.
If you feed it only internet data, the output will be generic, unpersonalized. You’re not adding a second layer of processing; you’re not organizing it around your own needs. That’s exactly where NotebookLM shines. It makes working with personal content remarkably simple. It’s the tool I’ve used most deeply in 2025.
What this is really about is something simple: AI isn’t best at knowing things — it’s best at organizing them. But what it organizes depends entirely on what you feed it. Feed it the internet, and you get the internet’s average. Feed it your own material, and it might actually produce your own insights.
For Knowledge Bases, I Choose NotebookLM
I like to think of a knowledge base as a middle platform, not a tool. A tool is point-to-point: I use it, solve a problem, move on. A middle platform is systemic: I deposit information into it, and that information compounds over time, generating new connections and new relationships.
I love that closed-loop feeling: read → organize → output → feed back → read again. When your system can self-circulate, you’re freed from the role of information porter. You can spend energy on the things that actually matter — judgment, decisions, communication, creation.
For example: I’ve collected many YouTube videos on project management, entrepreneurship, operations, and business. I put them all into a single knowledge base and ask it to surface connections between them — patterns across videos that might not be obvious otherwise.
I’ve also imported all of Luo Yonghao’s interviews into a dedicated notebook. Inside, I can analyze the full arc of his interview subjects’ careers, surface similarities and differences in tabular form, and ask cross-cutting questions: What’s common to all these people? How did each of them respond to difficulty? What questions did Luo himself keep returning to? All of it answerable through the knowledge base.
I’m not storing information. I’m storing relationships. Once a relationship appears, the world feels a little less chaotic. Finding relationships in information I care about — that’s a genuine pleasure.
In Organizations, a Knowledge Base Is a Management Tool
I once shared a view with a colleague: in a corporate organization, a knowledge base isn’t primarily a service for frontline staff — it’s a strategic tool for the organization itself. A knowledge base is fundamentally a management tool, not an efficiency tool.
This sounds counterintuitive, but I mean it. For example: a company could use a knowledge base to analyze the characteristics of its top salespeople — pull their call recordings and WeChat chat histories, identify their methods, and compare them against lower-performing tiers.
But if you use a knowledge base to build an online customer service chatbot for the sales team, you’ll get very limited value. It might raise the floor for underperformers slightly. But the real conversion skills — the ones that actually close deals — are non-structural: tone, timing, subtext. Those don’t live in documents.
The real pain point isn’t “stop new employees from making mistakes” (that’s a baseline, addressable by process and policy). It’s “how do we scale what our top salesperson does” (that’s growth).
Why is “helping the bottom 50%” low ROI?
- Training a 30-point performer to 60 points is possible. But firing 30-point performers and hiring 60-point ones is often cheaper than building and maintaining a large knowledge system.
- Sales teams typically follow a power law: 20% of people drive 80% of results. Serving those 20% (extracting their wisdom) has far more leverage than feeding the 80%.
Looking forward: what if the knowledge base became dynamic? It automatically analyzes recordings and chat logs from top performers, identifies their patterns (always anchor on value before discussing price, for instance), and auto-generates a new SOP pushed to the whole team. That’s actually serving the organization — replicating what works at scale.
This line of thinking leads me to something bigger: in the AI era, organizations will increasingly resemble systems rather than groups of people. The core of a system isn’t any particular role — it’s information flow and decision chains. Whoever smooths the information flow makes the organization faster. Whoever shortens the decision chain makes it more stable.
Coding Can Make a Knowledge Base Come Alive
When Claude Code first launched, I wasn’t particularly excited — mainly because I wasn’t familiar with it, especially compared to Cursor’s interface. When Claude introduced MCP, I still didn’t use it; it felt like something for developers, not for users.
But recently, Claude Skills changed my view entirely — to the point where I started learning Claude Code from scratch. An example I saw on X: using a Skill to automatically upload files to NotebookLM and generate articles from them. Essentially automating NotebookLM as a professional knowledge pipeline.
Another Skill I love is “Superpowers” — a complete development workflow and skill library that, before writing any code, first clarifies project goals, then breaks down product design and features, then produces a granular, executable plan. For Coding newcomers, it dramatically helps clarify requirements — much more aligned with actual software engineering than the “one-sentence requirement” I used to give.
MCP requires developers to provide it. Skills are documents — concrete descriptions of workflows or requirements. You can write one yourself; anything AI produces is something you can actually read. Easy to install, easy to use, and you can write custom ones for your own needs. Playability and practicality both went way up.
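To make “Skills are documents” concrete: a Skill is essentially a folder containing a `SKILL.md` whose YAML frontmatter names and describes it, followed by plain-language instructions. The example below is hypothetical, and the exact frontmatter fields may vary by version.

```markdown
---
name: notebooklm-closure-report
description: Read project docs, summarize them, and draft a closure report.
---

# NotebookLM Closure Report

## When to use
When a project wraps up and a closure report is needed.

## Steps
1. Collect the project's documents from the shared drive.
2. Upload them to a dedicated NotebookLM notebook as sources.
3. Ask for a summary covering goals, outcomes, and open risks.
4. Save the generated draft back to the team wiki for review.
```

Because the whole thing is readable text, anyone on a team can audit it, edit it, or copy it for their own workflow.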
After getting NotebookLM running smoothly, I had a new idea: use NotebookLM as a content processing middle platform. My company uses Shimo and Confluence for documentation, and different departments use different ones. I used Claude Code to read project documents from both Shimo and Confluence, upload them to NotebookLM, and save the processed output back to Confluence or Shimo — completing a project closure report automatically.
I love Skills. Like people, assembling different capabilities is what creates comprehensive ability. Claude Code makes those capabilities external and concrete. In an organization, these Skills could be shared — anyone can call them — and efficiency multiplies. Collaboration becomes more fluid.
Learning May Be About to Turn Upside Down
Future learning may undergo a radical transformation. The transformation I’m imagining isn’t just “more online courses” — it’s a fundamental change in how learning works. Right now: videos, lectures, tests, assignments. But what if learning became more game-like, more immersive, integrated with world-model products and multi-modal capabilities? Visual, interactive, adaptive? I think it’s possible. And the efficiency might be genuinely higher.
One small sense I have: reading may become a much richer audio-visual experience. When you’re reading a novel or classical poetry, instead of pure text to decode, you could use multi-modal capabilities to understand what a passage actually looks and sounds like — anchored to environments and concepts you already know. Why not?
I’m excited about that possibility. It could turn “learning” from a form of pressure into a form of experience. But I’m also a little worried: the richer the experience, the easier it is to become addicted; the more addicted you are, the harder it is to sit quietly with a slow, difficult text. So in the end, it comes back to a more fundamental question: What are we actually learning?
- Knowledge, or how to think?
- Skills, or judgment?
- Methods, or ourselves?
On Working Differently
I rarely write publicly about work — on this blog, or anywhere else. Every person’s work situation is so specific that what’s true for me may be useless, or even misleading, for someone else. Different backgrounds, personalities, environments, roles. Even common communication norms vary by company. So I tend to stay quiet about it.
In 2025, my work shifted significantly toward coordinating people. As I mentioned above, I’ve found that the deeper pleasure of working with people is understanding them: their goals, their starting points, their professional needs. Once you understand all of that, you realize how genuinely difficult it is to achieve a shared goal across different perspectives. Product, engineering, QA — they all share the same surface goal (ship the thing), but their actual work, their metrics, their anxieties are all different. In some projects I’ve been involved in, those differences are even more pronounced. As the person coordinating the effort, the challenge is real.
What I’ve found more interesting, compared to before: previously I’d track a project mainly through tasks — move this, close that, push here. Now I think about it through people — who needs to be aligned, who needs support, who needs space. That’s more engaging, not because the friction disappears, but because talking with people reminds me that people are the most essential factor in whether anything gets done. You can’t just optimize the work. You have to respect the human beings doing it.
I increasingly believe: the real bottleneck in any project is never the tools. It’s the people. Tool problems are mostly solvable with time. Human problems require understanding, patience, communication, and sometimes structure.
My biggest change this past year: I’ve become more willing to invest time in alignment rather than just pushing forward. Pushing is moving the car. Alignment is making sure everyone agrees on the destination. Alignment is slow, even frustrating — but it prevents rework. It’s anti-entropic: spend a little more energy upfront, have a little less chaos downstream. (AI context and memory work the same way.)
What’s Stayed the Same
One more thing: the constants. My enjoyment of recording hasn’t changed at all — keeping accounts, keeping a journal. As of now, I’ve written more than 1,500 days of journal entries. These are among my most valuable outputs. I’ve fed them to AI, had it analyze and generate comprehensive reports. It’s a window into myself. In the journal, I don’t have to consider feelings or social dynamics — I can say anything. That raw honesty might be the one place where I’m completely myself.
My approach to information hasn’t changed either. I’ve returned to using an RSS reader (Inoreader), not to fight algorithmic recommendations, but to preserve my ability to follow the people and things I actually care about. The shift from “following topics” to “following people” has made the world feel more interesting.
The curiosity about new things hasn’t changed. The interest in AI, the learning and experimenting with Coding — these are ongoing.
I Started Recording With My Voice
Two weeks ago, I bought a recording card, a newly released piece of hardware from Dedao Notes. It attaches to your phone and captures audio throughout the day, which is then transcribed. Every conversation, every passing thought, recorded and summarized. These collected sounds become text uploaded to my NotebookLM, where I can find connections between things I said that I wouldn’t have noticed otherwise.
I find the act of collecting deeply valuable. It fights forgetting. I write a journal every day, and now I can review that day through what I actually said — what ideas came up, what I was thinking. It’s not unlike having a camera running 24 hours (but more affordable and actually suited to how I live).
Extending this further: more collection supports better self-understanding, including recognizing patterns and problems in daily life that would otherwise slip by.
More recording and creating helps us know ourselves better. I love the phrase “fight forgetting.” People are fragile in ways we don’t acknowledge. We think we remember; we don’t. We think we understood; we just felt strongly in the moment. When the emotion fades, everything becomes fragments. Recording collects the fragments and gives them to a knowledge base for organizing. Slowly, you start to see your own behavioral patterns, expressive patterns, emotional patterns — even your blind spots.
As a note: most of this article — probably 80% or more — was written through voice input. Using a voice input app (Shandian Shuo), I can just start talking. When I stop, it transcribes everything. That’s a significant convenience the AI era has brought.
I end with an image: a tired evening, opening the voice input app, talking for a few minutes. Maybe incoherent, maybe repetitive, nothing that sounds like a finished piece. But I said it anyway. And that itself matters. Because expression isn’t proof that I’m impressive. It’s proof that I’m still alive, still observing, still willing to pick up the small things and hold them. That’s worth more to me than learning another tool.
Finally
The long gap between posts wasn’t because I wasn’t writing — it’s because I wasn’t making what I wrote public. I write 1,000-word journal entries every day. This blog, as a platform with a higher bar for publication, carries a real cost. And one of the bigger changes in how I record things is that I’ve come to prefer voice.
That said, the blog remains my first choice among all platforms. Which is also why I subscribe to others through RSS. A living person updating and expressing themselves — that deserves respect and patience.
See you next time. 👋
Author: DemoChen
Link: https://demochen.com/en/posts/20260115/
License: Unless otherwise stated, this work is licensed under CC BY-NC-ND 4.0. Please credit the original when sharing.
Support: If you found this helpful, feel free to become a Sponsor — grateful for the connection.