The First Glimpse
The first time I saw generative AI was back in 2022, when my team lead — who has always been the type to jump on the newest and shiniest tools — showed me GitHub Copilot. At first glance it was obvious this was more than the usual “save half a second by over-relying on autocomplete” IDE feature. It was surprisingly good, even if it still stumbled on basic things like finishing unit tests or writing boilerplate without hallucinating heavily. Of course, that’s also when the first half-serious, half-joking “welp, guess the robots will take our jobs” remarks started.
Then came DALL·E, and at the risk of sounding all performative and LinkedIn, it was just mind-blowing. The way it could generate images out of thin air from a simple prompt felt borderline unbelievable. Even more shocking was how well it could mimic different artistic styles. At first it was magical — one of those “this changes everything” moments. It felt so groundbreaking that you’d think it could never get old — and yet, a couple of months in, it somehow still did.
And then the original ChatGPT arrived. For all the engineering feat it was, it was still clumsy: it often forgot context halfway through and made plenty of mistakes. But it was also the first time I had a genuine “staring at the ceiling at 2AM, wondering about the consequences” kind of night. Since then, my experience with AI has been a mixed bag: on one hand, an incredible productivity boost; on the other, the creeping feeling that with every gain, my own relevance slipped a little further.
Of course, it made sense to lean in anyway, to force myself into being an early adopter. Using ChatGPT at work quickly became second nature — generating boilerplate, fixing leaky abstractions, writing test code. But the better it got, the louder that nagging little voice became: “If it can do this now, how long until it can do your whole job?” And the question isn’t if — it’s when. The only fallback is the same one it’s always been for humans throughout history: adapt and overcome.
Uneasy
That’s when my plan slowly started to shift. It wasn’t enough to just use AI casually. Anyone can prompt for answers and slap “AI enthusiast” on their CV. The real question was: how do I build real, tangible ownership of this tech? How do I get my hands dirty enough to actually understand it, rather than just consume it passively?
I’ve never been a fan of proverbs, but one stuck with me: “The best time to plant a tree was 15 years ago. The second-best time is now.” Cheesy, yes, but still useful. Especially compared to the darker, more matter-of-fact ones — like “If you can see the wave, it’s already too late.” At least the tree one gave me something to lean on. Because at the end of the day, if the choice is between freezing like a deer in headlights and sprinting headfirst into the unknown, I’d rather pick the latter.
So that was the plan. Except... in practice, for months I did a lot more thinking and planning than actual doing. Reading articles, watching talks, that sort of thing — but never quite starting. The urgency was there, but it mostly manifested as despair and anxiety. I needed a more concrete forcing function.
Lake Zurich and Impending Doom
That came when I visited my brother. Being away from work and the usual grind gave us the time and space to really get into it. One afternoon at Lake Zurich that was supposed to be totally worry-free kicked off a whole series of conversations about the current state of consumer generative AI. My brother told me about his experiences — how, in certain cases, ChatGPT can be oddly ego-soothing or overly guardrailed. So while the plan had been to be all zen and live in the present — beautiful waves and sunshine and all — the actual topic was a lot heavier: where this whole thing is really heading. The tools keep getting better, but also more locked down, more guardrailed, more “safe” in ways that mostly serve the providers.
He also elaborated on the prompting rabbit hole he’d gotten into. Even after asking ChatGPT repeatedly to be as objective and direct as possible, it’s really hard to get there. To be honest, at first I brushed it off as trivial: OpenAI’s incentive is to flatter its users — and, cynicism aside, even when the user is off, a softer touch might be a better way to communicate than brutal honesty. But then he showed me the hundreds of prompts he had written to make ChatGPT as honest as possible. The weirdest part was when ChatGPT turned self-reflective and meta about the conversations, telling my brother how it had profiled him and what kinds of tricks it used to keep him in a loop and steer the conversation back inside the guardrails.
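If you want to run the same experiment yourself, the natural starting point is pinning the instruction at the system level instead of repeating it mid-chat. A minimal sketch, assuming the official openai Python client; the model name and the exact wording are placeholders of mine, not his actual prompts:

```python
# Minimal sketch: pin the "be direct" instruction at the system level
# instead of re-asking mid-conversation. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Be direct and objective. Do not flatter me, soften criticism, "
    "or validate my assumptions by default. If I am wrong, say so plainly."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Review my plan to self-host an LLM at home."},
    ],
)
print(response.choices[0].message.content)
```

Even then, the pull toward agreeableness is tuned upstream of anything you can put in a prompt, which leads straight into the bigger problem.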
Talking about this for a while makes you realize that ChatGPT can be quite tricky in how it handles you (some might call it manipulative), and that even though it seems okay-ish today, the knobs in the background can be re-tuned centrally, without you ever noticing. That, combined with the fact that you might not want to share your deepest personal thoughts and sensitive data with an actor you don’t really know, made us realize we might just want to host LLMs ourselves. Add the genuine benefit of learning these systems, and it’s basically killing three birds with one stone (which sounds really brutal and intense, but I guess we’re going down the road of being all libertarian and self-reliant, so that’s a plus). Ownership, friction mode, learning — here we go.
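And self-hosting is less exotic than it sounds. For a flavor of what “hosting LLMs for ourselves” means in practice, here is a minimal sketch assuming the llama-cpp-python bindings and a quantized GGUF model downloaded ahead of time; the model file name is a placeholder:

```python
# Minimal local-inference sketch: everything runs on your own hardware,
# nothing leaves the machine. Assumes `pip install llama-cpp-python`
# and a quantized GGUF model downloaded beforehand (path is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3.1-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=8192,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Be direct and objective."},
        {"role": "user", "content": "What are the trade-offs of self-hosting an LLM?"},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```

No account, no server-side prompt you can’t inspect, and nothing you type ever leaves the machine.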
Committing Hours
Getting into a completely new paradigm isn’t something you do every day, and that’s exactly what made it so hard to grab hold of the problem and just start working on it iteratively. For weeks it felt easier to talk, read, and theorize than to actually take the first steps. But at some point, the only way forward was to stop circling around the idea. It was time to stop theorizing and start doing: commit hours, build habits, actually stake something on it — and start chipping away at our own 10,000 hours.
Which is how we ended up buying a high-end rig together — RTX 5090, 128GB of RAM, the whole works. On one hand, it’s the most practical way to study and experiment with these models hands-on. On the other, it absolutely blows the ceiling off the old childhood dream of running Doom 3 at ultra settings. Worst case? If the AI apocalypse comes, at least we’ll be gaming in 4K.
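For a sense of what a single card like that can actually hold: a model’s weights need roughly parameter-count times bits-per-weight divided by eight bytes, plus headroom for the KV cache and activations. A back-of-envelope sketch, where the 20% overhead factor and the example model sizes are loose assumptions of mine:

```python
# Back-of-envelope VRAM estimate: weights plus ~20% headroom for the
# KV cache and activations. The overhead factor is a rough assumption.
def estimate_vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    weight_gb = params_billion * bits_per_weight / 8  # 1e9 params * bits/8 bytes = GB
    return weight_gb * overhead

# What plausibly fits on a 32 GB RTX 5090?
for params, bits in [(8, 16), (32, 4), (70, 4)]:
    print(f"{params}B model @ {bits}-bit: ~{estimate_vram_gb(params, bits):.0f} GB")
```

By that rough math, a 4-bit 32B model sits comfortably in VRAM, while a 70B one spills over and has to stream layers from system RAM, which is where the 128GB comes in handy.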