Erik Makela

2026-02-06-are-llms-worth

Are large language models worth it? Nicholas Carlini

The “echo chambers” of social media make it easy to connect people only to those who agree with them. It now becomes possible for people in a society to live in different isolated realities, believing that different facts are true, and only ever interacting with people who believe what they also believe. I see the potential for LLMs to amplify this situation one level further. Instead of large groups of people interacting with other people who agree with them, we could easily have a scenario where a single LLM could push a particular narrative to millions of unsuspecting people. The algorithm is no longer the middle-man that plays the role of choosing what content to amplify: it now generates the content itself.

2026-02-05-so-much-better

Is it Really So Much Better Now? Chris Arnade

This growing “corrosion of the soul” isn’t simply a result of increasing technology, but of how we have chosen to use technology, which isn’t only to replace dangerous and repetitive work, but to also try and replace human interaction. To reduce them down to programmed rules that can be done by a machine. That is a mistake, because while machines might be far better than humans at interacting with and transforming nature (plowing fields, erecting buildings, digging for metals, etc.) they are not better at interacting with other humans. … You can’t, and shouldn’t, efficiency away the human touch. … Humans still seek connection even when their jobs don’t require it, even when efficiency says it’s unnecessary.

Technical Debt

(Via panny)

My observation is that “AI” makes easy things easier and hard things impossible. You’ll get your niche app out of it, you’ll be thrilled, then you’ll need it to do more. Then you will struggle to do more, because the AI created a pile of technical debt.

Programmers dream of getting a green field project. They want to “start it the right way this time” instead of being stuck unwinding technical debt on legacy projects. AI creates new legacy projects instantly.

This reminds me of the GPT-3 paper, “Language Models are Few-Shot Learners.” However, over time I’ve noticed that even with very large context windows, there is still an innate need to write “one-shot execution” prompts for task delegation; think of how subagents are spun up to do one specific task. If you’re currently building some type of app or process for yourself, I still think one-shotting is the most efficient approach, because the barrier to diagnosing failures is lowest. I also think the methodology from Gas Towns will remain applicable in the future, since it correlates somewhat with present work delegation models. The tradeoff between time spent fixing issues and time to a minimum viable product will, however, depend on your subject expertise in whatever you’re trying to make.

For example:

- Generate some type of video with Remotion code animation: one-shot applicable.
- Automate a complex workflow with poor documentation: less one-shot applicable.
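The “one-shot execution” prompt described above can be sketched in a few lines. This is a minimal, illustrative Python helper (the function name and field layout are my own invention, not from any library or API): a task description, exactly one worked example, and the new input, all packed into a single prompt so that when the result is wrong, there is only one place to look.

```python
def build_one_shot_prompt(task: str, example_input: str,
                          example_output: str, new_input: str) -> str:
    """Assemble a one-shot delegation prompt: a task description,
    exactly one worked example, and the new input to handle."""
    return (
        f"Task: {task}\n\n"
        f"Example input:\n{example_input}\n"
        f"Example output:\n{example_output}\n\n"
        f"Input:\n{new_input}\n"
    )

# Hypothetical usage: delegating a summarization task to a subagent.
prompt = build_one_shot_prompt(
    task="Summarize the changelog entry in one sentence.",
    example_input="v1.2: fixed crash on startup, added dark mode",
    example_output="v1.2 fixes a startup crash and adds dark mode.",
    new_input="v1.3: reduced memory usage, new export formats",
)
```

The point of the sketch is the diagnostic property, not the string formatting: because everything the subagent sees is in one string, a bad output can be debugged by reading one prompt rather than tracing a multi-turn exchange.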

I find it very interesting that, with the rise of skills, documentation for “how” to do a certain activity has increased. For example, a popular repository called ui-ux-pro-max-skill sets standards that define what makes a website fit a certain design style, and where you can find examples of it.

2026-05-02-data-poems

https://dr.eamer.dev/datavis/poems/ “Short stories I tried to tell with numbers.”