Erik Makela

List of "Strong" Opinions

HOAs are mid - I value my neighbor’s freedom to do what they want with their land more than I value the place looking subjectively ‘pretty’.

Good cheese is worth it - I recently bought an $11 slice of 20-month aged Parmesan and it’s changed my entire outlook on cheese. There are literal crystals of flavor that develop. I’m highly suspicious of how Babybel is so popular; I can hands down point to their product as the reason I started to dislike cheese. The analogy is similar to fresh vs. frozen fish.

more to come…

How AI Saved my Trash Coding and Research Project

My Experience

I recently finished the paper for a research project I started during my middle years of college. Funnily enough, I found the topic while trying to automate my own job of animal identification for the biological sciences, working for Ella DiPetto. The job was simple: click through thousands of images and try to spot an animal in each one. Needless to say, I did not stay on the job for long because I value new experiences and gaining knowledge (even though it paid decently for basically listening to music. You live and learn).

I am primarily in the business of finance/accounting, but I’ve had a heavy interest in computers my entire time growing up. However, at least during that time I was much more on the side of “computing” than “coding”. Before the prevalence of language models, coding did not seem like a topic that was approachable without some form of academic/class setting to learn it. And for some reason, younger me, seeing all the 0s and 1s flying across the screen, thought “real coding” meant binary analysis and Assembly. Even though that’s technically true, there is value in the layers of abstraction you put on top of computing to achieve a specific task, even though some fundamental understanding is lost.

In a research setting, there is identification of a problem domain or discovery, defining your output, setting up the design, collaboration, and most importantly, reproducibility. I discovered my “problem” when I understood that the current generation of models could not be used due to the need for exact data, and that humans, at least for now, are uniquely primed for identifying animals in their environments. The entire dataset, which comprised some millions of images, was reviewed once initially, then 50% of it was randomly reviewed a second time. Over time, the problem became: what were the limitations of these visual identification models? For DiPetto’s environments specifically, it was animals being missed or random objects being identified as animals.

However, I also decided to do some analysis on the mathematics of optimizing the image identification model. Needless to say, none of it worked, and you should train neural networks on training data and then optimize them with proper machine learning techniques.

I did not know how to do that at the time. What I did know was the specific question I wanted to ask of the data that I had. Before that even began, though, I had to understand how to get to the values that I wanted. I spent a solid three days understanding how models were benchmarked (with hits and misses) and how I wanted to achieve that with the data that I had. Once I had that understanding, I moved on to the data manipulation.
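That hits-and-misses benchmarking boils down to tallying a confusion matrix. Here is a minimal Python sketch of the idea; the function name and the toy labels are mine, not from the project:

```python
# Minimal sketch: tally confusion-matrix counts for a binary
# "animal present?" detector. Data below is made up for illustration.

def confusion_counts(truths, predictions):
    """truths, predictions: parallel lists of booleans (animal present?)."""
    tp = fp = fn = tn = 0
    for truth, pred in zip(truths, predictions):
        if truth and pred:
            tp += 1        # hit: animal present, detector fired
        elif not truth and pred:
            fp += 1        # false alarm: random object flagged as animal
        elif truth and not pred:
            fn += 1        # miss: animal present, detector silent
        else:
            tn += 1        # correct rejection
    return tp, fp, fn, tn

truths      = [True, True, False, False, True]
predictions = [True, False, False, True, True]
print(confusion_counts(truths, predictions))  # (2, 1, 1, 1)
```

Everything downstream (ROC, precision-recall) is derived from these four counts.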

One problem: I barely knew how to code. Some HTML here, some CSS there, but nothing to the extent of actually turning my thoughts into a logical structure.

All the way back in my sophomore/junior year of high school (2018-2019), I was putting out feelers for a senior project. One that came up was helping a local nonprofit with some server setup involving SQL. I eventually moved on to a better project, but I remember thinking, “man, what if there was a way to write these SQL queries in plain English to make the output easier”.

During 2023, back in college, Mr. Gippity (ChatGPT-4) was slowly becoming popular and getting on my horizon. I had heard about GPT-3 and touched it once or twice, but originally wrote it off as a fancy storyteller and not much else.

I started asking my questions on Stack Overflow - “Implementation of Cobb-Douglas Utility Function to calculate Receiver Operator Curve & AUC” - and by the saving grace of user ‘L Tyrone’ I was given a general guideline and a solution to build off of.

Then one day I realized you could ask LLMs how to code. Everything clicked. It wasn’t an immediate click, as you still need good prompting to make things work, but it was the accelerant I needed to continue my project. I was able to translate my user requirements into something tangible for data manipulation. I still looked at the input, checked the code, and checked the output; I was trying to publish a paper, after all. Over time, I slowly started to run up against computing performance limitations: memory management, parallelism, and eventually upgrading to my college’s high-performance computing cluster for more CPU and GPU processing power. Each was discovered through my inexperience in knowing what limitations I had.

“Huh, why is my computer freezing at 100% RAM usage?” “Oh, I’m either going to need to wait 2 years or learn how to make things faster.”

Through this entire process I knew exactly what I needed to do with my data and had a fundamental understanding of confusion matrices, the Receiver Operating Characteristic (ROC) curve, the Precision-Recall curve, and Area Under the Curve values via the trapezoidal rule, along with all the other mathematics that needed to be coded against my data. I also understood why I was performing this specific analysis and how I needed to clean my data.
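The ROC-plus-trapezoidal-rule piece is compact enough to show. Below is a hedged, pure-Python sketch (not the paper's R code); the toy labels and scores are invented, and ties between scores are handled naively (one curve point per sample):

```python
# Sketch: ROC curve and AUC via the trapezoidal rule, pure Python.
# Labels are 1/0 ground truth; scores are detector confidences.

def roc_auc(labels, scores):
    pairs = sorted(zip(scores, labels), reverse=True)  # highest score first
    pos = sum(labels)
    neg = len(labels) - pos
    tpr_points, fpr_points = [0.0], [0.0]
    tp = fp = 0
    for score, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        tpr_points.append(tp / pos)   # true positive rate at this threshold
        fpr_points.append(fp / neg)   # false positive rate at this threshold
    # Trapezoidal rule over the (FPR, TPR) curve
    auc = 0.0
    for i in range(1, len(fpr_points)):
        auc += (fpr_points[i] - fpr_points[i - 1]) * \
               (tpr_points[i] + tpr_points[i - 1]) / 2
    return auc

labels = [1, 1, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.2]
print(roc_auc(labels, scores))  # 0.8333... (5/6)
```

A perfect ranking gives an AUC of 1.0; random ranking hovers around 0.5.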

Sure, there’s nothing stopping me now from having an AI language model create a few scripts to unzip some JSON files, import them, make some mathematical computations, and graph the output. But I would have missed out on two very important factors:

1) The learning involved in the process

2) Was the process correct for the data analysis I wanted? Was the question I was asking correct in the first place? I still needed the right questions to approach the problem. I include this point because my paper was an entire representation of overthinking it the wrong way.

I had a process mapped out in my mind and used GPT-4 to piecemeal those processes into usable functions in RStudio. Through this project and utilizing relatively “low power”...

Minecraft Server on Docker

I found an easy way to host a Minecraft server on Docker.

You can go to https://setupmc.com/java-server/ to get your own customized setup. It has an interactive, easy-to-use UI that sets up the Docker Compose file for you. It uses the itzg/docker-minecraft-server container as the underlying dependency.
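For reference, the generator's output looks roughly like the Compose file below. This is an illustrative sketch, not setupmc.com's exact output; the memory value and volume path are example choices:

```yaml
# Minimal Docker Compose sketch for itzg/docker-minecraft-server.
# Values here are examples; the setupmc.com generator tailors them.
services:
  mc:
    image: itzg/minecraft-server
    ports:
      - "25565:25565"        # default Minecraft Java Edition port
    environment:
      EULA: "TRUE"           # required: accepts the Minecraft EULA
      MEMORY: "2G"           # example JVM heap size
    volumes:
      - ./data:/data         # world and config persist on the host
    restart: unless-stopped
```

Run `docker compose up -d` in the same directory to start the server.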

Additionally, I came across this post, which showed some administrator panels I could use to administer my running server through a website.

PufferPanel (free)

Pterodactyl (free)

Pelican (a fork of Pterodactyl, also free)

Hypothetical Syllogism (Transitive Property)

https://en.wikipedia.org/wiki/Hypothetical_syllogism (or transitive property)

A pure hypothetical syllogism is a syllogism in which both premises and the conclusion are all conditional statements. The antecedent of one premise must match the consequent of the other for the inference to be valid. Consequently, the conclusion takes its antecedent from one premise and its consequent from the other, with the shared middle term removed.

If P, then Q.

If Q, then R.

∴ If P, then R.
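Viewed computationally, the hypothetical syllogism is just function composition: treat "If P, then Q" as a function from evidence of P to evidence of Q. A toy Python sketch (the function names and the rain example are mine, not from the article):

```python
# Toy sketch: chain "P -> Q" and "Q -> R" into "P -> R".

def compose(p_implies_q, q_implies_r):
    """Hypothetical syllogism: build P -> R from P -> Q and Q -> R."""
    return lambda p: q_implies_r(p_implies_q(p))

# Made-up concrete instance: raining -> ground wet -> shoes muddy
ground_wet_if_raining = lambda raining: raining
shoes_muddy_if_wet = lambda wet: wet

shoes_muddy_if_raining = compose(ground_wet_if_raining, shoes_muddy_if_wet)
print(shoes_muddy_if_raining(True))  # True
```

The same shape shows up in typed languages, where `compose` has type `(P -> Q) -> (Q -> R) -> (P -> R)`.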

Photogrammetry 3D Object Capture Guide

Defining Objectives

What are my user requirements? I want to take photos of a sculpture. I know the maximum footprint for a sand sculpting competition is 20x20 feet, and while I’d like to get as much detail as possible, I’m executing for the novelty.

What are my limits? Preference - free and/or open-source software; Hardware - GTX 1080 Ti SC2 GPU (storage N/A, as I have many free terabytes); Phone - iPhone 16 Pro

Software Options

Best apps - Modelar (a lot of export options), DepthEye (exports .ply files only), Hedges (special: ability to use the front-facing LiDAR), RTAB-Map (only exports OBJ)

Computer software - Meshroom, RealityCapture, Blender, MeshLab

Limited number of captures or freemium - KIRI Engine (150), Polycam (150), RealityScan (300), 3DF Zephyr (50), Scaniverse

Workflows

Through the deduction of my options, the two workflows I can do are:

a) Capture raw images and process them through Meshroom or RealityCapture

b) Capture images through the Reality Capture mobile application for on device processing

If I were in a hurry, I would do option B, but since I’m trying to get a decent scan, I’m going with option A.

Guidance for picture taking

How do I get the best scan? “The camera is a factor, but your technique is more important and most cameras will yield acceptable results. The “holy trinity” of photo settings for sharp pictures - camera speed faster than 1/30 of a second to avoid motion blur, ISO of less than 400 to avoid camera noise, aperture set to F8 or higher (smaller aperture) for decent depth of field (but too high will result in distortion). You can bend these rules, for instance you can take slower photos with a tripod, but it may affect your results.”

Results - Coming October 2026