n0tls

Step 0


I always approach working with code by asking questions; it's how I interact with most people when they come to me with some kind of problem to solve. This has translated into the very first thing I need to ask myself when I suddenly come up with some 'genius' idea that I can get Claude to do: "Does this need an LLM?"

I've been piloting Claude as my main interface into my work and home systems for the last 9 months, and I've found myself asking that question pretty consistently while trying to come to some understanding of what these things are actually good for. In the end, I think translating natural language into tool/API calls is the core function that answers that question for me.

Here's a free one that went through my mind in a flash of lightning. If you're a random engineer or developer out there trying to automate your small business's internal processes, asking a coding assistant to manually do the task each week is probably a very real "solution" that will exist at some company, and they will just keep the tap open on the LLM token supply.

Thinking about this made me wonder what LLMs currently prefer to do: perform the task themselves, or suggest writing a script to automate it for the user. It seems like that kind of decision should be tracked by some kind of index, an LLM greed index? It's related to token usage, and to how efficiently you can go from a premade test suite, to a prompt, to an implementation of that prompt that passes tests which can't be changed. Measure how many tokens that takes to accomplish and the overall cost to the end user. In my opinion, the tasks would need to be ones where the LLM chooses between consistently doing the task itself every time versus writing a script once. And is it up to the user or the model hosts to encode that preference in a system prompt/user prompt file?
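To make the trade-off behind that index concrete, here's a toy sketch. Everything here is a made-up assumption (the function name, the token counts, the task); it just shows the arithmetic: the ratio of tokens burned doing a task by hand every time versus writing a reusable script once.

```python
def greed_index(tokens_per_manual_run: int, runs: int,
                tokens_to_write_script: int) -> float:
    """Ratio of total tokens spent having the model do the task
    itself each time, to tokens spent writing a script once.
    A value above 1 means 'doing it itself' is the costlier path."""
    return (tokens_per_manual_run * runs) / tokens_to_write_script

# Hypothetical weekly report task: 4,000 tokens per manual run,
# 52 runs a year, versus 12,000 tokens to write the script once.
index = greed_index(4_000, 52, 12_000)
print(f"greed index after a year: {index:.1f}")  # ~17.3
```

Under these invented numbers, the manual route costs roughly seventeen times as much over a year, which is exactly the kind of gap an index like this would surface.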

What a silly tunnel to go down. Anyway, I'll see if something like that already exists.