The origin story of my programming language
Whenever the programming language I am building comes up in a conversation, the most asked question is “why?”. I found myself getting asked about it again the other day and figured it’s time to write the answer down.
At some point I read somewhere, either in a tweet or a blog post (I can’t find the source right now), that your product being marginally better doesn’t beat the inertia people have against switching over. Your product needs to be ten times better.
And so I thought… What does it take for a programming language to be ten times better?
That thought propelled the work on the programming language I later named “tenecs”. The name is wordplay on “10 x”, where the “x” means the mathematical “times”. The file extension is “.10x”.
I didn’t necessarily have an answer for what ten times better could look like, but I thought it started with testing. I have professionally written far more web service backend code than anything else, so that’s where my focus started. I’m not discarding the idea of it being a general-purpose programming language, but it’s easier for me to identify the pain points and potential improvements in that area.
Whenever I’m changing existing code, to fix a bug or add new functionality, it’s far more pleasant and productive when I can easily change or add tests related to the change and be confident I’m done with it. “Being done with it” is a fuzzier topic to pin down, but making it easy to add and change tests is something I think can be tackled.
So my first idea was: why do I write unit tests at all? If I have a function that I can unit test, I could have a command that goes through the code and gives me unit tests for each code path. For a function to be unit testable, I realised I could force the tests to be deterministic by making sure side-effecting functions always come from arguments. So, knowing it could be built, came the question: would these generated tests be good? Would I want to keep them? Maybe they give me an accurate overview of how the system behaves technically but are too detached from what the system is meant to achieve? I tried exploring this idea without building a prototype, by talking with some people I consider very smart and who have vast experience on this topic, but we didn’t reach much of a conclusion. I would have to build it to see.
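To make the constraint concrete: if a function needs a clock, a database, or anything else side-effecting, it receives it as an argument, and a test can swap in a fixed stand-in. Here’s a rough sketch of the idea in TypeScript rather than tenecs (the `now` clock parameter is made up for illustration):

function greeting(now: () => Date, name: string): string {
  // `now` is the only source of non-determinism, and it comes in as an argument
  const hour = now().getHours();
  return hour < 12 ? `Good morning, ${name}` : `Good afternoon, ${name}`;
}

// A test can pass a fixed clock, making the function fully deterministic:
const fixedNow = () => new Date("2024-01-01T09:00:00");
console.assert(greeting(fixedNow, "Ada") === "Good morning, Ada");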
I started working on it. Let me give you a concrete example of my early efforts, taken from one of the test generator’s own unit tests:
package example_package

import tenecs.test.UnitTest

// the function we feed to the test generator
logPrefix := (isError: Boolean): String => {
  if isError {
    "[error]"
  } else {
    "[info]"
  }
}

//
// The output of the test generator: two tests
//

// 1. Test named "[error]", that checks that output
_ := UnitTest("[error]", (testkit) => {
  result := logPrefix(true)
  expected := "[error]"
  testkit.assert.equal(result, expected)
})

// 2. Test named "[info]", that checks that output
_ := UnitTest("[info]", (testkit) => {
  result := logPrefix(false)
  expected := "[info]"
  testkit.assert.equal(result, expected)
})
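Conceptually, the generator finds an input for each code path and records the function’s output as the expected value; the tests are even named after that output. For a single Boolean argument, that amounts to trying both inputs. A minimal sketch of the idea, again in TypeScript (the `generateTests` helper is hypothetical, not tenecs’s actual implementation):

// For a function of one Boolean argument, both code paths are covered by
// trying both inputs and recording the observed output as the expectation.
function generateTests(fn: (isError: boolean) => string): Array<{ name: string; input: boolean; expected: string }> {
  return [true, false].map((input) => {
    const expected = fn(input); // deterministic: fn has no hidden side effects
    return { name: expected, input, expected }; // named after the output, as above
  });
}

const logPrefix = (isError: boolean): string => (isError ? "[error]" : "[info]");
console.log(generateTests(logPrefix)); // two cases: "[error]" and "[info]"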
In the process of building this I kept thinking about the potential of tooling enabled by the constraint of having side-effecting functions always come from arguments. So, even though I have built some things in the original testing direction, I am now investing in building the language with other tools in mind as well.
I don’t want to turn this post into a list of everything I’d like to achieve, and the test generator is definitely still on the list, but I’ll leave you with the testing idea that currently excites me the most.
Test generation out of a trace
Generating tests out of the implementation won’t necessarily produce realistic test scenarios. But what if we could record real code runs and turn them into tests? You can’t get more realistic than that. We could have a tracing tool, somewhat like Jaeger or Honeycomb, but with a button that turns a trace into a unit test (there’s a rough sketch of the replay mechanics after the list below). With this tool I’m imagining you could:
- Take an error from production: as soon as you find it in the tracing tool, you can have a unit test that reproduces it.
- Have another person (maybe QA or a PO) do some business-oriented testing, and take those runs as the unit test cases you want to maintain.
- As soon as you finish writing a new feature and run the code to check that it behaves as intended, press the button to get that exact scenario as a unit test. Easier than having to write that code yourself afterwards.
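To give a feel for how a trace could become a test, here’s a rough TypeScript sketch (not tenecs; `TraceEntry` and `replayedDependency` are made-up names): because side effects come in through arguments, a generated test can substitute a dependency that plays back the results recorded in the trace.

// A recorded trace pairs each side-effecting call with the value it returned
// in production, so replaying it makes the test run deterministic.
type TraceEntry = { call: string; result: unknown };

function replayedDependency(trace: TraceEntry[]) {
  return (call: string): unknown => {
    const entry = trace.shift();
    if (!entry || entry.call !== call) {
      throw new Error(`trace mismatch at ${call}`);
    }
    return entry.result;
  };
}

// A generated test feeds the recorded results back to the code under test:
const trace: TraceEntry[] = [{ call: "GET /user/42", result: { name: "Ada" } }];
const httpGet = replayedDependency(trace);
const user = httpGet("GET /user/42") as { name: string };
console.assert(user.name === "Ada");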
I still have a lot of work to do on this hobby project, but hopefully after reading this you’re also a bit excited by the idea or at least understand my excitement around it.