If you have, or are building, a software product intended to last over time, it would be wise to consider how rapidly AI is going to evolve in the coming months and years.
One of the areas where AI, and LLMs in particular, is progressing fastest is code generation. I believe it is inevitable that, as the months and years pass, more and more code will be written by AI, while we humans take on higher-level tasks: thinking through what we want to achieve, communicating it clearly to the AI, and finding ways to verify that the AI is doing what we intended.
We must recognize that we are gradually losing control over what code is being written, which increases our uncertainty about the system as a whole.
How can we mitigate these problems?
We need to design software that accepts that an LLM may write parts of the code, that makes clear how each module relates to the others, and in which an error in that code can be identified easily without unnecessarily affecting the rest.
In short, we need to adhere to good software development practices, something that is not new but is sometimes overlooked or done halfway.
Some points on how good software development practices can facilitate the incorporation of AI:
Testing: High test coverage gives us confidence when we cede some control over certain parts of the code. If the AI writes certain code fragments, it is essential that they are well tested, both the fragment itself and the code that interacts with it.
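A minimal sketch of this idea, with a hypothetical helper standing in for AI-generated code: the tests pin down the expected behavior, so if an LLM later rewrites the function, any regression surfaces immediately.

```python
# Hypothetical example: suppose an LLM generated this parsing helper.
def parse_price(text: str) -> float:
    """Convert a price string like "$1,234.50" to a float."""
    return float(text.replace("$", "").replace(",", ""))

# Tests acting as a safety net around the AI-written fragment.
def test_parse_price_with_symbol():
    assert parse_price("$1,234.50") == 1234.50

def test_parse_price_plain():
    assert parse_price("99.99") == 99.99
```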
Modularization and Encapsulation: Good modularization and encapsulation of code will allow the software to be more flexible and reduce the risk if the AI writes certain modules partially or entirely.
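As a sketch of what this looks like in practice (names are hypothetical): the module exposes one narrow public function, and its internal helper is free to be rewritten, by a human or an LLM, without affecting callers.

```python
# Hypothetical pricing module with a narrow public interface.

def _clamp(value: float, low: float, high: float) -> float:
    # Internal helper (leading underscore): an implementation detail
    # that can change freely without breaking callers.
    return max(low, min(high, value))

def apply_discount(price: float, percent: float) -> float:
    """Public API: return price reduced by percent (clamped to 0-100)."""
    percent = _clamp(percent, 0.0, 100.0)
    return round(price * (1 - percent / 100), 2)
```

Callers depend only on `apply_discount`, so the blast radius of an AI rewrite is limited to this module.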
Abstraction: For example, if we need to consume internal or third-party APIs, we should have an abstraction layer that standardizes how our code accesses that information. We must avoid having our core functionality depend directly on the raw response of a request to something outside the application.
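A sketch of such an abstraction layer, with hypothetical names: the rest of the code depends on a small interface and a typed result, never on the raw response of an external service, and a fake provider can stand in during tests.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Forecast:
    """Typed result our code consumes, decoupled from any API's JSON."""
    city: str
    temp_celsius: float

class WeatherProvider(Protocol):
    """Interface the application depends on, not a concrete vendor."""
    def forecast(self, city: str) -> Forecast: ...

class FakeProvider:
    """Test double; a real implementation would call the external API
    and translate its response into a Forecast."""
    def forecast(self, city: str) -> Forecast:
        return Forecast(city=city, temp_celsius=21.0)

def describe(provider: WeatherProvider, city: str) -> str:
    # Application logic only sees the abstraction.
    f = provider.forecast(city)
    return f"{f.city}: {f.temp_celsius}°C"
```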
Documentation: Documentation helps humans understand the code, but it also benefits LLMs. We need to think about documenting the code so that eventually LLMs can read it and gain valuable information about its behavior that might not be entirely clear in the code.
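A small illustration (the function is hypothetical): a docstring that spells out behavior and includes an example gives an LLM, or a human, information that the one-line implementation alone would not make obvious.

```python
def normalize_username(raw: str) -> str:
    """Normalize a user-entered username.

    Behavior a reader (human or LLM) should know before changing this:
    - leading/trailing whitespace is stripped,
    - the result is lowercased,
    - internal spaces become underscores.

    Example:
        normalize_username("  Ada Lovelace ") -> "ada_lovelace"
    """
    return raw.strip().lower().replace(" ", "_")
```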
Evaluations: If the project contains models or calls to Machine Learning functionality (for example, API calls to OpenAI), we need to evaluate their performance continuously, with every change that might affect it.
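A minimal sketch of such an evaluation harness, assuming a stub classifier in place of a real model call: a small labeled set plus an accuracy check that can run on every change and fail the build below a threshold.

```python
# `classify` stands in for any model call (e.g. a request to an LLM API);
# here it is a trivial keyword stub so the sketch is self-contained.
def classify(text: str) -> str:
    return "positive" if "great" in text.lower() else "negative"

# Hypothetical labeled evaluation set.
EVAL_SET = [
    ("This product is great", "positive"),
    ("Terrible experience", "negative"),
    ("Great support team", "positive"),
]

def accuracy(model, eval_set) -> float:
    """Fraction of examples the model labels correctly."""
    hits = sum(1 for text, label in eval_set if model(text) == label)
    return hits / len(eval_set)

# In CI: fail if accuracy(classify, EVAL_SET) drops below a threshold.
```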
Logging: Monitoring the solution in a production environment is key, both to ensure proper functioning and to identify possible improvements or features to introduce.
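A brief sketch using Python's standard `logging` module (the function and logger name are hypothetical): logging both the happy path and the rejections gives us the production visibility described above.

```python
import logging

logger = logging.getLogger("orders")  # hypothetical module logger

def place_order(order_id: str, amount: float) -> bool:
    # Log enough context to diagnose production issues and to spot
    # usage patterns that suggest improvements or new features.
    logger.info("placing order id=%s amount=%.2f", order_id, amount)
    if amount <= 0:
        logger.warning("rejected order id=%s: non-positive amount", order_id)
        return False
    logger.info("order id=%s accepted", order_id)
    return True
```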