Styles of AI for Developers
In late 2025, AI is dominating the software development zeitgeist. The IPO and startup landscape is crowded with companies built around AI themes. Software developers are being asked to adopt AI tools in the hunt for a silver bullet to increase productivity. And generative AI is being used by students and people who should know better to generate reams of human-like text that is grammatically sound, yet often rife with factual errors.
As a software developer, how can you meaningfully put AI to use? What are the various ways AI can benefit you and your users?
Call LLM API
Numerous large language models (LLMs) now provide APIs you can call. With the right prompts and inputs, many text processing and generation tasks that would previously have required extensive custom development can now be accomplished with a single API call. LLMs can be asked to summarize text, convert data to or from structured text formats such as JSON and XML, generate text, and synthesize answers based on their encyclopedic training datasets.
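For a concrete sense of what this looks like, here is a minimal sketch using the OpenAI Python SDK. The model name and input file are illustrative, and other vendors' APIs follow the same general shape.

```python
# Minimal sketch using the OpenAI Python SDK (pip install openai).
# The model name and file are illustrative; other vendors' chat APIs
# follow the same prompt-plus-messages pattern.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

article = open("release-notes.txt").read()  # hypothetical input file

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "Summarize the user's text in three bullet points."},
        {"role": "user", "content": article},
    ],
)
print(response.choices[0].message.content)
```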
Calling such an API requires crafting a prompt and becoming familiar with the nuances of the LLM you are using. Is it sycophantic? Has it been trained in the domain you need answers in?
You also need to tune parameters that control repeatability. Temperature may need to be lowered, often to zero, so that test runs closely match what you will see in production. The model version may need to be locked. Moving to a newer version will involve extensive testing if the output is critical or highly visible.
Testing is vital and challenging. Unlike a traditional codebase, in which you can analyze and test the code paths thoroughly, different inputs may cause different weights to dominate, producing outputs you never tested or planned for. A large number of inputs will need to be tested, and you remain partly reliant on the vendor's ability to test for and block undesired outputs, such as offensive or false statements.
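As a hedged sketch of what pinning and regression testing might look like, consider the following. The model snapshot, the seed parameter, and the classify_ticket helper are all illustrative, and even a temperature of zero is not a hard guarantee of identical output.

```python
# Sketch of a regression test for an LLM-backed feature (pytest).
# The pinned model snapshot, seed, and helper are illustrative; the point
# is to lock the model version, lower temperature, and test many inputs.
import pytest
from openai import OpenAI

client = OpenAI()
PINNED_MODEL = "gpt-4o-mini-2024-07-18"  # illustrative dated snapshot


def classify_ticket(text: str) -> str:
    response = client.chat.completions.create(
        model=PINNED_MODEL,
        temperature=0,  # reduce randomness; still not a hard guarantee
        seed=42,        # best-effort determinism where the vendor supports it
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: billing, outage, or other."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip().lower()


@pytest.mark.parametrize("text,expected", [
    ("I was charged twice this month", "billing"),
    ("The dashboard has been down for an hour", "outage"),
    ("How do I change my avatar?", "other"),
])
def test_classification_is_stable(text, expected):
    assert classify_ticket(text) == expected
```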
Use in Coding (Traditional)
Since the dawn of search engines, developers have been searching for programming information. Looking for documentation on APIs, sample code, and answers to questions is as old as the web. Sometimes the information was relevant but not directly applicable, like documentation on specific API parameters or return values. Other times the question someone else had asked was not exactly yours, but it was close enough to adapt or to provide other insight. And as sites like Stack Overflow appeared, you could ask questions directly and often receive an answer, vetted through a voting system.
It is possible to use LLMs and their chat interfaces in much the same way. You can ask them how to use an API. You can ask them how to use specific libraries to accomplish a goal. You can ask them about a problem you are having. The quality of the answer will depend on the amount of relevant data with which they were trained, and also, in part, on how you structure your question. The experience is similar to using a search engine in many ways, with a couple of key differences:
- An LLM will almost always provide an answer to your question, even if it has to hallucinate one.
- An LLM will stream lengthier responses token by token, like an old BBS over a dial-up modem, so you'll have to be patient if your answer runs several pages.
However, this is not always the most productive way to make use of LLMs. The next section describes agentic use of LLMs, in which you let the agent directly manipulate your codebase in response to a prompt.
Use in Coding (Agentically)
In the traditional model, you ask a question, see generic code related to your problem, and you (the developer) translate that learning into corresponding edits in your codebase. In agentic coding, the workflow changes dramatically. You give the agent access to your codebase and set it to work like a developer, giving guidance but letting it make the changes. In specific circumstances, agentic development can be a more direct, lower-effort path from problem statement to working code.
This works best under several circumstances:
- The change is in a language included in the LLM's training data.
- The change is similar to other changes included in the LLM's training data.
- The codebase has high coverage from a fully-passing unit test suite.
If all of these are true, you can ask an agentic AI to make a change to your code. Depending on your AI tooling, it may first ask you a few questions, review a plan with you, and then come back with a commit to review.
And review you must! I have yet to see an agentic workflow handle even the simplest of cases without guidance and rework. Depending on how close the result is, and whether the agent can correct its own work, you will likely have to make some changes yourself. If you are committing the code, you must take responsibility for ensuring it is high quality.
Train NN
For all the excitement and power of a large language model, there are many cases where more traditional AI approaches are a better fit. In particular, a neural network you train yourself may offer improved cost, control, accuracy, and repeatability.
Before LLMs, deploying AI in production often involved training a neural network yourself. If your data is numeric or involves a small number of quantized values, and you have a fair amount of training data (hundreds or thousands of examples of inputs and the output value you hope to predict), you can train a neural network yourself, using existing libraries like DeepLearning4J or Neuroph, or an AutoML system like AutoGluon.
However you build it, by creating your own neural network you gain control and likely save significant cost. Indeed, you may be able to run predictions on commodity CPUs, saving the time and cost of renting GPUs.
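Here is a rough sketch of that route using AutoGluon's tabular API. The CSV files and the `churned` label column are hypothetical stand-ins for your own data.

```python
# Rough sketch using AutoGluon's tabular API (pip install autogluon.tabular).
# File names and the label column are hypothetical; AutoGluon selects and
# tunes models automatically from a few hundred or thousand labeled rows.
from autogluon.tabular import TabularDataset, TabularPredictor

train = TabularDataset("customers_train.csv")  # rows of features plus the known outcome
test = TabularDataset("customers_test.csv")

predictor = TabularPredictor(label="churned").fit(train)

print(predictor.evaluate(test))        # accuracy and related metrics on held-out data
print(predictor.predict(test.head()))  # predictions run fine on a commodity CPU
```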
Generate Docs, Diagrams, Images, and More
Many LLMs can now generate documentation, diagrams, and more. By providing context in a prompt, or by giving an agentic AI access to your workspace, you can ask the AI to generate documentation in various formats, as well as diagrams. Furthermore, with the many AI tools that generate images from a text prompt, you can add visual interest more easily.
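As one illustrative sketch, you might ask an LLM to draft a Mermaid sequence diagram from a source file and then review it before publishing; the model name and file here are hypothetical.

```python
# Sketch: asking an LLM to draft a Mermaid sequence diagram from a code file.
# Model name and file are illustrative; always review the result before publishing.
from openai import OpenAI

client = OpenAI()
source = open("checkout_service.py").read()  # hypothetical module to document

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {"role": "system",
         "content": "Produce a Mermaid sequence diagram describing this code. "
                    "Output only the Mermaid block."},
        {"role": "user", "content": source},
    ],
)
print(response.choices[0].message.content)
```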
As with AI coding assistance, checking the AI output is required. Errors in documentation are annoying for users and can deter people from using your product or API. And if you are contributing your changes to an open source project, confirm that its policies allow AI-generated contributions before submitting a change.
Caveats
Remember, any data you include in a prompt on an LLM website is being sent to the LLM owner. Frequently you are giving them permission to train on the prompt you provide. Don't send proprietary code or text to a site you don't control!
To prevent this, use local models, or sites with which you have an agreement prohibiting training or redistribution of your proprietary data.
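As one hedged sketch of the local route, assuming Ollama is installed and a model has already been pulled, the model name and input file below are illustrative, and nothing in the exchange leaves your machine.

```python
# Sketch of keeping a prompt on your own machine via Ollama
# (pip install ollama, after pulling a model such as `ollama pull llama3.1`).
# Model name and file are illustrative; the request stays on localhost.
import ollama

proprietary_code = open("billing_engine.py").read()  # hypothetical internal file

response = ollama.chat(
    model="llama3.1",
    messages=[
        {"role": "user",
         "content": "Explain what this module does:\n\n" + proprietary_code},
    ],
)
print(response["message"]["content"])
```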