5 ways to immediately improve your AI models

Andrew Tate
Guest Author

Jun 5, 2024

You’ve done it. You’ve copied the OpenAI API snippet into your code and are well on your way to AI domination. You deploy, sit back, and wait for AI to do all the work.

Except, of course, it doesn’t. Pre-trained large language models (LLMs) are remarkably knowledgeable, but they’re also fallible: they can generate incorrect or inconsistent information. Trained on text from across the web, with billions of parameters, they can give almost any plausible answer but not always the specifically correct answer. This is especially true when you’re looking for answers that require very specific context. An LLM can tell you all about the reign of Queen Anne, but it doesn’t necessarily have the proper context to help you, your coworkers, or your customers with things specific to your business.

So an important part of adding AI to an app or a workflow is improving the LLM for your specific use case. Think of an off-the-shelf AI model as a starting point to build on rather than an immediate solve or a full-on panacea. Here, we want to show you how you can start to enhance any model you use, and how you can use Retool to do so.

1. Retrieval-augmented generation, or RAG

Patrick Lewis, lead author of the first paper on the topic, has apologized for the name. RAG is a technique that combines the strengths of retrieval systems and generative models:

  • Retrieval: The model searches through an external knowledge base to find relevant information related to the input prompt, which can help provide accurate and factual context for the generation process.
  • Generation: The model uses the retrieved information and pre-existing knowledge to respond more accurately to the prompt.

The “external knowledge base” here can be anything. The original research used Wikipedia, but for improving your own models, it should be knowledge specific to your organization or product. This could be:

  • Your product or API docs, tutorials, or user guides
  • Customer support issues, chat logs, or FAQs
  • Company policies, guidelines, or SOPs

This allows the model to access a broader range of information and improve the accuracy and relevance of its responses. Importantly, it gives your models domain-specific knowledge, enabling them to perform better. Thus, RAG can significantly boost the performance of your AI models on tasks like:

  • Creating code snippets on the fly for user tutorials or answering specific product-related questions
  • Providing relevant responses to common customer inquiries by leveraging customer support knowledge
  • Assisting new employees in learning company policies
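
Under the hood, the loop is simple: embed your documents, find the ones most similar to the user’s question, and hand them to the model as context. Here’s a minimal sketch using the OpenAI Node SDK. The two-document “knowledge base,” the in-memory similarity search, and the model names are placeholders; in practice you’d keep your embeddings in a proper vector store (more on that below).

```javascript
// A minimal retrieve-then-generate loop using the OpenAI Node SDK.
// The two-document "knowledge base" lives in memory here; in practice you'd
// store embeddings in a vector store (e.g. Retool Vectors) instead.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const docs = [
  "Refunds are available within 30 days of purchase.",
  "Our API rate limit is 100 requests per minute per key.",
];

// Turn text into embedding vectors so we can compare it by similarity.
async function embed(texts) {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: texts,
  });
  return res.data.map((d) => d.embedding);
}

// Cosine similarity between two vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function answer(question) {
  // Retrieval: rank documents by similarity to the question, keep the best match.
  const docVectors = await embed(docs);
  const [queryVector] = await embed([question]);
  const context = docs
    .map((text, i) => ({ text, score: cosine(docVectors[i], queryVector) }))
    .sort((a, b) => b.score - a.score)[0].text;

  // Generation: answer with the retrieved context, not just the model's training data.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: `Answer using only this context:\n${context}` },
      { role: "user", content: question },
    ],
  });
  return completion.choices[0].message.content;
}

console.log(await answer("How long do customers have to request a refund?"));
```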
How RAG gives AI models access to specific information.

How Retool helps you with RAG

Retool Vectors lets you store unstructured text from your knowledge base, whether company policy docs or product documentation, for use with AI models. Then, when you run an AI action in Retool, such as chat or text generation, relevant information from your text in Retool Vectors is automatically included.

How Retool Vectors gives your AI model relevant context. Image courtesy of the author.

2. Fine-tuning

Ah, a much more sophisticated name that conjures up pianos and precision...

Fine-tuning AI models means taking a pre-trained model and further training it on a smaller, domain-specific dataset. This process helps the model adapt to your particular use case’s nuances and specific requirements, resulting in improved performance and more accurate outputs.

This is different from RAG. With RAG, you augment the model’s knowledge with external information during the generation process, but the model’s underlying parameters remain unchanged. In contrast, fine-tuning updates the model’s parameters to better fit your domain or task.

Fine-tuning can be particularly useful when you have a substantial amount of domain-specific data and want your model to understand the intricacies of your use case deeply. For example, suppose you’re building an AI-powered legal assistant. In that case, fine-tuning the model on a large corpus of legal documents, case studies, and contracts can help it better grasp legal terminology, reasoning, and document structures. (Of course, even with all the improvements you’re making here, check your policies—and work—for any kind of legal use case.)

For all its perks, fine-tuning does come with some challenges:

  • It requires significant, high-quality, domain-specific data, which can be time-consuming and expensive to collect and curate.
  • Fine-tuning can be computationally intensive, especially for larger models, which may require specialized hardware or cloud computing resources.
  • There’s a risk of overfitting, where the model becomes too specialized and loses its ability to generalize to new, unseen data.

Despite these challenges, fine-tuning remains a go-to technique for many AI practitioners looking to squeeze out that extra bit of performance and customization from their models.

How to make fine-tuning easier

Fine-tuning well is a process that requires a well-defined workflow. For example, you might have thousands of documents that need to be parsed and transformed into the right structure for fine-tuning inputs.


If we use OpenAI fine-tuning as an example, you need to:

  • Prepare your dataset by transforming each document into JSONL format with a prompt and completion.
  • Split the dataset into training and testing sets so you can measure your tuning.
  • Upload your training files.
  • Create your fine-tuned model.
  • Analyze your model metrics.
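
Roughly, those steps look like this with the OpenAI Node SDK. This is a sketch only: the training example, file name, and base model are placeholders, and current chat-model fine-tuning expects each prompt/completion pair expressed as a list of messages.

```javascript
// A sketch of the fine-tuning steps above using the OpenAI Node SDK.
// The training example, file name, and base model are illustrative placeholders.
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// 1. Prepare the dataset: one JSON object per line (JSONL). For chat models,
//    each prompt/completion pair is expressed as a list of messages.
const examples = [
  {
    messages: [
      { role: "system", content: "You are a support agent for Acme." },
      { role: "user", content: "How do I reset my password?" },
      { role: "assistant", content: "Go to Settings > Security and choose Reset password." },
    ],
  },
];
fs.writeFileSync("training.jsonl", examples.map((e) => JSON.stringify(e)).join("\n"));

// 2. Upload the training file.
const file = await openai.files.create({
  file: fs.createReadStream("training.jsonl"),
  purpose: "fine-tune",
});

// 3. Create the fine-tuning job against a base model.
const job = await openai.fineTuning.jobs.create({
  training_file: file.id,
  model: "gpt-3.5-turbo",
});

// 4. Check the job; once it succeeds, fine_tuned_model is the model ID you can call.
const status = await openai.fineTuning.jobs.retrieve(job.id);
console.log(status.status, status.fine_tuned_model);
```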

We’d be remiss not to mention that Retool Workflows can help with this process. You can pull your data in from external data sources, manipulate it into the correct format, and then use JavaScript to directly call the OpenAI APIs necessary for uploading, training, and analyzing your models. You can even use an AI action to reformat the data into JSONL and to create the prompts you need to tune your model.

3. Prompt engineering

Again, a good name, this time making you think of craftsmanship, design, and construction.

Prompt engineering is the art and science of designing effective prompts that guide AI models to generate desired outputs. (Alternatively, our friends at Anthropic describe it as “an empirical science that involves iterating and testing prompts to optimize performance.”) Crafting the perfect prompt is a skill that requires understanding your AI model’s capabilities, limitations, and quirks. It involves carefully selecting the right words, phrases, and structure that steer the model towards generating outputs that align with your specific requirements.

For example, let’s say you’re building a content generation tool for social media marketers. By engineering prompts that include key elements like the target audience, desired tone, and a specific call to action, you can guide the AI model to generate compelling social media posts that resonate with your intended audience and drive engagement.
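
For instance, a structured prompt for that use case might look something like this. The brief (audience, tone, product, call to action) and the model are invented for illustration:

```javascript
// A hypothetical structured prompt for the social media example above.
// The brief (audience, tone, product, call to action) is invented for illustration.
import OpenAI from "openai";

const openai = new OpenAI();

const brief = {
  audience: "early-stage startup founders",
  tone: "friendly but direct",
  product: "an internal tooling platform",
  callToAction: "sign up for the free tier",
};

const prompt = `Write a LinkedIn post promoting ${brief.product}.
Target audience: ${brief.audience}.
Tone: ${brief.tone}.
End with a clear call to action to ${brief.callToAction}.
Keep it under 80 words and avoid hashtags.`;

const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: prompt }],
});
console.log(completion.choices[0].message.content);
```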

Some key considerations in prompt engineering include:

  • Clarity: Ensure your prompts are clear, specific, and unambiguous to minimize the risk of misinterpretation by the AI model.
  • Context: Provide sufficient context to help the model understand the background, intent, and desired outcome of the task at hand.
  • Structure: Experiment with different prompt structures, such as question-answer formats, fill-in-the-blanks, or role-playing scenarios, to find what works best for your specific use case.
  • Creativity: Inject creativity and personality into your prompts to make the generated outputs more engaging and memorable. (Just be sure you don’t sacrifice clarity or context in doing so!)

Prompt engineering is a critical skill in the AI practitioner’s toolkit—and engineering good prompts can transform a generic AI model into a much more powerful tool that delivers results tailored to your needs.

A fast track toward better prompt engineering

With prompt engineering, you need to control your experimentation. If you experiment in an AI playground or just the regular chat UI, you can end up with a pile of ad-hoc prompts, and unless you’re systematic about assessing them, you won’t understand what’s essential—what’s working well, what isn’t, and what the LLM needs.

To get ahead of this, our team has spun up a couple of versions of apps that let us test prompts side by side. (You can see a quick example of one we've used below.) This kind of app lets you tweak structure, clarity, context, and creativity until you land on the best prompts. Then you can transfer those prompts to your main workflows and models.

Examples of test prompt engineering in Retool. Image courtesy of the author.
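
Under the hood, this kind of side-by-side tester boils down to running the same input through several prompt variants and comparing the outputs. Here’s a minimal sketch with the OpenAI Node SDK, where the variants and the sample input are placeholders:

```javascript
// Side-by-side prompt testing in miniature: same model, same input,
// two prompt variants. The variants and sample input are placeholders.
import OpenAI from "openai";

const openai = new OpenAI();

const input = "Customer asks: can I export my data to CSV?";
const variants = {
  terse: `Answer the customer's question in one sentence.\n${input}`,
  structured: `You are a support agent. Answer the question, then list the exact steps.\n${input}`,
};

for (const [name, prompt] of Object.entries(variants)) {
  const res = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: prompt }],
  });
  console.log(`--- ${name} ---\n${res.choices[0].message.content}\n`);
}
```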

(You can use Retool to create your own and iterate on your prompt engineering approach. If you’re a Claude user, Anthropic also recently launched production-ready prompts in its console.)

4. Model switching

I came up with this name. (Patrick Lewis is right—naming stuff well is hard!)

Here’s the thing: you don’t have to stick to just one model or architecture. You can try different models to find the best fit for your use case. Model switching is testing and comparing different AI models to find the one that best fits your requirements and overall AI stack. AI models come in various flavors, each with its unique characteristics. Some models might be great at generating creative content, while others excel at analyzing data or understanding natural language.

The key to successful model switching is clearly understanding your specific use case and the desired outcomes. What kind of tasks do you want your AI to perform? What level of accuracy, speed, or creativity do you require? Answering these questions will help you narrow the pool of potential model candidates.

Once you have a shortlist of models, it's time to test them. Feed them sample inputs, evaluate their outputs, and compare their performance against your predefined metrics. Feel free to experiment with different settings, parameters, or fine-tuning techniques to see how each model responds.
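
That comparison loop can be as simple as sending the same prompt to each candidate model and logging the output and latency. Here’s a rough sketch with the OpenAI Node SDK; the candidate list and prompt are illustrative, and in a real evaluation you’d also compare providers and track cost and quality against your own metrics:

```javascript
// Model switching in miniature: run the same prompt through each candidate
// model and compare output and latency. The candidate list is illustrative;
// swap in whichever models (or providers) you're evaluating.
import OpenAI from "openai";

const openai = new OpenAI();

const candidates = ["gpt-4o", "gpt-3.5-turbo"];
const prompt = "Summarize this support ticket in two sentences: ...";

for (const model of candidates) {
  const start = Date.now();
  const res = await openai.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  console.log(`${model} (${Date.now() - start} ms):`);
  console.log(res.choices[0].message.content, "\n");
}
```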

Model switching is an iterative process that requires patience, curiosity, and a willingness to learn. It's not about finding the one-size-fits-all solution but discovering the model that fits your unique use case like a glove.

How Retool helps you with model switching

Retool lets you easily switch between different AI models and compare their performance side by side. Its integrations with popular AI platforms like OpenAI, Anthropic, Cohere, and HuggingFace let you quickly connect to a wide range of models and test them with your specific inputs.

You can create a Retool app to select different models, adjust their settings, and evaluate their outputs in real time. This allows you to experiment with different model configurations, fine-tuning approaches, and prompt variations to find the optimal combination for your use case. We’ve already done this for you with our LLM Playground:

Retool's LLM Playground where you can select and test different models. Image courtesy of the author.

Leveraging Retool's UI components and API integrations streamlines the model-switching process and allows you to make data-driven decisions about which AI model to use for your application.

5. Multimodality

A fancy name that makes you sound like a PhD, even if you're just talking about using pictures and words together.

Multimodality is precisely that: models that can generate both text and images. Imagine you're building a content creation platform that helps bloggers and marketers produce engaging articles. With a multimodal AI model, your users can provide a text prompt, and the AI generates a well-written article along with relevant images, infographics, and even videos to accompany the text. It's like having a writer and a graphic designer rolled into one!

Or say you're developing a social media management tool that helps businesses create eye-catching posts. A multimodal AI could take a short text input and generate a captivating image and a witty caption, making it easier to create shareable content that stands out in the feed.

The beauty of multimodal AI is that it allows you to create rich multimedia content with minimal effort. By leveraging the power of both text and image generation, you can create a more engaging experience within your AI-powered applications.
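
For illustration, here’s a minimal sketch of that social-post example. It pairs a text model with an image model via the OpenAI Node SDK (the model names and product blurb are placeholders); a natively multimodal model could handle both from a single prompt:

```javascript
// The social-post example in miniature: a text model drafts the caption and an
// image model generates the artwork. Model names and the product blurb are placeholders.
import OpenAI from "openai";

const openai = new OpenAI();

const idea = "We just launched dark mode for our dashboard.";

// Text: draft a short, witty caption for the post.
const caption = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: `Write a witty one-line social post about: ${idea}` }],
});

// Image: generate artwork to accompany the caption.
const image = await openai.images.generate({
  model: "dall-e-3",
  prompt: "Clean, playful product illustration of a dashboard UI switching to dark mode",
  size: "1024x1024",
});

console.log(caption.choices[0].message.content);
console.log(image.data[0].url);
```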

How Retool helps you use multimodality

With Retool, you can:

  • Generate images from text prompts using multimodal AI models and display them in your app interface.
  • Combine generated images with text fields, buttons, and other Retool components to create interactive and dynamic content creation tools.
  • Use Retool Workflows to automate image and text generation, making it easier to produce large volumes of content quickly and efficiently.

For example, you could create a Retool app that allows users to enter a text prompt, generate an image and accompanying text using a multimodal AI model, and then display the results in a sleek, user-friendly layout. Or you could build a workflow that automatically generates product descriptions and images for an e-commerce platform, saving time and resources while ensuring consistency and quality.

Bending AI to your will

The real magic happens when you combine these approaches: RAG, fine-tuning, prompt engineering, model switching, and multimodality. By leveraging the strengths of each approach and applying them to text and image data, you can create AI models that are not just intelligent but also intuitive, adaptable, and downright impressive.

A bonus: using Retool, you can get your AI models up and running in minutes and build exactly the applications your business needs.

Get started with AI and Retool today, or book a demo to learn more.

Thanks to Keanan Koppenhaver for extra inputs on this article.

Andrew Tate
Guest Author
Andrew is an ex-neuroengineer-turned-developer and technical content marketer.