Azure Prompt Flow: For prototyping your AI solution

An alternative way of using Prompt Flow to prototype a code-first AI solution

Stas(Stanislav) Lebedenko
5 min read · Mar 28, 2024

TL;DR: Prompt Flow's primary goal is to build, validate, and deploy LLM applications. Still, you can use it for other purposes: codifying a low-code flow with RAG on Azure into a standalone Python app, comparing outputs across different LLM and RAG combinations, or checking what happens when you use different versions of the same document for RAG.

Plan

  • Prompt Flow basics.
  • Sample prototype description.
  • Working with Prompt Flow nodes.
  • Custom Python code path.
  • Conclusion and links.

Basics

Azure Prompt Flow is a development tool for building LLM-based AI applications with the help of visual graphs and Python tools, offering easy debugging, prompt variants, performance evaluation with testing, and tuning of results. It also lets you deploy your custom application to Azure in a matter of minutes.

This article is an addition to the excellent Microsoft Learn documentation, so for a jumpstart, please check this link.

While you can develop and test your application with Prompt Flow and bits of Python alone, there is also an opportunity to write your custom Python application as a separate branch in parallel and compare the results across multiple prompt configurations and different LLMs.

Prototype

Working with the visual flows might require some adaptation, but the product keeps improving each month, and as of March 2024, validation is triggered when you try to fix existing code. This article has a companion repository with code ready for you to experiment with; see the links section.

To start with this sample, you need an Azure OpenAI instance with a deployed GPT model (3.5, 4, or 4-Turbo) and an Ada deployment for text embeddings, plus an Azure AI Search instance on the Standard tier to store vectors.
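As a rough illustration of that prerequisite step, here is a minimal sketch (not from the article's repository) of embedding document chunks with Ada and uploading them to Azure AI Search; the endpoint placeholders, index name, and field names are assumptions you would replace with your own:

```python
# Hypothetical one-time prep: embed document chunks and push them into
# an Azure AI Search index so the RAG flow can query them later.
from openai import AzureOpenAI
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

aoai = AzureOpenAI(
    azure_endpoint="https://<your-aoai>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2024-02-01",
)
search = SearchClient(
    endpoint="https://<your-search>.search.windows.net",
    index_name="docs-index",  # hypothetical index name
    credential=AzureKeyCredential("<search-key>"),
)

chunks = ["First chunk of the document...", "Second chunk..."]
docs = []
for i, chunk in enumerate(chunks):
    embedding = aoai.embeddings.create(
        model="text-embedding-ada-002",  # your Ada deployment name
        input=chunk,
    ).data[0].embedding
    # field names must match your index schema
    docs.append({"id": str(i), "content": chunk, "contentVector": embedding})

search.upload_documents(documents=docs)
```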

Let's look at the concept: three flows that do the same thing in slightly different ways.

The three-flow solution

The right-hand flow is just a simple call to the LLM without any embeddings, with a prompt variant, so you try to get results the simple way. Sometimes, prompts are enough.
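In plain Python, that branch boils down to something like the following sketch; the deployment name and the two prompt variants are assumptions standing in for the variants you would configure on the LLM node:

```python
# A minimal sketch of the no-RAG branch: one chat completion per prompt variant.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-aoai>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2024-02-01",
)

# two hypothetical prompt variants, like variant_0/variant_1 on an LLM node
variants = {
    "variant_0": "Answer briefly: {question}",
    "variant_1": "Answer step by step, stating your assumptions: {question}",
}

question = "What tiers does Azure AI Search offer?"
for name, template in variants.items():
    resp = client.chat.completions.create(
        model="gpt-35-turbo",  # your GPT deployment name
        messages=[{"role": "user", "content": template.format(question=question)}],
    )
    print(name, "->", resp.choices[0].message.content)
```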

The central flow with RAG has the extra step of embedding your question to run a vector search, again with prompt variants. For this, you need to upload documents to the vector index beforehand.
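Under the hood, that branch is roughly the following (a sketch assuming the azure-search-documents SDK and the same hypothetical index fields as above):

```python
# A minimal sketch of the RAG branch: embed the question with Ada, then run
# a vector query against the index populated earlier.
from openai import AzureOpenAI
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

aoai = AzureOpenAI(azure_endpoint="https://<your-aoai>.openai.azure.com",
                   api_key="<aoai-key>", api_version="2024-02-01")
search = SearchClient(endpoint="https://<your-search>.search.windows.net",
                      index_name="docs-index",
                      credential=AzureKeyCredential("<search-key>"))

question = "What tiers does Azure AI Search offer?"
vector = aoai.embeddings.create(model="text-embedding-ada-002",
                                input=question).data[0].embedding

results = search.search(
    search_text=None,
    vector_queries=[VectorizedQuery(vector=vector, k_nearest_neighbors=3,
                                    fields="contentVector")],
)
context = "\n".join(doc["content"] for doc in results)
# `context` is then injected into the prompt template before calling the LLM
```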

The left flow uses custom Python code that you fully control; even the pip install is done in a separate node for convenience. Be aware that this will slow down overall execution.

So, what is the catch with this approach? Sometimes, you just need plain Python code that you can move freely between your applications, put into an Azure Function, or even into a simple Azure Container Instance to avoid the expensive compute of ML Studio.
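For example, the same node code could later be dropped into an Azure Function with only a thin wrapper around it; here is a hypothetical sketch using the Python v2 programming model, with the route name made up:

```python
# Hypothetical wrapper: the same Python from the flow node, exposed as an
# HTTP-triggered Azure Function (Python v2 programming model).
import azure.functions as func

app = func.FunctionApp()

@app.route(route="ask", auth_level=func.AuthLevel.FUNCTION)
def ask(req: func.HttpRequest) -> func.HttpResponse:
    question = req.params.get("question", "")
    # ... reuse the exact same LLM/RAG code from the flow node here ...
    return func.HttpResponse(f"Answer for: {question}")
```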

I found this low-code way of experimenting convenient and easy; it also allows you to use tags on the final node to compare execution results.

Working with nodes

Please be aware that while Prompt Flow is very interactive, you still need to look for validation and errors in several places and click the Save button occasionally.

The first level of validation is in the chat window; then, you need to run validation at the level of each and every node. Keep in mind that you can also edit the flow via “Raw file mode,” which speeds things up by letting you edit the flow's YAML file directly in a text editor.

Fixing connection errors and adjusting nodes

Besides the validation errors in the chat, keep in mind that extra validation errors are displayed at the node level, so to make the sample code work, you need to fix the nodes one by one by validating their inputs.

Validate and parse input button.

Inputs have different areas to click: the yellow one opens a dropdown, and the white one with a pen opens a text dialog :) to edit variables.

Value adjustment via input

Custom code flow

The custom code path starts with a pip install node. This step looks into your requirements.txt, which is accessible by clicking the Files tab, finding the file in the list, and opening it for edits.

File tab in Azure Prompt Flow
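The actual file in the repository may differ, but a requirements.txt for a flow like this would typically list something like:

```
# extra packages the custom Python nodes need (illustrative)
openai>=1.0
azure-search-documents>=11.4
```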

The custom code node has a slightly different structure, but it is still easy to work with: helper functions go before the main code section, whose entry point is marked with the “@tool” decorator.
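For reference, a minimal custom node looks roughly like this; the function and parameter names here are made up:

```python
# A minimal sketch of a custom Python node: helper functions sit above the
# entry point, which Prompt Flow discovers via the @tool decorator.
from promptflow import tool

def normalize(text: str) -> str:
    # helper function, defined before the main code section
    return " ".join(text.split())

@tool
def my_python_tool(question: str) -> str:
    # the node's inputs map onto this function's parameters
    return normalize(question)
```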

Now, if we look into the details of the “Summary” node,

Summary node

we will see input parameters from the three execution branches and some OpenAI tags used to output the data as-is. You might ask why I use an LLM node for such dumb data output: I'm a lazy developer, so I usually ask the model to compare the outputs and check them against my custom data :). The main problem is that I need a better idea for this demo, and the production solution is under NDA :).
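If you wanted to drop the LLM from this last step, a plain Python summary node could collect the three branches instead; a hypothetical sketch:

```python
# A hypothetical alternative to the LLM-based Summary node: gather the three
# branch outputs side by side so they are easy to compare in the trace.
from promptflow import tool

@tool
def summary(simple_answer: str, rag_answer: str, custom_answer: str) -> dict:
    return {
        "simple": simple_answer,
        "rag": rag_answer,
        "custom": custom_answer,
    }
```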

Conclusion

Prompt Flow saved me a lot of time prototyping this LLM app by providing a seamless way to compare different execution variants against my custom-written code.


Stas(Stanislav) Lebedenko

Azure MVP | MCT | Software/Cloud Architect | Dev | https://github.com/staslebedenko | Odesa MS .NET/Azure group | Serverless fan 🙃| IT2School/AtomSpace