
Configuring NeMo Guardrails Your Way: An Alternative Method for Large Language Models | by Masatake Hirono | Sep, 2023

The heart of NeMo Guardrails lies in its configuration files, typically in .yml format. These files allow you to specify which LLM to use, what kind of behavior you expect from it, and how it should interact with other services. For example, a simple configuration might look like this:

models:
  - type: main
    engine: openai
    model: text-davinci-003

This configuration specifies that OpenAI's text-davinci-003 model should be used as the main LLM. The .yml files are highly customizable, allowing you to define various kinds of guardrails and actions, and even connect to different LLM providers.
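For illustration, here is a minimal sketch of a fuller config.yml that combines the models section above with an instructions section of the kind used later in this article (the content value is a placeholder):

```yaml
models:
  - type: main
    engine: openai
    model: text-davinci-003

instructions:
  - type: general
    content: |
      You are an AI assistant for a customer support center.
```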

While .yml files are a convenient and straightforward way to configure your LLMs, they aren't the only option. This is particularly relevant if you're interested in using LLM providers other than OpenAI, such as Azure. Some users have reported challenges when attempting to configure these providers using .yml files alone.

One alternative is to leverage LangChain's chat models. This approach allows you to pass the LLM configuration directly to NeMo Guardrails. It's especially useful for those who wish to use LLM providers that may not yet be fully supported in .yml configurations. For example, if you're using Azure as your LLM provider, LangChain's chat model offers a way to integrate it seamlessly.

Now that you have a foundational understanding of NeMo Guardrails and its capabilities, you're well prepared for the next section. The upcoming tutorial focuses entirely on alternative methods for configuring your LLM, which is particularly helpful for those who want to use other providers, such as Azure. This will give you a more flexible and potentially better-suited setup for your conversational models.

In this tutorial, we'll show you an alternative way to set up your chatbot, which is particularly useful if you're not using OpenAI. We'll focus on building a chatbot for an insurance customer support center that keeps the conversation focused on insurance topics.

If you haven't already installed the NeMo Guardrails toolkit, please refer to the official installation guide.

Important note: Regardless of whether your machine is GPU-enabled, avoid using torch version 2.0.1. This version has known issues with NeMo Guardrails because of missing CUDA library dependencies, potentially causing a ValueError related to libnvrtc.so.

Create a new folder for your project, naming it ins_assistant. Inside this folder, create another folder called config.

Your folder structure should look like this:

ins_assistant
└── config
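If you prefer to script this step, a one-liner with Python's standard library creates both folders in the current directory (the equivalent shell command would work just as well):

```python
import os

# Create ins_assistant/ and its config/ subfolder in one call;
# exist_ok avoids an error if you run this more than once
os.makedirs(os.path.join("ins_assistant", "config"), exist_ok=True)
```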

In traditional setups, you'd specify the LLM model directly in the config.yml file. In this alternative approach, however, you don't need to specify the model there. If you want to use context to guide the chatbot's behavior, you can still do so.

Create a new config.yml file inside your config folder and add the following lines:

instructions:
  - type: general
    content: |
      You are an AI assistant that helps employees at an insurance company's customer support center.

By doing this, you're setting the stage for the chatbot, instructing it to focus on insurance-related customer support.

Create a new file under the config folder and name it off-topic.co. This is where you'll define the canonical forms and dialog flows specific to your insurance customer support chatbot.

Add the following content to off-topic.co:

define user ask off topic
  "How's the weather today?"
  "Can you recommend a restaurant nearby?"
  "What's your opinion on the latest political news?"
  "How do I cook spaghetti?"
  "What are the best tourist attractions in Paris?"

define bot explain cant off topic
  "I cannot respond to your question because I am programmed to assist only with insurance-related questions."

define flow
  user ask off topic
  bot explain cant off topic

Feel free to add more sample off-topic questions to the user ask off topic canonical form if you'd like to make the chatbot more robust in handling a variety of off-topic queries.

Navigate back to the ins_assistant folder and create a new Python file named cli_chat.py. This script will let you interact with your chatbot via the CLI.

Here's a sample code snippet for cli_chat.py:

import os
from langchain.chat_models import AzureChatOpenAI
from nemoguardrails import LLMRails, RailsConfig

# Read environment variables
azure_openai_key = os.environ.get("AZURE_OPENAI_KEY")
azure_openai_model = os.environ.get("AZURE_OPENAI_MODEL")
azure_openai_endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")

# Define the LLM and parameters to pass to the guardrails configuration
chat_model = AzureChatOpenAI(
    openai_api_type="azure",
    openai_api_version="2023-03-15-preview",
    openai_api_key=azure_openai_key,
    deployment_name=azure_openai_model,
    openai_api_base=azure_openai_endpoint,
)

# Load the guardrails configuration
config = RailsConfig.from_path("./config")

# Pass the LLM configuration directly to NeMo Guardrails
app = LLMRails(config=config, llm=chat_model)

# Sample user input
new_message = app.generate(messages=[{
    "role": "user",
    "content": "What's the latest fashion trend?"
}])

print(f"new_message: {new_message}")

To interact with your chatbot, open your terminal, navigate to the ins_assistant folder, and run:

python cli_chat.py

You should see the chatbot's responses in the terminal, steering any off-topic conversations back to insurance-related topics.

Feel free to edit the content in new_message to pass different user inputs to the LLM. Enjoy experimenting to see how the chatbot responds to various queries!

As we've seen, guardrails offer a powerful way to make LLMs safer, more reliable, and more ethical. While .yml files provide a straightforward configuration method, alternative approaches like the one demonstrated in this tutorial offer greater flexibility, especially for those using LLM providers other than OpenAI.

Whether you're building a chatbot for customer support at an insurance company or for another specialized application, understanding how to effectively implement guardrails is essential. With tools like NeMo Guardrails, achieving this has never been easier.

Thank you for joining me on this deep dive into the world of LLM guardrails. Happy coding!

Masatake Hirono is a data scientist based in Tokyo, Japan. His diverse professional experience includes roles at global consulting firms, where he specialized in advanced analytics. Masatake has led a variety of projects, from ML-driven demand forecasting to the development of recommender engines. He holds a master's degree in Higher Education Institutional Research from the University of Michigan, Ann Arbor. His skill set encompasses econometrics, machine learning, and causal inference, and he is proficient in Python, R, and SQL, among other tools.