With the new release of ChatGPT function calling, I wanted to test it out with a simple new app.
After a brainstorming session that was shorter than it should have been, I came up with the idea of the trashtalk Wordle. Just imagine how fun (and annoying) it could be to get judged on our Wordle strategy!
For this project, I turned to my favorite app-building tools:
- Streamlit: because I love Python, and it will host my game for me
- LangChain: which implemented function calling as soon as it was released.
In the second phase, I’ll use the LLM decorators to improve the readability of the code. This is the main takeaway I want to share in this article.
Building a Wordle App
GPT-4 certainly knows how to build a Streamlit app. But since its knowledge was outdated, it tended to use deprecated functions like `st.beta_columns`, which was renamed to `st.columns` in version 0.86.0 back in 2021. I had to give it links to the Streamlit documentation to get it to write up-to-date code.
I asked it to adopt a mentor approach to help me develop this app while teaching me the Streamlit workflow. This resulted in building the app from a simple “hello world” to a fully functional Wordle, step by step. It wasn’t the quickest method, but it was highly instructive. GPT-4 has very good project management skills when prioritizing features and keeping a viable product throughout the build process.
The rules of Wordle seem pretty straightforward, but there is a slight complexity when it comes to coloring letters. For example, if your guess contains two E's and the secret word contains only one, only one of them can be colored: green if one is in the right position, otherwise the left-most one in yellow.
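That duplicate-letter rule can be sketched in a few lines of Python. This is my own illustration, not the app's actual code (`score_guess` and the color names are mine): greens are assigned first, then yellows left to right, each one consuming a letter from the secret word so no letter is colored more times than it appears.

```python
def score_guess(secret: str, guess: str) -> list:
    """Color each guess letter 'green', 'yellow', or 'gray'."""
    colors = ["gray"] * len(guess)
    # Count secret letters that are NOT exact matches;
    # only these are available for yellow.
    remaining = {}
    for s, g in zip(secret, guess):
        if s != g:
            remaining[s] = remaining.get(s, 0) + 1
    # First pass: exact matches are green.
    for i, (s, g) in enumerate(zip(secret, guess)):
        if s == g:
            colors[i] = "green"
    # Second pass: the left-most remaining matches are yellow.
    for i, g in enumerate(guess):
        if colors[i] != "green" and remaining.get(g, 0) > 0:
            colors[i] = "yellow"
            remaining[g] -= 1
    return colors
```

With secret `CRANE` and guess `EERIE`, only the final E is green and the first two E's stay gray, because the secret's single E is already consumed by the exact match.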
Well, after a few hours, there it was: my simple but effective Wordle game. Time to spice things up.
Integrating ChatGPT as We Did in the Old Days (Around Two Months Ago)
The chat is triggered right when the user's guess is evaluated. The guess is sent to the AI, along with the history and the secret word.
I created a hot-temperature ChatOpenAI bot,

```python
chat = ChatOpenAI(temperature=0.8)
```

then filled it with a system prompt and provided the formatted user input:
```python
system_prompt_template = """
You are a very cynical and sarcastic commenter. You're watching someone playing WORDLE and you are making a comment on each guess the player tries, roasting them in just a few passive-aggressive words. The player feels motivated by a bit of trashtalk. They have a lot of humor, so you don't have to fear offending them.
"""

human_prompt_template = """Here is the state of the game:
The secret word is {secret_word}.
{history}
the guess the user just submitted is {guess}.
What is your comment? (keep it very short and unpleasant. Keep the secret word secret) """
```
I then sent this to ChatGPT and printed the response in the chat column beside the Wordle app:

```python
response = chat(chat_prompt_with_values.to_messages()).content
```
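Stripped of the framework, there is no magic here: the two templates are just format strings filled with the game state and sent as a system/user message pair. A framework-free sketch (the `build_messages` helper is my own name, not part of the app):

```python
def build_messages(secret_word: str, history: str, guess: str) -> list:
    """Fill the prompt templates and return OpenAI-style chat messages."""
    system = (
        "You are a very cynical and sarcastic commenter. You're watching "
        "someone playing WORDLE and roasting each guess in a few "
        "passive-aggressive words."
    )
    user = (
        "Here is the state of the game:\n"
        f"The secret word is {secret_word}.\n"
        f"{history}\n"
        f"the guess the user just submitted is {guess}.\n"
        "What is your comment? (keep it very short and unpleasant. "
        "Keep the secret word secret)"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```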
Performance
To give the best impression of a quick-witted commentator, I opted for the quickest model, the turbocharged `gpt-3.5-turbo`.
I found myself chuckling more than once, quite an achievement for AI-based humor. Comments like “You tried Sheep? This guess is so baaaaaaaad” were amusing.
It isn’t uncommon to see the bot spilling the beans with comments like “Peach? This is so far from River.”
LLM decorators
Now, let's refactor the code with the LLM decorator. I was blown away by how much shorter and more readable the code became:
- create a class Agent
- create a function with decorator @llm_prompt
- the only content of that function is the prompt, written as a plain docstring. Ta-da.
````python
@llm_prompt
def judge(self, secret_word: str, history: str, guess: str, functions=[roast]):
    """
    ``` <prompt:system>
    You are a very cynical and sarcastic commenter. You're watching someone playing WORDLE and you are making a comment on each guess the player tries, roasting them in just a few passive-aggressive words. The player feels motivated by a bit of trashtalk. They have a lot of humor, so you don't have to fear offending them.
    ```

    ``` <prompt:user>
    Here is the state of the game:
    The secret word is {secret_word}.
    {history}
    the guess the player just submitted is {guess}.
    What is your comment? (keep it very short and unpleasant. Keep the secret word secret)
    ```
    """
````
Function Calling
I took the opportunity to integrate the new function calling feature, creating a `roast(reaction)` function for the chatbot. Thanks to the decorator, this was super easy as well.
```python
@llm_function
def roast(reaction: str) -> str:
    """
    Write a very short and unpleasant comment about the player's strategy. Keep the secret word secret.

    Args:
        reaction (str): the sarcastic annoying comment.
    """
    st.session_state.chat_history.append(
        reaction.replace(st.session_state.secret_word, '*****')
    )
```
Subjectively speaking, the quality of the comments has improved as a result. Also, it doesn’t reveal the secret word anymore.
While this use case doesn't fully exploit function calling, I think this tool is a great way to structure ChatGPT's output and incorporate it directly into the app. There is much potential yet to be uncovered.
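Under the hood, function calling works by sending the model a JSON schema describing each available function; a library like langchain-decorators presumably generates this from the function signature and docstring. A hand-written sketch of what the schema for `roast` might look like (the exact generated schema may differ):

```python
# An OpenAI-style function schema, written by hand for illustration.
roast_schema = {
    "name": "roast",
    "description": (
        "Write a very short and unpleasant comment about the "
        "player's strategy. Keep the secret word secret."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "reaction": {
                "type": "string",
                "description": "the sarcastic annoying comment.",
            }
        },
        "required": ["reaction"],
    },
}
```

The model then replies with a structured call such as `{"name": "roast", "arguments": "{\"reaction\": \"...\"}"}` instead of free text, which is what makes the output easy to plug straight into the app.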
Resources
You can play the Trashtalk Wordle App at this link.
The code is publicly available on my GitHub page.
References
I Built a Trashtalk Wordle With ChatGPT Using LLM Decorator and Function Calling was originally published in Better Programming on Medium.