Large People Model: Sentiment Analysis in OpenHome

Author: Lydia You

We’re thrilled to spotlight community member Sako’s project, Large People Model!

Have you ever wondered whether your technology could detect that hint of sadness, or touch of anger, in your voice? We spend hours with our personal devices each day, and yet they have no way to see us, to know us, as the complicated, emotional human beings we are.

Large People Model brings us a step closer to having empathetic and emotional AI. The project allows OpenHome to run sentiment analysis on a user's input and send the results to WhatsApp.

Large People Model, or LPM, showcases OpenHome's emotion detection functionality and pairs it with a practical WhatsApp API integration.

Demo

The Developer

Sako is a developer based in San Francisco. He specializes in AI, DevOps, and security, and has a deep passion for open-source projects.

Here’s what Sako had to say about his project:

Loved using OpenHome trained with my own voice to interact and manage my WhatsApp community with 800+ members. Such a time saver! Excited to add more capabilities and build more engaged communities.

About the Project

Eventually, OpenHome will become your intuitive, empathetic companion right in your living room or bedroom. OpenHome SpeakerOS will support you just like a human would, and that means detecting emotions and responding accordingly. Outputting to WhatsApp provides an easy and familiar interface to analyze your conversations.
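
The article doesn't include the WhatsApp side of Sako's project, but for curious readers, here is a minimal sketch of how results could be pushed to a chat using the WhatsApp Business Cloud API. The phone-number ID, access token, and recipient number are placeholders, and Sako's actual integration may look different.

Python

import requests

# Hypothetical sketch: sending a sentiment summary to a WhatsApp chat via the
# WhatsApp Business Cloud API. PHONE_NUMBER_ID and ACCESS_TOKEN are placeholders.
PHONE_NUMBER_ID = "YOUR_PHONE_NUMBER_ID"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

def send_to_whatsapp(recipient: str, text: str) -> dict:
    url = f"https://graph.facebook.com/v18.0/{PHONE_NUMBER_ID}/messages"
    payload = {
        "messaging_product": "whatsapp",
        "to": recipient,  # recipient phone number, e.g. "15551234567"
        "type": "text",
        "text": {"body": text},
    }
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    resp = requests.post(url, json=payload, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Example: forward a sentiment result to a chat.
# send_to_whatsapp("15551234567", "Sentiment: positive (not flagged)")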

Methodology

The following code snippet demonstrates how Sako listens for user input and prompts the large language model to classify the input as positive, negative, or neutral.

Python

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

def call(self, agent):
    # Announce the skill, then capture the user's speech.
    initial_message = "Performing sentiment analysis for your WhatsApp messages, please wait."
    agent.speak(response=initial_message)
    user_inquiry = agent.listen()
    sentiment = user_inquiry

    # Ask the LLM to classify the text and flag sensitive content, as JSON.
    template = """
    do sentiment analysis on the following text: "{sentiment}" and output the sentiment score. If sentiment is positive output 'positive', if negative output 'negative', if neutral output 'neutral'. Output sentiment: {sentiment} and if it contains politics, sex or insult, output 'it is flagged red', else output flagged: 'not flagged' in json format.
    """
    prompt = PromptTemplate(template=template, input_variables=["sentiment"])
    llm_chain = LLMChain(prompt=prompt, llm=llm)  # `llm` is configured elsewhere in the skill
    response = llm_chain.run({"sentiment": sentiment})
    print(response)
    return response

LPM uses LangChain to structure its queries to the LLM: an LLMChain pairs the prompt template with the model, so each sentence in the input can be analyzed and the corresponding sentiment returned in a structured JSON format.
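
Because the prompt asks the model to answer in JSON, the chain's output can be parsed directly. Here is a minimal sketch, assuming the model returns "sentiment" and "flagged" keys as the prompt requests; real model output can vary, so the sketch falls back gracefully when the response isn't valid JSON.

Python

import json

# Hypothetical sketch: parsing the chain's JSON response into a dict. The
# "sentiment" and "flagged" keys follow the prompt's instructions above.
def parse_sentiment(response: str) -> dict:
    try:
        result = json.loads(response)
    except json.JSONDecodeError:
        # Model didn't return valid JSON; report an unknown sentiment.
        return {"sentiment": "unknown", "flagged": "not flagged"}
    return {
        "sentiment": result.get("sentiment", "unknown"),
        "flagged": result.get("flagged", "not flagged"),
    }

# Example:
# parse_sentiment('{"sentiment": "positive", "flagged": "not flagged"}')
# -> {'sentiment': 'positive', 'flagged': 'not flagged'}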