Open Source LLM based Web Chat Interface

import os
import requests
import json
from fastapi import FastAPI
from fastapi.responses import HTMLResponse
from pydantic import BaseModel
import uvicorn

app = FastAPI()

# Free models available on openrouter.ai as of Dec 7, 2023.
MODELS = [
    "mistralai/mistral-7b-instruct",
    "huggingfaceh4/zephyr-7b-beta",
    "openchat/openchat-7b",
    "undi95/toppy-m-7b",
    "gryphe/mythomist-7b",
]

class Prompt(BaseModel):
    text: str
    model: str
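
The preview stops at the request model. For orientation, here is a minimal sketch of how these pieces could fit together in the prompt endpoint, based on the request code shown in the FAQ below; the `conversation` global and the return shape are illustrative, and the full template may differ:

```python
OPENROUTER_API_KEY = os.environ["OPENROUTER_API_KEY"]  # environment secret, see setup below

conversation = []  # illustrative in-memory history: [{"role": ..., "content": ...}]

@app.post("/prompt")
def send_prompt(prompt: Prompt):
    # Record the user's message, then forward the whole history to OpenRouter.
    conversation.append({"role": "user", "content": prompt.text})
    response = requests.post(
        url="https://openrouter.ai/api/v1/chat/completions",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {OPENROUTER_API_KEY}",
        },
        data=json.dumps({"model": prompt.model, "messages": conversation}),
    )
    # OpenRouter mirrors the OpenAI chat-completions response schema.
    reply = response.json()["choices"][0]["message"]["content"]
    conversation.append({"role": "assistant", "content": reply})
    return {"response": reply}
```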

Frequently Asked Questions

What are the potential business applications for this Open Source LLM based Web Chat Interface?

The Open Source LLM based Web Chat Interface has numerous business applications across various industries:

  • Customer Support: Companies can use it to provide 24/7 automated customer service.
  • Education: It can serve as an interactive learning tool or tutor for students.
  • Content Creation: Marketers and writers can use it for brainstorming and generating ideas.
  • Research and Development: Scientists and researchers can utilize it for literature review and hypothesis generation.
  • Human Resources: It can assist in initial candidate screening or answering employee queries.

How can this template be customized for specific business needs?

The Open Source LLM based Web Chat Interface is highly customizable:

  • Model Selection: Businesses can choose specific models that best fit their industry or use case.
  • Conversation History: The template can be modified to store and analyze conversation history for insights (see the sketch after this list).
  • UI Customization: The interface can be branded and styled to match company guidelines.
  • Integration: It can be integrated with existing systems like CRM or knowledge bases for more context-aware responses.
  • Fine-tuning: The underlying models can be fine-tuned on company-specific data for more accurate and relevant responses.
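
For example, the conversation-history customization could start as simply as persisting each exchange to disk. A minimal sketch, where the `log_exchange` helper and the `chat_log.jsonl` path are illustrative rather than part of the template:

```python
import json
from datetime import datetime, timezone

def log_exchange(model: str, user_text: str, reply: str, path: str = "chat_log.jsonl"):
    """Append one prompt/response pair to a JSON Lines file for later analysis."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "user": user_text,
        "assistant": reply,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```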

What are the cost implications of using this template compared to proprietary chatbot solutions?

The Open Source LLM based Web Chat Interface offers significant cost advantages:

  • Free API: It uses OpenRouter.ai's free API, reducing operational costs.
  • Open Source Models: The template utilizes free, open-source models, eliminating licensing fees.
  • Scalability: As an open-source solution, it can be scaled without additional per-user or per-query costs.
  • Customization: In-house customization is possible without expensive vendor lock-in.
  • Infrastructure: It can be hosted on-premises or on cost-effective cloud solutions, providing flexibility in infrastructure costs.

How can I add a new model to the list of available models in the template?

To add a new model to the Open Source LLM based Web Chat Interface, you need to modify the MODELS list in the Python code. Here's an example:

```python
MODELS = [
    "mistralai/mistral-7b-instruct",
    "huggingfaceh4/zephyr-7b-beta",
    "openchat/openchat-7b",
    "undi95/toppy-m-7b",
    "gryphe/mythomist-7b",
    "your-new-model/model-name",  # Add your new model here
]
```

Make sure the model you're adding is supported by OpenRouter.ai. You'll also need to update the HTML to include the new option in the dropdown menu.
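
Rather than hand-editing the HTML, one option is to render the dropdown options directly from the `MODELS` list so the two never drift apart. A minimal sketch, assuming the page is built as a Python string served via `HTMLResponse` as in this template:

```python
def model_options() -> str:
    """Render one <option> element per entry in MODELS."""
    return "\n".join(f'<option value="{m}">{m}</option>' for m in MODELS)

# Interpolate into the page template, e.g.:
# html = f"<select id='model'>{model_options()}</select>"
```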

How can I modify the template to include system messages in the conversation?

To include system messages in the Open Source LLM based Web Chat Interface, you can modify the send_prompt function. Here's an example of how to add a system message:

```python
@app.post("/prompt")
def send_prompt(prompt: Prompt):
    # ... (existing code)

    system_message = {"role": "system", "content": "You are a helpful assistant."}
    # Insert the system message once, at the start of the conversation; the guard
    # avoids stacking duplicate system messages if the history persists across requests.
    if not conversation or conversation[0].get("role") != "system":
        conversation.insert(0, system_message)

    response = requests.post(
        url="https://openrouter.ai/api/v1/chat/completions",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {OPENROUTER_API_KEY}",
        },
        data=json.dumps({
            "model": prompt.model,
            "messages": conversation
        })
    )
    # ... (rest of the existing code)
```

This modification adds a system message at the beginning of each conversation, which can help set the context or behavior of the model. You can customize the content of the system message as needed for your specific use case.


This app is a web interface that lets the user send prompts to open-source LLMs. It requires an OpenRouter API key to work; the key is free to obtain on openrouter.ai, and since OpenRouter hosts a number of free open-source models, you can build a chatbot at no cost. The interface behaves as follows:

  • The user can choose from a list of models and hold a conversation with the chosen model.
  • The conversation history is displayed in chronological order, with the oldest message on top and the newest below, and each message is labeled with who said it.
  • Messages have some space between them; user messages are colored green and model messages grey.
  • The conversation div has a height of 80% to leave space for the model selection and input fields, and the last message is displayed above the sticky input block.
  • The chat bar is a sticky bar at the bottom of the page with 10 pixels of padding below it.
  • The input field is 3 times wider than the default size but never exceeds the width of the page.
  • The send button sits on the right side of the input field, always fits on the page, and has padding on the right side to match the left.
  • The user can press Enter to send a message in addition to clicking the send button, and the input bar is cleared after sending.
  • While waiting for the model's response, the input is blocked, the send button is disabled, and a spinner is displayed on the send button.
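
A minimal sketch of the layout rules above, written as a CSS string that could be embedded in the HTML page the app serves via `HTMLResponse`; the selectors and exact values are illustrative assumptions, not the template's actual markup:

```python
# Illustrative CSS for the layout described above; the real selectors may differ.
CHAT_BAR_CSS = """
#conversation {
  height: 80%;           /* leaves room for the model selector and input fields */
  overflow-y: auto;
}
#conversation .message { margin-bottom: 8px; }   /* space between messages */
#conversation .user    { background: #c8e6c9; }  /* user messages in green */
#conversation .model   { background: #e0e0e0; }  /* model messages in grey */
#chat-bar {
  position: sticky;      /* stays pinned at the bottom of the page */
  bottom: 0;
  padding-bottom: 10px;  /* 10 pixels of padding below the bar */
  display: flex;
}
#chat-bar input {
  width: 60ch;           /* ~3x the default input width (default size is 20 chars) */
  max-width: 100%;       /* never exceeds the width of the page */
}
#chat-bar button { padding: 0 12px; }  /* matching padding on left and right */
"""
```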

Introduction to Open Source LLM based Web Chat Interface

Welcome to the step-by-step guide on how to set up and use the Open Source LLM based Web Chat Interface. This template allows you to create a web chat interface that connects with various open-source language models provided by openrouter.ai. You'll be able to send prompts to these models and receive responses, simulating a conversation. The chat history will be displayed on the web page, with user messages in green and model responses in grey.

To begin using this template, click on "Start with this Template" on the Lazy platform.

Setting Environment Secrets

Before you can interact with the openrouter.ai API, you need to set up an environment secret for your API key. Follow these steps to configure your environment secret:

  • Visit openrouter.ai and register for an account to obtain your free API key.
  • Once you have your API key, go to the Environment Secrets tab within the Lazy Builder interface.
  • Create a new secret with the key `OPENROUTER_API_KEY` and paste your API key as the value.

This API key will be used to authenticate your requests to the openrouter.ai API.
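
Inside the app, the key is then read from the environment at startup. A minimal sketch with a fail-fast check (the error message wording is illustrative):

```python
import os

OPENROUTER_API_KEY = os.environ.get("OPENROUTER_API_KEY")
if not OPENROUTER_API_KEY:
    raise RuntimeError(
        "OPENROUTER_API_KEY is not set; add it as an environment secret before deploying."
    )
```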

Using the Test Button

After setting up your environment secret, you can use the Test button to deploy your app. The Lazy CLI will handle the deployment process, and you will not need to provide any additional input at this stage.

Interacting with the Web Chat Interface

Once your app is deployed, Lazy will provide you with a dedicated server link to access your web chat interface. Since the app is built with FastAPI, you will also receive a link to the automatically generated API documentation.

To interact with the chat interface:

  • Open the provided server link in your web browser.
  • You will see a web page with a chat interface and a dropdown menu to select one of the available language models.
  • Type your message into the input field at the bottom of the page.
  • Choose the model you wish to converse with from the dropdown menu.
  • Click the "Send" button or press "Enter" to submit your prompt.
  • The conversation will update with your message in green and the model's response in grey.
  • While the model is generating a response, the send button will show a spinner, indicating that the process is ongoing.

Enjoy your conversation with the open-source language models! Remember, you can always switch between different models to explore various responses and capabilities.
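
Beyond the browser UI, you can also exercise the backend directly. A minimal sketch using `requests` against the app's `/prompt` endpoint; the base URL is a placeholder for your own server link, and the exact response shape depends on the handler:

```python
import requests

# Placeholder: replace with the dedicated server link Lazy provides after deployment.
BASE_URL = "https://your-app.example.com"

resp = requests.post(
    f"{BASE_URL}/prompt",
    json={"text": "Hello! What can you do?", "model": "mistralai/mistral-7b-instruct"},
)
resp.raise_for_status()
print(resp.json())
```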

Conclusion

By following these steps, you should now have a fully functional Open Source LLM based Web Chat Interface. This guide has walked you through setting up your environment secret, deploying the app with the Test button, and interacting with the web chat interface. If you encounter any issues or have further questions, please refer to the documentation provided by openrouter.ai or reach out for support.




Template Benefits

  1. Cost-Effective AI Integration: By leveraging free open-source language models through OpenRouter.ai, businesses can implement AI-powered chatbots without the high costs associated with proprietary AI services.

  2. Flexible Model Selection: The ability to choose from multiple language models allows businesses to tailor the AI's capabilities to specific use cases, enhancing versatility across different departments or client needs.

  3. Improved Customer Support: This chat interface can be easily adapted for customer service applications, providing 24/7 support and reducing the workload on human agents for common inquiries.

  4. Enhanced Internal Knowledge Management: Businesses can use this template to create an AI-powered knowledge base for employees, facilitating faster access to information and improving productivity.

  5. Rapid Prototyping for AI Applications: The template provides a quick way to prototype and test AI-driven conversational interfaces, enabling businesses to explore new product ideas or service enhancements with minimal investment.

Technologies

Maximize OpenAI Potential with Lazy AI: Automate Integrations, Enhance Customer Support and More
Streamline JavaScript Workflows with Lazy AI: Automate Development, Debugging, API Integration and More

Similar templates

FastAPI endpoint for Text Classification using OpenAI GPT 4

This API classifies incoming text items into categories using OpenAI's GPT-4 model. The categories are parameters that the API endpoint accepts, and the model classifies each item on its own with a prompt like: "Classify the following item {item} into one of these categories {categories}". If the model is unsure about an item's category, it responds with an empty string for that item. Key behaviors:

  • Uses the llm_prompt ability to ask the LLM to classify each item and takes the response as is; it does not handle cases where the model identifies multiple categories for an item in single-category classification.
  • Parallelizes classification across items with Python's concurrent.futures module, and handles timeouts and exceptions by leaving the affected items unclassified.
  • For multiple-categories classification, parses the LLM's response and matches it against the category list from the API parameters: both sides are lowercased, the response is split on both ':' and ',' to remove the word "Category", and leading or trailing whitespace is stripped before matching. All matching categories are returned, with no maximum on how many categories an item can belong to.
  • Accepts list-formatted answers: if the LLM responds with a string formatted like a list, the API parses it and matches it against the provided categories.
  • The GPT model's temperature is set to a minimal value to make the output more deterministic.
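
As a rough illustration of the matching behavior that description lists (lowercasing, splitting on ':' and ',', stripping whitespace), here is a hedged sketch; the function name and exact rules are assumptions, not that template's actual code:

```python
import re

def match_categories(llm_response: str, categories: list[str]) -> list[str]:
    """Match an LLM classification reply against the provided category list."""
    # Split on ':' and ',' to drop a leading "Category" label, then normalize.
    parts = re.split(r"[:,]", llm_response)
    normalized = {p.strip().lower() for p in parts}
    return [c for c in categories if c.lower() in normalized]

# Example: match_categories("Category: Sports, Politics", ["sports", "politics", "tech"])
# -> ["sports", "politics"]
```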

