AI Image Generation API

by UnityAI

```python
import logging
import os
from typing import Optional
from fastapi import FastAPI, HTTPException
from fastapi.responses import RedirectResponse, JSONResponse
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from fastapi import Request
from pydantic import BaseModel
from abilities import apply_sqlite_migrations, llm
from models import Base, engine
import requests
from PIL import Image, ImageEnhance
from io import BytesIO

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

app = FastAPI()

# Mount static files directory
app.mount("/static", StaticFiles(directory="static"), name="static")

# Setup Jinja2 templates
```

Frequently Asked Questions

What are some potential business applications for this AI Image Generation API?

The AI Image Generation API offers numerous business applications across various industries. For example:

  - E-commerce: Generating product images for catalogs or customized merchandise
  - Marketing: Creating unique visuals for ad campaigns or social media content
  - Game Development: Producing concept art or in-game assets
  - Interior Design: Visualizing room layouts and decor options
  - Fashion: Designing new clothing patterns or accessories

By integrating this API, businesses can streamline their creative processes and reduce costs associated with traditional image creation methods.

How can this API enhance productivity for creative professionals?

The AI Image Generation API can significantly boost productivity for creative professionals by:

  - Providing instant inspiration and visual starting points
  - Reducing time spent on initial concept sketches
  - Allowing rapid iteration of ideas
  - Enabling the creation of multiple variations quickly
  - Assisting in generating complex scenes or compositions that might be time-consuming to create manually

This tool can serve as a powerful assistant, allowing creatives to focus more on refining and perfecting their vision rather than starting from scratch.

What are the potential cost savings for businesses using this AI Image Generation API?

Implementing the AI Image Generation API can lead to substantial cost savings for businesses in several ways:

  - Reducing the need for extensive stock photo subscriptions
  - Decreasing reliance on freelance artists for initial concepts
  - Lowering production time and associated labor costs
  - Minimizing the need for expensive photo shoots or 3D rendering services
  - Enabling rapid prototyping and ideation, potentially reducing overall project timelines

By leveraging this API, businesses can create unique, high-quality visuals at a fraction of the cost of traditional methods.

How can I customize the image generation process in the API?

The AI Image Generation API allows for customization through the prompt parameter. You can modify the generate_image function to include additional parameters. For example:

```python
@app.post("/generate-image")
async def generate_image(request: ImageGenerationRequest):
    try:
        # ... existing code ...

        payload = {
            "inputs": request.prompt,
            "negative_prompt": request.negative_prompt,
            "num_inference_steps": request.steps,
            "guidance_scale": request.guidance_scale
        }
        response = requests.post(HUGGINGFACE_API_URL, headers=headers, json=payload)

        # ... rest of the function ...
```

You would also need to update the ImageGenerationRequest model to include these new parameters:

```python
class ImageGenerationRequest(BaseModel):
    prompt: str
    negative_prompt: Optional[str] = None
    steps: int = 50
    guidance_scale: float = 7.5
    size: str = "512x512"
```

This allows users to have more control over the generation process, including specifying what not to include in the image and adjusting the number of inference steps and guidance scale.
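
For example, a client request using these extended parameters might look like the sketch below. The field names mirror the updated ImageGenerationRequest model, and YOUR_SERVER_LINK is a placeholder for your server link:

```python
import requests

# Placeholder base URL; replace with the server link from the Lazy Builder CLI.
url = "YOUR_SERVER_LINK/generate-image"

# Request body mirroring the extended ImageGenerationRequest model above.
data = {
    "prompt": "A serene landscape with mountains",
    "negative_prompt": "blurry, low quality, watermark",
    "steps": 30,
    "guidance_scale": 8.0,
    "size": "512x512",
}

response = requests.post(url, json=data, timeout=120)
response.raise_for_status()
print(response.json())
```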

How can I implement error handling for API rate limits?

To handle API rate limits, you can implement a retry mechanism with exponential backoff. Here's an example of how you could modify the generate_image function:

```python
import time
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))
def call_huggingface_api(payload):
    response = requests.post(HUGGINGFACE_API_URL, headers=headers, json=payload)
    if response.status_code == 429:  # Too Many Requests
        raise Exception("Rate limit exceeded")
    response.raise_for_status()
    return response

@app.post("/generate-image")
async def generate_image(request: ImageGenerationRequest):
    try:
        logger.info(f"Generating image with prompt: {request.prompt}")

        payload = {"inputs": request.prompt}
        response = call_huggingface_api(payload)

        # ... rest of the function ...
```

This implementation uses the tenacity library to retry the failed call automatically, making at most three attempts with exponential backoff between them. This helps the API handle rate limits and temporary service disruptions more gracefully.
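
If you prefer to retry only on rate-limit responses rather than on every failure, tenacity's retry_if_exception_type can restrict which exceptions trigger a retry. The RateLimitError class below is a hypothetical helper, and HUGGINGFACE_API_URL and headers are assumed to be defined as in the template:

```python
import requests
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

# HUGGINGFACE_API_URL and headers are assumed to be defined as in the template.

class RateLimitError(Exception):
    """Hypothetical exception raised when Hugging Face returns HTTP 429."""

@retry(
    retry=retry_if_exception_type(RateLimitError),  # only rate-limit errors are retried
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=4, max=10),
)
def call_huggingface_api(payload):
    response = requests.post(HUGGINGFACE_API_URL, headers=headers, json=payload)
    if response.status_code == 429:  # Too Many Requests
        raise RateLimitError("Rate limit exceeded")
    response.raise_for_status()  # other HTTP errors fail immediately, without retries
    return response
```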


An API that lets developers generate images with AI by sending requests with prompt parameters. Each image is enhanced before it reaches the user, and results are returned in seconds.

Here's a step-by-step guide for using the AI Image Generation API template:

Introduction

This template provides an API for generating images using AI. It allows developers to send requests with parameters to create unique images based on text prompts. The API uses the Hugging Face Stable Diffusion model for image generation and includes image enhancement capabilities.
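
The template imports PIL's Image and ImageEnhance, which suggests the enhancement step works roughly like the sketch below. The enhance_image helper name and the specific enhancement factors are illustrative assumptions, not the template's actual code:

```python
from io import BytesIO

from PIL import Image, ImageEnhance


def enhance_image(image_bytes: bytes) -> Image.Image:
    """Hypothetical helper: sharpen and boost contrast of the raw model output."""
    image = Image.open(BytesIO(image_bytes)).convert("RGB")

    # Modest contrast and sharpness boosts; a factor of 1.0 leaves the image unchanged.
    image = ImageEnhance.Contrast(image).enhance(1.1)
    image = ImageEnhance.Sharpness(image).enhance(1.2)
    return image
```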

Getting Started

  1. Click "Start with this Template" to begin using this template in the Lazy Builder interface.

Initial Setup

  1. Set up the required environment secret:
    • In the Lazy Builder, go to the Environment Secrets tab.
    • Add a new secret with the key HUGGINGFACE_API_KEY.
  2. To get your Hugging Face API key:
    • Go to https://huggingface.co/ and sign up or log in.
    • Navigate to your profile settings.
    • Find the "Access Tokens" section.
    • Create a new token with the necessary permissions for the Stable Diffusion model.
    • Copy the generated token.
  3. Paste your Hugging Face API key as the value for the HUGGINGFACE_API_KEY secret. The application reads this secret at runtime, as sketched after this list.
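
As a rough sketch of how the secret is used at runtime (the model endpoint and variable names below are assumptions; the template's actual code may differ), the key is read from the environment and sent as a bearer token to the Hugging Face Inference API:

```python
import os

import requests

# Read the secret configured in the Lazy Builder; raises KeyError if it is missing.
HUGGINGFACE_API_KEY = os.environ["HUGGINGFACE_API_KEY"]

# Assumed model endpoint and auth header format for the Hugging Face Inference API.
HUGGINGFACE_API_URL = "https://api-inference.huggingface.co/models/stabilityai/stable-diffusion-2-1"
headers = {"Authorization": f"Bearer {HUGGINGFACE_API_KEY}"}

response = requests.post(HUGGINGFACE_API_URL, headers=headers, json={"inputs": "A serene landscape"})
response.raise_for_status()
image_bytes = response.content  # raw image bytes returned by the model
```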

Test the Application

  1. Click the "Test" button in the Lazy Builder interface to deploy and run the application.

Using the API

  1. Once the application is deployed, you'll receive a server link and a docs link through the Lazy Builder CLI.

  2. To interact with the API, you can use the FastAPI documentation interface by opening the docs link in your browser.

  3. The main endpoint for image generation is /generate-image. You can test it directly from the FastAPI docs interface or use it in your applications.

Sample API Request

Here's an example of how to use the API in your code:

```python
import requests
import json

url = "YOUR_SERVER_LINK/generate-image"
headers = {
    "Content-Type": "application/json"
}
data = {
    "prompt": "A serene landscape with mountains",
    "size": "512x512"
}

response = requests.post(url, headers=headers, data=json.dumps(data))

if response.status_code == 200:
    result = response.json()
    image_url = result["image_url"]
    print(f"Generated image URL: {image_url}")
else:
    print(f"Error: {response.status_code}, {response.text}")
```

Replace YOUR_SERVER_LINK with the actual server link provided by the Lazy Builder CLI.

API Response

The API will respond with a JSON object containing the URL of the generated and enhanced image:

```json
{
  "status": "success",
  "image_url": "https://example.com/path/to/generated_image.png"
}
```

Integrating the API

To integrate this API into your applications:

  1. Use the server link provided by the Lazy Builder CLI as your base URL for API requests.
  2. Send POST requests to the /generate-image endpoint with the required parameters (prompt and optionally size).
  3. Handle the API response to retrieve the generated image URL.
  4. Display or process the generated image in your application as needed, as shown in the sketch after this list.
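
For step 4, a client might download the image from the returned URL and save it to disk. This is a minimal sketch: the URL shown is the example value from the response above, and the output filename is arbitrary:

```python
import requests

# image_url comes from the /generate-image response, e.g. result["image_url"].
image_url = "https://example.com/path/to/generated_image.png"

# Stream the download so large images are not held fully in memory.
with requests.get(image_url, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("generated_image.png", "wb") as f:  # hypothetical output path
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)

print("Saved generated image to generated_image.png")
```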

By following these steps, you'll be able to use the AI Image Generation API to create unique images based on text prompts in your applications.



Here are 5 key business benefits for this AI Image Generation API template:

Template Benefits

  1. Rapid Prototyping for Visual Content: Enables businesses to quickly generate custom images for marketing materials, product mockups, or website designs, significantly reducing time and costs associated with traditional image creation methods.

  2. Enhanced User Engagement: Can be integrated into applications or websites to allow users to create unique, personalized images on-demand, increasing user interaction and time spent on the platform.

  3. Automated Content Creation: Streamlines the process of creating visual content for social media, blogs, or e-commerce platforms, allowing businesses to maintain a consistent flow of fresh, relevant imagery without extensive manual effort.

  4. Customizable Brand Imagery: Offers the ability to generate brand-specific visuals by fine-tuning prompts, helping businesses maintain visual consistency across various marketing channels and campaigns.

  5. Innovation in Product Development: Provides a tool for product designers and developers to quickly visualize and iterate on new concepts, potentially accelerating the product development cycle and fostering innovation.

Technologies

FastAPI Templates and Webhooks

Similar templates

FastAPI endpoint for Text Classification using OpenAI GPT 4

This API classifies incoming text items into categories using OpenAI's GPT-4 model. The categories are parameters that the API endpoint accepts, and the model classifies each item on its own with a prompt like: "Classify the following item {item} into one of these categories {categories}". If the model is unsure about the category of a text item, it responds with an empty string. Key behaviors:

  - The API uses the llm_prompt ability to ask the LLM to classify each item and respond with the category.
  - In single-category classification, the API takes the LLM's response as is and does not handle situations where the model identifies multiple categories for an item.
  - In multiple-categories classification, there is no maximum number of categories a text item can belong to, all matching categories are returned, and an unsure model responds with an empty string for that item.
  - The API uses Python's concurrent.futures module to parallelize the classification of text items; timeouts and exceptions leave items unclassified.
  - For multiple-categories classification, the LLM's response is parsed and matched against the categories provided in the API parameters: both are converted to lowercase, the response is split on ':' and ',' to remove the word "Category", and leading or trailing whitespace is stripped before matching.
  - The temperature of the GPT model is set to a minimal value to make the output more deterministic.
  - The API accepts lists as answers from the LLM; if the LLM responds with a string formatted like a list, it is parsed and matched against the provided categories.
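
As an illustration of the parsing behavior described above (not the template's actual code; the function name and cleanup details are assumptions), matching an LLM response against the provided categories might look like this:

```python
def match_categories(llm_response: str, categories: list[str]) -> list[str]:
    """Hypothetical sketch: extract category names from an LLM response.

    Mirrors the described behavior: lowercase both sides, split the response
    on ':' and ',' to drop the word "Category", strip whitespace, and return
    every provided category that appears in the response.
    """
    lowered = [c.lower() for c in categories]

    # Split on ':' first, then ',' to get candidate tokens.
    tokens = []
    for part in llm_response.split(":"):
        tokens.extend(part.split(","))

    # Strip whitespace and simple list punctuation from each token.
    cleaned = {t.strip().lower().strip("[]'\" ") for t in tokens}
    return [c for c, low in zip(categories, lowered) if low in cleaned]


# Example: a list-formatted response matches multiple categories.
print(match_categories("Category: sports, finance", ["Sports", "Finance", "Tech"]))
# -> ['Sports', 'Finance']
```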
