Bunny AI - Image Generation

Bunny AI Image Generation provides a dynamic way to generate images on the fly using different generative machine learning models. This allows you to create truly unique images for things such as user avatars, blog content, profile pictures, or placeholder images.


This article provides information on different engine types, prompts, and blueprints to help you easily generate images.

Image Generation

After enabling Bunny AI image generation on a Pull Zone, images can easily be generated by crafting a special URL path with the following format:

/.ai/img/<engine-name>/<blueprint>/<seed>/<prompt>.jpg

To use the default blueprint, you can use the phrase "default".

https://aidemo.b-cdn.net/.ai/img/dalle-256/default/12345/rabbit.jpg
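The URL format above can be composed programmatically. The sketch below, using the example hostname and values from this article, builds such a path in Python (the helper name is illustrative, not part of any bunny.net SDK):

```python
def ai_image_url(host, engine, blueprint, seed, prompt, ext="jpg"):
    # Hyphens in the URL prompt become spaces when passed to the model,
    # so spaces in the human-readable prompt are encoded as hyphens here.
    slug = prompt.strip().replace(" ", "-")
    return f"https://{host}/.ai/img/{engine}/{blueprint}/{seed}/{slug}.{ext}"

url = ai_image_url("aidemo.b-cdn.net", "dalle-256", "default", 12345, "rabbit")
print(url)  # https://aidemo.b-cdn.net/.ai/img/dalle-256/default/12345/rabbit.jpg
```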

Token Authentication

To prevent abuse and attacks, AI generation paths are automatically protected using Token Authentication. The bunny.net panel will automatically generate the tokens for you. However, if you are generating these URLs dynamically, please see the Token Authentication documentation.
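For dynamically generated URLs, signing typically looks like the sketch below. This assumes the common MD5-based token scheme (url-safe Base64 of MD5 over the security key, path, and expiration); the exact algorithm and parameter names must be verified against the Token Authentication documentation:

```python
import base64
import hashlib
import time

def sign_path(security_key: str, path: str, expires_in: int = 3600) -> str:
    # Assumed scheme (verify against the Token Authentication docs):
    # token = url-safe Base64 of MD5(security_key + path + expiration)
    expires = int(time.time()) + expires_in
    digest = hashlib.md5(f"{security_key}{path}{expires}".encode()).digest()
    token = base64.b64encode(digest).decode()
    token = token.replace("+", "-").replace("/", "_").rstrip("=")
    return f"{path}?token={token}&expires={expires}"

signed = sign_path("my-security-key", "/.ai/img/dalle-256/default/12345/rabbit.jpg")
```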

Supported Engines

Currently, Bunny AI supports two image models: DALLE-2 and Stable Diffusion. Each offers multiple engines that differ in prompt style and output, as well as pricing, resolution, and performance. The following engines are currently supported.

| Engine Name | Code | Resolution | Price / Image |
| --- | --- | --- | --- |
| Dalle-2 (256px) | dalle-256 | 256x256 | $0.016 |
| Dalle-2 (512px) | dalle-512 | 512x512 | $0.018 |
| Dalle-2 (1024px) | dalle-1024 | 1024x1024 | $0.020 |
| Stable Diffusion v1.5 (512px) | sd15-512 | 512x512 | $0.001 |
| Stable Diffusion v2.1 (512px) | sd21-512 | 512x512 | $0.001 |
| Stable Diffusion v2.1 (768px) | sd21-768 | 768x768 | $0.030 |

Pricing

Image generation is billed per generated image, with the price determined by the engine, size, and quality of the image, as per the table in the previous section.

The same prompt and parameter combination will only count as a single generated image, and the result will be saved for up to 3 months.

Prompt Format

The prompt URL format will automatically convert hyphens into spaces when passing the phrase to the model. Additionally, the .jpg, .jpeg, or .png extension is automatically removed from the prompt. For example:

bunny-eating-carrots.jpg

Is passed to the generative model as:

bunny eating carrots

This allows you to generate cleaner and shorter URLs to help with SEO and to make them easier to share.
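The conversion described above can be sketched as a small function (illustrative only; the actual server-side implementation is not public):

```python
import re

def path_to_prompt(segment: str) -> str:
    # Strip a trailing image extension, then turn hyphens into spaces,
    # mirroring the prompt conversion described above.
    segment = re.sub(r"\.(jpg|jpeg|png)$", "", segment, flags=re.IGNORECASE)
    return segment.replace("-", " ")

print(path_to_prompt("bunny-eating-carrots.jpg"))  # bunny eating carrots
```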

Prompt Guide

Blueprints

Image blueprints allow you to simplify image generation with a set of pre-configured styles that can be applied with a single parameter. Using Blueprints, you can prepend or append a set of phrases to the phrase that was provided in the URL. This allows you to further simplify URLs and define simple and consistent styling for your images.

For example, we can configure an "avatar" blueprint that prepends "cute pixel art of a" and appends "with a colorful solid background" to the URL prompt.

When applying the blueprint with the following URL, for example:

https://aidemo.b-cdn.net/.ai/img/dalle-256/avatar/bunny.jpg

The following prompt would be automatically generated within the engine:

cute pixel art of a bunny with a colorful solid background
(Example 256x256 outputs: bunny.jpg, bunny-2.jpg, fox.jpg)

This presents a very powerful way to dynamically generate unique avatars for your users or maintain a complex style across your application or website.
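Conceptually, a blueprint wraps the URL prompt with its configured phrases. A minimal sketch, using the "avatar" example above (the prepend/append field names are illustrative, not the panel's actual setting names):

```python
def apply_blueprint(prompt, prepend="", append=""):
    # A blueprint simply surrounds the URL prompt with configured phrases.
    return " ".join(part for part in (prepend, prompt, append) if part)

avatar = {"prepend": "cute pixel art of a", "append": "with a colorful solid background"}
print(apply_blueprint("bunny", **avatar))
# cute pixel art of a bunny with a colorful solid background
```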

Tip: When generating pixel art, make sure you are using DALLE-2, which is better suited to this type of style.

Advanced Parameters

Depending on the engine, a number of additional parameters can be set via blueprint parameters. Some are unique to specific engines, as noted in the table below.

| Parameter | Description | Type | Engines Available |
| --- | --- | --- | --- |
| Steps | (Default: 90) Determines the number of steps to use when generating the image in Stable Diffusion. Increasing this might increase image fidelity, but will increase the image generation time. | numeric (10-150) | Stable Diffusion |
| Cfg | (Default: 7) Determines how closely the image will match the entered prompt. Decreasing will give the engine more randomness and freedom when generating the output. | numeric (1-15) | Stable Diffusion |
| NegativePrompt | If set, it replaces the default Negative Prompt list used for Stable Diffusion. | string | Stable Diffusion |
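When building blueprints programmatically, it can help to validate these values client-side before saving them. An illustrative check using the names and ranges from the table above (this helper is hypothetical, not part of any bunny.net API):

```python
# Allowed ranges for the numeric Stable Diffusion parameters listed above.
RANGES = {"Steps": (10, 150), "Cfg": (1, 15)}

def validate_params(params: dict) -> dict:
    for name, value in params.items():
        if name in RANGES:
            lo, hi = RANGES[name]
            if not (lo <= value <= hi):
                raise ValueError(f"{name} must be between {lo} and {hi}")
    return params

validate_params({"Steps": 90, "Cfg": 7})  # the documented defaults are in range
```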

Negative Prompts

Stable Diffusion supports a concept of a Negative Prompt. This allows you to specify which keywords should be avoided by the engine when generating the image, such as removing trees, anomalies, etc.

Due to recent changes in Stable Diffusion 2.1, Negative Keywords are crucial to producing good results with this model. Because of that, Bunny AI will use a default set of negative prompts when none are provided by a blueprint. These are weighted with a weight of -1.

deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra limb, 
ugly, poorly drawn hands, missing limb, blurry, floating limbs, disconnected limbs, 
malformed hands, blur, out of focus, long neck, long body, pixelated, 
((((mutated hands and fingers)))), (((out of frame)))

The default prompts are subject to change as the Stable Diffusion model evolves. If you wish to maintain a consistent style, consider setting up your own blueprint with your own negative prompt.

Prompt Suggestions

Different models behave differently when generating images; however, some general guidelines are shared between them. When generating images, try using a full descriptive sentence or a list of words to describe the image. This can include colors, styles, and emotions.

The generative models support a variety of visual styles, such as:

  • "digital art"
  • "traditional art"
  • "oil painting of"
  • "in style of Picasso"
  • "in steampunk style"
  • "in style of Murakami"
  • "abstract art of"

Additionally, you can try prompts to define the fidelity of an image, such as:

  • "high quality"
  • "photorealistic"
  • "4k"
  • "texture"

We recommend giving things a try to find a style and prompts that work for your specific use-case or style requirements.

DALLE-2

DALLE-2 is one of the original image-generative models made by OpenAI. Compared to Stable Diffusion, it supports slightly more descriptive inputs and allows you to generate abstract results, such as this happy robot:


A happy robot with flowers growing out of his head, clouds in the background, digital art

Works well for:

  • Concept art
  • Pixel art
  • Illustrations
  • Abstract prompts
  • 3D renders

Does not work well for:

  • Photo image generation

Stable Diffusion

Stable Diffusion uses a more direct style of prompting that is less descriptive and relies more on the keywords entered. It works better for photo-realistic prompts or less abstract digital art.
For example, we generated this Christmas bunny:


rabbit sitting in front of a christmas tree

Works well for:

  • Photo style generation
  • Photo-realistic images
  • Traditional and digital art

Does not work as well for:

  • Pixel Art
  • Conceptual art representation (e.g. a robot with flowers, digital art)
  • (Stable Diffusion 2.1) Faces and hands

Stable Diffusion comes in two major versions (1.5 and 2.1). Version 2.1 is based on an open-source dataset and might produce slightly less desirable results in certain areas; however, it excels at photo-realistic landscape scenes and buildings.