Bunny AI Image Generation provides a dynamic way to generate images on the fly using different generative machine learning models. This allows you to create truly unique images for things such as user avatars, blog content, profile pictures, or placeholder images.
This article provides information on different engine types, prompts, and blueprints to help you easily generate images.
After enabling Bunny AI image generation on a Pull Zone, images can easily be generated by crafting a special URL path with the following format:
To use the default blueprint, you can use the phrase "default".
To prevent abuse and attacks, AI generation paths are automatically protected using Token Authentication. The bunny.net panel will automatically generate the tokens for you. However, if you are generating these URLs dynamically, please see the Token Authentication documentation.
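If you are generating these URLs dynamically, the signing step can be scripted. Below is a minimal sketch, assuming the basic bunny.net token scheme (SHA256 of security key + path + expiry, URL-safe Base64); the hostname and path shown are placeholders, and you should verify the exact algorithm and query parameter names against the Token Authentication documentation before relying on this.

```python
import base64
import hashlib
import time

def sign_url(hostname: str, path: str, security_key: str, ttl: int = 3600) -> str:
    # Expiration as a Unix timestamp, ttl seconds from now.
    expires = int(time.time()) + ttl
    # Hash the security key, URL path, and expiration together.
    digest = hashlib.sha256(f"{security_key}{path}{expires}".encode()).digest()
    # URL-safe Base64: swap '+' and '/', drop trailing '=' padding.
    token = base64.b64encode(digest).decode()
    token = token.replace("+", "-").replace("/", "_").rstrip("=")
    return f"https://{hostname}{path}?token={token}&expires={expires}"

# Hypothetical zone hostname and generation path, for illustration only.
print(sign_url("example.b-cdn.net", "/some/generated/path.jpg", "my-security-key"))
```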
Currently, Bunny AI supports two image models: DALLE-2 and Stable Diffusion. Each offers multiple engines that differ in their prompt style and output, as well as pricing, resolution, and performance. The following engines are currently supported:
| Engine Name | Code | Resolution | Price / Image |
| --- | --- | --- | --- |
| Stable Diffusion v1.5 (512px) | sd15-512 | 512x512 | $0.001 |
| Stable Diffusion v2.1 (512px) | sd21-512 | 512x512 | $0.001 |
| Stable Diffusion v2.1 (768px) | sd21-768 | 768x768 | $0.030 |
Billing for image generation is based on the number of generated images, together with the engine, size, and quality of each image, as per the table in the previous section.
The same prompt and parameter combination will only count as a single generated image, and the result will be saved for up to 3 months.
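Using the per-image prices from the engine table above, estimating cost is simple multiplication. A quick sketch (prices copied from the table; the function name is illustrative):

```python
# Per-image prices taken from the engine table in this article.
PRICE_PER_IMAGE = {
    "sd15-512": 0.001,
    "sd21-512": 0.001,
    "sd21-768": 0.030,
}

def generation_cost(image_counts):
    """Total cost for a given number of newly generated images per engine."""
    return sum(PRICE_PER_IMAGE[engine] * count
               for engine, count in image_counts.items())

# 1000 images at $0.001 plus 100 images at $0.030 = $4.00
print(generation_cost({"sd15-512": 1000, "sd21-768": 100}))
```

Note that, per the caching behavior above, repeated requests for the same prompt and parameter combination count as a single generated image.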
The prompt URL format automatically converts hyphens into spaces when passing the phrase to the model. Additionally, a trailing .jpg, .jpeg, or .png extension is automatically removed from the prompt. For example:

bunny-eating-carrots.jpg

Is passed to the generative model as:

bunny eating carrots
This allows you to generate cleaner and shorter URLs to help with SEO and to make them easier to share.
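The transformation described above is easy to reproduce when building URLs yourself. A small sketch of the equivalent logic (the extension list matches the one stated above):

```python
import re

def url_phrase_to_prompt(phrase: str) -> str:
    # Strip a trailing .jpg / .jpeg / .png extension, if present.
    phrase = re.sub(r"\.(jpe?g|png)$", "", phrase, flags=re.IGNORECASE)
    # Hyphens become spaces when the phrase is passed to the model.
    return phrase.replace("-", " ")

print(url_phrase_to_prompt("bunny-eating-carrots.jpg"))  # bunny eating carrots
```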
Image blueprints allow you to simplify image generation with a set of pre-configured styles that can be applied with a single parameter. Using Blueprints, you can prepend or append a set of phrases to the phrase that was provided in the URL. This allows you to further simplify URLs and define simple and consistent styling for your images.
For example, we can configure an "avatar" blueprint with the following parameters:
When applying the blueprint with the following URL, for example:
The following prompt would be automatically generated within the engine:
cute pixel art of a bunny with a colorful solid background
This presents a very powerful way to dynamically generate unique avatars for your users or maintain a complex style across your application or website.
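Conceptually, a blueprint composes the final prompt by wrapping the URL phrase in the configured phrases. A sketch of that composition using the avatar example above; the pre/post parameter names here are illustrative, not the panel's exact field names:

```python
def apply_blueprint(url_phrase, pre_phrase="", post_phrase=""):
    # Prepend and append the blueprint's configured phrases around the
    # phrase taken from the URL, skipping any that are empty.
    parts = [pre_phrase, url_phrase, post_phrase]
    return " ".join(part for part in parts if part)

prompt = apply_blueprint("bunny",
                         pre_phrase="cute pixel art of a",
                         post_phrase="with a colorful solid background")
print(prompt)  # cute pixel art of a bunny with a colorful solid background
```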
Tip: When generating pixel art, make sure you are using DALLE-2, which has been trained for this type of style.
Depending on the engine, a number of additional parameters can be set via blueprint parameters. Some are unique to a specific engine, as noted in the table below.
| Parameter | Description | Values | Supported Engines |
| --- | --- | --- | --- |
| Steps | (Default: 90) Determines the number of steps used when generating the image in Stable Diffusion. Increasing this might increase image fidelity, but will increase the image generation time. | numeric (10-150) | Stable Diffusion |
| Cfg | (Default: 7) Determines how closely the image will match the entered prompt. Decreasing this gives the engine more randomness and freedom when generating the output. | numeric (1-15) | Stable Diffusion |
| NegativePrompt | If set, replaces the default negative prompt list used for Stable Diffusion. | string | Stable Diffusion |
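When building blueprint configurations programmatically, it can help to validate parameters against the ranges listed in the table above before saving them. A minimal sketch (ranges and defaults taken from the table; the function name is an assumption):

```python
def validate_sd_params(steps=90, cfg=7):
    # Ranges and defaults from the blueprint parameter table:
    # Steps: numeric (10-150), default 90; Cfg: numeric (1-15), default 7.
    if not 10 <= steps <= 150:
        raise ValueError("Steps must be between 10 and 150")
    if not 1 <= cfg <= 15:
        raise ValueError("Cfg must be between 1 and 15")
    return {"Steps": steps, "Cfg": cfg}

print(validate_sd_params(steps=120, cfg=5))  # {'Steps': 120, 'Cfg': 5}
```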
Stable Diffusion supports a concept of a Negative Prompt. This allows you to specify which keywords should be avoided by the engine when generating the image, such as removing trees, anomalies, etc.
Due to recent changes in Stable Diffusion 2.1, negative keywords are crucial to producing good results with this model. Because of this, Bunny AI will use a default set of negative prompts when none are provided by a blueprint. These defaults are applied with a weight of -1.
deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands, missing limb, blurry, floating limbs, disconnected limbs, malformed hands, blur, out of focus, long neck, long body, pixelated, ((((mutated hands and fingers)))), (((out of frame)))
The default prompts are subject to change based on the evolution of the Stable Diffusion model. If you wish to maintain a consistent style, you can consider setting up your own blueprint with your own prompt.
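The fallback behavior described above can be sketched as follows. The default list is abridged from the one quoted above, and the function name is illustrative:

```python
# Abridged copy of the default negative prompt list quoted above.
DEFAULT_NEGATIVE_PROMPT = (
    "deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation"
)

def effective_negative_prompt(blueprint_negative_prompt=None):
    # A blueprint's NegativePrompt replaces the defaults entirely;
    # otherwise the defaults are used (applied with a weight of -1).
    return blueprint_negative_prompt or DEFAULT_NEGATIVE_PROMPT

print(effective_negative_prompt("trees, anomalies"))  # trees, anomalies
```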
Different models behave differently when generating images; however, some general guidelines are shared between them. When generating images, try using a full descriptive sentence or a list of words to describe the image. This can include colors, styles, and emotions.
The generative models support a variety of visual styles, such as:
- "digital art"
- "traditional art"
- "oil painting of"
- "in style of Picasso"
- "in steampunk style"
- "in style of Murakami"
- "abstract art of"
Additionally, you can add prompts that define the fidelity of an image, such as:
- "high quality"
We recommend giving things a try to find a style and prompts that work for your specific use-case or style requirements.
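Following the guidelines above, a prompt can be assembled from a descriptive subject plus optional style and fidelity phrases. A simple sketch (the function and structure are illustrative, not part of the Bunny AI API):

```python
def build_prompt(subject, style=None, fidelity=None):
    # Combine a descriptive subject with optional style and fidelity
    # phrases, like the examples listed above.
    parts = [part for part in (subject, style, fidelity) if part]
    return ", ".join(parts)

print(build_prompt("a bunny in a meadow", "in steampunk style", "high quality"))
# a bunny in a meadow, in steampunk style, high quality
```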
DALLE-2 is one of the original image-generative models made by OpenAI. Compared to Stable Diffusion, it supports slightly more descriptive inputs and allows you to generate abstract results, such as this happy robot:
Works well for:
- Concept art
- Pixel art
- Abstract prompts
- 3D renders
Does not work well for:
- Photo image generation
Stable Diffusion uses a more direct style of prompting that is less descriptive and relies more on the keywords entered. It works better for photo-realistic prompts or less abstract digital art.
For example, we generated this Christmas bunny:
Works well for:
- Photo style generation
- Photo-realistic images
- Traditional and digital art
Does not work as well for:
- Pixel Art
- Conceptual art representation (e.g. a robot with flowers, digital art)
- (Stable Diffusion 2.1) Faces and hands
Stable Diffusion comes in two major versions (1.5 and 2.1). Version 2.1 is based on an open-source dataset and might produce slightly less desirable results in certain areas; however, it excels at photo-realistic landscape scenes and buildings.