Retro Diffusion Extension for Aseprite
A downloadable tool for Windows, macOS, and Linux
Rapid designing with AI
Create, change, and refine artwork in seconds.
What is Retro Diffusion?
This extension brings AI pixel art image generation directly into Aseprite, the popular pixel art software. It also adds advanced features like smart color reduction and text-guided palette creation. Combined with the state-of-the-art pixel model, you can design incredible pixel art pieces in record time.
!!! BEFORE PURCHASING ENSURE YOU READ THE COMPATIBILITY SECTION BELOW, THIS VERSION DOES NOT CONTAIN STANDALONE MODEL FILES !!!
I'm trying to work with itch to get the files available, but for now they are simply too large to host on the platform.
Feature chart
This chart outlines the differences between the full version ($65) and the "Lite" version ($20). Both versions are excellent for pixel art image generation, and come with advanced tools for cleaning up and editing pixel art.
One-time payment
No subscriptions and no credits required, just a flat upfront price. No need to add yet another monthly charge to an already long list.
Pay once, and get updates and support at no additional cost. How products should be.
Custom pixel art AI model
Retro Diffusion comes with its own pixel art model, which produces results far beyond any competing model or AI. If you have tried to get DALL-E 2, Stable Diffusion, or even Midjourney to create accurate pixel art before, you know they just don't get it.
The best part: this model was trained on licensed assets from Astropulse and other pixel artists, with their consent.
Select the pixel art model and watch as near perfect pixel art is made in seconds!
Check out my site for more images! https://astropulse.co/#retrodiffusiongallery
Consistency at any size
No matter what size or aspect ratio you're generating at, get consistent and creative results!
Convert any image to pixel art
Using the "Neural Pixelate" tool, easily turn images into pixel art versions with style accurate colors, or choose to keep the original colors!
Using a unique pipeline, Retro Diffusion allows you to control lighting, colors, and other image features from an ultra-simple interface.
These settings are applied in-generation, not as post-processing effects, so the image itself adapts to the chosen color and lighting conditions.
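For contrast with post-processing pixelation, here is a minimal sketch of the classical approach that Neural Pixelate improves on: nearest-neighbor downscaling plus palette quantization. This is not how the extension works internally (its tool is model-based and style-aware), just the naive baseline; the filename, sizes, and color count are illustrative.

    # naive_pixelate.py -- classical downscale-and-quantize baseline (illustrative).
    from PIL import Image

    def naive_pixelate(path, target=64, colors=16):
        img = Image.open(path).convert("RGB")
        # Downscale to pixel-art resolution (assumes a square source for brevity),
        # then reduce the palette to a fixed number of colors.
        small = img.resize((target, target), Image.NEAREST)
        small = small.quantize(colors=colors).convert("RGB")
        # Upscale with nearest neighbor so the pixels stay crisp.
        return small.resize(img.size, Image.NEAREST)

    naive_pixelate("photo.png").save("photo_pixelated.png")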
Minecraft assets
Several models have been developed specifically for Minecraft style assets, making it easier than ever to develop good-looking mods and resource packs!
All these assets took under an hour to create, with most of the time spent generating images.
Additionally, if you enable the "Tiling" modifier, you can create beautiful seamless block textures in record time.
Generate texture maps
You can automatically generate material texture maps for tiles and import them into your game engine to give assets depth and texture properties:
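The page doesn't list which map types are generated, so as one concrete example, here is how a tangent-space normal map (a common material texture map) can be derived from a grayscale height map. This illustrates the concept only; it is not the extension's pipeline, and the filenames are placeholders.

    # normal_from_height.py -- derive a normal map from a height map (illustrative).
    import numpy as np
    from PIL import Image

    def height_to_normal(path, strength=2.0):
        h = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
        dy, dx = np.gradient(h)   # finite-difference slopes of the height field
        # Pack (-dx, -dy, 1) into a vector per pixel and normalize to unit length.
        n = np.dstack((-dx * strength, -dy * strength, np.ones_like(h)))
        n /= np.linalg.norm(n, axis=2, keepdims=True)
        rgb = ((n * 0.5 + 0.5) * 255).astype(np.uint8)   # map [-1, 1] to [0, 255]
        return Image.fromarray(rgb)

    height_to_normal("tile_height.png").save("tile_normal.png")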
Game items
Generate creative and interesting game assets just by describing them, no more need for dreaded 'programmer art.'
Palettize
Using the Palettize feature, you can easily reduce the number of colors in an image, or change the colors entirely in just a couple clicks.
Color Style Transfer
Convert images to an alternative color palette while maintaining the same style.
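At its core, palettizing means snapping each pixel to the nearest color in a target palette. The sketch below shows that core idea only; the actual Palettize and Color Style Transfer tools are more sophisticated (perceptual matching, dithering options, style preservation). The example palette is the well-known four Game Boy greens, used purely as an illustration.

    # palette_map.py -- nearest-color palettizing, reduced to its essence.
    import numpy as np
    from PIL import Image

    def apply_palette(path, palette):
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.int32)
        pal = np.asarray(palette, dtype=np.int32)          # shape (K, 3)
        # Squared RGB distance from every pixel to every palette entry.
        dist = ((img[..., None, :] - pal) ** 2).sum(axis=-1)
        out = pal[dist.argmin(axis=-1)].astype(np.uint8)   # pick the nearest color
        return Image.fromarray(out)

    gb_palette = [(15, 56, 15), (48, 98, 48), (139, 172, 15), (155, 188, 15)]
    apply_palette("sprite.png", gb_palette).save("sprite_gb.png")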
Built in styles
Retro Diffusion has over a dozen different pixel art styles available at the click of a button. In addition to the "Game item" and "Tiling" modifiers, there are many more:
These modifiers can even be applied at different strengths, or mixed together to achieve different styles!
Pixel Art Background Removal
Quickly and easily remove backgrounds in one click.
Compatibility
Ensure you have the latest version of Aseprite.
On setup, please make sure to read through the installation instructions thoroughly, and ensure you have a stable internet connection (mobile hotspots will not work).
Refer to the chart below for exact compatibility information:
Learn more about your hardware and if you meet the requirements here:
System Compatibility
Don't meet the hardware requirements? Use the website and don't worry about putting strain on your own computer!
https://www.retrodiffusion.ai/
* Linux support is not guaranteed. The number of distros and desktop environments, and how commonly users modify their systems, makes guaranteed support next to impossible. Retro Diffusion has been tested on stock Ubuntu, Mint, and Fedora. Customized versions of these distros may not be supported. If you have any issues with compatibility on Linux, be sure to contact me directly via Discord.
NOTE: The first generation will take a while, as it may need to install additional models.
Performance statistics
GPU:
Nvidia GTX 1050 Ti: 64x64 at quality 5 in 2.5 minutes.
Nvidia GTX 960: 64x64 at quality 5 in 2 minutes.
Nvidia GTX 1660 Super: 64x64 at quality 5 in 2 minutes.
Nvidia RTX 3060: 64x64 at quality 5 in 5 seconds.
Nvidia RTX 3090: 64x64 at quality 5 in <2 seconds.
Radeon RX 6650 XT: 64x64 at quality 5 in 20 seconds.
Mac M1 Pro 64GB: 64x64 at quality 5 in 26 seconds.
Mac M2 Air 16GB: 64x64 at quality 5 in 50 seconds.
CPU:
Intel i5-8300H: 64x64 at quality 5 in 10 minutes.
Ryzen 2600X: 64x64 at quality 5 in 10 minutes.
Intel i7-1065G7: 64x64 at quality 5 in 5 minutes.
Ryzen 5800X: 64x64 at quality 5 in <4 minutes.
Aseprite not for you?
Check out the standalone models here!
Check out the image generation website!
Future versions
Any future versions or patches of Retro Diffusion will be given to previous buyers at no additional cost. Make sure to check your email for new versions!
The current version is 12.7.1.
Previews and updates
The best place for information on the current state of Retro Diffusion, or previews of upcoming content, is my Twitter profile: https://twitter.com/RealAstropulse
Contact information
The best place to reach me is by joining the Retro Diffusion Discord server: https://discord.gg/retrodiffusion
Alternatively, use the contact form on my website: https://astropulse.co/#contactme
Updated: 12 hours ago
Status: In development
Category: Tool
Platforms: Windows, macOS, Linux
Rating: 4.0 out of 5 stars (8 total ratings)
Author: Astropulse
Tags: ai, Aseprite, extension, Pixel Art, plugin, stable-diffusion
Purchase
In order to download this tool you must purchase it at or above the minimum price of $65 USD. You will get access to the following files:
Development log
- Retro Diffusion Update for July: Texture Maps & Modifiers (Jul 30, 2024)
- Retro Diffusion Update for June: Palette Control & QoL (Jun 24, 2024)
- Retro Diffusion Update for May: Prompt Guidance & Generation Size! (May 19, 2024)
- Retro Diffusion Update for April: ControlNet Expanded! (Apr 30, 2024)
- Retro Diffusion Update: ControlNet-Powered Tools! (Mar 03, 2024)
- Retro Diffusion January Update: NEW Composition Editing Menu! (Jan 20, 2024)
- Retro Diffusion Update: Live Image Generation Preview, "Quality" Setting, and mo... (Dec 20, 2023)
- Retro Diffusion Update: Prompt Translator, New Models, Background Removal, and F... (Nov 06, 2023)
Comments
I love the pixel art version of the photo of the red Toyota, really makes me think of the intro sequence photo on Test Drive 3 on MS-DOS...
Don't sleep on this at $45 on sale. Well worth it
Hi, I made a purchase and after using it, I found that it doesn't meet my current work needs. May I ask where I can apply for a refund?
I'd love to help it meet your needs. RD is quite flexible, but it does require some tinkering and learning to use effectively. Of course, if you'd rather just refund, you can contact itch.io support for that.
Our project currently requires a Japanese animation style, not a realistic style, but it seems that this model option is not available at the moment?
The program doesn't have any animation tools, but I'm wondering what you mean by a Japanese style? If you mean something more like a 'chibi' style, you can get this through prompts, without needing to use the modifier models.
If I purchase the full version but don't meet the hardware specs, does the cloud version allow for free usage?
No, the cloud version still requires you to purchase credits. The hardware requirements are listed very clearly on the product page.
Hi, I tried to send you a message via your website but the contact form seems bugged. I am trying to install this into Aseprite, however I have Python 3.12 installed on my system and the extension fails, saying it needs 3.10 or 3.11. Do I need to downgrade my Python install, or can you update this?
Hey! You do need Python 3.11 (3.11.6 is the best specific version). It cannot use Python 3.12 at all, because the largest dependency is PyTorch, which is only available for Python 3.11 and earlier.
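If you aren't sure which interpreter you have, a quick check like this (an illustrative snippet, not part of the extension) will tell you before you start the install:

    # check_python.py -- verify the interpreter version before installing.
    # PyTorch, the extension's largest dependency, supported only Python
    # 3.10/3.11 at the time of writing.
    import sys

    if not ((3, 10) <= sys.version_info[:2] <= (3, 11)):
        sys.exit(f"Python {sys.version.split()[0]} found; "
                 "please install Python 3.11 (3.11.6 recommended).")
    print("Python version OK:", sys.version.split()[0])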
Yep, it runs completely locally. All the artwork used for training was either made by me, or was given with consent from other artists.
Any chance of this ever supporting the open source Pixelorama? It has extension support: https://github.com/Orama-Interactive/Pixelorama
Low chance, but Pixelorama has been on my radar for a while :)
Hi! How fast will generating medium-sized pictures and generating pictures from examples work on a Mac M1 Pro with 16GB RAM?
OK, I've managed to import the extension, grab the GitHub files, and get Python 3.11 installed. However, I'm unclear how to use this program properly. Pixelate causes an error. If I do image-to-image it at least makes something, but nothing usable. I'd just like to know how to use it and what is causing the error.
My setup:
Windows 11 Pro
AMD Ryzen 5 5600G with Radeon Graphics, 4.2 GHz, 6 cores, 12 logical processors
64GB RAM
Graphics card: Radeon RX 6700 XT w/ 12GB GDDR6 memory
M.2 and SSD drives, multiple terabytes worth.
ERROR:
Traceback (most recent call last):
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\image_server.py", line 3513, in server
    for result in neural_inference(
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\image_server.py", line 2612, in neural_inference
    for step, samples_ddim in enumerate(sample_cldm(
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\cldm_inference.py", line 160, in sample_cldm
    for samples_cldm in sample(
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\sample.py", line 141, in sample
    for step, samples in enumerate(sampler.sample(
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\samplers.py", line 895, in sample
    for samples in sampler.sample(
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\utils\_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\k_diffusion\sampling.py", line 267, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\samplers.py", line 364, in forward
    out = self.inner_model(
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\samplers.py", line 332, in forward
    return self.apply_model(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\samplers.py", line 319, in apply_model
    out = sampling_function(
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\samplers.py", line 294, in sampling_function
    cond, uncond = calc_cond_uncond_batch(
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\samplers.py", line 218, in calc_cond_uncond_batch
    c["control"] = control.get_control(
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\controlnet.py", line 157, in get_control
    control_prev = self.previous_controlnet.get_control(
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\controlnet.py", line 208, in get_control
    control = self.control_model(
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\comfy_cldm.py", line 375, in forward
    h = module(h, emb, context)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\cldm_models.py", line 3214, in forward
    x = layer(x, emb)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\cldm_models.py", line 3160, in forward
    return checkpoint(
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\cldm.py", line 627, in checkpoint
    return func(*inputs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\ldm\cldm_models.py", line 3172, in _forward
    h = self.in_layers(x)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\modules\normalization.py", line 287, in forward
    return F.group_norm(
  File "C:\Users\cool_\AppData\Roaming\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\venv\Lib\site-packages\torch\nn\functional.py", line 2561, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float
The issue is that your GPU is not compatible with the software. AMD GPUs can only be used on Linux. This is explained in the compatibility section.
So I can't even run it in CPU mode without it crashing?
You can run text to image and image to image, but Neural tools and controlnet aren't compatible with cpu mode yet. I'm working on it, but no release for it right now.
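For the curious: the "mixed dtype (CPU)" error above is what PyTorch typically raises when half-precision (float16) weights end up running on the CPU backend. A minimal illustration of the usual guard follows; this is an assumption about the failure mode, not the extension's actual code.

    # device_check.py -- pick a device and a matching dtype (illustrative).
    # Running float16 weights on CPU triggers "mixed dtype (CPU)" errors,
    # so a CPU fallback keeps everything in float32.
    import torch

    if torch.cuda.is_available():   # NVIDIA everywhere; AMD via ROCm builds (Linux only)
        device, dtype = "cuda", torch.float16
    else:
        device, dtype = "cpu", torch.float32

    model = torch.nn.Linear(8, 8).to(device=device, dtype=dtype)  # stand-in for the real model
    x = torch.randn(1, 8, device=device, dtype=dtype)
    print(model(x).shape, "on", device, "as", dtype)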
Alright, so this is an attempt at a fair and balanced review. Instead of telling you whether or not you should get it, I'll give you my first takes on this software as raw data and let you decide for yourself. Cheers.
Pros
Cons
Hey! Thanks for the review!
Here's some tips for tiling and portraits:
With tiling, there are two options that do different things. "Tile X direction" and "Tile Y direction" can be made visible by toggling "Show advanced options"; these settings mathematically force the image to tile. That can be great when you need something to tile that normally wouldn't, but it's also very heavy-handed and can produce weird results.
The second option is to enable the "Tiling" modifier. This is a specialized image generator that has been trained on tiling textures, stuff like Minecraft blocks. It works best at 16x16 or 32x32.
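As an aside, the page doesn't say how the "Tile X/Y direction" options are implemented, but one common way to mathematically force a diffusion model's output to tile is to switch its convolutions to circular padding, so the image wraps around at the borders. The sketch below shows that general technique; treating it as Retro Diffusion's implementation is an assumption.

    # tiling_patch.py -- make a model's Conv2d layers wrap at the edges (illustrative).
    import torch.nn as nn

    def make_tileable(model: nn.Module) -> nn.Module:
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                # Circular padding samples from the opposite edge, so features
                # computed at one border match the other and the output tiles.
                module.padding_mode = "circular"
        return model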
For portraits, typically putting something like "A close up portrait of ___" or "A headshot of ___" will do the trick. For example here is "A close up portrait of a fox monk" with all other settings at default values:
On speed and CPU usage:
AI image creation is actually one of the most complex tasks computers can do. When you boil it down, it's essentially solving differential equations with billions of inputs and outputs. That is way more demanding than any AAA game, or even most 3D rendering software, even for small pixel art images. This is why the compatibility section is so strict.
We've managed to get the requirements down to a 4GB GPU, which is pretty impressive given the size of the models and the complexity of the computations. You can find system requirements and some benchmark data above on the main page.
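To make the "differential equations" point concrete: the traceback a few comments up shows a sample_euler frame, and one Euler sampling step in that k-diffusion formulation looks roughly like the sketch below (a minimal illustration, not the extension's exact code). Every generation runs many such steps, each a full forward pass through the network.

    # euler_step.py -- one explicit Euler step of the diffusion ODE (illustrative).
    import torch

    def euler_step(model, x, sigma, sigma_next):
        denoised = model(x, sigma)             # network predicts the clean image
        d = (x - denoised) / sigma             # ODE derivative dx/dsigma
        return x + d * (sigma_next - sigma)    # step toward the next noise level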
Again thanks for the review, and I hope it helps people with the decision to buy or not!
Thanks for the feedback, some very useful info. And it's a fascinating thing on the CPU usage, it makes a lot of sense given the nature of AI. Keep up the great work. :)
Not to use Astropulse's comments as a discussion board, but could you point out some of the other models that handle sprite art well? The $65 seems pretty good when you consider API/token costs, but I'm curious to see what's out there. I follow a lot of AI projects but admittedly none dedicated to pixel art.
Fair enough. In retrospect I admit I really should have phrased my review a bit better. What I should have said is that I know of AI that can get close-enough to a pixelated style to create some very impressive images with some minor alterations. And that's my bad, my apologies. I've also softened my views on Retro Diffusion since I first wrote that review as the style has grown on me even if I don't use it. It does do something that is fairly unique in the AI field (at least as far as I know) and for that it is a great service that I hope to see further refined. If you feel that it's worth the cost, then don't let me dissuade you. By all means use it, I won't judge. I'm sure Mr. Astropulse would appreciate your business and perhaps use the money to make better models for us in the future.
Still, I didn't want you to go away empty-handed, so I've got two quick examples below that I think illustrate that other AIs can make some very pretty sprite art. Please note that I did shrink these down to 128x128 pixels in my (20-year-old) version of Photoshop using 'nearest neighbor' resampling to make them look more convincing as pixel art, as the original images did have some odd artifacting. With a few alterations these could be indistinguishable from the genuine article.
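(For anyone without Photoshop, the same "nearest neighbor" shrink can be done in a couple of lines of Python with Pillow; filenames here are illustrative.)

    from PIL import Image

    img = Image.open("ai_output.png")
    # Nearest-neighbor resampling keeps hard pixel edges instead of blurring them.
    img.resize((128, 128), Image.NEAREST).save("ai_output_128.png")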
Anyways, best of luck to you. Cheers!
Knight Portrait: Created in Bing AI using a custom pixel art prompt I wrote.
Man with Barrel: Created with ChatGPT, requesting a picture of a man struggling to lift a barrel in a sprite art style.
Hey, thanks for replying. I'll post this for posterity so I can stop tiptoeing around the idea publicly. @ you but also just @ the idea of using 20-year-old Photoshop, which is the clutchest sht I truly idolize. SwiftKey Dvorak and autocomplete is a curse, and if it seems long don't read it (I already agree with you).
I was just in the mindset pricing some tools/tokens these last few weeks, not even to generate new art but to pixelate images or 3D models into pixel art in 'my' style so I didn't have to keep stitching it. The pixelizer effect alone built into this is worth a third of the price or more, and then you're just looking at a month of subscription costs if you trust this to produce a particular effect. You could also do it with cameras and frame grabbing in Unity, but the time to learn that alone has in my experience been frustrating. Even what you just did here takes either Photoshop or a knowledge of the free online tools/GIMP or the great PhotoScape (see the rabbit hole if you're a complete amateur). If you want to make 500, or in my hope 500,000,000, unique NPCs in a pixel art game, this tool opens up whole new procedural dreams I've had for decades. I play Elite Dangerous, No Man's Sky, and single-player vanilla Minecraft. Give me any procgen in any methodology and I am very happy.
The big AI companies must be panicking to switch to real-time video, because this kind of app is going to replace the need to ever buy tokens on someone else's server or get Photoshop, etc. Especially if they let people copyright pixel art much more, before every pixel-location style has been churned out by some asset seller and only homebrew is allowed, with some kind of cultural tip/donation peer pressure to replace litigation over copyrighted Nintendo pixel art games. (Thank everyone who has released stuff under CC licenses so far; it may be the only hope, and we need to reward all the existing pixel artists by buying their games or getting them art grants etc., because there's no way that games as a medium can survive anyone hoarding a style of pixel art through copyright. It just needs to be referential and not profitable for someone who can straight rip it off or have a robot rip it off.)
Homebrewed, in-house-trained models are something else, and I am so ready for them in a pajama-Sunday JRPG message board space. I call this sort of app a log cabin game. Power's out, there's no internet. Just a fire going and winter outside. You'll have to go to bed before the laptop battery runs dry. The scourge of the dotcom-to-web3 era has to leave us old ladies something. Once you disappear into the hills like I have, these offline tools are a place to relive deep retail nostalgia currently/ironically displaced by the current smartphone surveillance craze.
PS: I love the Bing image generator and use it to make all my Out of the Park Baseball player photos. This might be even more archaic as a topic, but it's lowkey unfair to the MLBPA to have historically made sports stats a public event that can be reported in the public domain, if you care about their rights as laborers etc. I don't need to follow real-life sports at their ticket/parking/satellite TV rates and am very close to just simulating batted-ball baseball. Vin Scully voice gen, Ken Burns documentaries, none of which existed in real life or will ever be marketed to anyone but myself in private. I have to go to a gym or pay bills or I would already be retired in my 30s because my life is complete.
Thanks, many people underestimate the power of older Photoshop versions. Sure, they don't have all the bells and whistles of the new ones, but I can still do a lot more on mine than most people can on GIMP, including mass automation. The downside is that if I have too much free space on my hard drive, the software assumes I have zero space and won't boot up, so I had to install hundreds of gigabytes of games to get it to run. XD
Speaking of licensing. The weird part about AI art? You can't copyright it as-is. Companies cannot legally copyright works generated with their model. That's also why you don't see a lot of big companies (knowingly) using AI art because it could feasibly mean losing the rights to their IPs. That's why I see AI art more as the tool of the indie game dev and hobbyist rather than that of the big companies.
Also, here's a thought for you. There are some prompts you can get on sites like PromptBase to generate entire sheets of sprite art. That may help you in your quest to get 500,000,000 NPCs. I mean heck, even if you only pull about 3 sprites per sheet, you're at least cutting those generations down to a third.
Anyways, take care and best of luck. Cheers!
Hi! This looks really cool. Can it do isometric assets?
Yep! There is an isometric model in the "Modifier" section :)
https://imgur.com/a/t17Azsa
Good to know. Is there a way to use a custom color palette? Without that, it'd be kind of hard to get a consistent style
Oh wait it says there is on the page, my bad. I just didn't see one in the web demo
Hi, I'd love to get a look at this tool, but being Brazilian is hard haha. Any chance you'll have an Easter sale?
There is a sale going on right now!
Sweet, just got it! Is there any way I could get the files for the standalone models as well?
Send me a message on Discord with proof of purchase and I'll send you a code to get the model files :)
thank you! Will do
Hi, what exactly do you mean by:
!!! BEFORE PURCHASING ENSURE YOU READ THE COMPATIBILITY SECTION BELOW, THIS VERSION DOES NOT CONTAIN STANDALONE MODEL FILES !!!
The compatibility section does not appear to discuss the model files best I can tell. If the model files can't be hosted on itch.io, how does one get them in order to run this 100% locally? Are they automatically downloaded upon the first generation run?
The model files required by the extension are downloaded automatically, but these files are only usable inside of Aseprite. If you need model files for use outside of Aseprite you can either purchase the extension through Gumroad, or contact me with proof of purchase and I'll send a redeemable code.
Is AMD GPU support for Windows completely ruled out, or are you looking into it for the future?
What datasets did you train this model on? Do you own the copyrights or have permission? If not, why are you selling it?
All of the training data we use is given to us by artists for the purpose of training; a plurality of the dataset is even my own artwork.
They don't have permission.
You can literally read my post above, stating that I had permission for each image we used in training.
Hi, I have 16GB RAM and I'm using an RTX 3050 Ti with 4GB VRAM. Is 4GB VRAM enough for the model? How many steps can I go with 4GB VRAM?
4GB is on the low end, but you will be able to make 64x64 images.
Not gonna buy 'cause it would break me, but it looks very well trained. Congrats!
Thank you! You should check out the site, it's a bit more affordable and you can even try it for free :) https://www.retrodiffusion.ai
If I may suggest something: please make it so that if settings such as the prompt, size, and reference image are unchanged, it does not consume more credits. I tried it and basically spent all of my credits without getting anything close to what I expected, and I can't reroll or tweak a few simple values to try again.
Every time an image is generated it costs us on the server side, even if no settings were changed, so we can't make that an option, unfortunately. Sorry you didn't get what you were expecting. What were you trying to create?
Hi, this looks really cool. Seeing the storage in the hardware requirements, I assume the paid version is local. I have a few questions:
- Are there things that can only be done in the paid Aseprite version and not on the web version?
- It seems the paid version gives you a license key that you can use to redeem tokens for the web version, right? How many tokens?
- Is 16GB RAM a recommendation or a minimum?
- I assume there's no refund policy, just in case...
- In your performance stats, what is the relationship between the "512x512" and "64x64 pixel model" image sizes? (ex: Intel i5-8300H: 20 steps 512x512 (64x64 pixel model) in 10 minutes.)
- Does the paid Aseprite version have an "input image" feature like the web version?
The thing is, I have an Intel Core i5 CPU, an integrated Intel Iris Xe GPU, and 8GB RAM, so I'm not sure I'd be able to run it locally. But I'd prefer a one-time payment rather than constantly buying credits for the web version.
Hey! The paid version does run locally. The goal is for the web and local versions to maintain as many identical features as possible, right now the web is slightly behind but we're updating it this week.
There aren't any tokens with the local version.
16GB RAM is a minimum for CPU-only generation. If you have a capable GPU, you only need 8GB of RAM.
There isn't an explicit refund policy, but we provide refunds to anyone who asks.
The pixel art image size for a 512x512 generation is 64x64. Both the extension and the web version handle all the size conversions behind the scenes for you.
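A quick illustration of that conversion (the 8x factor is inferred from the 512x512 to 64x64 ratio stated above, not from documented behavior):

    SCALE = 8  # inferred: 512 / 64

    def generation_size(pixel_art_size: int) -> int:
        # Both the extension and the website do this conversion automatically.
        return pixel_art_size * SCALE

    assert generation_size(64) == 512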
All versions have an image input option.
Based on the hardware you mentioned, you would not be able to run it. Intel Iris Xe is not a dedicated GPU, but rather integrated graphics on your CPU. You must have a dedicated GPU with at least 4GB of VRAM. You can refer to this system requirements chart for more detailed compatibility information:
thank you for the answers