Comments
I'm checking in about Retro Diffusion's hardware requirements. I saw in the docs that Radeon graphics cards aren't supported on Windows.
I'm wondering if it's a different story if I'm running Radeon within a Linux setup through Windows Subsystem for Linux (WSL). Would Retro Diffusion work then?
Just curious if that's a viable option. Thanks for your time!
Unfortunately that's not a viable option, because of how the program needs to be run on the main system.
I was thinking about buying it for Minecraft textures, but they look very noisy with bad shapes/outlines. Do the new updates fix those issues? Thanks!
Will it run well on an M1 MacBook Air with 8 GB?
No, that is below the minimum requirement. See the compatibility graphic on the page.
Is this a one-time payment or is it credit based?
I know it says one-time payment, so I purchased it, but I had to make an account online and use credits from the webpage to generate anything in the Aseprite extension.
The extension is a one-time purchase and it runs locally; all of the tools for that can be found in the Sprite menu, or in Help -> Retro Diffusion Scripts.
It sounds like you may have been confused and used the "Retrodiffusion.ai" tab. That section also lets you use the website, which requires credits; that's why those tools are labeled "API Text to Image", because they use the website API. But it is only an option, not something you need in order to generate.
You can use all other tools in the extension without signing into the site, buying credits, or even having an internet connection.
Is the locally deployed software the same as the cloud-based website in terms of functionality?
No, they don't even use the same models or the same code base; they are completely different products. The website runs models that are extremely large and can only run on server-grade GPU clusters.
I had a few questions about this one. I already saw it last year, but I didn't look into it much since I was learning to do AI image and video generation on my own.
I do have Aseprite, and I'm debating whether to buy this pack with the extension: $50 for just the model vs $65 with the Aseprite extension. I use Photoshop more than Aseprite for my artwork, but the extension might be nice. Is it possible to add it later on?
I did read that the model on the website is different (I presume just in features and animations, not the art in the model itself, or do you also provide those models updated with a walk animation? Or does the website's animation feature just use ControlNet, so it's all the same animation with the same pose?).
Or are you possibly using LoRAs as well, and do you include them in the package?
I'd rather do my own generation so I can make as much as I want (instead of it costing credits on the website).
If I just use my own Stable Diffusion setup (Auto1111, ComfyUI, whatever), can I use this model along with ControlNet and make my own animations, since it seems you are doing the same on your website?
I really like the animation I've seen, but it hasn't been out for long, and I was also hoping for more animations like death, climbing, dashing, dodging, and attacks (I did read something about you working on that?). I presume I could achieve that using ControlNet and your model in my own Stable Diffusion setup?
I also found something named "voidless.dev" in a YouTube short yesterday, showing vector artwork animated from just a prompt. It seems nice, and maybe it could do pixel artwork as well, but I still need to test it out.
Or do you think you might update the Aseprite extension in the future with animations, or will that only be part of your website, with people needing to figure out themselves how it works with ControlNet? (I know how ControlNet and such works.)
I also read somewhere that the animations max out at 48x48 on your website? I guess I could get bigger ones if I try it on my own.
Thanks for the reply so far.
The animations are not done using ControlNet; they are made using their own specific model. This is also why other actions haven't been added yet: we don't have the data to train a model for them.
If you are looking for results similar to the site, they can't be achieved locally. The extension does give you more control, and it has ControlNet integration, you can load your own LoRAs, etc. But it is not the same as the site, and it likely will never have the same full capabilities.
Hey this is really cool!
Can it do sprite sheets for animation?
Hey! The extension cannot, but the website can: https://www.retrodiffusion.ai/
I purchased the product on Gumroad, how can I migrate to here?
Not yet, but you will be able to move it to https://www.retrodiffusion.ai/ once we have the store set up there.
I'm in the same situation. Are the contents of the two stores the same now? If they're different, when can I migrate? :)
Store is still not set up, you'll get an email when it is.
Is there an expected time for that to be completed, like next quarter or later this year?
Hello, I have a question: does the complete version also include the Retro Diffusion model? On Gumroad they are two different things: the Retro Diffusion extension for Aseprite and the Retro Diffusion model.
Hey! Yes it does, you can access them from "Help" -> "Retro Diffusion Tools" -> "Download Models" inside the extension.
Hi, my brother and I both work full-time jobs and don't have much spare money, but we really want to get into indie game development. The biggest issue for us so far has been creating sprites and sprite animations, which takes forever alongside full-time jobs and families. Since it's 2025, I figured AI should already handle this, especially since 2D pixel art seems like it should be easy, but I still haven't seen any realistically working models or services; they all do weird stuff and ignore half of what I mention. I'm wondering if your plugin (full version) can accurately generate animation sprite sheets from a static sprite? I was satisfied with what ChatGPT 4o generated for me from a hand sketch (attached). However, it couldn't properly handle basic manipulations like generating the left-facing version (flipping the arms, with the drill arm closer to the viewer) while maintaining scale. Can your plugin do that properly, and then generate a sprite sheet of animation frames for movement, drilling, etc.?
Contrary to how it seems, pixel art is one of the most difficult things for AI to do well, because it requires far more precision and accuracy than almost any other art form.
The plugin can't create sprite sheets like this, and certainly not without a lot of legwork. You're probably better off looking into converting 3D animations to 2D sprites; that takes a lot less effort.
Hello there. Fantastic work with this!
I'm planning a photoshoot where a person will be modelling for a sprite sheet. Once I've compiled that into a sprite sheet, could I run that through this? Or would I need to run each image through the model?
For the whole sprite sheet, if that works, could I also change things like a sword into a mace? Or a handgun into a steampunk rifle for instance?
Thank you :)
Hey! The models in this tool aren't quite capable enough for animation like that, and especially not for changing a specific item while leaving other elements intact.
You're welcome to try of course, but it will be an uphill battle :)
Error when trying to generate something:
Importing libraries. This may take one or more minutes.
ERROR:
Traceback (most recent call last):
File "C:[blablablamyfolder]\Aseprite\extensions\RetroDiffusion\stable-diffusion-aseprite\scripts\image_server.py", line 6, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Catastrophic failure, send this error to the developer.
Hey, you need to run the setup script from File -> Setup Retro Diffusion.
If there are errors, it did not work. For further troubleshooting, join the Discord server: https://discord.gg/retrodiffusion
Hi! When I try to run the setup script, it gives me this error:
ERROR:
Traceback (most recent call last):
File "C:\blablablafolder\Aseprite\extensions\RetroDiffusion\python\setup.py", line 667, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
Install has failed, ensure requirements are installed.
If issues persist, please contact the developer.
Joined the Discord and found the solution on the forums, so here it is if anyone is having the same problem:
- Uninstall the 32-bit Python (keep only the 64-bit version)
- On Aseprite go to Help->Retro Diffusion Tools->Open python Venv folder
- Delete all the content of the opened folder
- Run on Aseprite File->Setup Retro Diffusion again
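The fix above hinges on the remaining Python actually being the 64-bit build. As a quick sanity check before re-running setup (just a standard-library sketch, not part of the extension itself), you can run this with whichever Python is on your PATH:
import platform
import struct
print("Python version:", platform.python_version())
print("Build:", struct.calcsize("P") * 8, "bit")  # should print 64, not 32
If it reports 32 bit, setup is still finding the wrong interpreter.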
Is this version updated like the website or is it very old?
It is updated frequently, about every 1-2 months. But it is not the same as the website; it uses completely different models. The models on the website can't run on normal GPUs, they are far too large.
Hello! I'm trying to make very simple pixel art characters like the ones found here:
https://www.epicmafia.com/roles
Is there any way to provide those sprites as a guide and make similar ones, with different roles?
This style will be difficult to get without training for it. I recommend checking out this guide; it tells you how to train your own "LoRA" model that you can use with Retro Diffusion to get the exact style you want. https://docs.google.com/document/d/1jBjn7xfGzGmRpvap43hMvbNq0DLCDJWa30JG-1Esx3o/...
Hi,
I'm considering buying this. Can I expect this to work tolerably well (and without any library issues obviously) on a 2024 M3 Macbook Air?
Unfortunately no, that model does not have enough memory to run this program well.
I've installed the latest torch: https://download.pytorch.org/whl/nightly/cu128
But I still get an error:
Any help?
Retro Diffusion uses its own virtual environment, so you need to do all of the Python library management there. You can find the venv location in "Help" -> "Retro Diffusion Tools" -> "Open Python Venv Folder".
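If you're not sure which interpreter you're actually updating, a quick check helps. This is just a rough sketch using plain stdlib/PyTorch calls; run it with the python executable inside that venv folder:
import sys
print("Running from:", sys.executable)  # should point inside the RetroDiffusion venv
try:
    import torch
    print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
except ImportError:
    print("torch is not installed in this environment")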
Thanks for the reply! I've tried to manually update all the torch packages in the local directory, but it looks like there are some incompatibilities between them.
Ah, that's unfortunate. We'll be supporting 50xx cards officially once PyTorch has stable support, but until then you'll need to hold tight :)
I purchased this tool. Can you please add a license to the itch.io page stating that it can be used for commercial content production? This is important.
Hey! The license is included inside the extension files itself, but I've now also added it as a "demo" here.
It's a bunch of legal jargon, but it essentially boils down to this: the code and models are owned by Astropulse LLC and can't be used commercially, but the outputs of the code and models are owned by whoever creates them (you) and can be used commercially, since you have the rights to them.
Thanks. This is helpful.
Hi! 👋 Are there any upcoming sales?
Reading below, it appears this cannot be used to make sprite sheets. Is there ever going to be that capability, or should I just git gud?
Right now I'm doing everything my little programmer fingies allow me to: cutting sprites up into pieces and writing a script to re-composite them in-engine, so I can have 1 head and a ton of bodies/hair/etc.
But if this could one day make sprite sheets, that'd be a lifesaver.
The extension probably won't be able to do sprite sheets any time soon, but on the website (https://www.retrodiffusion.ai/) we've nearly got a model released that can do walk cycles like shown here: https://x.com/RealAstropulse/status/1896924271659884854
Hi Astro, this project looks incredible! I am planning on purchasing it, but I was curious if the custom model you created would be accessible for me to run through separate programs after purchase? I use a lot of my own custom setups in ComfyUI, so I would get more out of the purchase if I could riff on it with my own variations.
Yep! You can access those models from inside the extension through "Help" -> "Retro Diffusion Tools" -> "Download Models".
Excellent, thank you for creating this. It's tough to find reasonably ethical models, so I really appreciate the work you're doing here.
I'm using a Mac M3 Max. My first locally generated image was okay-ish, but from the second one on, all that's generated is just color noise. Any idea?
I haven't heard of this issue before. Would you join the community Discord server so we can figure out what's going on more closely? https://discord.gg/retrodiffusion
I'm having a weird problem. When I try to run the install script, it detects my GeForce 1080 Ti card as an "AMD GPU". I know CUDA works, as I've played around with other AI models using it. How do I fix this?
Edit: using Windows 10, if that's of any relevance. I also have the latest GeForce drivers installed.
Hey! There can be false detections sometimes because of how Windows sorts graphics outputs, especially if your CPU is AMD and has integrated graphics.
The comments here aren't a good place for troubleshooting, so if you'd join the community Discord server and make a post in the errors channel, that would be great: https://discord.gg/retrodiffusion
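For what it's worth, you can also check which device PyTorch itself sees. This is just a generic PyTorch snippet (assuming torch is already installed in the extension's venv), not part of the installer's detection logic:
import torch
if torch.cuda.is_available():
    # With a working CUDA setup this should name your 1080 Ti
    print("CUDA device:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device visible to PyTorch")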
Will do! Thanks for the swift response.
Can I run this on my PC? I have a Radeon GPU and a Windows system. Should I be running it on Linux for it to work?
You would need to use Linux with an AMD GPU. Additionally, only more recent AMD cards are supported: the 7000 series and up, plus the 6700 and 6800.
Hi Astropulse, I have an RX 6600. It isn't supported, then? It should have 8 GB of VRAM, if I'm not wrong.
It is not supported by AMD's AI drivers in PyTorch.
Can this make animations?
It cannot make animations yet, though it is something we are working on.
When you say make animations, is this more like animations within Aseprite, or sprite sheets?
You can make animations by hand in Aseprite, but Retro Diffusion can't generate animations or animated spritesheets.
I see. Would it, however, do well enough to recreate the same character but in another pose? Or would the AI create something completely new?
It creates something new; this model doesn't have the consistency to make the same character in different poses.
Will this have a date when it stops working?
No, it runs locally on your own hardware. It will only stop working if you delete the files or other things it relies on.
best tool on itch.io ever!
I keep getting the following error when the terminal is attempting to install. Any suggestions? I didn't realize it would be this technical to install.
ERROR:
Traceback (most recent call last):
File "/Users/bean/Library/Application Support/Aseprite/extensions/RetroDiffusion/stable-diffusion-aseprite/../python/setup.py", line 536, in <module>
from PIL import Image
ModuleNotFoundError: No module named 'PIL'
Looks like Python wasn't able to install the libraries it needed. Being an AI program it's incredibly complex, but I do my best to simplify it down for users. Unfortunately sometimes it needs a little help to get installed and running.
Itch.io comments aren't the best place for troubleshooting, so please join the community discord server so I can help you with this better: https://discord.gg/retrodiffusion
Can this create non-square sprites? (16x24, for example)
Yep, it can do any aspect ratio you want. Some might look a bit weird, but anything less extreme than 1:3 should look fine :)
Any update on running the Flux model locally? I know you said it wasn't working on consumer hardware yet, but since Flux came out people have found ways like bitsandbytes and GGUF quantization. Any chance of taking advantage of these and getting it running in the extension locally?
Never mind, I just saw your post on Discord. I will wait patiently.
Thinking of getting it: is it compatible with XL models like Pony?
I'm not asking about directly top-down like the question below, but can you do 8-directional sprites for RPG Maker? What about Doom-style FPS sprites?
I don't know about your other questions, but XL models seem to fail, so I assume it doesn't support them. I have tried Pony XL and some others; they all fail.
Hi, can I use this tool to generate top down pixel art characters (32x32)?
Something like this: Example #1
Hey! It's not specifically trained for this style, so it won't do it very well.
I love the pixel art version of the photo of the red Toyota, really makes me think of the intro sequence photo on Test Drive 3 on MS-DOS...
Hi, I made a purchase and after using it, I found that it doesn't meet my current work needs. May I ask where I can apply for a refund?
I'd love to help it meet your needs; RD is quite flexible, but it does require some tinkering and learning to use effectively. Of course, if you'd rather just refund, you can contact itch.io support for that.
Our project currently requires a Japanese animation style, not a realistic style, but it seems that this model option is not available at the moment?
The program doesn't have any animation tools, but I'm wondering what you mean by a Japanese style? If you mean something more like a 'chibi' style, you can get that through prompts, without needing to use the modifier models.
If I purchase the full version but don't meet the hardware specs, does the cloud version allow for free usage?
No, the cloud version still requires you to purchase credits. The hardware requirements are listed very clearly on the product page.
Hi, I tried to send you a message via your website but the contact form seems bugged. I am trying to install this into Aseprite; however, I have Python 3.12 installed on my system and the extension fails, saying it needs 3.10 or 3.11. Do I need to downgrade my Python install, or can you update this?
Hey! You do need Python 3.11 (3.11.6 is the best specific version). It is unable to use Python 3.12 at all, because the largest dependency is PyTorch, which is only available for Python 3.11 or lower.
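If you have multiple Python versions installed, a small check like this (a generic sketch, not something the installer itself runs) tells you whether the interpreter you're launching is in the supported range:
import sys
ok = (3, 10) <= sys.version_info[:2] <= (3, 11)
print("Python", sys.version.split()[0], "->", "supported" if ok else "not supported (needs 3.10 or 3.11)")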
Yep, it runs completely locally. All the artwork used for training was either made by me, or was given with consent from other artists.
Any chance of this ever supporting the open source Pixelorama? It has extension support: https://github.com/Orama-Interactive/Pixelorama
Low chance, but Pixelorama has been on my radar for a while :)
Hi! How fast will generating medium-sized pictures and generating pictures from examples be on a Mac M1 Pro with 16 GB of RAM?