I had no idea Midjourney could interpret emojis 🤯.
Do you know if MJ samples the photos it has created itself - i.e. do the 275,000 images a day it's generating pile back into its own dataset?
I don't think MJ does this automatically on an ongoing basis. But the MJ developer team very much do something like that to improve the algorithm. I believe the entire Version 4 of MJ was trained on a "supercluster" of images that were handpicked and rated by humans as the best representative outcomes.
Would be kind of cool if you could have a self-reinforcing feedback loop where MJ kept improving in real time based on user feedback, though.
Can you share the exact prompts used to generate the first image from your thumbnail? I really like that style!!!
If you mean the yellow landscape with the spaceman, it was: "alien land with a bright yellow sky filled with stars, fantasy, biomorphic, cosmic horror, sci-fi --ar 3:2"