Omg, the double pregnancy though.
That’s normal, when I’m trying to get these things sorted. Pregnant males, far as the eye can see.
I’ve seen the Aliens films. I know how that ends.
I’ve been spotting more and more AI replicas.
For example, this image that popped up in my article suggestions (sponsored).
It rubs me the wrong way.
I think the difference between this and other generations is that this one isn’t trying to create something new; rather, it very deliberately tries to create royalty-free images out of copyrighted ones.
I think that’s the difference that crosses the line for me. It’s one thing to teach AI how to draw and entirely different to teach it to replicate something.
I guess it’s the same difference as creating fanfiction vs plagiarism that uses the exact text but changes a couple of details and passes that as original.
At least fanfiction knows what it is.
Images like the one above make it so much harder to make the case for using AI, because people will bring them up as examples of how this technology is abused, and I don’t disagree with that. Unfortunately, we can’t stop people from crossing the line.
The one that weirds me out is that a person can take a pic of you on the street and have an AI strip you… or swap your face into a porn clip. This one is nothing like the full depravity out there.
The past couple of weeks I haven’t been using Midjourney as much and was starting to think that I should lower my subscription back to the lowest tier again.
And maybe inspired by this idea I went a little bit crazy yesterday. I don’t think I truly realized what task I had set out on until I was a few hours in and there was no end in sight.
I thought, well, while I have unlimited Relax mode right now, let’s put it to some mass use. I created a fresh channel on my Discord dedicated to the exercise:
use /describe on all of the favorite images I have on my laptop.
These are all the images I’d accumulated over the years: some are inspirations for my characters or settings, some are simple background or decorative images or blog graphics, and some are images of my characters I’ve created with Heroforge.
For anyone not familiar, /describe lets you input a single image, and MJ tells you what prompts it would use to create that image. 99% of the time it doesn’t recreate what you input, but sometimes the results are very interesting, so it’s worth experimenting with.
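If it helps to picture the flow, it looks roughly like this (the filename is made up; the exact field labels in Discord may differ slightly):

```
/describe image: ruby_portrait.png
```

MJ then replies with four candidate prompts, each with a numbered button you can click to run it as a regular /imagine generation.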
I stayed up until maybe 4 am last night doing that… and I still didn’t finish.
Oops.
I still have to go through the results to see what’s usable but there were quite a lot of interesting images I could further evolve. I want to give “vary region” a more in-depth try.
Okay, here are a few random interesting ones.
My new favorite Ruby.
The cupcake!!!!!!!
Officer Al Amogordo was holding a cupcake in the Heroforge figurine. MJ decided that the cupcake was the most important part of that image. I can’t argue with the results.
And Cleo, the most gooddest girl
Testing an updated SDXL checkpoint (Realities Edge 3.0) with some old, very basic (dragged-knuckles-over-keyboard) MJ prompts. The image quality is getting close to a six-megapixel digital camera.
Midjourney has zoom-out and region-editing functions, similar to Photoshop’s AI and Stable Diffusion’s inpaint / ControlNet features (which I have no idea how to use).
Zoom out works great. I went crazy with it here.
But most of all, I find that zoom out works great for creating more complex characters. You can start from a close up and zoom out for a larger scene.
Vary region works well most of the time.
But not if you only want to fix the hands. Omg, the creatures it creates, lol.
In general, the larger the area, the more coherent it turns out.
It works wonders if you want to change what the character is wearing.
For example
…My response to A1111 SDXL ruining an otherwise fine image (according to the preview window) with dislocated limbs, pixelated edges, 17th-century teeth, or Cthulhu hands…
After more fooling around with old MJ prompts in SDXL, I’m starting to get some Westworld vibes. How long until we reach the “we’re not here yet” stage of AI image or animation generation?
Oh, the first season of Westworld is great. Seasons two and three are meh and bin-worthy.
The very first episode messed with me. lol
Exciting news. Upscaling (way overdue) and 3D?
Midjourney remains confident in its ability to achieve 3D with virtually no quality loss from 2D. The focus, for now, is on generating high-quality images rather than exporting meshes. However, the company stated that clean, AAA quality mesh exports could be possible in the future if the market demands it.
In other news, I just discovered MJ’s /prefer command.
Why isn’t everyone talking about this hack???
In short, you set up your prompt as if it were a parameter; I even added previous generations as a reference.
And then you can use it in all your prompts.
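For anyone who wants to try it, the flow is roughly this (the option name and prompt text here are made up; check MJ’s docs for the exact fields):

```
/prefer option set option: ruby value: red-haired elf ranger, green cloak, digital painting --ar 2:3

/imagine prompt: standing guard in a moonlit forest --ruby
```

When the prompt runs, MJ expands --ruby into the stored text and parameters, so the same character description and settings carry over to every generation.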
Omg, the possibilities!
Not only does it save time but you can safely create consistent characters and styles.
I’m going to have to experiment to find the most reliable prompts but this is a game changer.
Local, uncensored install option for Midjourney when?
PSA for SDXL: remember to enable Restore Faces in the A1111 Settings → Face Restoration and Settings → User Interface → Quick Settings List menus. The updated A1111 txt2img web UI does not display the RF option/tab by default.
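For reference, after toggling those settings they end up in A1111’s config.json looking roughly like this (key names from my install of a recent A1111 1.x build; yours may differ, and editing via the Settings UI is safer than hand-editing the file):

```
"face_restoration": true,
"face_restoration_model": "CodeFormer",
"quicksettings_list": ["sd_model_checkpoint", "face_restoration"]
```

Adding face_restoration to the quick settings list is what puts the toggle back at the top of the txt2img page.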
…I now have so many character portrait images to redo…
Yes, please!
I think they’re afraid you’ll build your own bot out of that and try to sell it as your own. Protecting their property. Unfortunately for us.