SpeakingColors

@SpeakingColors@beehaw.org


SpeakingColors,

If that’s the same criteria you use for looking for that someone, and you proceed with an open and courageous heart: it won’t be a dream.

And I would say that we have general artistic conventions for depicting the elements the previous commenter suggested: smell lines, meat in the teeth, etc… Their absence from the scene leads me to believe the commenter’s interpretation is far from the artist’s intentions.

SpeakingColors,

That’s kind of how the whole thing could/should fall apart. “Authority” gives a command and someone down the chain has to enact it; if that person or those people refused to act, there would be conflict, but hopefully also the opportunity to realize that authority gave an objectionable command. The cynical take is that any number of enthusiastic appeasers would enact the command to ingratiate themselves with the authority in the system, defeating the message of the refusal.

Unfortunately, much of American machismo includes stepping on others to get a better view, or to be viewed more prominently by favored authority. So to answer your question, there’s probably a dishearteningly large pool of people who would jump at the tasks Trump would dictate.

SpeakingColors,

It does say “have to,” so I think you still can, but you don’t need to sleep to have energy for your day. It’s a magic pill with magic rules.

SpeakingColors,

Creepy and anatomically correct are sometimes at odds

SpeakingColors,

I gather that the “conversation” point is whether being trans has an impact on judgement or something? Because differing from a perceived social norm is an incorrect choice people make…

I wonder if they would feel better if they found out the zodiac killer was cis?

SpeakingColors,

What a cheerful read. An arguably poetic detailing of what was before a rather simple and casual endeavor: your intentions for the sand and the intentions of everything it’s comprised of. Thanks for sharing it

City Guardian (AI assisted) (i.imgur.com)

I’ve been diving into AI assisted workflows and found an extreme fount of creativity. My recent efforts have been towards RPG-style characters like you’d see in a D&D game, and this guy came from the idea of a royal guard of an ancient city, Egyptian/African-esque. The AI gave me a variation with just the shield and I really...

SpeakingColors,

Thank you! I had to look him up but I can see the resemblance with the shape and hair style

SpeakingColors,

I appreciate that! I’ve been shying away from posting stuff on here, as I don’t really know how people take AI art in communities that aren’t explicitly about AI. For a while I had my own judgements about how the models get trained, so I would understand. But thank you!

SpeakingColors,

I hear you - when this stuff was blowing up I couldn’t shake that it was trained off artists’ work that they didn’t consent to having in the datasets. Sure, it’s similar to how human artists work (for music and art the prevailing recommendation for me, or any artist, was to consume material relevant to your art; for visual art they really just wanted you to constantly keep your eyes open for shapes and form), but it felt closer to plagiarism than inspiration. Some generations can be very close to an individual style (especially if the model was trained specifically on that style), but I found that generations which omitted an artist ended up creating something compelling but not tied to one artist specifically - still undoubtedly a conglomeration of the multitudes it was trained on (including photography). It’s muddy water for sure, and the angle of AI replacing workers in general is still relevant - but I also think it empowers people like me who have the visual ideas but can use the help making them fully fleshed out.

The crux, for me, feels like “when you can see whatever you want, what do you want to see?” A lot of our AI woes are reflections of questionable human behavior (racist chat models, AI for war, deepfakes and dishonesty).

How do you feel about it?

SpeakingColors,

For sure! Often I’ll come in with a visual idea already, or I’ll iterate on some with the AI for inspiration. If I have the idea strongly, I’ll sketch out the composition and the elements I know I want - for really tricky poses, like fingers, I’ll take a photo of myself doing them. Then I throw that into Stable Diffusion with img2img to generate images based on my sketch/photograph - something more fully featured, or something I hadn’t thought of but really like (you can also set how “dreamy” the AI should be, i.e. how much it should vary from the input material).

There’s a lot of detail I could get into but the “assistance” is fleshing out a composition -> I go in and correct anatomical mistakes or elements I want to change specifically -> run it through again if it needs it.

SpeakingColors,

I replied to a previous comment about the “assistance” part, which is sort of an abridged version of my workflow (“workflow” is also a term used in ComfyUI, a visual layout that processes the image sequentially through modules). It’s super fun - I highly recommend it! Feel free to PM me anytime, I’d be glad to help!

Really it was looking up terms and areas of Automatic 1111 I was unsure of and finding various sites and guides. Civitai has LOTS of guides, often written by model makers or people with lots of hours in the field - it’s also my main resource for LoRAs and models, and there’s tons of info on there. The most helpful ones were settings and workflows for actual image generation (I can definitely find some links for you there) to get quality results without too much “and if I change this, what happens?” But honestly I love poking around like that, so I still spend hours tweaking just to see what happens xD

SpeakingColors,

Thank you! Essentially I’ll come in with a visual idea - some sketches already, or I’ll do one with AI in mind (keeping the lines simple so it doesn’t get confused). I generate a batch of images with img2img and cherry-pick the ones that fit closest to the idea, or are surprising and wonderful. Then I rework those for anatomical errors or other things I want to fix or omit -> send it back through img2img if it needs it, or to inject detail -> upscale and put it as my desktop/phone wallpaper :P

(I’m using Automatic 1111 which is a webui for Stable Diffusion btw)

SpeakingColors,

Img2img is one of many ways to constrain the AI’s efforts to your compositional desires - it’s rad. You can control the amount of “dreaming” the AI does on the base image to get subtle changes, or a radically different image based on the elements of the previous one (sometimes with trippy, cool results; often with horrendous mutations if the desired image is supposed to be humanoid xD).
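As a rough illustration of that “dreaming” knob (in Automatic 1111 it’s the denoising strength slider), here’s a toy sketch of the step-skipping idea img2img pipelines use - the function name is mine and details vary by implementation, but the gist is that strength decides how many denoising steps actually run on top of your base image:

```python
def img2img_schedule(num_inference_steps: int, strength: float) -> list:
    """Toy model of img2img's denoising-strength knob: the base image is
    noised to a point proportional to `strength`, then denoised from
    there, so low strength = few steps = stay close to the input."""
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    # only steps t_start .. num_inference_steps - 1 actually run
    return list(range(t_start, num_inference_steps))

# Low strength barely "dreams": 15 of 50 steps touch the image.
assert len(img2img_schedule(50, 0.3)) == 15
# Full strength ignores the base image: all 50 steps run.
assert len(img2img_schedule(50, 1.0)) == 50
```

Which is why strength near 1.0 can mutate a humanoid beyond recognition - almost nothing of the original survives the denoising.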

Inpainting is another tool - it’s like a precise img2img on an area you mask. Hands are often the most garbled thing from the AI, so a brute-force technique is to img2img the hands - but the process works a lot better if you help the AI out and manually fix the hands first. So I’ll throw the image into Photoshop, make a list (if I remember :P) of everything I need to fix, address each item directly, and then toss it back into Automatic 1111. The shading and overall style are often hard for me to get right, so I’ll inpaint over my edits to get the style and shading back.
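For anyone who hasn’t seen one, the inpainting mask is just a black and white image: white pixels get repainted, black pixels stay. A minimal sketch of that idea (a hypothetical helper, not part of Automatic 1111 - in the web UI you’d paint the mask by hand in the inpaint tab):

```python
def make_inpaint_mask(width, height, box):
    """Build a binary inpainting mask as rows of pixels: 255 (white)
    marks the region the AI may repaint (e.g. a garbled hand), 0 (black)
    keeps the original image. `box` is (left, top, right, bottom)."""
    x0, y0, x1, y1 = box
    return [
        [255 if (x0 <= x < x1 and y0 <= y < y1) else 0 for x in range(width)]
        for y in range(height)
    ]

mask = make_inpaint_mask(8, 8, (2, 2, 6, 6))
assert mask[4][4] == 255  # inside the box: the AI repaints this pixel
assert mask[0][0] == 0    # outside: left untouched
```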

SpeakingColors,

Like, technology is cool. Engineering is cool. Building better weapons to kill each other and strike enough fear to garner compliance has never solved the impetus for their creation: the fear that a human will kill you, so you attempt to kill them first.

It’d be tight if a “rising china” or “rising anyone” meant that we as a global society would benefit from new helpful tech that gave people more choice and autonomy to improve their lives and the ones around them.

SpeakingColors,

I’ve gotten into AI assisted art in the past month. I would agree that a pure text-to-image approach does imply a lot of creative control given over to the AI tool - sometimes that results in happy accidents, sometimes that leads to very generic looking generations.

There is a wealth of tools and techniques at an artist’s fingertips (free or cheap) that help constrain the generation to a visual thinker’s sketch or apply a style to the image. Most AI platforms incorporate image-to-image, and serviceable artworks can be generated from a very rough sketch of a composition. Text-to-image can be constrained with extensions like ControlNet (for Automatic 1111 / Stable Diffusion), where you take a reference image - or a black and white image of diffused shapes indicating depth - and tie the generated image to it, giving you very predictable compositions.
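To make the depth part concrete: that black and white control image is just grayscale where nearer shapes render brighter, and ControlNet nudges the generation to respect those shapes. A toy sketch of what such an image encodes (in practice the depth map comes from a depth-estimation preprocessor, not hand-written values):

```python
def depth_to_conditioning(depth_rows):
    """Toy: turn per-pixel depth values (0.0 = far, 1.0 = near) into the
    grayscale conditioning image a depth ControlNet expects, where
    nearer shapes are brighter (255 = closest, 0 = farthest)."""
    return [[round(d * 255) for d in row] for row in depth_rows]

# A tiny 1x3 "scene": background, midground, foreground.
cond = depth_to_conditioning([[0.0, 0.5, 1.0]])
assert cond[0] == [0, 128, 255]
```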

On pure text-to-image, I see the writer’s point. However, that’s really only scratching the surface of what can be done, and not a fair basis for “AI art isn’t suited for visual thinkers,” in my opinion. Taking an AI output and tossing it into Photoshop (or Krita) as a foundation to be worked on is also a valid path - you could then take that worked image, do image-to-image on it, and see what you get. To me, it’s more of a collaboration with the AI tool than an all-powerful genie. If I have a strong visual idea in my head, I sketch it, or even photograph myself doing it, and use that as a base for the AI to work with.

SpeakingColors,

I will add, though, that the downside of AI assisted workflows is feeling less connected to the art - I didn’t spend the time with the image to feel out each piece and its quirks. The image appears, and I have to pore over it and touch it up to feel more ownership of the result.
