polystruct

@polystruct@lemmy.world


Is it possible to know the token impact in a prompt?

I understand that, when we generate images, the prompt is first split into tokens, and those tokens are then used by the model to nudge the image generation in a certain direction. I have the impression that some tokens have a higher impact on the model than others (although I don't know if I can call it a...
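One way users steer per-token impact directly is explicit emphasis syntax like `(word:1.2)`, a convention used by some Stable Diffusion front-ends. As a minimal illustrative sketch (pure Python, not any specific UI's actual parser), splitting a prompt into weighted tokens could look like:

```python
import re

def parse_weighted_prompt(prompt):
    """Split a prompt into (token, weight) pairs.

    Assumes the common "(word:1.2)" emphasis convention used by some
    Stable Diffusion front-ends; plain words default to weight 1.0.
    Illustrative sketch only, not a specific UI's parser.
    """
    pairs = []
    for match in re.finditer(r"\(([^():]+):([\d.]+)\)|([^\s(),]+)", prompt):
        if match.group(1):  # explicitly weighted: (word:1.2)
            pairs.append((match.group(1), float(match.group(2))))
        else:  # plain word, default weight
            pairs.append((match.group(3), 1.0))
    return pairs

print(parse_weighted_prompt("a photo of a (cat:1.4) on a sofa"))
# → [('a', 1.0), ('photo', 1.0), ('of', 1.0), ('a', 1.0),
#    ('cat', 1.4), ('on', 1.0), ('a', 1.0), ('sofa', 1.0)]
```

Note that real pipelines tokenize with the model's own tokenizer (e.g. CLIP's BPE), so one word may become several tokens; this sketch only shows the weighting idea.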

polystruct,

Thanks! I initially didn't think such extensions were what I was looking for (they show which parts of the image are heavily influenced by the prompt), but they do give a visual clue as to which parts of the prompt are definitely important to keep, whereas image areas that get hardly any attention might tell me which parts of the prompt are less impactful.
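The attention maps those extensions visualize can also be reduced to a rough per-token "impact" score by summing each token's attention over the image. A minimal sketch with fabricated attention values (real maps would come from a diffusion model's cross-attention layers, as DAAM-style extensions extract them):

```python
def rank_token_impact(tokens, attention_maps):
    """Rank prompt tokens by total cross-attention received.

    tokens:         list of prompt tokens
    attention_maps: one flattened attention map per token
                    (list of floats, one value per image patch)
    Returns (token, score) pairs sorted by summed attention,
    highest first. Illustrative sketch; the values below are
    made up, not taken from a real model.
    """
    scores = {tok: round(sum(amap), 6)
              for tok, amap in zip(tokens, attention_maps)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: 3 tokens, a 4-patch "image" (fabricated numbers)
tokens = ["a", "red", "car"]
maps = [
    [0.1, 0.1, 0.1, 0.1],  # "a": weak everywhere
    [0.3, 0.6, 0.2, 0.1],  # "red": one strong patch
    [0.5, 0.7, 0.6, 0.4],  # "car": strong everywhere
]
print(rank_token_impact(tokens, maps))
# → [('car', 2.2), ('red', 1.2), ('a', 0.4)]
```

A simple sum is only one possible reduction; taking the maximum per map instead would highlight tokens that dominate a small region rather than spread weak attention everywhere.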

And it gives me a few repos whose code I can dig through. Older ones (archived/read-only) are fine; they are often easier to read than the more optimized (but also feature-rich) code that exists nowadays.

polystruct,

Currently Confluence. We do have a split documentation policy, where long-lived and broadly communicated information should be on M365 (SharePoint and affiliated services) whereas more technical or short-lived (project) documentation is on Confluence.

But even certain broad-use information is showing up on Confluence more and more because it's easier to use (wiki editing and plugins like draw.io support).
