Dall-E, Midjourney, Stable Diffusion (etc) - who's playing, and with which?

  • 7 Responses
  • Nairn

    I'm curious to know how many of you are playing around with this shit, and with which.

    I know some of you have been neck-deep in this for a while, so I'm curious to know of your experiences so far.

  • Nairn

    Me, Midjourney. I have a knot in my stomach over about 50% of the shit that comes out of it. Actually, 100%, but a seriously tight knot over a smaller fraction than that. Nothing I've gen'd has been complete, but as a 'first' step towards more interesting or refined things, it's a bloody big one. I'm seriously considering how to base a facet of my business (lasering) around this shit.

    I'm only using Midjourney because it's dead easy to set up within Discord. I suspect I'll be trying the others fairly soon, seeing as they appear to be becoming easier to Just Hoy Shit At.

  • spl33nidoru

    Midjourney for me too.

    I use it to generate image references for video treatments, especially specific moods, lighting, compositions, poses.

    I hate the self-referencing nature of the field I'm in, and avoid as much as I can referencing existing works that the client will then want literally reproduced.
    Midjourney is hit and miss, but often feels less tedious than looking for an existing ref that closely matches what I have in mind.

    And when you're in a good spot, making increasingly precise or exciting variations of something close to your idea, it gets pretty exciting.
    I usually avoid making the highest-res render of a visual I like; I'm rarely a big fan of the excess of texture that gets generated then.

    • diffusionbee is great. major new features implemented every week. a load more in beta (kingsteven)
    • wrong post soz (kingsteven)
  • slappy

    I was using Midjourney to get some interesting results for pattern-making, and it was actually pretty useful.

    Now I'm running Diffusion Bee locally, as it runs ok on M1 macs. The resolution is still pretty low (1024px max on each side). I don't use it that much for anything work-related, I'm more keeping an eye on its progress.

    I'm also playing around with beta.character.ai as I think it will be useful for generating copy for projects that don't have proper content budgets in the future.

  • CyBrainX

    I started a little with Midjourney but then tried Stable Diffusion, which I thought was a lot better.

    • Also, Midjourney is starting to look a bit same-ish to me from what I see others doing. (CyBrainX)
    • yeah i felt that - it has a very certain style and selection of source imagery. maybe it's everyone copying each other tho. (hans_glib)
    • I've been playing a lot with mixing in source images in MJ and it's pretty impressive. (Nairn)
  • Nairn

    The thing I don't like about MJ is that everything is public, unless you pay some exorbitant additional fee.

    I really want to play around with some photos of my daughter, but for obvious reasons don't want to add her photo for public viewing - and, also - I assume the above implies that images added in get indexed generally, to be considered for everyone else's gens.

    • Sounds like Diffusion Bee would be a good option, but I'm not on a Mac so don't have an M1/M2, and moreover I don't have any decent GPUs. (Nairn)
  • grafician

    Started with MJ, then Dall-E, and for months now I've been using SD.

    SD is the most useful overall. I have access to an unrestricted instance in the cloud, so I can run any prompt I want.

    Conceptually SD is the best, Dall-E is the most realistic, and MJ is only for fantasy "featured on ArtStation" things.

    They can all be useful depending on your prompt and needs, more realistic vs. more fantastical. All just for concepting; none of them can output production-ready useful stuff.

    You need hundreds of tries to get something 90% useful. But from there you can continue in Photoshop etc. to re-create something for production.

    Copyrights aside, these will replace stock photos and Photoshop in the long run, but no jobs will be displaced as I see it.

    All bad with hands, of course, and almost all output is cut off: no full wide compositions even when you ask for them, as they were all trained on 512x512 images and that's the limit, even if you ask for a wide/portrait composition.

    Overall, a nice first try; we'll see next year what's up. But unless they re-train every model on bigger images, the size of the model doesn't matter, it will output the same shit - even with 100B images.

    An improvement would be to train on all the images from Meta (Instagram, Facebook); then things would be way more realistic. Until then, same limited output.

    That's all for now.

    • https://imgz.org/iDH… (Nairn)
    • Also, Dall-E is very censored, while MJ is public, so the only choice for private prompts is SD. (grafician)
    • Technically, and features- and UX-wise too, Dall-E is the most advanced. (grafician)
    • Overall you get bored with these pretty quickly; it's just easier to get sketching or building something in Photoshop directly, and you don't waste time prompting. (grafician)
    • For art directors/designers these are just new tools.

      For everybody else, these are just fun for a while, then they skip to the next trendy thing. (grafician)
  • canoe

    Smart toys.