• Ilixtze@lemmy.ml · 4 hours ago

    Who the fuck will want to read all this useless trash?

  • bdonvr@thelemmy.club · 21 hours ago

    So why would I bother with her slop instead of going straight to the slop machine?

    That’s under the extremely labored assumption that I’d bother with slop at all.

    • Kühlschrank@lemmy.world · 22 hours ago

      Or, if it’s going to be AI-written, I might as well have AI write me a story instead of reading hers. At least it would be about something I like.

  • solrize@lemmy.ml · 1 day ago

    45 minutes? Bah. With `cat > /dev/null` I can “read” her slop novel in 0.001 seconds. She’ll never be able to keep up either.

  • hperrin@lemmy.ca · 23 hours ago

    Cool, she can use her AI to read them too, cause no one else wants to.

  • yeehaw@lemmy.ca · 19 hours ago

    And without all the training data to build off of, and the risk of copyright infringement, what do you really have at the end of the day?

  • altphoto@lemmy.today · 20 hours ago

    Quick! I need a Samsung refrigerator operations manual in the tone of Bilbo Baggins, but NSFW. Don’t ask why! My life depends on it!

  • brucethemoose@lemmy.world · 23 hours ago

    So… I write stories. Mostly it’s for therapeutic purposes, or getting sprawling fantasies out of my head.

    But I have severe attention issues. I’ll get stuck on the wording of one line for hours, get flustered, and then have executive dysfunction kill the whole day.

    Hence, I use pretrain-only LLMs to help me write, but not in the “bang out this chapter for me, ChatGPT” way you might think. I keep a smaller completion model (one not yet finetuned to “chat”; all it can do is continue blocks of text) loaded locally, with an interface showing the logprobs of each candidate word, like a thesaurus. It’s great! It can continue little blocks of text and smash through days of agony. It’s given me plot directions and character dialogue I would never have thought of on my own.
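    As a sketch of what that logprob “thesaurus” view boils down to (toy logits and an invented vocabulary, purely for illustration; a real local backend exposes the actual per-token numbers):

```python
import math

def top_logprobs(logits: dict[str, float], k: int = 3) -> list[tuple[str, float]]:
    """Normalize raw next-token logits into log-probabilities, top k first.

    Toy stand-in for the per-token logprob view a local completion model
    can expose; the candidate words and scores below are invented.
    """
    log_z = math.log(sum(math.exp(v) for v in logits.values()))  # log-normalizer
    ranked = sorted(((tok, v - log_z) for tok, v in logits.items()),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Hypothetical candidates for the word after "the sky was":
for tok, lp in top_logprobs({"dark": 2.1, "clear": 1.8, "bruised": 0.9, "blue": 2.4}):
    print(f"{tok}: {math.exp(lp):.2f}")
# blue: 0.40
# dark: 0.29
# clear: 0.22
```

    Skimming a ranked list like that for one word at a time is much closer to a thesaurus lookup than to wholesale generation.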

    It doesn’t let me write quickly though. Certainly not like that.


    Hence, I really, really hate grifters like this.

    This woman is just a con artist, a spammer, openly boasting about it because apparently society has decided information hygiene doesn’t matter anymore. She’s abusing a dumb tool to flood a space with crap for her benefit.

    And it gives these tools a bad name. They’re the lightning rod, shielding the enablers.

    People rightly hate “AI” because assholes like this get praised for abusing it. Now I feel shame using these tools, and paranoia that someone will find out and make a snap judgement if I talk about it.

    • Buffalobuffalo@lemmy.dbzer0.com · 23 hours ago

      Habitual Sloperators Anonymous. Seems like you have a reasonable application of an LLM, applied only where it’s conditionally valuable. Alternatively, you could ask a person to help and cut out the sloperation entirely. Wash those hands, etc.

      • brucethemoose@lemmy.world · 22 hours ago

        I use pretrains light on slop, n-gram sampling, and a big banned-strings list. And then I check the logprob synonyms on top of that.
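        The banned-strings part is easy to sketch (the phrases here are hypothetical examples, not an actual list, and a real sampler backtracks and resamples rather than just flagging the finished output):

```python
# Hypothetical blocklist of stock "slop" phrases; real lists run much longer.
BANNED = ("shivers down", "a testament to", "tapestry of")

def trips_banlist(text: str, banned: tuple[str, ...] = BANNED) -> bool:
    """Return True if a continuation contains any banned phrase."""
    low = text.lower()
    return any(phrase in low for phrase in banned)

print(trips_banlist("It sent shivers down her spine."))  # True
print(trips_banlist("She shrugged and kept walking."))   # False
```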

        Not that it’s particularly critical, as I’m actually reading and massaging really short outputs (usually less than ten words at a time). Better instruct models, which tend to be more sloppy, still aren’t so bad; nothing like ChatGPT.

        So yeah, I’m aware of the hazard. But it’s not as bad as you’d think.

        In fact, there are whole local-LLM communities dedicated to the science of slop. And mitigating it. It’s just not something you see in corporate UIs because they don’t care (other than a few bits they’ve stolen, like MinP sampling).
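        Min-p itself is simple enough to sketch: keep only tokens whose probability is at least some fraction of the top token’s, then renormalize (toy logits again, not real model output):

```python
import math

def min_p_filter(logits: dict[str, float], min_p: float = 0.1) -> dict[str, float]:
    """Keep tokens with probability >= min_p * (top token's probability),
    then renormalize the survivors. Toy sketch of min-p sampling."""
    z = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / z for tok, v in logits.items()}
    cutoff = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= cutoff}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

pool = min_p_filter({"blue": 2.4, "dark": 2.1, "mauve": -1.0}, min_p=0.2)
print(sorted(pool))  # ['blue', 'dark']; "mauve" falls below the cutoff
```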

        • Buffalobuffalo@lemmy.dbzer0.com · 18 hours ago

          It seems like a sophisticated approach that minimizes broad suggestion. It probably improves your writing momentum and reduces the stalling you’ve described as detrimental. As a writing exercise, or as practice for personal reflection, I see the merit; some strategies are best learned by applying them. Alright. Functional writing on technical topics or news: potentially bearable.

          But like many comments in this thread say, people don’t want to read generated content. If there’s a disclosure that an LLM was used in a novel’s production, I’ll have little desire to read it.

  • RainbowBlite@piefed.ca · 23 hours ago

    200 novels with 50,000 sales: an average of 250 sales per novel. So each novel is a failure, but she fails at such a staggering pace that it looks like success.