• HereIAm@lemmy.world · 12↑ · 5 hours ago

      To be fair, I suppose it is a well-known fact that a country does indeed fit inside itself

      • Simulation6@sopuli.xyz · 4↑ 1↓ · 5 hours ago

        A country usually has coastal waters. For example, waters up to 12 nautical miles off the coast of the US are still considered part of the US. So the country of the US does not fit into its physical land area.

        • lad@programming.dev · English · 1↑ · 1 hour ago

          Maybe OP asked for the physical area to fit into the country, since we’re specifying the ambiguous part

    • RamenJunkie@midwest.social · English · 1↑ · 42 minutes ago

      That population comment has me wondering now. Is there a country whose own population would NOT fit inside it?

      Like maybe some random tiny European country that produces a lot of people who go live in larger nearby countries, but they are still citizens of their homeland. Then for some reason, they all decide to return home.

      I guess my point is that it could very easily be true that there is a country that can’t actually contain its population.

      I mean, physically, probably not, but more like, with houses and shit.

  • chunes@lemmy.world · 8↑ 1↓ · 5 hours ago

    DeepSeek does a little better with this.

    That’s a fun play on words! Since “the size of Japan” is just a measure of its own land area, it’s trivially true that Japan fits perfectly inside itself — just like any country or shape fits perfectly inside its own boundaries. It’s a tautology, but it sounds like a clever riddle at first listen. Nice one!

    • atopi@piefed.blahaj.zone · English · 3↑ · 1 hour ago

      they both are nondeterministic

      both could give the right answer and then the wrong answer with the same prompt

      one try is not enough to say one model is better than the other

    • NostraDavid@programming.dev · 4↑ 1↓ · 5 hours ago

      This isn’t the thinking version, which is a LOT better than the instant model. I don’t use the instant version any more, due to hallucinations.

  • MousePotatoDoesStuff@piefed.social · ᐃᓄᒃᑎᑐᑦ · 12↑ · 9 hours ago

    China is the most populous country, as well as one of the largest, to achieve this.

    This was made possible due to the One China Policy (1 China = 1 China).

  • IninewCrow@lemmy.ca · English · 72↑ 1↓ · 14 hours ago

    A river was dammed and millions of gallons of water were consumed in order to give us this digital abomination

  • nomad@infosec.pub · 16↑ 24↓ · 10 hours ago

    Faky McFakeface

    It’s almost sad that people hate on AI for its power usage, except when it’s used to mock AI and mislead the public.

    Over 100 upvotes on Lemmy and no one checks this shit? Sheesh, you wanna believe, am I right?! ;)

    • Chozo@fedia.io · 15↑ 1↓ · 6 hours ago

      Open a new thread and ask again. You’ll get a different result. Open a third thread and ask again, you’ll get yet another result. That’s how LLMs work. You getting a different answer to your prompt doesn’t mean anything.
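      For anyone curious why re-running the same prompt gives different answers: chat LLMs usually *sample* the next token from a probability distribution (controlled by a "temperature" setting) rather than always taking the single most likely token. A toy sketch of that sampling step, in plain Python with made-up scores (illustrative only; real models do this over vocabularies of ~100k tokens):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token index from raw scores ("logits").
    With temperature > 0 the choice is random, so repeated calls
    on the same input can return different tokens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                              # subtract max for numeric stability
    weights = [math.exp(x - m) for x in scaled]  # unnormalized softmax
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1

# Same "prompt" (same logits), twenty independent samples: results vary.
logits = [2.0, 1.5, 0.5]
samples = [sample_token(logits) for _ in range(20)]

# Near-zero temperature approximates greedy decoding: always the argmax.
greedy = [sample_token(logits, temperature=1e-6) for _ in range(20)]
print(samples, greedy)
```

      That's why one screenshot of one run proves little either way: production models run with nonzero temperature by default, so both the screenshotted answer and a contradicting one can come from the same model and prompt.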

    • Thorry@feddit.org · 2↑ · 5 hours ago

      We are in a memes community here, most memes are only tangentially related to the truth (if that). It’s a joke, it doesn’t have to be true, but if it is true (which there might be a chance of in this case) that makes it even funnier.

      But if you’d like, we can just go by 4chan rules here and post: “Fake and gay!”

    • drcobaltjedi@programming.dev · 33↑ 5↓ · 9 hours ago

      Okay, your output is different given the same input… So what? It’s a well-known fact that these LLMs are non-deterministic. There’s a guy on YouTube who asks ChatGPT every day to count to 200 until it doesn’t fuck up. Your output does not prove or disprove the authenticity of the original post.

      • arcterus@piefed.blahaj.zone · English · 11↑ · 7 hours ago

        Tbh them being nondeterministic is a big part of why they’re so unreliable. Like, maybe it’ll work fine for 9/10 people, but then there will be that one person whose home directory gets wiped for whatever reason. Or maybe it’ll do math right for those nine people, but then for that one person it’ll say 1 + 1 = 11.

        You’re basically gambling if you don’t verify the answers.

        • FishFace@piefed.social · English · 4↑ · 5 hours ago

          Not really… Determinism would only help if you could copy someone else’s prompt and history 100%, which you generally would not be able to.

          Because maybe it always gets 1+1 correct, but fails 1+2.

          • arcterus@piefed.blahaj.zone · English · 2↑ · 4 hours ago

            I’m referring to nondeterminism for the same prompt, since unless you start a session from scratch, it’s unlikely you’ll have the same history. If you give it a prompt, then depending on what you’ve told it previously, it may blow up in your face.

            • FishFace@piefed.social · English · 2↑ · 3 hours ago

              Determinism for the same prompt means you can’t give it context through a conversation, which vastly shrinks its utility.

              That said, even that form of determinism can be unreliable: the example of arithmetic still works; you could have it completely deterministic, but if it only performs correctly on 80% of arithmetic problems, it’s still unreliable.

      • chunes@lemmy.world · 1↑ 4↓ · 5 hours ago

        In fact, if you give it this prompt 50 times and it only fucks up once, that clearly indicates that this post is misleading. It’s also likely this post was faked with a different prompt than the one shown.

    • NostraDavid@programming.dev · 1↑ 4↓ · 5 hours ago

      Just use the thinking model, which severely reduces hallucinations. Only silly people use the instant model to mock a tool for doing what it does.