• 4 Posts
  • 485 Comments
Joined 6 个月前
cake
Cake day: 2025年8月30日

help-circle

  • No that’s a regular clip for mounting the cooler onto the cpu, it clips around those black things around the socket. That’s been the standard for decades and only recently has it gotten less common. I think the cooler is screwed onto the case with woodscrews directly into the plastic of the fan.


  • Thorry@feddit.orgtoProgrammer Humor@programming.devDIY
    link
    fedilink
    arrow-up
    9
    ·
    edit-2
    6 小时前

    You’d be mistaken, Intel hasn’t had a clip mounting system since socket 370 P3 days. Even P4 on 423 had 4 corner mounting systems and all of the Intel systems had them since.

    The cheapo aluminum coolers from Intel always had that rotated design to get a little bit more surface area in the same volume. With the age of this system Intel had copper pucks in the middle of their heatsinks. It wasn’t till later they went full aluminum. This is very clearly an AM4 motherboard as seen by the mounting.

    Like the other commenter pointed out, it’s an A320M-C board, it says right on it.


  • Z80 assembly, nothing is as fun as getting back to basics. After I learnt Basic on my home computer back in 1984 I quickly branched out to assembly. My home computer had a Z80 cpu running the show. I had this book which introduced the basics and explained how to combine Basic and ASM code, so you could do neat tricks without needing to go full assembly right away. Of course there wasn’t a compiler, just pages in the book showing the opcodes, their encoding and which code had what bit representation. So compiling was done by hand.

    I miss those simpler times.











  • Thorry@feddit.orgto196@lemmy.blahaj.zoneFlawless rule
    link
    fedilink
    English
    arrow-up
    7
    ·
    4 天前

    It is actually illegal to publish the name, face or any identifying information about suspects in some countries. Just because you are suspected of a crime, doesn’t mean you’ve lost your right to privacy. And it only helps to further public outrage and mob justice. The media can tell their stories just fine without naming or showing the people who are suspected of doing so.


  • Funny how they say how long it would take to download the ISO. Back then sneakernet was our preferred means of sailing the highs seas. In the before before times this meant going to a mate (or a mate of a mate of a dude who used to live next door to my cousin) with a stack of floppies and copying over everything you needed. Later when floppies got cheap, so you would ask for stuff and through multiple friends of friends you’d get a floppy handed over. Put it in your pocket and then run home to try out the new goods.

    Later CD burning at home became a thing and CDs were already cheap. You’d show up at some dudes place, he’d have a bunch of CD spindles setup and machines for copying stuff. You’d give some of the software you had for him to copy and receive a bunch of CDs back. They usually had three kinds of CDs, the super premium ones he’d use for a master copy. The nice ones reserved for good folk and the spindles of the crappy ones which were good enough for most things. It was always a trade off between cost and quality, where crappy quality could mean failed burns which wasted time. Some people had special rigs setup for multiple copying at the same time, but the evil buffer underrun error was always lurking in the background.



  • Thorry@feddit.orgtoProgramming@programming.devWe mourn our craft
    link
    fedilink
    arrow-up
    25
    arrow-down
    1
    ·
    edit-2
    5 天前

    Writing code with an LLM is often actually less productive than writing without.

    Sure for some small tasks it might poop out an answer real quick and it may look like something that’s good. But it only looks like it, checking if it is actually good can be pretty hard. It is much harder to read and understand code, than it is to write it. And in cases where a single character is the difference between having a security issue and not having one, it’s very hard to spot those mistakes. People who say they code faster with an LLM just blindly accept the given answer, maybe with a quick glance and some simple testing. Not in depth code review, which is hard and costs time.

    Then there’s all the cases where the LLM messes up and doesn’t give a good answer, even after repeated back and forth. Once the thing is stuck in an incorrect solution, it’s very hard to get it out of there. Especially once the context window runs out, it becomes a nightmare after that. It will say something like “Summarizing conversation”, which means it deletes lines from the conversation that are deemed superfluous, even if those are critical requirement descriptions.

    There’s also the issue where an LLM simply can’t do a large complex task. They’ve tried to fix this with agents and planning mode and such. Breaking everything down into smaller and smaller parts, so it can be handled. But with nothing keeping the overview of the mismatched set of nonsense it produces. Something a real coder is expected to handle just fine.

    The models are also always trained a while ago, which can be really annoying when working with something like Angular. There are frequent updates to Angular and those usually have breaking changes, updated best practices and can even be entire paradigm shifts. The AI simply doesn’t know what to do with the new version, since it was trained before that. And it will spit out Stackoverflow answers from 2018, especially the ones with comments saying to never ever do that.

    There’s also so much more to being a good software developer than just writing the code. The LLM can’t do any of those other things, it can just write the code. And by not writing the code ourselves, we are losing an important part of the process. And that’s a muscle that needs flexing, or skills rust and go away.

    And now they’ve poisoned the well, flooding the internet with AI slop and in doing so destroying it. Website traffic has gone up, but actual human visits have gone down. Good luck training new models on that garbage heap of data. Which might be fine for now, but as new versions of stuff gets released, the LLM will get more and more out of date.



  • Good to hear it! I was afraid you’d just gone off the headline and not the contents, with the author being someone who works in the AI field and the article being pro-AI in my opinion. I apologize, you have obviously done your homework.

    I agree, it’s fucking crazy what this dude says. He’s like sure LLMs are flawed to the bone, but you just have to accept that and work around it. Just build a fence around it, and you can even build that fence using AI! I mean WTF…

    One of the reason I think the article is pro-AI is because of lines like this:

    This is not at all an indictment of AI. AI is extremely useful and you/your company should use it.


  • Have you actually read the article? It isn’t anti-AI it’s actually very much pro-AI. All it says is that there are a lot of companies duping people at other companies (that use AI) to sell their shit.

    He argues so called AI security companies sell solutions for problems that are inherent to the technology and thus can never be fixed. But by showing there is a problem and then offering the solution to that problem, people think they are actually fixing something. In reality it only fixes that one specific problem, but leaves open the almost infinite of other very similar issues.

    His argument is to actually handle AI security by getting someone that really knows what is what (how one would get that person or distinguish them from bullshitters is a mystery to me). Some issues are just a part of the deal with AI, so they have to be accepted and managed where possible. Other issues should be handled upstream or downstream and he argues AI could be implemented on those parts as well.

    I agree with his argument, it is total bullshit to show the flaws in LLM models and then claim to fix those with expensive software that doesn’t actually solve the core issue (because that is impossible). However in my experience this has always happened in the past more or less. I’m not sure it’s happening more now? Or because understanding of AI is so low usually, so it’s easier?