I eat words
Linuxoid
Matrix - @saint:group.lt
- 965 Posts
- 87 Comments
I eat words@group.lt to Technology@lemmy.ml • DeepSeek AI Models Are Unsafe and Unreliable, Finds NIST-Backed Study (English · 12 · 4 months ago)
heh, like other models are safe and reliable ;-)
I eat words@group.lt (OP, M) to (safe) Unsecure security@group.lt • “Localhost tracking” explained. It could cost Meta 32 billion. (Lithuanian · 1 · 8 months ago)
heh, mudita :)
I eat words@group.lt (OP, M) to (safe) Unsecure security@group.lt • “Localhost tracking” explained. It could cost Meta 32 billion. (Lithuanian · 2 · 8 months ago)
I think Meta is not the only one being this “clever” - others exploit every loophole they can as well, they just haven't been detected/made public yet.
I eat words@group.lt (OP, M) to (safe) Unsecure security@group.lt • 29 Undocumented commands found in ESP-32 microcontrollers CVE-2025-27840 (English · 21 · 11 months ago)
I understand your point, but I would not say that a backdoor has to be remote. Backdoors are essentially any alternative, often undocumented, ways to access or gain privileges on a system. They don’t always result from malicious intent either - many backdoors simply “happen” when developers haven’t fully considered the security implications. For the average user whose device contains such unintentional backdoors, the impact remains the same regardless of how they came to exist. Consider the times when vendors shipped default BIOS passwords - these created a nightmare for university IT staff (and others as well), even though they were not accessible remotely.
I eat words@group.lt (OP, M) to (safe) Unsecure security@group.lt • 29 Undocumented commands found in ESP-32 microcontrollers CVE-2025-27840 (English · 1 · 11 months ago)
From a security perspective, do you think the wording changes much here?
I eat words@group.lt to Ask Lemmy@lemmy.world • Do you have kids? Do you want to have kids? Did you regret having / not having kids? (41 · 1 year ago)
No, no, and no - but either way, you will have to figure out whether your decision to have or not to have kids was the right one.
I eat words@group.lt (M) to Sysadmins for sysadmins@group.lt • Gedimas be interneto paliko 120 įstaigų: rizikas prognozavo, bet plano B nebuvo (Lithuanian · 3 · 1 year ago)
It would be interesting to read what actually happened there and how it was handled, but a Cloudflare-level post-mortem analysis is probably too much to hope for.
I eat words@group.lt to Books@lemmy.world • Nonfiction readers. Do you feel guilty reading fiction? (English · 1 · 1 year ago)
Not anymore - nowadays I feel guilty reading non-fiction, and I understand the Lindy effect on books much better (be it fiction or non-fiction).
I eat words@group.lt to Showerthoughts@lemmy.world • Super hero movies should have more scenes of them accidentally maiming people just because of the sheer amount of power they weild. (English · 271 · 1 year ago)
They cut all such scenes and pasted them into The Boys, Mark Twain style: “Sprinkle these around as you see fit!”
I liked the book as well. The show had a similar feel in some ways, but also a distinct character of its own.
I eat words@group.lt to Books@lemmy.world • What book(s) are you currently reading or listening? August 27 (English · 2 · 1 year ago)
A Tomb for Boris Davidovich - Danilo Kiš
I eat words@group.lt (OP, M) to Sysadmins for sysadmins@group.lt • Lessons learned from two decades of Site Reliability Engineering (English · 1 · 2 years ago)
Reread it again today, with some highlights:
Lessons Learned from Twenty Years of Site Reliability Engineering
Metadata
- Author: sre.google
- Category: article
- URL: https://sre.google/resources/practices-and-processes/twenty-years-of-sre-lessons-learned/
Highlights
The riskiness of a mitigation should scale with the severity of the outage
We, here in SRE, have had some interesting experiences in choosing a mitigation with more risks than the outage it’s meant to resolve.
We learned the hard way that during an incident, we should monitor and evaluate the severity of the situation and choose a mitigation path whose riskiness is appropriate for that severity.
Recovery mechanisms should be fully tested before an emergency
An emergency fire evacuation in a tall city building is a terrible opportunity to use a ladder for the first time.
Testing recovery mechanisms has a fun side effect of reducing the risk of performing some of these actions. Since this messy outage, we’ve doubled down on testing.
We were pretty sure that it would not lead to anything bad. But pretty sure is not 100% sure.
A “Big Red Button” is a unique but highly practical safety feature: it should kick off a simple, easy-to-trigger action that reverts whatever triggered the undesirable state and (ideally) shuts down whatever’s happening.
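(Not from the article: a minimal sketch of what such a button could look like in practice - a trivially flippable kill switch guarding the risky behaviour. All names here, like KILL_SWITCH_FILE and serve_request, are hypothetical.)

```python
import os

# Hypothetical kill switch: operators "press the button" by creating this file
# (or flipping a flag in whatever config system you already have). The action
# is deliberately simple and easy to trigger under pressure.
KILL_SWITCH_FILE = "/etc/myservice/big_red_button"

def new_ranking_enabled() -> bool:
    # The risky new behaviour stays on only while the button is NOT pressed.
    return not os.path.exists(KILL_SWITCH_FILE)

def serve_request(query: str) -> str:
    if new_ranking_enabled():
        return rank_with_new_model(query)   # the change being rolled out
    return rank_with_old_model(query)       # known-good fallback

def rank_with_new_model(query: str) -> str:
    return f"new ranking for {query!r}"

def rank_with_old_model(query: str) -> str:
    return f"old ranking for {query!r}"
```

Pressing the button is then just creating the file - no deploy, no config push - and un-pressing it is equally simple.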
Unit tests alone are not enough - integration testing is also needed
This lesson was learned during a Calendar outage in which our testing didn’t follow the same path as real use, resulting in plenty of testing… that didn’t help us assess how a change would perform in reality.
Teams were expecting to be able to use Google Hangouts and Google Meet to manage the incident. But when 350M users were logged out of their devices and services… relying on these Google services was, in retrospect, kind of a bad call.
It’s easy to think of availability as either “fully up” or “fully down” … but being able to offer a continuous minimum functionality with a degraded performance mode helps to offer a more consistent user experience.
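(Again not from the article, but a rough illustration of one way to get such a degraded mode: fall back to stale cached data instead of failing outright when the authoritative backend is unreachable. get_profile and fetch_from_backend are made-up names.)

```python
import time

_cache = {}  # user_id -> (value, timestamp of last successful fetch)

def get_profile(user_id):
    """Return fresh data when the backend is healthy; fall back to stale cache
    instead of failing when it is not (degraded mode, not "fully down")."""
    try:
        value = fetch_from_backend(user_id)
        _cache[user_id] = (value, time.time())
        return value
    except ConnectionError:
        cached = _cache.get(user_id)
        if cached is not None:
            value, ts = cached
            age = int(time.time() - ts)
            return f"{value} (cached {age}s ago, possibly stale)"
        raise  # nothing cached: genuinely down for this user

def fetch_from_backend(user_id):
    # Placeholder for the real RPC/HTTP call.
    return f"profile of {user_id}"
```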
This next lesson is a recommendation to ensure that your last-line-of-defense system works as expected in extreme scenarios, such as natural disasters or cyber attacks, that result in loss of productivity or service availability.
A useful activity can also be sitting your team down and working through how some of these scenarios could theoretically play out—tabletop game style. This can also be a fun opportunity to explore those terrifying “What Ifs”, for example, “What if part of your network connectivity gets shut down unexpectedly?”.
In such instances, you can reduce your mean time to resolution (MTTR) by automating mitigating measures done by hand. If there’s a clear signal that a particular failure is occurring, then why can’t that mitigation be kicked off in an automated way? Sometimes it is better to use an automated mitigation first and save the root-causing for after user impact has been avoided.
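(A made-up illustration of that idea: a loop that watches an unambiguous failure signal and kicks off the hand-done mitigation - here, draining a zone - before a human even gets paged. The threshold, signal source and function names are all hypothetical.)

```python
import time

ERROR_RATE_THRESHOLD = 0.05    # hypothetical: 5% errors means "clearly failing"
CHECK_INTERVAL_SECONDS = 30

def watch_and_mitigate(zone: str) -> None:
    """If the failure signal is unambiguous, run the mitigation automatically
    and leave root-causing for after user impact has been avoided."""
    while True:
        rate = current_error_rate(zone)
        if rate > ERROR_RATE_THRESHOLD:
            drain_traffic(zone)   # the mitigation that used to be done by hand
            notify_oncall(f"{zone} drained, error rate {rate:.1%}; investigate at leisure")
            return
        time.sleep(CHECK_INTERVAL_SECONDS)

def current_error_rate(zone: str) -> float:
    return 0.0  # placeholder: read from your monitoring system

def drain_traffic(zone: str) -> None:
    print(f"draining {zone}")  # placeholder: call your load balancer / traffic director

def notify_oncall(message: str) -> None:
    print(message)  # placeholder: page or ping the on-call channel
```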
Having long delays between rollouts, especially in complex, multi-component systems, makes it extremely difficult to reason about the safety of a particular change. Frequent rollouts - with the proper testing in place - lead to fewer surprises from this class of failure.
Having only one particular model of device to perform a critical function can make for simpler operations and maintenance. However, it means that if that model turns out to have a problem, that critical function is no longer being performed.
Latent bugs in critical infrastructure can lurk undetected until a seemingly innocuous event triggers them. Maintaining a diverse infrastructure, while incurring costs of its own, can mean the difference between a troublesome outage and a total one.
I eat words@group.lt to Not The Onion@lemmy.world • Parishioners Report Priest for Saying Jesus Died With Erection (English · 11 · 2 years ago)
This is what you get when you don't sleep through biology class.
I eat words@group.lt (M) to (safe) Unsecure security@group.lt • Novel attack against virtually all VPN apps neuters their entire purpose (English · 2 · 2 years ago)
not a bug, but a feature :))
I eat words@group.lt (OP) to Open Source@lemmy.ml • GitHub - kevinbentley/Descent3: Descent 3 by Outrage Entertainment (English · 24 · 2 years ago)
the source code of a game ;))
I eat words@group.lt to World News@lemmy.world • Mexico's president says his country is breaking diplomatic ties with Ecuador after embassy raid (English · 171 · 2 years ago)
I am all for normalizing raiding embassies for [put the cause you support] as well.
Moderates
- Work@group.lt
- Arduino lietuviškai@group.lt
- Good lectures@group.lt
- (safe) Unsecure security@group.lt
- Sysadmins for sysadmins@group.lt
- Books@group.lt
- Robert Anton Wilson breadcrumbs@group.lt
- War in Ukraine@group.lt
- Matrix.group.lt support@group.lt
- bpf@group.lt
- Magick@group.lt
- Šiukšlynas / Dumpster fire@group.lt
- Movies@group.lt
- Nubo dūmas nuobodus | Spark behind the eyes@group.lt
- Group.lt support@group.lt
Moving repos is easy, but expect some sweat while moving actions and integrations. Also, do backups.
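For the “do backups” part, a minimal sketch of one way to snapshot a repo before migrating it (mirror clone plus a bundle). This assumes git is on the PATH; the URL and paths are placeholders. Note it only covers the git data itself - issues, actions and other integrations are exactly the part that needs the extra sweat.

```python
import subprocess
from pathlib import Path

def backup_repo(remote_url: str, backup_dir: Path) -> Path:
    """Mirror-clone the repo (all refs, branches and tags) and pack it into a
    single .bundle file that can later be restored with a plain `git clone`."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    mirror = backup_dir / "repo.git"
    subprocess.run(["git", "clone", "--mirror", remote_url, str(mirror)], check=True)
    bundle = (backup_dir / "repo.bundle").resolve()
    subprocess.run(["git", "bundle", "create", str(bundle), "--all"], cwd=mirror, check=True)
    return bundle

# Example (hypothetical URL and path):
# backup_repo("https://example.com/org/repo.git", Path("/backups/repo"))
```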