Information Security


The knee-jerk answer when an app pushes planned obsolescence by raising the minimum required Android API is always “for security reasons…” It’s never substantiated. It’s always an off-the-cuff snap answer, and it usually doesn’t even come from the developers. It comes from people loyal to the app, and from those who perhaps enjoy being forced to chase the shiny with new phone upgrades.

Banks, for example, don’t even bother making excuses. They simply stay silent about the problem and let people assume that some critical security vuln emerged that directly impacts their app.

But do they immediately cut off server-side access attempts coming from older app versions? No. They lick a finger, stick it in the air, and say: feels like time for a new version.

It’s bullshit. And the pushover masses just accept the ongoing excuse that the platform version must have become compromised by some significant threat, without realising that the newer version carries more of the worst kind of bugs: unknown bugs, which cannot be controlled for.

Banks don’t have to explain it because countless boot-licking customers will just play along. After all, these are people willing to dance for Google and feed Google their data in the first place.

But what about FOSS projects? When a FOSS project advances the API version, they are not part of the shitty capitalist regime of being as non-transparent as possible for business reasons. A FOSS project /could/ be transparent and say: we are advancing from version X to Y because vuln Z is directly relevant to our app and we cannot change our app in a way that counters the vuln.

The blame-culture side-effect of capitalism

Security analysis is not free. For banks and their suppliers, it is cheaper to bump the minimum Android API than to investigate whether doing so is really necessary.

It parallels the pharmaceutical industry, where it would cost more to test meds for an accurate expiry date. So they don’t bother; they just set an excessively conservative, very early expiration date.

Android version pushing is ultimately a consequence of capitalist blame-culture. Managers within an organisation simply do not want to be blamed for anything because it’s bad for their personal profit. Shedding responsibility is the name of the game. And outsourcing is the strategy. They just need to be able to point the blame away from themselves if something goes wrong.

Blindly chasing the bleeding-edge latest versions of software is actually security-ignorant¹ but upper management does not know any better. In the event of a compromise, managers know they can simply shrug and say “we used the latest versions” knowing that upper managers, shareholders, and customers are largely deceived into believing “the latest is the greatest”.

¹ Well informed infosec folks know that it’s better to deal with the devil you know (known bugs) than it is to blindly take a new unproven version that is rich in unknown bugs. Most people are ignorant about this.

Research needed

I speak from general principles in the infosec discipline, but AFAIK there is no concrete research specifically in the context of the onslaught of premature obsolescence by Android app developers. It would be useful to have some direct research on this, because e-waste is a problem and credible science is a precursor to action.

2
Crack this code! (self.infosec)
submitted 5 days ago by cypheset to c/infosec

Hey there. Recently I received this message: RQAAL TDTJX KWISK QBJCB DNQSS ZYFLM. I need to decode it.
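Without more context there's no way to know the scheme — it could just as well be a Vigenère, substitution, or transposition cipher — but a common first step with a classical-looking ciphertext is to brute-force all 26 Caesar shifts and eyeball the output. A minimal Python sketch (the Caesar assumption is mine, not the poster's):

```python
# Brute-force every Caesar shift of the posted ciphertext.
CIPHERTEXT = "RQAAL TDTJX KWISK QBJCB DNQSS ZYFLM"

def shift_back(text, k):
    """Shift each uppercase letter back by k positions, wrapping within A-Z."""
    return "".join(
        chr((ord(c) - ord("A") - k) % 26 + ord("A")) if c.isalpha() else c
        for c in text
    )

for k in range(26):
    print(f"shift {k:2d}: {shift_back(CIPHERTEXT, k)}")
```

If none of the 26 candidates reads as language, the next usual suspects are frequency analysis for a monoalphabetic substitution, or Kasiski/index-of-coincidence tests for a Vigenère key length.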

3

The background is here. In short, an SSD with the “Apacer” brand froze itself into read-only mode, presumably due to reaching a point of poor reliability.

The data on the drive is useless; it was partway through a Linux install when the switch happened. I would like to reverse that switch to make one last write operation (writing a live Linux distro to it), after which the drive can stay read-only.

I have heard speculation that the manufacturer uses a password to impose read-only mode. If true, the password would be in the drive’s firmware. Does anyone know what Apacer uses for this password?

4
submitted 6 months ago by ianonymous3000 to c/infosec

aspe:keyoxide.org:3VP5CIVZ6MQ767ELCSBRCPSV4M

5

I tested using Google's Gemini as a helping hand in Linux log-based threat hunting - and it is actually helpful, although not ready to take the security analyst's job (yet).
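Whatever model is used, a typical pattern in this kind of workflow is pre-filtering the logs so that only candidate events ever reach the assistant's prompt. A minimal sketch with a made-up auth.log excerpt and a regex for failed SSH password attempts (none of this is from the linked write-up — the log lines, pattern, and function names are illustrative):

```python
import re

# Illustrative auth.log excerpt (fabricated sample data).
LOG = """\
Jan 10 12:00:01 host sshd[1001]: Failed password for root from 203.0.113.5 port 4242 ssh2
Jan 10 12:00:03 host sshd[1002]: Accepted publickey for alice from 198.51.100.7 port 5555 ssh2
Jan 10 12:00:05 host sshd[1003]: Failed password for invalid user admin from 203.0.113.5 port 4243 ssh2
"""

# Capture the attempted username and source IP of each failed password attempt.
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins(log_text):
    """Return (username, source_ip) pairs for failed SSH password attempts."""
    return FAILED.findall(log_text)

for user, ip in failed_logins(LOG):
    print(f"failed login: user={user} from {ip}")
```

Only the filtered pairs (or the matching raw lines) would then be handed to the LLM for triage, which keeps prompts small and limits how much raw log data leaves the host.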

6

A blog post I made based on discussions at a conference last week - we need to teach smart things like self-driving cars and ships to defend themselves against cyber attacks. This outlines how we should approach it.

7

cross-posted from: https://programming.dev/post/8121843

~n (@nblr@chaos.social) writes:

This is fine...

"We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group."

[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)

8

I am curious if anyone has advice on a good way to get into InfoSec. I just bought a car using a separate phone number, and somehow marketers found my actual number, so I want to get a better handle on protecting my personal data.

9

Ever since I got a label printer, I've made it a habit to... well... label everything. It's been a game changer in organizing my stuff.

This habit includes putting a tiny label with my street address and email address on almost any item that I loan out or regularly lug around with me, as a general reminder of ownership. I forget about and lose stuff all the time, so this gives me some peace of mind with most of my medium-value little gadgets. I believe (and have experienced) that people are generally decent and will return lost items if it's easy for them to find out whom they belong to.

Now it has occurred to me that this practice might be detrimental when applied to smart cards in general and my Yubikeys in particular. After all, shouldn't a lost Yubikey be considered "tampered with/permanently lost" anyway, whether it's returned or not? And wouldn't an email address on the key just increase the risk of immediate abuse of the key's contents, i.e. GPG private keys, that would otherwise not be possible?

Or am I overthinking this?

10
submitted 2 years ago* (last edited 2 years ago) by pineapplelover@lemm.ee to c/infosec

cross-posted from: https://lemmy.kevitprojects.com/post/8452

What do you guys think about this?