corroded

joined 2 years ago
[–] corroded@lemmy.world 20 points 4 hours ago (2 children)

Not really. While I don't have the exact numbers, the output power of an infrared LED is usually no higher than that of an LED in the visible range. My security cameras have an array of 10 or so LEDs.

So looking at a security camera would be roughly equivalent to staring at a light bulb.

[–] corroded@lemmy.world 10 points 1 day ago (2 children)

Why? If everyone does poorly, everyone should fail, provided the opportunity to learn was there.

[–] corroded@lemmy.world -1 points 1 day ago (6 children)

This has always seemed overblown to me. If students want to cheat on their coursework, who cares? As long as exams are given in a controlled environment, it's going to be painfully obvious who actually studied the material and who had ChatGPT do it for them. Re-taking a course is not going to be fun or cheap.

Maybe I'm oversimplifying this, but it feels like proctored testing solves the entire problem.

[–] corroded@lemmy.world 15 points 2 days ago* (last edited 2 days ago) (1 children)

I sincerely hope this is the truth. I don't give two shits what they recommend. I haven't had COVID yet, and I'm not about to get it. Let me and my doctor decide if I get the vaccine yearly.

Also, the government isn't paying for my COVID shots any more. I am. Even if it only decreases my chances of infection by a small percentage, let me make that choice.

[–] corroded@lemmy.world 24 points 2 days ago (2 children)

I'll pay for good software. Developers deserve a decent wage, too. I'll pay a lot for really good software. I'll buy new versions of the tools I use often.

What I will never ever do is subscribe to software, no matter how good it is. Software is not a service and should not ever be sold as such.

[–] corroded@lemmy.world 147 points 3 days ago* (last edited 3 days ago) (19 children)

Mathematically, I think it's hard for people to truly understand the obscene wealth that some people have accumulated. 100 billion doesn't sound that different than 100 million.

100 million is orders of magnitude closer to ZERO than it is to 100 billion.

[–] corroded@lemmy.world 25 points 1 week ago

If you don't want offline maps, and you don't want to use data, what exactly are you looking for? The map has to come from somewhere.

[–] corroded@lemmy.world 149 points 1 week ago (3 children)

As the article mentions, this isn't a security "feature," it's anti-competitive. The worst part is that Nextcloud isn't even really in competition with Google. Setting up a Nextcloud server isn't hard, but it's not a trivial task. Sharing it outside your local network also requires a bit of skill, especially if done securely. That is to say, Nextcloud users probably tend to be more tech-savvy.

The people using Nextcloud aren't going to suddenly decide to switch over to Google Drive. I'll get it from FDroid before I downgrade to Google Drive. If that wasn't an option, I'd set up an FTP server or even WebDAV.

[–] corroded@lemmy.world 258 points 1 week ago (47 children)

This isn't an AI problem. This is a "most humans are assholes" problem. How hard is it to say "Oh, you don't have what I need? That's too bad. Can you please cancel my subscription?"

[–] corroded@lemmy.world 8 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

I'm not even sure what that could mean. Maybe using chopsticks instead of a fork? I've always just eaten food with whatever utensil is typically used for that type of cuisine. I think most people, Chinese or otherwise, eat Chinese food with chopsticks, don't they?

[–] corroded@lemmy.world 21 points 3 weeks ago (3 children)

Gas turbines produce a lot of power, as in 1MW or more per turbine. Is this a backup system, or is the facility using so much power that they literally need their own electric plant to sustain it?

[–] corroded@lemmy.world 45 points 4 weeks ago (1 children)

It's amazing how someone's job description can say "Scammer" in so many more words.

165
submitted 4 months ago* (last edited 4 months ago) by corroded@lemmy.world to c/homeassistant@lemmy.world
 

When I first started setting up my home automation, I decided on Zigbee, and I very much dove in head-first. I set up dozens of Zigbee devices, and some worked a lot better than others. I have a fairly stable Zigbee network with well over 100 devices, but many of those have been replaced over time. To save others the wasted time and money, I wanted to give a short breakdown of what I've noticed across brands.

  • SONOFF: My Zigbee controller is made by SONOFF, and it works well. Their motion sensors, not so much (I even made a post about how bad they were about a year ago); they give such unreliable results that they're borderline useless. Their plugs work generally okay, although they do drop off my network occasionally. Overall, they really wouldn't be my first choice.

  • Aqara: They make some very slick-looking devices, but they're horrible. Magnetic door sensors frequently get stuck in an open or closed state, or just drop off the network completely. I used two of their leak sensors: one is still working well; the other spontaneously stopped responding completely. I have a few of their pushbuttons; it took me at least a dozen tries to pair them, but they seem to work well after that. Overall, Aqara devices either quit responding or drop off the network more frequently than any other brand; I will never buy another Aqara device.

  • DOGAIN: I bought several of their plugs. So far, not a single issue. I assume they're a white-label brand, so I don't know who actually makes the hardware, but I have no complaints so far.

  • MHCOZY: Another white-label brand. I've purchased several of their relay switches. I haven't had a single problem with any of them, and I'm using quite a few.

  • Haozee: Probably another white-label brand. I have several of their mmWave sensors. Occasionally they get stuck in a "detected" state, but rarely. They have never dropped off my network. I'd buy more.

  • Philips (Hue): They're exceptionally expensive, but for a reason. I have a lot of their smart bulbs and a few outdoor motion sensors. They all work flawlessly. Don't use the Hue app or a Hue bridge, though, unless you want to be locked into their ecosystem; just pair your devices with a third-party Zigbee controller.

  • Leviton: I have replaced every single in-wall switch in my home with a Leviton smart switch or smart dimmer. They're a well-known brand, so I would expect their products to work well, and they do. My only complaint is that occasionally one of the switches will drop and refuse to communicate unless I power it off (with a breaker); this is rare, though, and normally corresponds with a power outage.

  • Thirdreality: I saved Thirdreality for last because I have absolutely no complaints at all. They are my go-to for Zigbee devices. I have many of their temperature sensors, plugs, magnetic door sensors, motion sensors, soil moisture sensors, etc. I have never had a device drop off my network or stop working correctly. I have dozens of their devices, and my only issue was a climate sensor that got stuck at 99% humidity after I accidentally sprayed water into the case. That's my fault.

So, in general, if I were to rebuild my Zigbee network from the ground up, I'd go for Thirdreality devices first. If they didn't make what I needed, I'd go for Philips Hue, and if I still couldn't find what I need, then that's what the list above is for.

I'm hoping to see some replies to this; what are your experiences with different Zigbee devices? Any brands you either trust or would never buy from?

Edit: As others have mentioned, your Zigbee integration (and possibly your controller) may make a difference in reliability. I am using ZHA and a SONOFF controller. Your experience may be different.

 

I've been using HA for a while; having my home just "do things" for me without asking is fantastic. My lights turn on to exactly the levels I want when I enter a room, my grass and my plants get watered automatically, heating and cooling happens only when it needs to. There are lots of benefits. Plus, it's just a fun hobby.

One thing I didn't expect, though, is all the interesting things you can learn when you have sensors monitoring different aspects of your home or the environment.

  • I can always tell when someone is playing games or streaming video (provided they're transcoding the video) from one of my servers. There's a very significant spike in temperature in my server room, not to mention the increased power draw.
  • I have mmWave sensors in an out-building that randomly trigger at night, even though there's nobody there. Mice, maybe?
  • Outdoor temperatures always go up when it's raining. It's always felt this way, but now it's confirmed.
  • My electrical system always drops in voltage around 8AM. Power usage in my house remains constant, so maybe more demand on the grid when people are getting ready for work?
  • I have a few different animals that like to visit my property. They set off my motion sensors, and my cameras catch them on video. Sometimes I give them names.
  • A single person is enough to raise the temperature in an enclosed room. Spikes in temperature and humidity correspond with motion sensors being triggered.
  • Watering a lawn takes a lot more water than you might expect. I didn't realize just how much until I saw exactly how many gallons I was using. Fortunately, I irrigate with stored rain water, but it would make me think twice about wasting city water to maintain a lawn.
  • Traditional tank-style water heaters waste a lot of heat. My utility closet with my water heater is always several degrees hotter than the surrounding space.

What have you discovered as a result of your home automation? While the things I mentioned might not be particularly useful, they're definitely interesting, at least to me.

 

At least in this post, I'm not advocating for any particular political position; I mean for this to be a more generalized discussion.

I have never understood what prompts people to attend political rallies. None of the current US political candidates 100% align with my views, but I am very confident that I made the right choice in who I voted for. That is to say, I'd consider myself a strong supporter of [name here].

To me, it feels like attending a political rally is like attending a college lecture. You have a person giving you information, but you don't gain anything by hearing it in person as opposed to reading it or watching a recording. If I want to learn something, it's much easier to read an article or watch a video in the comfort of my own home. If I want to understand what a political candidate stands for, I'd much rather watch a recording of a town-hall meeting or read something she (oops) wrote rather than taking the time to drive to a rally, get packed in with a bunch of other people, and simply stand and listen.

I understand concerts. Hearing live music sounds vastly different than listening to a recording. Same with movies; most of us don't have an IMAX theater at home. When you're trying to gather information, though, what's the draw in standing outside in a crowd and listening to it in person?

 

This is more "home networking" than "homelab," but I imagine the people here might be familiar with what I'm talking about.

I'm trying to understand the logic behind ISPs offering asymmetrical connections. From a usage standpoint, the vast majority of traffic goes to the end-user instead of from the end-user. From a technical standpoint, though, it seems like it would be more difficult and more expensive to offer an asymmetrical connection.

While consumers may be connected via fiber, cable, DSL, etc, I assume that the ISP has a number of fiber links to "the internet." Those links are almost surely some symmetrical standard (maybe 40 or 100Gb). So if they assume that they can support 1000 users at a certain download speed, what is the advantage of limiting the upload? If their incoming trunks can support 1000 users at 100Mb download, shouldn't it also support 1000 users at 100Mb upload since the trunks themselves are symmetrical?

Limiting the upload speed to a different rate than download seems like it would just add a layer of complexity. I don't see a financial benefit either; if their links are already saturated for download, reducing upload speed doesn't help them add additional users. Upload bandwidth doesn't magically turn into download bandwidth.

Obviously there's some reason for this, but I can't think of one.

 

I generally try to stay informed on current events. With the exception of what gets posted here, I normally get my news from CNN. I tend to lean left politically, but not always.

The problem I always run into is that every news site I read, regardless of where they stand on the political spectrum, is always filled with pointless bullshit. Specifically, sports, celebrity news, and product placement. "Some shitty pop singer is dating some shitty actor" or "These are our recommendations for the best mass-produced garbage-quality fast fashion from Temu" or "Some overpaid dickhead threw a ball faster than some other overpaid dickhead."

What I'd love to find is a news source that's just news that matters. No celebrity gossip, sports, opinion pieces, etc. Just real events that have an impact on some part of the world. Legislation, natural events, economic changes, wars, political changes, that kind of thing.

Does this exist, or is all journalism just entertainment?

 

A few months ago, I upgraded all my network switches. I have a 16-port SFP+ switch and a 1Gb switch (LAGG'd to the SFP+ switch with two DACs). These work perfectly, and I'm really happy with the setup so far.

My main switch ties into a remote switch in another building over a 10Gb fiber line, and this switch ties into another switch of the same model (on a different floor) over a Cat6e cable. These switches are absolute garbage: https://www.amazon.com/gp/product/B084MH9P8Q

I should have known better than to buy a cheap off-brand switch, but I had hoped that Zyxel was a decent enough brand that I'd be okay. Well, you get what you pay for, and that's $360 down the toilet. I constantly have dropped connections, generally resulting in any attached devices completely losing network connectivity, or if I'm lucky, dropping down to dial-up speeds (I'm not exaggerating). The only way to fix it is to pull the power cable to the switch. Even under virtually no load, the switch gets so hot that it's painful to touch. The fact that my connection is far more stable when the switch is sitting directly in front of an air conditioner tells me just about all I need to know.

I'm trying to find a pair of replacement switches, but I'm really striking out. I have two ancient Dell PowerConnect switches that are rock solid, but they're massive, they sound like jet engines, and they use a huge amount of power. Since these are remote from my homelab and live in occupied areas, they just won't work. All I need is a switch that has:

  • At least 2 SFP+ ports (or 1 SFP+ port for fiber and a 10Gb copper port)
  • At least 4 1Gb ports (or SFP ports; I have a pile of old 1GB SFP adapters)
  • Management/VLAN capability

Everything I find online is either Chinese white-label junk or is much larger than what I need. A 16-port SFP+ switch would work, but I'd never use most of the ports, and I'd be wasting a lot of money on overkill hardware. As an example, one of these switches is in my home office; it exists solely so I have a connection between my server rack, two PCs, and a single WAP. I am never going to need another LAN connection in my home office; any new hardware is going to go in the server rack, but I do need 10Gb connectivity on at least one of those PCs.

Does anyone have a suggestion for a small reliable switch that has a few SFP+ ports, is made by a reputable brand, and isn't a fire hazard?

 

I have been using the BlueIris NVR integration (from HACS) for quite some time, and it works great for triggering BI from HA. I'm trying to do the opposite now: fire off automations in HA whenever BI detects motion on one of my cameras.

I've never used MQTT before, so I'm learning as I go, but I think I have most of my setup configured properly. I've installed Mosquitto and the MQTT integration in HA. I've configured BI to connect to HA, and running "Test" in the "Edit MQTT Server" menu in BI shows a good connection and no errors. I've set my cameras to post an MQTT event when the alert is triggered (and I've verified that the alerts are in fact being triggered).

Nothing happens in HA, though. The "Motion" sensor for my camera in HA stays at "Clear." In fact, the history shows no change at all, ever.

I have the events in BI set up as follows:

  • On Alert: MQTT Topic: BlueIris/&CAM/Status; Payload: { "type": "&TYPE", "trigger": "ON" }
  • On Reset: Exactly the same, but with ON changed to OFF.

I've tried changing the MQTT autodiscovery prefix in HA from "homeassistant" to "BlueIris," and it made no difference. The Mosquitto logs show a login from HA, so I feel like I'm close, but I'm not sure where else to look.

Edit: I installed MQTT explorer, and I've verified that the messages are making it to Mosquitto, and they appear to be correctly formatted.

UPDATE: I set the MQTT integration to listen to the MQTT messages coming from BI, and sure enough, they were coming through just fine. For some reason, the BI integration just wasn't seeing them. Digging through the system logs, I saw some errors "creating a binary sensor" coming from the BI integration. The only thing I can think is that because I didn't have MQTT set up when I first installed the BI integration, something went wrong with the config (although I had already rebooted the system several times). I re-downloaded the BI integration and re-installed it, and now everything works perfectly.

 

This isn't strictly "homelab" related, but I'm not sure if there's a better community to post it.

I'm curious what kind of real-world speeds everyone is getting over their wireless network. I was testing tonight, and I'm getting a max of 250Mbit down/up on my laptop. I have 4 Unifi APs, each set to 802.11ac/80MHz, and my laptop supports 2x2 MIMO. Testing on my phone (Galaxy S23) gives basically the exact same result.

The radio spectrum around me is ideal for WiFi; on 5GHz, there is no AP in close enough range for me to detect. With an 80MHz channel width, I can space all 4 of my APs so that there's no interference (using a non-DFS channel for testing, btw).

Am I wasting my time trying to chase higher speeds with my current setup? What kind of speeds are you getting on your WiFi network?

23
submitted 11 months ago* (last edited 11 months ago) by corroded@lemmy.world to c/cpp@programming.dev
 

I have been programming in C++ for a very long time, and like a lot of us, I have an established workflow that hasn't really changed much over time. With the exception of bare-metal programming for embedded systems, though, I have been developing for Windows that entire time. With the recent "enshittification" of Windows 11, I'm starting to realize that it's going to be time to make the switch to Linux in the very near future. I've become very accustomed to (spoiled by?) Visual Studio, though, and I'm wondering about the Linux equivalent of features I probably take for granted.

  • Debugging: In VS, I can set breakpoints, step through my code line-by-line, pause and inspect the contents of variables on the fly, switch between threads, etc. My understanding of Linux programming is that it's mostly done in a code editor, then compiled on the command line. How exactly do you debug code when your build process is separate from your code editor? Having to compile my code, run it until I find a bug, then open it up in a debugger and start it all over sounds extremely inefficient.
  • Build System: I'm aware that cmake exists, and I've used it a bit, but I don't like it. VS lets me just drop a .h and .cpp file into the solution explorer and I'm good-to-go. Is there really no graphical alternative for Linux?

It seems like Linux development is very modular; each piece of the development process exists in its own application, many of which are command-line only. Part of what I like about VS is that it ties this all together into a nice package and allows interoperability between the functions. I can create a new header or source file, add some code, build it, run it, and debug it, all within the same IDE.

This might come across as a rant against Linux programming, but I don't intend it to. I guess what I'm really looking for is suggestions on how to make the transition from a Visual Studio user to a Linux programmer. How can I transition to Linux and still maintain an efficient workflow?

As a note, I am not new to Linux; I have used it extensively. However, the only programming I've done on Linux is bash scripting.

 

I've noticed recently that my network speed isn't what I would expect from a 10Gb network. For reference, I have a Proxmox server and a TrueNAS server, both connected to my primary switch with DAC. I've tested the speed by transferring files from the NAS with SMB and by using OpenSpeedTest running on a VM in Proxmox.

So far, this is what my testing has shown:

  • Using a Windows PC connected directly to my primary switch with CAT6: OpenSpeedTest shows around 2.5-3Gb to Proxmox, which is much slower than I'd expect. Transferring a file from my NAS hits a max of around 700-800MB/s (bytes, not bits), which is about what I'd expect given hard drive speed and overhead.
  • Using a Windows VM on Proxmox: OpenSpeedTest shows around 1.5-2Gb, which is much slower than I would expect. I'm using VirtIO network drivers, so I should realistically only be limited by CPU; it's all running internally in Proxmox. Transferring a file from my NAS hits a max of around 200-300MB/s, which is still unacceptably slow, even given the HDD bottleneck and SMB overhead.

The summary I get from this is:

  • The slowest transfer rate is between two VMs on my Proxmox server. This should be the fastest transfer rate.
  • Transferring from a VM to a bare-metal PC is significantly slower than expected, but better than between VMs.
  • Transferring from my NAS to a VM is faster than between two VMs, but still slower than it should be.
  • Transferring from my NAS to a bare-metal PC gives me the speeds I would expect.

Ultimately, this shows that the bottleneck is Proxmox. The more VMs involved in the transfer, the slower it gets. I'm not really sure where to look next, though. Is there a setting in Proxmox I should be looking at? My server is old (two Xeon 2650v2); is it just too slow to pass the data across the Linux network bridge at an acceptable rate? CPU usage on the VMs themselves doesn't get past 60% or so, but maybe Proxmox itself is CPU-bound?

The bulk of my network traffic is coming in-and-out of the VMs on Proxmox, so it's important that I figure this out. Any suggestions for testing or for a fix are very much appreciated.

 

In C++17, std::any was added to the standard library. Boost had its own version of "any" for quite some time before that.

I've been trying to think of a case where std::any is the best solution, and I honestly can't think of one. std::any can hold a variable of any type at runtime, which seems incredibly useful until you consider that at some point, you will need to actually use the data in std::any. This is accomplished by calling std::any_cast with a template argument that corresponds to the correct type held in the std::any object.

That means that although std::any can hold an object of any type, the set of valid types must be known at the point where the value is any_cast out of the std::any object. While the list of types that can be assigned to the object is unlimited, the list of types that can be extracted from it is still finite.

That being said, why not just use a std::variant that can hold all the possible types that could be any_cast out of the object? Set a type alias for the std::variant, and there is no more boilerplate code than you would have otherwise. As an added benefit, you ensure type safety.

 

I'm looking for a portable air conditioner (the kind with 1 or 2 hoses that go to outside air). The problem I'm running into is that every single one I find has some kind of "smart" controller built in. The ones with no WiFi connectivity still have buttons to start/stop the AC, meaning that a simple Zigbee outlet switch won't work. I could switch the AC off, but it would require a button-press to switch it back on. The ones with WiFi connectivity all require "cloud" access; my IoT devices all connect to a VLAN with no internet access, and I plan to keep it that way.

I suppose I could hack a relay in place of the "start" button, but I'd really rather just have something I can plug in and use.

I can't use a window AC; the room has no windows. I'll need to route intake/exhaust through the wall. So far, I can't find any "portable" AC that will work for me.

What I'm looking for is a portable AC that either:

  • Connects to WiFi and integrates with HA locally.
  • Has no connectivity but uses "dumb" controls so I can switch it with a Zigbee outlet switch.

Any ideas?
