That’s a three year history of accessibility incompetence from the OpenAI team. From the same company asking authors to use ARIA to better slurp / steal their content.
💀
Visual Studio provides some kind of AI even without Copilot.
- Inline (single-line) completions: I find these quite useful, not always but regularly.
- Repeated edits continuation: I haven't seen these in a while, but have used them on maybe two or three occasions. I'm very selective with them because they're not deterministic like refactorings and quick actions, whose correctness I can be confident in even when applying them across many files and lines. For example, "invert if" changes the indentation of many lines; if an LLM does that change, you can't be sure it didn't alter any of those lines (see the sketch after this list).
- Multi-line completions/suggestions: I disabled those because they shift away the code and context I want to see around them, and add noisy movement, for - in my limited experience - marginal if any usefulness.
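To make the "invert if" point concrete, here's a minimal sketch in Rust (the Visual Studio refactoring actually targets C#, and the function names here are made up): the refactoring reindents every line of the body, so the diff touches many lines even though a deterministic refactoring guarantees the behavior is unchanged - which is exactly the guarantee an LLM can't give.

```rust
fn handle(ready: bool, value: i32) {
    // Before "invert if": the whole body is nested inside the condition.
    if ready {
        println!("value: {value}");
        println!("more work...");
        // ...many more lines, all one indent level deep
    }
}

fn handle_inverted(ready: bool, value: i32) {
    // After "invert if": early return, and every body line loses one
    // indent level. The diff touches many lines, but the deterministic
    // refactoring guarantees none of them changed in substance.
    if !ready {
        return;
    }
    println!("value: {value}");
    println!("more work...");
}

fn main() {
    handle(true, 1);
    handle_inverted(true, 1);
}
```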
In my company, we're still in a selective testing phase regarding customer agreements and source code integration with AI providers. My team isn't part of that yet, so I have no practical experience with any analysis, generation, or chat functionality with project context. I'm skeptical but somewhat interested.
I did try it on a private project - well, one: a Nushell plugin in Rust, which is largely unfamiliar to me - and tried to have Copilot generate methods for me etc. It felt very messy and confusing. The generated code was often not correct or sound.
I use Phind and, more recently, ChatGPT for research/search queries. I'm mindful of the type of queries I make and which provider or service I use. In general, I'm a friend of reference docs, which are the only definitive source after all. I'm also aware of and mindful of the environmental impact of indirectly costly "free" AI search/chat. Often, AI responds to my questions more quickly than searching via a search engine and in upstream docs - especially when I'm familiar with the tech and can relatively quickly be reminded, can guide the AI when it responds with bullshit or suboptimal or questionable stuff, or can relatively quickly disregard the AI entirely when it doesn't seem capable of answering what I'm looking for.
The demo login says "invalid username or password". Is it possible someone changed the password on the demo account?

The entire SDK is programmed in CMake! 😱
… okay, it's git submodules
[screenshot: git submodules]

cdrewind Rewind CDROMs before ejection.
lol wut
One of the two associations is in power and actively dismantling society. The other develops a technical product and runs a Lemmy instance many people and other instances have blocked.
Treating them, and drawing conclusions about them, a bit differently seems quite fine to me.
That being said, I've seen plenty of criticism of the Lemmy devs and their connections on this platform. I can't say the same about FUTO.
No Gotos, All Subs
That's sub-optimal
😏
I don't think Microsoft will hold your hand. That's what local IT or user support is for.
In my eyes, the main issue is decision makers falling for familiarity and marketing/sales pressure.
Which makes it even more absurd/ironic that after investing in one switch, they're now investing again in another switch to something that isn't really better.
Either way, this time there's a lot more relevance and pressure to make a change, and a lasting one. The environment is not the same as before.
I vaguely remember reading about it twice or so. But I can't provide sources either.
What is the vulnerability, what is the attack vector, and how does it work? Here's the technical context from the linked source, Edera:
This vulnerability is a desynchronization flaw that allows an attacker to "smuggle" additional archive entries into TAR extractions. It occurs when processing nested TAR files that exhibit a specific mismatch between their PAX extended headers and ustar headers.
The flaw stems from the parser's inconsistent logic when determining file data boundaries:
- A file entry has both PAX and ustar headers.
- The PAX header correctly specifies the actual file size (size=X, e.g., 1MB).
- The ustar header incorrectly specifies zero size (size=0).
- The vulnerable tokio-tar parser incorrectly advances the stream position based on the ustar size (0 bytes) instead of the PAX size (X bytes).
By advancing 0 bytes, the parser fails to skip over the actual file data (which is a nested TAR archive) and immediately encounters the next valid TAR header located at the start of the nested archive. It then incorrectly interprets the inner archive's headers as legitimate entries belonging to the outer archive.
This leads to:
- File overwriting attacks within extraction directories.
- Supply chain attacks via build system and package manager exploitation.
- Bill-of-materials (BOM) bypass for security scanning.
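Here's a minimal sketch of that size-selection logic in Rust (not tokio-tar's actual code; the structs and function names are made up for illustration). The vulnerable path advances the stream by the ustar size only, while the fixed path lets a PAX size record take precedence, so the nested archive gets skipped as opaque file data:

```rust
/// Size from the fixed-width octal "size" field of the ustar header.
struct UstarHeader {
    size: u64,
}

/// Optional PAX extended header; a "size" record overrides the ustar size.
struct PaxExtension {
    size: Option<u64>,
}

/// TAR payloads are padded to 512-byte blocks.
fn round_up_to_block(size: u64) -> u64 {
    (size + 511) / 512 * 512
}

/// Vulnerable behavior: only the ustar size is consulted. With ustar
/// size=0 and PAX size=X, the parser advances 0 bytes and then reads
/// the nested archive's headers as if they were outer-archive entries.
fn bytes_to_skip_vulnerable(ustar: &UstarHeader, _pax: Option<&PaxExtension>) -> u64 {
    round_up_to_block(ustar.size)
}

/// Fixed behavior: the PAX size record, when present, takes precedence,
/// so the full payload (the nested TAR) is skipped as opaque file data.
fn bytes_to_skip_fixed(ustar: &UstarHeader, pax: Option<&PaxExtension>) -> u64 {
    let effective = pax.and_then(|p| p.size).unwrap_or(ustar.size);
    round_up_to_block(effective)
}

fn main() {
    let ustar = UstarHeader { size: 0 };
    let pax = PaxExtension { size: Some(1_048_576) }; // PAX says 1 MiB

    // Vulnerable parser skips 0 bytes; fixed parser skips the full 1 MiB payload.
    println!("vulnerable skip: {} bytes", bytes_to_skip_vulnerable(&ustar, Some(&pax)));
    println!("fixed skip: {} bytes", bytes_to_skip_fixed(&ustar, Some(&pax)));
}
```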
The attack surface is the flaw; the chain of trust is the risk.
Who's behind the project? Who has control? How are releases handled? What are the risks and vulnerabilities of the entire product delivery?
It's much more obvious and established/vetted with Mozilla. With any other fork product, you first have to evaluate it yourself.
Quite regularly, tbh.