this post was submitted on 04 Jan 2024
75 points (100.0% liked)

Programming


cross-posted from: https://programming.dev/post/8121843

~n (@nblr@chaos.social) writes:

This is fine...

"We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group."

[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)

top 21 comments
[–] Daxtron2@startrek.website 50 points 1 year ago* (last edited 1 year ago) (3 children)

I think this is extremely important:

Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities.

Bad programmers + AI = bad code

Good programmers + AI = good code
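
To make "engaged more with their prompts" concrete, here's a minimal sketch, assuming the OpenAI Python client (the model name and prompts are my own illustration, not from the study):

```python
# Sketch of "engaging with the prompt": re-phrase with explicit security
# constraints and lower the sampling temperature.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# A vague first attempt ("Write a Python function that stores a user's
# password.") tends to get generic, sometimes insecure code. Re-phrased
# with explicit constraints -- the kind of engagement the study associates
# with fewer vulnerabilities:
prompt = (
    "Write a Python function that hashes a password with bcrypt and a "
    "per-password salt before storing it. Do not use MD5 or SHA-1."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat model works here
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,        # lower = less "creative", more deterministic
)
print(response.choices[0].message.content)
```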

[–] ericjmorey@programming.dev 26 points 1 year ago

LLMs amplify biases by design, so this tracks.

[–] abhibeckert@lemmy.world 11 points 1 year ago* (last edited 1 year ago) (2 children)

This. As an experienced developer, I've released enough bugs to mistrust my own work, so I spend as much time as the budget allows on my own personal QA process. It's no burden at all to do the same with AI code. And of course, a well-structured company has further QA beyond that.

If anything, I find it easier to do that with code I didn't write myself. Just yesterday I merged a commit with a ridiculous mistake that I should have seen. A colleague noticed it instantly when I was stuck and frustrated enough to reach out for a second opinion. I probably would've noticed if an AI had written it.

Also - in hindsight - an AI code audit would have also picked it up.

[–] hunger@programming.dev 2 points 1 year ago

The quote above covers exactly what you just said: that's "yet were also more likely to rate their insecure answers as secure compared to those in our control group" at work :-)

[–] Daxtron2@startrek.website -3 points 1 year ago

I find that the people who complain the most about AI code aren't professional programmers. Everyone at my company, and my friends who are in the industry, are all very positive towards it.

[–] TootSweet@lemmy.world 4 points 1 year ago (1 children)

Good programmers + AI = extra, unnecessary work just to end up with equal quality code

[–] Daxtron2@startrek.website 2 points 1 year ago

Not even close to true but ok

[–] cyclohexane@lemmy.ml 25 points 1 year ago* (last edited 1 year ago) (1 children)

A worrying number of my colleagues use AI blindly, the kind of use where you just press tab and don't even look. Those who do look spend barely a second before moving on.

They call me anti-AI, even though I've used ChatGPT since day 1. Those LLMs are great tools, but I'm just too paranoid to use them in that manner. I'd rather have it explain to me how to do the thing than do the thing for me (and it's even better at that).

EDIT: Typo

[–] Spzi@lemm.ee 5 points 1 year ago

Those LLMs are great fools, but I am just paranoid to use it in that manner.

Exquisite typo. I also agree with everything else you said.

[–] qaz@lemmy.world 10 points 1 year ago* (last edited 1 year ago) (1 children)

ChatGPT can be surprisingly good at some things, but it can also produce good-looking nonsense. The problem is that spotting those cases requires a certain level of knowledge of the subject, which somewhat defeats the point of using it. I personally use it for subjects where my knowledge is significantly below average, such as learning new frameworks / languages (e.g. React). It often gets stuck on more complex questions (e.g. questions about x86 assembly) or obscure subjects. I rely more on its ability to reproduce information than on its problem-solving ability. I think the next development is adding LSP integration to the AI assistants and other tools to check their output.

However, I think most people don't use it the way I just described. A lot of people seem to mistake its ability to write code for an ability to understand code. It also sometimes uses older functions deprecated for security reasons, especially when using C. So yes, I think it will increase the amount of insecure code.
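
As an illustration (mine, not from the paper), here's the classic case where the output looks right but takes some background knowledge to spot:

```python
# Both functions "work" on normal input; only one is safe.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_insecure(name: str):
    # Plausible-looking generated code: string interpolation into SQL
    # invites injection.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_secure(name: str):
    # Parameterized query: the driver handles escaping for you.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_insecure("' OR '1'='1"))  # returns every row
print(find_user_secure("' OR '1'='1"))    # returns []
```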

[–] quackers@lemmy.blahaj.zone 4 points 1 year ago

Not even knowledge: attentiveness. It's so easy to overlook issues in AI-written code compared to writing it yourself and having to come up with the process. Just today I had this happen; it cost me a day of extra work because I missed something in ChatGPT's great-looking code.

[–] vhstape@lemmy.sdf.org 8 points 1 year ago

In a shock to literally nobody... Jokes aside, I am looking forward to reading this paper

[–] CCMan1701A@startrek.website 5 points 1 year ago (4 children)

I'm not even sure how to utilize AI to help me write code.

[–] ericjmorey@programming.dev 4 points 1 year ago (1 children)

There are lots of services to facilitate it. Copilot is one of them.

[–] Assian_Candor@hexbear.net 1 points 1 year ago* (last edited 1 year ago) (1 children)

Is it really helpful / does it save a lot of time? I'm the world's #1 LLM hater (I don't trust it and think it's lazy), but if it's a very good tool I might have to come around.

[–] ericjmorey@programming.dev 3 points 1 year ago

I haven't been using it much, so I don't know if I'm a good judge. But I see it as an oversized autosuggestion tool that sometimes feels like an annoying interruption, but sometimes feels like it helped me move faster without breaking my train of thought.

By "it", I mean I've tried several different ways to have an integrated LLM assistant integrated into my dev environment, none of which I was initially satisfied with in terms of workflow. But that's kinda true for every change I've made to my dev environment and workflows. It takes me a while to settle on anything new.

I recommend none in particular, but I do recommend taking the time to at least check them out. They have potential.

[–] pkill@programming.dev 4 points 1 year ago

Also, one really good practice from the pre-Copilot era still holds, one that many new Copilot users (my past self included) might forget: don't write a single line of code without knowing its purpose. Another thing: while it can save a lot of time on boilerplate, whenever it uses your current buffer's contents to generate several lines of very similar code, you need to stop and think whether it wouldn't be wiser to extract the repetitive code into a method (rough sketch below). Because while the output is usually algorithmically correct, good design still remains largely up to humans.
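
A rough sketch (my own example, not Copilot output) of what that looks like:

```python
# The kind of near-duplicate block Copilot happily continues from your buffer:
rows = [
    {"kind": "user", "amount": 5},
    {"kind": "admin", "amount": 7},
    {"kind": "guest", "amount": 1},
]

user_total = sum(r["amount"] for r in rows if r["kind"] == "user")
admin_total = sum(r["amount"] for r in rows if r["kind"] == "admin")
guest_total = sum(r["amount"] for r in rows if r["kind"] == "guest")

# Versus stopping to extract the repetition into a method yourself:
def total_for(rows, kind):
    """Sum the amounts of all rows of the given kind."""
    return sum(r["amount"] for r in rows if r["kind"] == kind)

user_total, admin_total, guest_total = (
    total_for(rows, k) for k in ("user", "admin", "guest")
)
```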

[–] Spzi@lemm.ee 3 points 1 year ago

There's a very naive, but working approach: Ask it how :D

Or pretend it's a colleague, and discuss the next steps with it.

You can go further and ask it to write a specific snippet for a defined context. But as others already said, the results aren't always satisfactory. Having a conversation about the topic, on the other hand, is pretty harmless.

[–] Auzy@beehaw.org 1 points 1 year ago

Copilot or Tabnine are the two major ones.

They're awesome for some things (especially error handling). But no, AI will not take over the world anytime soon.

[–] mdhughes@lemmy.ml 2 points 1 year ago

Good programmers - AI = best code.

[–] pkill@programming.dev 2 points 1 year ago* (last edited 1 year ago)