SMB and zero days, name a more iconic duo
"Temperature" is the input value to tweak for stochastic behavior for LLMs, if I recall. It's hidden in most chat bots. Reducing temperature reduces variability in responses, but it also makes responses less "creative." Increasing temperature increases hallucination, but can get the model out of ruts. It's like adding noise to gradient descent analysis. Too little noise and you get stuck in local minima and don't get the true best solution. Too much noise and you can find false minima or skip right over the true solution.
Honestly, I'm not surprised that zero-days are so readily found by AI, because automated tooling was already finding zero-days by the thousands in large code bases without AI, using relatively simple heuristics, fuzzing, or even simpler methods like bounds-checking algorithms.
Wow, AI now makes and trains AI to make bugs and vulnerabilities for AI to train off of and make more bugs that AI then finds and solves. AI is just a circle jerk of companies at this point.
This reminds me of something I heard a while back about using AI to screen medical scans, which might well be outdated by now: they said that while the AI generated more false positives than a human, it generated far fewer false negatives. In other words, it was spotting edge cases that a human might miss. They said it was worth having a human scan the candidates and weed out the false positives, as this actually cut down the workload by quite a margin. The feeling back then was that an AI and a human working together were better than either working alone, because they covered each other's failings: the human is better at screening out false positives, but the AI is better at spotting candidates in the tall grass.
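With toy numbers (assumed purely for illustration, not from whatever study that was), the workload math looks something like this:

```python
# Toy confusion-matrix numbers (assumptions for illustration only).
# AI screener: very few false negatives, but many false positives.
scans = 10_000
positives = 100                 # truly abnormal scans in the batch

ai_false_negatives = 2          # the AI misses almost nothing...
ai_false_positives = 400        # ...at the cost of many false alarms

# Human-in-the-loop: the radiologist reviews only what the AI flagged,
# instead of reading every scan.
ai_flagged = (positives - ai_false_negatives) + ai_false_positives
print(ai_flagged)  # 498 scans to review instead of 10,000
```

Even with a 4:1 false-alarm ratio, the human's review pile shrinks by an order of magnitude while the miss rate stays tiny.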
Before watching I'm just going to say, no, it wasn't just using o3, it was o3+experience.
"[the prompt] is more like me saying a prayer vs anything scientific" true for all prompting tbh
i’m shocked that ai was able to find a vulnerability in one of the most notoriously vulnerable pieces of software ever made
I remember something similar where people have been using AI to "find" bugs to claim bounties, but all they are really doing is making busy work, since the devs need to figure out if what the AI said was even valid in the first place.
So many people are going to skim over this like: AI… zero-day… ok im becoming bug hunter, let me tile 16 windows of chatgpt and cook.
Once a human finds an error in a codebase, you can set an AI to look for more errors of the same type in the same codebase. Give it some Bayesian focus, with the reasonable assumption that errors are not random: whatever the probability you gave to finding Memory Issue X before, you can now bump it up for similar functions written by the same person or in the same architecture.
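A toy sketch of that Bayesian bump; the multipliers and field names here are made-up assumptions for illustration, not anything from the video:

```python
# After one confirmed find of "Memory Issue X", bump the prior
# probability that similar code contains the same bug class.
base_prior = 0.02  # assumed baseline rate of the bug per function

# Illustrative multipliers for similarity to the confirmed buggy code
SAME_AUTHOR = 3.0
SAME_MODULE = 2.0

def bumped_prior(base, same_author=False, same_module=False):
    p = base
    if same_author:
        p *= SAME_AUTHOR
    if same_module:
        p *= SAME_MODULE
    return min(p, 1.0)  # clamp: it's still a probability

# Hypothetical functions in the same codebase, ranked for review
functions = [
    {"name": "parse_header", "same_author": True,  "same_module": True},
    {"name": "send_ack",     "same_author": True,  "same_module": False},
    {"name": "log_event",    "same_author": False, "same_module": False},
]
queue = sorted(
    functions,
    key=lambda f: bumped_prior(base_prior, f["same_author"], f["same_module"]),
    reverse=True,
)
print([f["name"] for f in queue])  # ['parse_header', 'send_ack', 'log_event']
```

The point is just the ordering: the AI's search budget goes first to code most like the code that already failed.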
14:07 It's primarily for performance reasons (among others): when you get packets from the network, you can skip a lot of virtual-address translation and work directly in kernel-space mappings. Essentially zero copies (you can process and work with the data directly), plus you get all the kernel security machinery: keyrings and capabilities for permissions, multi-user access, Kerberos. You also avoid kernel-to-userspace context switches and vice versa, along with the ability to use the VFS (SMB is a remote filesystem) and couple it tightly (so no need for privilege escalation and so on), making it faster and with less attack surface.
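The zero-copy argument can be sketched with Python buffers standing in for packet buffers; this is purely an analogy, not kernel code:

```python
# Analogy only: Python objects standing in for network packet buffers.
payload = bytearray(b"\x01\x02\x03\x04" * 1024)  # fake SMB packet data

# Userspace-server path: the kernel must copy packet data out to the
# server process before it can be parsed (one extra copy per request).
userspace_buf = bytes(payload)     # a real copy of the whole buffer

# In-kernel path (like ksmbd): parse the data where it already lives.
kernel_view = memoryview(payload)  # zero-copy view, no data moved

print(userspace_buf == kernel_view)  # True: same bytes, one fewer copy
```

Same bytes either way; the in-kernel path just skips the copy and the two context switches that come with handing the buffer to a userspace process.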
16:55 It's also wild to me how many devs set their temperature to 0.0 thinking they want deterministic output, then complain when the bot runs in circles producing the same deterministically incorrect output.
I know your content is largely focused on memory corruption attacks, but I do think logic bugs are their own branch of vulnerabilities, and they can often be just as dangerous as memory corruption vulnerabilities.
There is a clearly defined score function here. If we got more samples and a lot more review, this would enable someone to train or fine-tune an AI to be better at security research. We don't need our AIs to know Shakespeare. So I suspect that, given time and effort, this would get a lot better.
Given how there's been a lot of talk recently about open source projects trying to deal with the inundation of LLM-generated "security" reports that waste people's time just to try and scam for bug bounties, it's gonna be hard to separate any wheat from the mounds of chaff with this.
Staying subscribed in case I ever need to remind myself how stupid I am by not understanding anything that comes out of his mouth.
Thanks for not burying the lede. Using a metal detector in a sandbox at a venue where there have just been 5 sequential weddings is a galaxy away from searching on your hands and knees along a random stretch of a well-hiked trail.
This is what the AI Cyber challenge is all about -- creating machines that use AI to discover vulnerabilities in code autonomously. We're working really hard on it, and it's looking like it's going to be an awesome competition.