4 Comments

Not a problem, Devon, I will give it a read when it’s out. It’s an interesting topic to keep up to date with, given all the developments in industry. Have a good week ahead!


Hey Mark, I posted the new article on LLM security. It turned out much longer than I realized, so the next one will be shorter. In any event, here is the link if you'd like to read it: https://open.substack.com/pub/divinedigitaldialogues/p/unveiling-the-dangers-of-direct-manipulation


Enjoyed the read, Devon. LLMs are particularly interesting so far as their rapid development and use are concerned. I wasn't aware that vulnerabilities could emerge without malicious or intentional attacks. Could you elaborate on this a bit more for me? Is it related to bugs, spaghetti code, or the LLM learning incorrectly? I'm also curious about your opinions on AI in general and how humans are planning to safeguard against its capabilities... While I know regulations are on the rise, I haven't seen much along the lines of Isaac Asimov's three laws of robotics.


Hey Mark, sorry for the late reply, and thanks for diving into the article! You're right, the vulnerabilities in LLMs are a whole different ball game. Unlike standard code, where you can pinpoint and patch a bug, fixing issues in a trained model like an LLM isn't straightforward, which definitely ramps up the challenge. More eyes and minds need to focus on this, no doubt.

As for more details, I've actually been deep in writing mode this weekend, working on the next piece in the series. I hadn't sent over this link earlier because it was just an outline, but here's a peek at what's coming, which focuses on Direct Manipulation Attacks: https://divinedigitaldialogues.substack.com/p/back-on-guard-exploring-the-depths Happy Sunday to you!
