My 2 cents is that it's definitely an interesting idea.
I _think_ this hidden instructions anyway will be passed to the LLM input and archestra should detect them when trying to evaluate an unathorized tool call, but extra layer of protection before sounds solid anyway.
Matvey Kukuy (archestra team) —
white text on a white background is not the biggest issue, there may be just a plain text prompt injection in the middle of 60-pages doc 😞
Matvey Kukuy (archestra team) —
But idea is nice!
joey (archestra team) —
nice to see you Dieter 🙂
yeah multi-modal security is.. tricky, If you're bringing it up as a hackathon idea - I think it's a great/ambitious idea!
Dieter_be —
It's just where my brain went when I saw the "look how easily matvey can hack clawdbot" post
Dieter_be —
especially since OCR is a solved problem now with several models doing this well (I think ?)
joey (archestra team) —
The Dual LLM pattern, w/ OCR as you mentioned, _might_ be one way to approach this here:
• archestra.ai/docs/platform-dual-llm
• archestra.ai/blog/dual-llm
• simonwillison.net/2025/Jun/13/prompt-injection-design-patterns/#the-dual-llm-pattern
Hey everyone, I'm a final year CS undergrad and a backend developer, really excited to be a part of this hackathon and this community as well. Hoping to contribute and add some value. :high_brightness:
the docs didn't mentioned that API keys will be provided rather it says API keys are supported that means you have to provide your API maybe not sure....
shortlisted some crazy ideas! time to implement them, hope so Me and my team is able to complete in time, they're kinda complex but thats where the fun is
Innokentii Konstantinov (archestra team)6:02 PMOpen in Slack
My 2 cents is that it's definitely an interesting idea.
I think this hidden instructions anyway will be passed to the LLM input and archestra should detect them when trying to evaluate an unathorized tool call, but extra layer of protection before sounds solid anyway.