This session explores the challenges of trusting GenAI apps in a rapidly evolving landscape. GenAI apps are being spun up faster than the LLM models they are built on, yet it is hard to discern how much trust can be placed in any given app. What guardrails do we need to determine whether a GenAI app can be trusted? Most of these apps are black boxes to consumers, offering little visibility into the impact of the code or content they produce. Will this process be self-correcting, or do we need a set of GenAI principles to protect consumers? This session is as much about asking the right questions as it is about presenting answers. Its ultimate goal is to show, through practical examples, how LLMs can be maliciously trained and how such malice can be identified.
The goal: to provide practical insights for making informed decisions in an AI-driven world.
Key Takeaways:
- Critical Trust Evaluation: Understand trust nuances in GenAI apps, going beyond black-box assumptions.
- Practical Awareness: Detect and respond to malicious training with real-world insights, emphasizing proactive security measures.
- Build a GenAI Trust Model: Gain actionable steps for creating a robust trust model, safeguarding consumers from potential risks.
