Guardian Models
Guardian Models are specialized validation models integrated within Trusted Execution Environments (TEEs) in the Data AI network. They serve as an oversight mechanism, continuously evaluating the performance of Personal AIs to ensure that they accurately reflect user data and preferences.
Key Functions of Guardian Models:
Performance Evaluation: Regularly test Personal AIs to verify that they maintain an accurate and consistent understanding of the user.
User Feedback: Provide users with insights on how to refine and enhance their Personal AI by integrating more relevant data sources.
Agent Ecosystem Development: Lay the foundation for a robust Agent Ecosystem by ensuring Personal AIs are continuously updated and optimized.
Integrity Assurance: Detect and penalize attempts to manipulate data, and protect against adversarial behavior and synthetic data injection, preserving the integrity of Personal AIs (a minimal interface sketch follows this list).
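The list above describes the Guardian Model's responsibilities at a conceptual level. The Python sketch below only makes the division of duties concrete; the class and method names (GuardianModel, evaluate_performance, feedback_for_user, detect_manipulation) are illustrative assumptions, not part of the network's actual API.

```python
from dataclasses import dataclass


@dataclass
class EvaluationResult:
    """Outcome of one Guardian Model check (illustrative fields)."""
    query: str
    expected: str
    answer: str
    correct: bool


class GuardianModel:
    """Hypothetical interface mirroring the key functions listed above."""

    def evaluate_performance(self, personal_ai) -> list[EvaluationResult]:
        """Performance Evaluation: run periodic test queries against a Personal AI."""
        raise NotImplementedError

    def feedback_for_user(self, results: list[EvaluationResult]) -> list[str]:
        """User Feedback: suggest data sources to add where answers were weak."""
        raise NotImplementedError

    def detect_manipulation(self, results: list[EvaluationResult]) -> bool:
        """Integrity Assurance: flag adversarial behavior or synthetic data injection."""
        raise NotImplementedError
```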
Guardian Model Evaluation Framework:
Initialization: When a Personal AI is deployed, it records metadata such as verified data sources (emails, social media), user interaction history, and behavioral patterns.
Periodic Queries: The Guardian Model evaluates the effectiveness of the Personal AI through randomized queries that cover aspects like social media knowledge, transaction history, and interactions with IoT devices.
Response Verification: The Personal AI's responses are compared against verified data logs to determine accuracy.
Score Adjustment: Based on the consistency and accuracy of these responses, the Guardian Model assigns a "knowledge score" to the Personal AI, which can affect its future performance (a minimal sketch of this loop follows the list).
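The four steps above can be read as a simple evaluation loop. The sketch below assumes a verified data log keyed by query, a callable Personal AI, and an exponential-moving-average scoring rule; the class name, starting score, and update weights are hypothetical and only illustrate the flow from randomized queries to score adjustment.

```python
import random


class GuardianEvaluator:
    """Illustrative evaluation loop; names and the scoring rule are
    assumptions, not the network's actual implementation."""

    def __init__(self, verified_log: dict[str, str]):
        # Initialization: verified data recorded when the Personal AI is
        # deployed (e.g. facts derived from emails or social media).
        self.verified_log = verified_log
        self.knowledge_score = 0.5  # neutral starting score (assumed)

    def periodic_evaluation(self, personal_ai, num_queries: int = 3) -> float:
        # Periodic Queries: sample random facts the Personal AI should know.
        k = min(num_queries, len(self.verified_log))
        queries = random.sample(list(self.verified_log), k=k)

        correct = 0
        for query in queries:
            answer = personal_ai(query)
            # Response Verification: compare against the verified data log.
            if answer.strip().lower() == self.verified_log[query].strip().lower():
                correct += 1

        accuracy = correct / k
        # Score Adjustment: exponential moving average of observed accuracy.
        self.knowledge_score = 0.8 * self.knowledge_score + 0.2 * accuracy
        return self.knowledge_score


# Usage example with a toy Personal AI stub.
log = {"last purchase": "coffee maker", "home city": "Lisbon"}
personal_ai = lambda query: log.get(query, "unknown")
guardian = GuardianEvaluator(log)
print(guardian.periodic_evaluation(personal_ai))
```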
Scoring Parameters for Guardian Models:
Agentic User Request Coverage: Measures the range of user requests that the Personal AI can handle.
Consistency: Assesses the alignment of responses across different data sources.
Temporal Accuracy: Measures whether responses are time-sensitive and reflect the most recent available data.
Confidence Levels: Evaluates the certainty the Personal AI expresses in its responses and adjusts the score accordingly (see the scoring sketch below).
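One plausible way to combine these parameters is a weighted sum normalized to the range [0, 1]. The field names, weights, and knowledge_score helper below are assumptions for illustration only; the network's actual weighting scheme is not specified here.

```python
from dataclasses import dataclass


@dataclass
class ScoringParameters:
    """One score per parameter, each normalized to the range [0, 1]."""
    request_coverage: float   # Agentic User Request Coverage
    consistency: float        # agreement across data sources
    temporal_accuracy: float  # how recent the reflected data is
    confidence: float         # certainty expressed in responses


# Hypothetical weights; chosen only to illustrate the aggregation.
WEIGHTS = {
    "request_coverage": 0.3,
    "consistency": 0.3,
    "temporal_accuracy": 0.2,
    "confidence": 0.2,
}


def knowledge_score(params: ScoringParameters) -> float:
    """Combine the scoring parameters into a single knowledge score in [0, 1]."""
    return sum(WEIGHTS[name] * getattr(params, name) for name in WEIGHTS)


# Usage example.
print(knowledge_score(ScoringParameters(0.9, 0.8, 0.7, 0.85)))
```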
Privacy and Security:
Interactions between Personal AIs and Guardian Models are kept private and secure. Only the user and the respective Guardian Model have access to these evaluations, ensuring that sensitive data remains protected.