Open framework for confidential AI (Rust · updated Feb 14, 2026)
Reading list on adversarial perspectives and robustness in deep reinforcement learning.
Build secure MCP infrastructure to audit and control every data access by AI agents with minimal effort.
Let AI agents like ChatGPT & Claude use real-world local/remote tools you approve, via a browser extension plus an optional MCP server.
This project integrates Hyperledger Fabric with machine learning to enhance transparency and trust in data-driven workflows. It outlines a blockchain-based strategy for data traceability, model auditability, and secure ML deployment across consortium networks.
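A common building block behind such auditability schemes (a minimal illustrative sketch, not this project's actual code) is anchoring a content digest of a model artifact on the ledger, so that any consortium member can later verify that a deployed model matches the audited one:

```python
import hashlib
import json

def model_digest(weights_bytes: bytes, metadata: dict) -> str:
    """SHA-256 digest over model weights plus canonicalized metadata.

    Writing this digest to a Fabric channel gives every peer an
    immutable reference point for model auditability.
    """
    h = hashlib.sha256()
    h.update(weights_bytes)
    # Canonical JSON (sorted keys) so every peer computes the same digest.
    h.update(json.dumps(metadata, sort_keys=True).encode())
    return h.hexdigest()

# Hypothetical artifact and metadata, for illustration only.
digest = model_digest(b"\x00\x01fake-weights",
                      {"version": "1.0", "dataset": "train-v3"})
print(digest)
```

Because the digest is deterministic, recomputing it over the deployed artifact and comparing against the on-chain value is enough to detect any tampering with weights or training metadata.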
Secure Computing in the AI age
A living map of the AI agent security ecosystem.
IntentusNet - Deterministic execution infrastructure for agentic and distributed systems, enabling reproducible workflows, reliable intent routing, transport abstraction, and transparent operational control.
Project Agora: MVP of the Concordia framework. An ethical, symbiotic AI designed to foster and protect human flourishing.
Production-grade MCP server for autonomous code risk analysis. Built for AI agents + CI/CD gates—fast, deterministic checks with x402 pay-per-call on Base (USDC) and optional on-chain verification.
Secure Python Chatbot with PANW AIRS protection and Claude API
Secure Python Chatbot with PANW AIRS protection and OpenAI API
Behavior-driven cognitive experimentation toolkit with BCE (Behavioral Consciousness Engine) regularization, telemetry, and plug-and-play integrators for language-model training and evaluation.
💻🔒 A local-first full-stack app to analyze medical PDFs with an AI model (Apollo2-2B), ensuring privacy & patient-friendly insights — no external APIs or cloud involved.
airlock is a cryptographic handshake protocol for verifying AI model identity at runtime. It enables real-time attestation of model provenance, environment integrity, and agent authenticity - without relying on vendor trust or static manifests.
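The core of such a handshake is a fresh-nonce challenge-response that binds model identity and environment state to the reply. A deliberately simplified sketch (using a shared HMAC key for brevity; a real design like airlock's would rely on asymmetric keys or TEE quotes, and the identifiers below are hypothetical):

```python
import hmac
import hashlib
import os

# Shared attestation key provisioned at deploy time (illustrative only).
ATTESTATION_KEY = os.urandom(32)

def prove_identity(model_id: str, env_hash: str, nonce: bytes) -> bytes:
    """Model side: bind identity and environment hash to the verifier's nonce."""
    msg = model_id.encode() + env_hash.encode() + nonce
    return hmac.new(ATTESTATION_KEY, msg, hashlib.sha256).digest()

def verify_identity(model_id: str, env_hash: str,
                    nonce: bytes, proof: bytes) -> bool:
    """Verifier side: recompute the proof and compare in constant time."""
    msg = model_id.encode() + env_hash.encode() + nonce
    expected = hmac.new(ATTESTATION_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

nonce = os.urandom(16)  # fresh per handshake, which defeats replay attacks
proof = prove_identity("llama-3-8b", "sha256:abc123", nonce)
print(verify_identity("llama-3-8b", "sha256:abc123", nonce, proof))  # True
```

The nonce is what makes the attestation "real-time": a recorded proof from an earlier session cannot be replayed against a new challenge.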
A zero-trust encrypted transport layer for AI agents and tools, with AES-GCM encryption, HMAC signing, and identity-aware JSON-RPC messaging.
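AES-GCM supplies the confidentiality half; the identity-aware signing half can be sketched in a few lines of stdlib Python (an illustrative approximation, not this project's actual wire format — the key and field names are assumptions):

```python
import hmac
import hashlib
import json

KEY = b"shared-agent-key"  # illustrative; real deployments derive per-identity keys

def sign_rpc(sender: str, method: str, params: dict) -> dict:
    """Wrap a JSON-RPC 2.0 call with a sender identity and an HMAC-SHA256 tag."""
    envelope = {"jsonrpc": "2.0", "id": 1, "method": method,
                "params": params, "sender": sender}
    payload = json.dumps(envelope, sort_keys=True).encode()
    envelope["sig"] = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return envelope

def verify_rpc(envelope: dict) -> bool:
    """Recompute the tag over the envelope minus its signature field."""
    body = {k: v for k, v in envelope.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

msg = sign_rpc("agent-a", "tools/call", {"name": "search", "query": "weather"})
print(verify_rpc(msg))  # True
```

Signing the canonicalized envelope, sender field included, means a receiver can reject both tampered parameters and messages claiming a forged identity.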
A security runtime that sits inside AI agents to block unauthorized actions, enforce accountability, and prevent misuse in real time.
A self-hosted AI chatbot for privacy-conscious users. Runs locally with Ollama, ensuring data never leaves your device. Built with SvelteKit for performance and flexibility. No external dependencies—your AI, your rules. 🚀