Product Introduction
- Definition: Fabric by Carmel Labs is a distributed compute platform for AI and data workloads, categorized as a serverless infrastructure solution. It runs containerized tasks such as embedding generation, ML training, and web scraping without user-managed servers.
- Core Value Proposition: Fabric eliminates cloud infrastructure management while reducing costs by 80% versus AWS/Azure. It enables developers to deploy AI, data processing, and CI/CD workloads globally via SDK/dashboard with zero cold starts and auto-retry capabilities.
Main Features
- Distributed Workload Orchestration: Splits tasks (e.g., ML training batches) across a global device network using isolated sandboxed containers. Proprietary orchestration algorithms dynamically allocate workloads to idle devices (iOS/Android/desktop) for parallel processing.
- Pay-Per-Use Pricing Engine: Charges granularly per operation (e.g., $0.0001/text for embeddings, $0.001/file for transcription). Real-time cost tracking compares expenses against AWS Lambda/SageMaker. A worked cost example follows this feature list.
- Unified Workload SDK: The Python-based `fabric-sdk` and web dashboard support 50+ pre-optimized workloads, including bioinformatics sequence alignment, image preprocessing, and GitHub Actions runners. Integrates with Docker for custom container deployment; a minimal usage sketch follows this list.
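
To make the per-operation pricing concrete, the snippet below works through a back-of-the-envelope estimate using the rates quoted in the feature list; the workload sizes are illustrative assumptions, and the AWS comparison figures are not reproduced here.

```python
# Back-of-the-envelope cost estimate using the per-operation rates quoted
# above. The workload sizes below are made-up illustration values.
EMBEDDING_RATE_USD = 0.0001     # per text embedded
TRANSCRIPTION_RATE_USD = 0.001  # per file transcribed

texts_to_embed = 1_000_000
files_to_transcribe = 5_000

total_usd = (texts_to_embed * EMBEDDING_RATE_USD
             + files_to_transcribe * TRANSCRIPTION_RATE_USD)
print(f"Estimated Fabric cost: ${total_usd:,.2f}")  # $100.00 + $5.00 = $105.00
```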
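The SDK's actual API is not documented here, so the following is only a minimal sketch of what submitting a pre-optimized workload might look like; the module name `fabric_sdk`, the `Client` class, and the `submit`/`result` methods are all hypothetical.

```python
# Hypothetical sketch of submitting a workload through the Python fabric-sdk.
# The module name (fabric_sdk), the Client class, and the submit/result
# methods are illustrative assumptions, not the documented API.
from fabric_sdk import Client  # hypothetical import

client = Client(api_key="YOUR_API_KEY")

# Submit an embeddings batch; the orchestrator splits it across idle devices
# and bills per text processed.
job = client.submit(
    workload="embeddings",
    inputs=["first document", "second document"],
)

# The platform handles placement and retries; the caller just waits.
vectors = job.result(timeout=300)
print(len(vectors), "embeddings returned")
```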
Problems Solved
- Pain Point: Eliminates cloud cost overruns and cold-start delays in serverless functions. Solves infrastructure scalability bottlenecks for bursty AI workloads.
- Target Audience: DevOps engineers managing CI/CD pipelines; data scientists running ML training/embeddings; startups needing cost-efficient transcription/web scraping; bioinformatics researchers processing FASTQ files.
- Use Cases:
- Replacing GitHub Actions runners with Fabric’s macOS/iOS build clusters
- Distributed inference for AI chatbots across the global device network (see the sketch after this list)
- Real-time sports analytics via parallelized motion data processing
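
For the distributed-inference use case referenced above, a rough sketch might look like the following; it reuses the same hypothetical `fabric_sdk` API as the earlier example, and every identifier is an assumption rather than a documented interface.

```python
# Rough sketch of fanning chatbot inference requests out across Fabric's
# device network. Reuses the hypothetical fabric_sdk API from the SDK
# example above; all names are illustrative assumptions.
from fabric_sdk import Client  # hypothetical import

client = Client(api_key="YOUR_API_KEY")

prompts = [
    "Summarize today's match in two sentences.",
    "What is the refund policy?",
]

# Each prompt becomes an independent task that the orchestrator can place
# on any idle device; failed tasks are retried automatically.
jobs = [
    client.submit(workload="llm-inference", inputs=[prompt])
    for prompt in prompts
]

replies = [job.result(timeout=120) for job in jobs]
for prompt, reply in zip(prompts, replies):
    print(prompt, "->", reply)
```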
Unique Advantages
- Differentiation: 80% lower cost than AWS, with broader workload support (physics simulations, agent-based modeling) than specialized platforms such as Colab. Outperforms Lambda by keeping instances persistently warm, eliminating cold starts.
- Key Innovation: Patented "device sourcing" algorithm aggregates underutilized consumer devices (phones/tablets) into secure compute nodes. End-to-end encryption and workload isolation prevent data leakage.
Frequently Asked Questions (FAQ)
- How does Fabric achieve 80% cost savings vs. AWS? By leveraging idle consumer devices instead of dedicated data centers, bypassing cloud markup fees while maintaining enterprise-grade encryption.
- Can Fabric handle sensitive data for bioinformatics workloads? Yes. All workloads run in hardware-isolated sandboxes with data encrypted in transit and at rest, supporting HIPAA-ready processing.
- What happens if a device fails during ML training? Fabric’s auto-retry system checkpoints progress and redeploys failed segments to other nodes within 15 seconds, ensuring job continuity (a configuration sketch follows this FAQ list).
- Is there a free tier for testing Fabric? Not a fully free one: the entry-level Basic plan ($6/month) includes 10K inference operations and 100 CI/CD build minutes at no additional charge for evaluation.
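
As referenced in the auto-retry answer above, here is a hedged sketch of how checkpointing and retries might be configured on a training job; the `fabric_sdk` module and every parameter name (`checkpoint_interval`, `max_retries`) are illustrative assumptions, not documented options.

```python
# Hypothetical configuration of checkpointing and auto-retry for a training
# job, illustrating the recovery behavior described in the FAQ above. The
# fabric_sdk module and all parameter names are illustrative assumptions.
from fabric_sdk import Client  # hypothetical import

client = Client(api_key="YOUR_API_KEY")

job = client.submit(
    workload="ml-training",
    inputs=["s3://my-bucket/training-shard-*.parquet"],  # example data location
    checkpoint_interval="5m",  # assumed: how often progress is snapshotted
    max_retries=3,             # assumed: redeploy a failed segment up to 3 times
)

# If a device drops out, the platform is described as restoring the latest
# checkpoint on another node within ~15 seconds; the caller just waits.
result = job.result()
print(result)
```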
