Hello everyone,
I’d like to start a discussion around scalable infrastructure for edge computing combined with cloud servers. Lately, I’ve been sketching an architecture for distributed compute workloads: data-intensive tasks run on edge servers, while backup and heavy compute are offloaded to centralized cloud servers. The idea is a hybrid setup: low-latency processing at the edge, with edge nodes connected via Fibre Channel switches and SAN storage, while AI workloads (e.g., on AMD EPYC or GPU-enabled rackmount systems) run in a centralized cloud / data center.
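To make the idea more concrete, here is a rough Python sketch of the kind of task-placement logic I have in mind at the edge: small, latency-sensitive work stays local, heavy compute gets shipped to the cloud. Everything in it (the task fields, thresholds, and uplink bandwidth) is a placeholder of my own, not something taken from an existing tool.

```python
from dataclasses import dataclass

# Placeholder assumptions about the edge node and its uplink to the cloud.
EDGE_GPU_HOURS_LIMIT = 0.1   # how much compute the edge node can absorb
WAN_MBPS = 100.0             # edge-to-cloud uplink bandwidth

@dataclass
class Task:
    name: str
    payload_mb: float      # size of the input data to move if we offload
    est_gpu_hours: float   # rough compute estimate for the task

def place(task: Task) -> str:
    """Decide where a task should run.

    Heavy compute is offloaded to the centralized cloud; small,
    latency-sensitive work stays on the edge node.
    """
    transfer_s = task.payload_mb * 8 / WAN_MBPS  # naive transfer-time estimate
    if task.est_gpu_hours > EDGE_GPU_HOURS_LIMIT:
        return f"cloud (transfer ~{transfer_s:.0f}s, heavy compute)"
    return "edge (low latency, small workload)"

if __name__ == "__main__":
    for t in [Task("sensor-aggregation", payload_mb=2, est_gpu_hours=0.001),
              Task("model-retraining", payload_mb=5000, est_gpu_hours=12.0)]:
        print(f"{t.name} -> {place(t)}")
```

In a real deployment this decision would presumably live in whatever scheduler or orchestrator sits on the edge node, but it captures the trade-off I want to discuss.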
I’d love to hear your thoughts on:
- Best practices for designing edge-to-cloud networks (especially using storage area networks and Fibre Channel)
- Real-world use cases where this architecture is in production
- Recommendations for open-source tools or hardware to prototype such a system
Looking forward to learning from your experience.
Thanks!