

Dell EMC Isilon Archive Series

Dell EMC offers two models in the Isilon Archive product line, the A200 and A2000, which address the needs of growing file-based unstructured data. Both use the Isilon OneFS operating system and a modular architecture to provide a simple scale-out platform for managing massive amounts of unstructured data.


A200

Capacity: 120–480TB per chassis, scaling to 17.3PB in a single Isilon cluster

Operating system: OneFS 8.1 (OneFS 8.1.0.1 or later required for SED drives)

Drive options (60 x 3.5" SATA drives per chassis, with SED variants available):

2TB HDD – 120TB per chassis

4TB HDD – 240TB per chassis

8TB HDD – 480TB per chassis

Nodes per chassis: 4

CPU per node: Intel Pentium Processor D1508

ECC memory per node: 16GB

Cache per node: 1 or 2 x 400GB SSD drives

Front-end networking per node: 2 x 10GbE (SFP+)

Infrastructure networking per node: 2 x InfiniBand connections supporting QDR links, or 2 x 10GbE (SFP+)

Chassis per cluster: 1 to 36

Nodes per cluster: 4 to 144

Cluster capacity: 120TB to 17.3PB

Rack units: 4 to 144

 

A2000

Capacity: 800TB per chassis, scaling to 28.8PB in a single Isilon cluster

Operating system: OneFS 8.1 (OneFS 8.1.0.1 or later required for SED drives)

Drive option: 10TB HDD – 800TB using 80 x 3.5" SATA drives per chassis (SED variant available)

Nodes per chassis: 4

CPU per node: Intel Pentium Processor D1508

ECC memory per node: 16GB

Cache per node: 1 or 2 x 400GB SSD drives

Front-end networking per node: 2 x 10GbE (SFP+)

Infrastructure networking per node: 2 x InfiniBand connections supporting QDR links, or 2 x 10GbE (SFP+)

Chassis per cluster: 1 to 36

Nodes per cluster: 4 to 144

Cluster capacity: 800TB to 28.8PB

Rack units: 4 to 144
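The maximum cluster capacities quoted above follow directly from the per-chassis figures and the 36-chassis cluster limit. A quick sanity check of that arithmetic (figures taken from the spec lists; TB-to-PB conversion uses the vendor's decimal convention):

```python
# Sanity-check the advertised maximum cluster capacities from the
# per-chassis figures given in the spec lists above.

MAX_CHASSIS = 36  # both models scale to 36 chassis per cluster

def max_cluster_tb(per_chassis_tb: int, chassis: int = MAX_CHASSIS) -> int:
    """Maximum raw cluster capacity in TB."""
    return per_chassis_tb * chassis

a200_tb = max_cluster_tb(480)    # A200 fully populated with 8TB drives
a2000_tb = max_cluster_tb(800)   # A2000 with 10TB drives

print(f"A200:  {a200_tb} TB = {a200_tb / 1000:.1f} PB")   # 17280 TB = 17.3 PB
print(f"A2000: {a2000_tb} TB = {a2000_tb / 1000:.1f} PB") # 28800 TB = 28.8 PB
```

This matches the stated maximums of 17.3PB for the A200 and 28.8PB for the A2000.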
