Russ Miller
UB Distinguished Professor

Dept of Computer Science & Engineering
State University of New York at Buffalo


CI Lab Equipment
(Decommissioned in 2012)

Miller's Cyberinfrastructure Laboratory (MCIL) is located in 215 Furnas Hall on SUNY-Buffalo's North Campus. The equipment in this laboratory is dedicated to experimentation that advances cyber-enabled discovery and innovation. It is available to students and staff affiliated with the laboratory and is typically used to test how middleware behaves under interruptions of service, including failures of networks, nodes, storage, and so forth.

This experimental laboratory is being installed in stages; equipment acquired to date includes the following.

MCIL Systems (approx. 57.5 TFlops; 22 TB Storage System; 4 TB Internal Storage; 156 Traditional Cores; 15 nVidia Tesla GPGPUs)

  • Production System (Magic)
    • nVidia Tesla/Intel Xeon Cluster (Note that this cluster is in production on our NYS Grid and available to users world-wide. It is managed by the Computer Science and Engineering IT Staff in consultation with the staff from the Center for Computational Research.)
      • Head Node: Dell PE1950, with two Dual Core Xeon Processors 5148LV, each with 4MB Cache, 2.33GHz, 1333MHz FSB, 16 GB Memory, 263 GB Disk
      • Worker Nodes:
        • Eight Dell PE1950s, each with one Dual Core Xeon X5260 Processor, 6MB Cache, 3.33GHz, 1333MHz FSB, 219 GB Disk
        • Five Dell PE1950s, each with two Quad Core Xeon E5430 Processors, 2x6MB Cache, 2.66GHz, 1333MHz FSB, 160 GB Disk
      • Attached Nodes:
        • nVidia Tesla S1070s
      • Storage: Dell PowerVault MD3000/MD1000 with 15TB of storage space.
      • Availability: Through the NYS Grid and the Open Science Grid (a sample grid-universe submit sketch appears after the equipment list).
      • Maintained by CSE IT (Kevin Cleary) with cooperation from CCR (Jon Bednasz).
  • Experimental Systems (used by students to configure/reconfigure both hardware and software for fundamental research in software and algorithms)
    • nVidia Tesla/Intel Xeon Test Cluster
      • Head Node: Dell PE1950 with two Quad Core Xeon E5430 Processors, 2x6MB Cache, 2.66GHz, 1333MHz FSB, 160 GB Disk
      • Worker Nodes: Two Dell PE1950s, each with two Quad Core Xeon E5430 Processors, 2x6MB Cache, 2.66GHz, 1333MHz FSB, 160 GB Disk
      • Attached Nodes:
        • nVidia Tesla S870s (a minimal CUDA smoke-test sketch appears after the equipment list)
    • AMD Cluster
      • Head Node: Dell PE1950, with two Dual Core Xeon Processors 5148LV, each with 4MB Cache, 2.33GHz, 1333MHz FSB, 16 GB Memory, 146 GB Disk
      • Worker Nodes: Eight Dell PowerEdge SC1435s, each with two Dual Core Opteron Processors, 2x1MB Cache, 1.8GHz, 1GHz HyperTransport, 160GB Disk
    • Virtual Machine
      • Head Node: Dell PE1950, with two Dual Core Xeon Processors 5148LV, each with 4MB Cache, 2.33GHz, 1333MHz FSB, 16 GB Memory, 146 GB Disk
      • VM Nodes: Two Dell PowerEdge R900s, each with four Quad Core Xeon X7350 Processors, 2.93GHz, 8MB Cache, 64GB Memory, and 2x300GB 15K SATA Drives
    • Storage
      • Two storage systems, each consisting of a Dell NX1950 with a Quad Core Xeon E5430 Processor, 2x6MB Cache, 2.66GHz, and a Dell PowerVault MD3000 external RAID array with 11 TB of Disk.
    • Networking
      • The clusters are interconnected with both GigE (Dell PowerConnect 6248 48-Port Managed GbE Switch; 2x Dell PowerConnect 3424 24-Port FE Switches, each with 2 GbE Copper Ports and 2 GbE Fiber SFP Ports) and InfiniBand (Dell 24-Port Internally Managed 9024 DDR InfiniBand Edge Switch) switches and cards; a brief MPI sketch that exercises this fabric appears after the equipment list.
    • Condor Flock: 35 PCs (a sample submit description appears after the equipment list)
      • 10 Lenovo 3000 J200 Type 9690, Celeron 420, 1GB, 80GB Disk
      • 15 Lenovo 3000 J200 Type 9690, Celeron 420, 2GB, 80GB Disk
      • 10 Lenovo ThinkCentre A61e Type 6417, AMD Sempron LE-1150, 1GB, 80GB Disk

  • Stable Student/Staff Systems (PCs, Printer, 15 TB Storage System)

    • In addition, the laboratory contains 5 stable Dell workstations for use by students/staff and a stable Dell NX1950 Quad Core Xeon node coupled to a Dell PowerVault MD3000/MD1000 with 15 TB of storage for students/staff (shared with the production Dell/nVidia cluster). The lab also contains a networked HP LaserJet 4250DTN duplex printer.
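
Since the production Magic cluster is reached through the NYS Grid and the Open Science Grid, work is normally sent to it with a grid-aware scheduler such as Condor-G rather than by logging into the head node. The submit description below is a minimal sketch of that route; the gatekeeper host and jobmanager names are hypothetical placeholders, not the actual MCIL endpoints.

  # Minimal Condor-G sketch: route one job to a remote grid gatekeeper.
  # The gatekeeper host and jobmanager below are hypothetical placeholders.
  universe      = grid
  grid_resource = gt2 gatekeeper.example.edu/jobmanager-pbs
  executable    = my_analysis
  arguments     = input.dat
  output        = job.out
  error         = job.err
  log           = job.log
  should_transfer_files   = YES
  when_to_transfer_output = ON_EXIT
  queue

The job would then be handed to Condor with condor_submit and its state tracked with condor_q.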
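
The Tesla S1070 and S870 units attached to the clusters are general-purpose GPUs, so a small CUDA kernel is a natural smoke test when students reconfigure the experimental systems. The vector-add below is a generic sketch of such a test, assuming only the standard CUDA toolkit; it is not code distributed by the laboratory.

  // Minimal CUDA sketch: vector addition as a smoke test for an attached Tesla node.
  // Generic illustration; error checking is omitted for brevity.
  #include <cstdio>
  #include <cstdlib>
  #include <cuda_runtime.h>

  __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) c[i] = a[i] + b[i];
  }

  int main() {
      const int n = 1 << 20;                 // one million elements
      const size_t bytes = n * sizeof(float);

      float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
      for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

      float *da, *db, *dc;
      cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
      cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
      cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

      vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);   // one thread per element
      cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

      printf("c[0] = %.1f (expected 3.0)\n", hc[0]);

      cudaFree(da); cudaFree(db); cudaFree(dc);
      free(ha); free(hb); free(hc);
      return 0;
  }

On these systems such a test would be built with nvcc (e.g., nvcc -o vecadd vecadd.cu) and run on a node with a Tesla attached.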
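
Because the clusters carry both GigE and DDR InfiniBand, a common exercise is to run the same small MPI program over each fabric and compare; the interconnect is chosen by the MPI stack's launch options, not by the code. The ring exchange below is a generic sketch of such a test program.

  /* Minimal MPI sketch: pass a token around a ring of ranks.
   * Generic illustration; the fabric (GigE vs. InfiniBand) is selected by the
   * MPI launch options, not by anything in this code. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      int rank, size, token;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      if (size < 2) {                        /* a ring needs at least two ranks */
          printf("run with at least two MPI ranks\n");
          MPI_Finalize();
          return 0;
      }

      if (rank == 0) {
          token = 42;
          MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
          MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          printf("token returned to rank 0 after visiting %d ranks\n", size);
      } else {
          MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
      }

      MPI_Finalize();
      return 0;
  }

A typical run would be along the lines of mpicc ring.c -o ring followed by mpirun -np 16 ./ring, with the interconnect selected through the MPI implementation's own settings.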
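
The Condor flock is a high-throughput pool: each of the 35 PCs picks up independent serial jobs described by a short submit file. The submit description below is a minimal sketch of one such batch; the executable and file names are hypothetical placeholders.

  # Minimal HTCondor (vanilla universe) sketch for the 35-PC flock.
  # Executable and file names are hypothetical placeholders.
  universe   = vanilla
  executable = sweep_point
  arguments  = params_$(Process).txt
  output     = out.$(Process)
  error      = err.$(Process)
  log        = sweep.log
  should_transfer_files   = YES
  when_to_transfer_output = ON_EXIT
  queue 35

Submitting with condor_submit queues 35 instances, one per parameter file, and condor_q shows them draining through the flock.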

    Documentation

    Acknowledgment

    This material is based upon work supported by the National Science Foundation under Grant Nos. 0454114 and 0204918. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.