Support and Maintenance

Storage Applications, Inc. (SAI) now provides support and maintenance services for Linux Networx (LNXI) Evolocity and LS series clusters!

Storage Applications, Inc. (SAI) is a privately held, Houston-based firm focused on meeting the compute needs of the Oil & Gas business. The company has been building HPC clusters and the associated high-end engineering workstations for the exploration and seismic processing requirements of its clients for over thirty years.

The company recently expanded its large-systems expertise by hiring engineers from the former Linux Networx (LNXI). They bring with them proven experience in deploying large and complex systems at a variety of high performance computing centers, including the Army Research Laboratory, Los Alamos National Laboratory, Lawrence Livermore National Laboratory, ECMWF, Aeronautical Systems Command, The Boeing Company, Northrop Grumman Corporation, and ATK Corporation, among others. In fact, many members of the SAI team designed and commissioned the last large LNXI system delivered to the Army Research Laboratory.

SAI's breadth of experience in deploying Linux clusters, our ability to readily incorporate best-of-breed technologies, and our singular focus on Linux high performance computing give us an advantage over other vendors. SAI offers custom integration services to design, develop, optimize, and support systems that are tailored to each customer's needs. We use open source software solutions with leading-edge, high-performance commodity hardware, complemented by SAI enhancements specific to the high performance computing environment. This integration approach provides a cost-effective combination of reliability, ease of use, and proven performance.

Our senior engineers from the former Linux Networx can administer and maintain your LNXI systems to full production standards, provide hardware and software upgrades to leverage the full value of your investment, and plan the orderly transition of your applications to a new SAI cluster. SAI also offers an Asset Recovery Program for end-of-life (EOL) systems and trade-in allowances for LNXI gear against new system purchases.

SAI support options are designed to provide the best possible price to the full range of HPC users, from the self-sufficient to those who only want to submit jobs to the queue. We also offer flexible support options and payment plans, so customers can tailor a program to their exact needs. See all of SAI's HPC Support and Maintenance offerings on our website.

With the demise of Linux Networx and the continuing financial problems at SGI (NASDAQ delisting and layoffs), we realize that you may be reluctant to risk your production codes on an unsupported LNXI system. SAI has been building HPC clusters and visualization systems for the exploration and seismic processing needs of the Oil & Gas industry for over thirty years, and our customers have relied on us to support them throughout that time. SAI is not a "here today, gone tomorrow" solution provider. Let us help. Give us a call!

 

Critical Linux Networx Expertise

The following paragraphs provide additional information on the LNXI system deployments referenced above. Former LNXI employees now at SAI were integral to the integration and deployment of all of the systems listed below.

LLNL 1152-Node MCR System

MCR was a 2,304-processor cluster supercomputer built for Lawrence Livermore National Laboratory using 2.4 GHz Intel Xeon processors. At the time of delivery it was the 5th fastest supercomputer in the world, the fastest Linux cluster in the world, the first Linux cluster to achieve a Top 10 standing, and the largest Quadrics-based Linux cluster. With additional tuning, it was resubmitted for the June 2003 Top500 list and placed #3. The MCR cluster was built using LNXI Evolocity II compute nodes, with LNXI Iceboxes to facilitate cluster management. The storage subsystem included DDN (DataDirect Networks) and BlueArc storage devices and used the Lustre file system. The system interconnect was Quadrics Elan3. MCR had a theoretical peak of 11 Teraflops. The system was removed from service in January 2007.
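
The theoretical peak figures quoted for these clusters follow directly from processor count, clock rate, and floating-point operations per cycle. The short Python sketch below is an illustrative check only; the two double-precision FLOPs per cycle assumed for these early-2000s Xeon and Opteron parts, and the 2.0 GHz clock used for Lightning, are our assumptions rather than figures taken from this page.

    def theoretical_peak_tflops(processors, clock_ghz, flops_per_cycle=2):
        """Peak TFLOPS = processors x clock (GHz) x FLOPs per cycle / 1000."""
        return processors * clock_ghz * flops_per_cycle / 1000.0

    # MCR: 2,304 Xeon processors at 2.4 GHz
    print(theoretical_peak_tflops(2304, 2.4))  # ~11.1, the "11 Teraflops" quoted above

    # Lightning (described below): 2,816 Opterons at an assumed 2.0 GHz
    print(theoretical_peak_tflops(2816, 2.0))  # ~11.26, matching the 11.26 Tflop/s figure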

Lawrence Berkeley National Laboratory / NERSC Jacquard System

The Jacquard system has 722 AMD Opteron Model 248 processors, with 640 devoted to computation and the rest used for I/O, interactive work, testing, and interconnect management. Jacquard has a peak performance of 3.1 trillion floating point operations per second (teraflop/s). Storage from DataDirect Networks provides 30 terabytes of globally available formatted storage. The file system is GPFS; Jacquard was the first non-IBM system to use GPFS. At the time of deployment, Jacquard was one of the largest production InfiniBand-based Linux cluster systems. Jacquard uses 12X InfiniBand uplinks in its fat-tree interconnect, reducing network hot spots and improving reliability by dramatically reducing the number of cables required.
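
The cable-count benefit of the 12X uplinks mentioned above comes from lane aggregation: one 12X InfiniBand cable carries the same number of lanes as three 4X cables, so every three 4X uplinks collapse into a single cable. The node count and switch radix in the sketch below are hypothetical, chosen only to illustrate the arithmetic.

    # Hypothetical two-level fat tree: leaf switches with 12 node-facing 4X ports
    # and an equal amount of uplink bandwidth toward the spine.
    nodes = 720              # hypothetical node count
    leaf_down_ports = 12     # 4X ports per leaf switch facing nodes
    leaf_up_bandwidth = 12   # uplink bandwidth per leaf, in 4X-port units

    leaves = nodes // leaf_down_ports              # 60 leaf switches
    uplink_cables_4x = leaves * leaf_up_bandwidth  # 720 uplink cables if run as 4X
    uplink_cables_12x = uplink_cables_4x // 3      # 240 cables when bundled as 12X

    print(uplink_cables_4x, uplink_cables_12x)     # 720 vs. 240 uplink cables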

This system provides computational resources to scientists from DOE national laboratories, universities, and other research institutions to support a wide range of scientific disciplines, including climate modeling, fusion energy, nanotechnology, combustion, astrophysics, and life sciences.

Los Alamos National Laboratory Lightning System

Lightning includes 2,816 AMD Opteron™ processors, making it the largest AMD Opteron processor-based system delivered in 2003 and the first 64-bit Linux supercomputer in the ASCI program. Lightning, with a theoretical peak of 11.26 Tflop/s, supports the Advanced Simulation and Computing program, or ASCI, which helps ensure the safety and reliability of the nation's nuclear weapons stockpile in the absence of underground testing.

ASCI is a collaboration among Los Alamos, its sister national laboratories (Lawrence Livermore and Sandia), and the Defense Programs office of the National Nuclear Security Administration. It creates the leading-edge computational modeling and simulation capabilities essential for maintaining the safety, reliability, and performance of the US nuclear stockpile and reducing the nuclear danger.

Lightning is used primarily for smaller, more numerous computing jobs in the Stockpile Stewardship workload, such as weapons code development, verification, and validation.

Army Research Laboratory (ARL)

For the DoD High Performance Computing Modernization Program Office's TI-06 procurement, LNXI implemented several systems for ARL, including Humvee and MJM.

The Humvee system was deployed in 2006. It is a classified system with 3,336 mid-voltage 3.2 GHz Intel Dempsey cores for computation. This system increased ARL's computational capability by more than 28.7 Teraflops. The system also has 112 3.46 GHz cores (28 nodes) for login, storage, and administration, with 9.4 TB of memory and 262 TB (raw) of disk. All nodes communicate via a 4x DDR (20 Gbps) InfiniBand network with 10 GigE uplink capability.

The MJM system was deployed in 2007. It is an unclassified system with 4,416 3.0 GHz Intel Woodcrest cores for computation. This system increased ARL's computational capability by more than 28.7 Teraflops. The system also has 112 3.0 GHz cores (28 nodes) for login, storage, and administration, with 9.4 TB of memory and 264 TB (raw) of disk. All nodes communicate via a 4x DDR (20 Gbps) InfiniBand network with 10 GigE uplink capability.
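
The "4x DDR (20 Gbps)" rating cited for both ARL systems is simply lanes times per-lane signaling rate; the note on 8b/10b encoding overhead below is general InfiniBand background rather than something stated on this page.

    lanes = 4                      # a 4x InfiniBand link bundles four lanes
    ddr_gbps_per_lane = 5.0        # DDR signaling rate per lane
    raw_gbps = lanes * ddr_gbps_per_lane
    data_gbps = raw_gbps * 8 / 10  # 8b/10b encoding leaves 16 Gb/s for data
    print(raw_gbps, data_gbps)     # 20.0 16.0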

 

 

Storage Applications, Inc. - 6610 Gant Road, Houston, TX 77066 - Toll Free (800) 635-0909 - Phone (713) 973-1500 - Fax (713) 973-8969